The present disclosure generally relates to data center management. More particularly, and not by way of any limitation, the present disclosure is directed to a system and method for managing one or more tenants in a cloud computing environment comprising one or more data centers.
Most cloud computing tenant, user and/or subscriber (hereinafter “tenant”) management systems use a centralized account management system in which a single node, or a replicated collection of nodes, contains records in an SQL tenant database, with one node acting as the primary node. An example is the OpenStack Keystone tenant identity management system. In some cases, the tenant management system only handles identity management. In other, mostly proprietary, solutions, the tenant management system also handles charging. The replication procedure between nodes is usually handled by a single node acting as the primary and/or designated (hereinafter “primary”) controller, which accepts transactions and propagates them to the other nodes.
If the primary controller experiences an anomaly, such as a crash, before propagating transactions to the replicated nodes, transactions can be lost or corrupted. Corruption introduced into the tenant database can propagate to the replicas. Further, if the primary controller's capacity to handle traffic is limited, it can become overwhelmed, which can also corrupt data.
A primary controller is typically scaled by replicating it in a cluster, which limits the number of clients a single controller node must handle. If tenant charges are processed by more than one controller, however, the database used for recording charging transactions must be reconciled. Reconciliation is an additional time-consuming step that is introduced into tenant charge reporting due to wide area network latency.
The present patent disclosure is broadly directed to systems, methods, apparatuses, devices, and associated non-transitory computer-readable media and network architecture for effectuating a tenant management system and method operative in a cloud-based database environment. In one aspect, an embodiment of the present invention comprises an apparatus and a method to manage cloud computing tenant account policy using contracts involving a blockchain ledger (hereinafter “smart contracts”). Smart contracts are written on a distributed system comprising a blockchain database, a state machine where the contracts are executed, and a consensus protocol to ensure all nodes agree on the ordering and content of transactions. In one embodiment, a consensus protocol such as RAFT may be used for purposes of achieving consensus among a plurality of nodes configured to effectuate tenant policy management decisions.
In a further aspect, a tenant management system (TMS) and associated method operative in a cloud-based database environment is disclosed. A distributed blockchain ledger is provided for holding tenant records embodied in smart contracts, the consistency of which is maintained by a consensus protocol between multiple chain servers processing requests from leaf servers for tenant authorization and charging. The tenant records contain the bytecode for the tenant management contracts, the tenant's credit, and other state associated with the contract such as the services the tenant is authorized to access.
In a further aspect, an embodiment of a system or apparatus for managing a cloud-based data center operative to support a plurality of tenants is disclosed. The claimed embodiment comprises, inter alia, a plurality of leaf servers each configured to execute a tenant policy enforcement module (TPEM) operative to facilitate enrollment of one or more tenants for resources and services supported by the data center and to control a tenant's access to at least one of the resources and services upon authentication and authorization. A plurality of chain servers are coupled to the TPEM nodes, wherein a chain server may be configured to execute a tenant policy decision/management module (TPDM, for short) in association with a smart contract execution module, wherein the TPDM service logic executing on a chain server is operative responsive to a request from a leaf server for access on behalf of a tenant to one or more resources or services supported by the data center. A plurality of persistent storage devices are coupled to the plurality of chain servers, wherein each persistent storage device is coupled to a corresponding chain server and configured to store tenant records comprising tenant management contract and transaction information in a blockchain replica. In one arrangement, the claimed apparatus may include a communications network interconnecting the plurality of leaf servers, the plurality of chain servers and at least a subset of the plurality of the persistent storage devices for effectuating communications therebetween. In a further arrangement, the TPEM/TPDM service logic may be co-located in a single node or a set of nodes of a tenant management architecture associated with the cloud-based data center.
In a still further aspect, an embodiment of a method of managing a cloud-based data center operative to support a plurality of tenants is disclosed. The claimed method comprises, inter alia, enrolling one or more tenants for obtaining resources and services supported by the data center and implementing one or more smart contracts by a TPDM executing on a plurality of chain servers for each of the tenants responsive to the enrolling of the tenants. The claimed method further involves compiling the one or more smart contracts into bytecode data and organizing tenant records in a blockchain replica associated with a corresponding chain server, wherein the tenant records each contain the compiled bytecode generated from the one or more smart contracts created with respect to a tenant's service management agreement, a plurality of state variables describing a current state of the tenant's account, and one or more data fields operative to support blockchain management and navigation within the blockchain replica. In one implementation, the claimed method also involves maintaining coherency among the blockchain replicas by executing a consensus protocol engine on at least a portion of the plurality of chain servers. In a still further implementation, the claimed method also involves storing each blockchain replica in a persistent storage device associated with the corresponding chain server, and causally disconnecting each persistent storage device from other persistent storage devices with respect to a malfunction on any of the other persistent storage devices.
In a still further aspect, an embodiment of the invention comprises: (i) a blockchain ledger for holding tenant records, the consistency of which is maintained by a distributed consensus protocol between multiple chain servers processing requests from leaf servers for tenant authorization and charging, wherein the tenant records contain the bytecode for the tenant management contracts, the tenant's credit, and other state associated with the contracts such as the services the tenant is authorized to access; (ii) a tenant policy decision mechanism consisting of executable code in smart contracts, written in a simplified smart contract language such as Solidity and executed in program language virtual machines designed for executing the smart contract language, located on the chain servers; and (iii) a policy enforcement mechanism consisting of software agents on leaf servers that query the chain servers when tenants want access to resources such as basic connectivity to the data center, as when logging in, compute time or cycles for executing processes, megabytes of storage and/or network bandwidth. The results from the chain servers determine whether the tenant request is granted or denied. The policy enforcement can additionally be used for higher level services, such as charging for watching streaming video, etc.
In a still further aspect, an embodiment of the present invention is a cloud tenant management system having hardware and software components, comprising a tenant policy decision module resident on any subset or all of a plurality of chain servers for implementing smart contracts; the one or a plurality of chain servers each generating an entry in a blockchain ledger for holding tenant records embodied by smart contracts; and one or a plurality of leaf servers having thereon a policy enforcement module.
In a still further aspect, an embodiment of the present invention comprises a non-transitory machine-readable storage medium that provides instructions that, if executed by a processor, will cause a processor to perform operations comprising implementing smart contracts by a tenant policy decision module or agent resident on any or all of a plurality of chain servers; generating, by one of the plurality of chain servers, an entry in a blockchain ledger for holding tenant records embodied by smart contracts; and enforcing policy defined by the smart contracts by one or a plurality of leaf servers having thereon a policy enforcement module. The non-transitory machine-readable storage medium that provides instructions to be executed by a processor maintains consistency by a distributed consensus protocol between multiple chain servers that are operative to process requests from the one or plurality of leaf servers. The non-transitory machine-readable storage medium that provides instructions to be executed by a processor stores tenant records containing the bytecode for tenant management contracts, tenant credit, and other state associated with the contracts such as the services the tenant is authorized to access.
In a further variation, an embodiment of the non-transitory machine-readable storage medium that provides instructions to be executed by a processor includes a tenant policy decision agent/module that executes code in smart contracts written in a simplified smart contract language stored in an associated chain server. The non-transitory machine-readable storage medium that provides instructions to be executed by a processor stores and executes a policy enforcement agent/module on a leaf server operable to query any one or all of the chain servers when a tenant requests access to resources, such resources including connectivity to a data center, compute time or cycles for executing processes, megabytes of storage and/or network bandwidth. The non-transitory machine-readable storage medium that provides instructions to be executed by a processor can be implemented in any of a network device (ND), a network element (NE), as a network function, as a virtual NE, virtual ND, virtual appliance or virtual machine.
In still further aspects, an embodiment of a system, apparatus, or network element is disclosed which comprises, inter alia, suitable hardware such as processors and persistent memory having program instructions for executing an embodiment of the methods set forth herein.
In still further aspects, one or more embodiments of a non-transitory computer-readable medium or distributed media containing computer-executable program instructions or code portions stored thereon are disclosed for performing one or more embodiments of the methods of the present invention when executed by a processor entity of a network node, apparatus, system, network element, subscriber device, and the like, mutatis mutandis. Further features of the various embodiments are as claimed in the dependent claims.
Advantageously, having a tenant database maintained as a distributed system and managed by a blockchain-based TMS as set forth in the present patent application ensures that the crashing of one chain server will not cause the database to become corrupt or invalid. If the storage of a chain server becomes corrupt, it can be renewed by copying the storage of one of the other chain servers. Further benefits of the present invention include a greater degree of scalability, wherein individual chain server nodes can be added to the blockchain by simply booting them up with the chain server/TPDM modules on them. Not only does this allow the TMS architecture to autoscale, but it can additionally scale to a distributed cloud by simply bringing up one or a collection of chain servers in each data center, and having them communicate with each other over the wide area network. Furthermore, having the tenant management policies embodied in smart contracts provides a high degree of flexibility beyond current systems since a customized contract can easily be made to match the particular requirements of a tenant, wherein new services can be added to the tenant authorization and charging system by simply adding additional functions to the contract libraries.
Additional benefits and advantages of the embodiments will be apparent in view of the following description and accompanying Figures.
Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the Figures of the accompanying drawings in which like references indicate similar elements. It should be noted that different references to “an” or “one” embodiment in this disclosure are not necessarily to the same embodiment, and such references may mean at least one. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
The accompanying drawings are incorporated into and form a part of the specification to illustrate one or more exemplary embodiments of the present disclosure. Various advantages and features of the disclosure will be understood from the following Detailed Description taken in connection with the appended claims and with reference to the attached drawing Figures in which:
In the description herein for embodiments of the present invention, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the present invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention. Accordingly, it will be appreciated by one skilled in the art that the embodiments of the present disclosure may be practiced without such specific components. It should be further recognized that those of ordinary skill in the art, with the aid of the Detailed Description set forth herein and taking reference to the accompanying drawings, will be able to make and use one or more embodiments without undue experimentation.
Additionally, terms such as “coupled” and “connected,” along with their derivatives, may be used in the following description, claims, or both. It should be understood that these terms are not necessarily intended as synonyms for each other. “Coupled” may be used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” may be used to indicate the establishment of communication, i.e., a communicative relationship, between two or more elements that are coupled with each other. Further, in one or more example embodiments set forth herein, generally speaking, an element, component or module may be configured to perform a function if the element may be programmed for performing or otherwise structurally arranged to perform that function.
As used herein, a network element (e.g., a router, switch, bridge, etc.) is a piece of networking equipment, including hardware and software that communicatively interconnects other equipment on a network (e.g., other network elements, end stations, etc.). Some network elements may comprise “multiple services network elements” that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer-2 aggregation, session border control, Quality of Service, and/or subscriber management, and the like), and/or provide support for multiple application services (e.g., data, voice, and video). Subscriber/tenant end stations (e.g., servers, workstations, laptops, netbooks, palm tops, mobile phones, smartphones, multimedia phones, Voice Over Internet Protocol (VoIP) phones, user equipment, terminals, portable media players, GPS units, gaming systems, set-top boxes) may access or consume resources/services, including cloud-centric resources/services, provided over a packet-switched wide area public network such as the Internet via suitable service provider access networks, wherein one or more data centers hosting such resources and services on behalf of a plurality of tenants may be managed according to some embodiments set forth hereinbelow. Subscriber/tenant end stations may also access or consume resources/services provided on virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet. Typically, subscriber/tenant end stations may be coupled (e.g., through customer/tenant premise equipment or CPE/TPE coupled to an access network (wired or wirelessly)) to edge network elements, which are coupled (e.g., through one or more core network elements) to other edge network elements, and to cloud-based data center elements with respect to consuming hosted resources/services according to service management agreements, contracts, etc.
One or more embodiments of the present patent disclosure may be implemented using different combinations of software, firmware, and/or hardware. Thus, one or more of the techniques shown in the Figures (e.g., flowcharts) may be implemented using code and data stored and executed on one or more electronic devices or nodes (e.g., a subscriber client device or end station, a network element, etc.). Such electronic devices may store and communicate (internally and/or with other electronic devices over a network) code and data using computer-readable media, such as non-transitory computer-readable storage media (e.g., magnetic disks, optical disks, random access memory, read-only memory, flash memory devices, phase-change memory, etc.), transitory computer-readable transmission media (e.g., electrical, optical, acoustical or other form of propagated signals—such as carrier waves, infrared signals, digital signals), etc. In addition, such network elements may typically include a set of one or more processors coupled to one or more other components, such as one or more storage devices (e.g., non-transitory machine-readable storage media) as well as storage database(s), user input/output devices (e.g., a keyboard, a touch screen, a pointing device, and/or a display), and network connections for effectuating signaling and/or bearer media transmission. The coupling of the set of processors and other components may be typically through one or more buses and bridges (also termed as bus controllers), arranged in any known (e.g., symmetric/shared multiprocessing) or heretofore unknown architectures. Thus, the storage device or component of a given electronic device or network element may be configured to store code and/or data for execution on one or more processors of that element, node or electronic device for purposes of implementing one or more techniques of the present disclosure.
Referring now to the drawings and more particularly to
Broadly, with a multitenant architecture, the data center 108 may be arranged to provide every tenant a dedicated or configurable share of a resource/service including its data, configuration, user management, tenant individual functionality as well as properties such as security, charging, etc. At a macro level, the data center 108 may be implemented in a hierarchically interconnected system of multiple nodes including appropriate compute, storage and network elements disposed in a wide area backbone (e.g., IP or Next Generation Network (NGN)), to which tenant premises equipment or a subscriber end station may have secure Internet access. In one embodiment, a tenant premise can have its own compute resources logically separated from the cloud-based data center resources/services 110. In another arrangement, a tenant's private cloud may be accessed remotely via suitable Secure Sockets Layer (SSL) or IPSec VPN connections. Regardless of a particular multitenant architecture, example data center 108 may be organized based on a multi-layer hierarchical network model which may in general include three layers of hierarchy: a core layer (typically characterized by a high degree of redundancy and bandwidth capacity, optimized for high availability and performance), an aggregation layer that may be characterized by a high degree of high-bandwidth port density (optimized for traffic distribution and link fan-out capabilities to access layer switches), and an access layer serving to connect host/server nodes to the network infrastructure. In one embodiment, example nodes in an aggregation layer may be configured to serve functionally as a boundary layer between OSI Layers 2 and 3 (i.e., an L2/L3 boundary) while the access layer elements may be configured to serve at the L2 level (e.g., LANs or VLANs).
From the perspective of a functional model, example data center 108 may be comprised of the following layers: (i) network layer, (ii) services layer, (iii) compute layer, (iv) storage layer, and (v) management layer. Skilled artisans will recognize that with respect to the services layer there can be a difference between a conventional data center services layer and the cloud-based data center services layer in that the functional reference model of the cloud-based data center services layer may be architected for supporting application of L4-L7 services at a per-tenant level, e.g., through logical abstraction of the physical resources including hardware and software resources. Even with L4-L7 integrated services being provided, a cloud-based data center services layer may be configured to implement centralized services which may be more useful in applying policies that are broadly applicable across a range of tenants (or across different workgroups within a tenant premises network). An example management layer of the data center 108 may be architected as a set of logical, functional and structural resources required to support and manage the overall multitenant architecture, including domain element management systems as well as higher level service orchestration systems, preferably configured to execute various data center administration functions regarding storage, compute, and network resources, including elements which allow for more dynamic resource allocation and automated processes (e.g., instantiating administrative or tenant user portals, service catalogs, workflow automation, tenant lifecycle management, scripting smart contracts, and the like). In one arrangement, a tenant management system (TMS) 112 may therefore be implemented as a “superset” or “backend” functionality of the cloud-based data center 108 in connection with the hosted resources/services 110 configured to serve the plurality of tenants 102-1 to 102-N for purposes of an example embodiment of the present invention as will be set forth in further detail hereinbelow.
Broadly, an embodiment of the management system 200 involves replacing a cluster of conventional databases (such as, e.g., Structured Query Language (SQL) databases) that are typically used for tenant records management with a distributed blockchain ledger operating in conjunction with smart contracts for executing transactions on the ledger, which may be implemented as a distributed permission-based structure. The blockchain ledger may be maintained by a collection of servers (hereinafter “chain servers”) coupled to persistent storage where the state and copies of the blockchain (e.g., blockchain replicas) may be stored. In one implementation, a suitable consensus protocol (e.g., RAFT) may be executed between the chain servers in order to ensure consistency of transactions. A plurality of smart contracts associated with the tenants may be executed in conjunction with a state machine or engine (e.g., such as the Ethereum VM used by Solidity, a smart contract programming language that is part of the Ethereum system) running on one or more chain servers, in association with suitable blockchain navigation logic as will be set forth below. In one arrangement, each chain server may be configured to run a copy of the state machine with respect to the smart contracts that embody respective tenant management policies and service level agreements. In one arrangement, the execution of smart contracts at a chain server in response to queries about resource usage renders the chain server a policy management/decision point. Further, policy enforcement agents or modules executing at one or more leaf nodes or servers provide access to tenants with respect to various resources/services (e.g., compute, storage, networking, and the like) in a query-based mechanism with the chain servers to determine a tenant's credit availability and obtain authorization for the tenant to utilize resources/services. The leaf servers may accordingly be disposed in a cloud-based TMS architecture as access as well as policy enforcement nodes, where access to resources is either granted or denied based on the decisions made in accordance with the smart contracts. If any question arises with respect to a particular tenant, the transactions may be replayed to determine what exactly happened by launching a diagnostics/logging session.
Continuing to refer to
Components, modules or blocks associated with the various servers set forth above may be executed on dedicated platforms or using resources that are virtualized in an architecture embodying one or more hypervisors or virtual machine monitors (VMMs) comprising computer software, firmware and hardware that creates and runs virtual machines optimized for specific functionalities. Regardless of how such components may be realized in a particular implementation, the structural/functional aspects of the chain servers including one or more TPDMs running thereon and the structural/functional aspects of the leaf servers including one or more TPEMs running thereon may be integrated or distributed in a number of ways, depending on the tenant density, scalability, form factor constraints (e.g., rack/blade server architectures), etc. For example, where the number of tenants is not large or the amount of storage required by a blockchain ledger is not an issue, the leaf nodes and chain nodes can be integrated or co-located in a single node. A chain server may also be configured to convert to a leaf server in one arrangement where, upon boot up, it discovers that the blockchain database has been corrupted. It can then restore the blockchain database while taking user requests and sending them to another chain server. When the database has been restored, it can convert back into a chain server. In a still further arrangement, to keep the storage used by the blockchain small, an example blockchain ledger can be periodically trimmed, removing older records and/or blocks.
Accordingly, in one example embodiment, each chain server of the plurality of chain servers 202-1 to 202-M may be configured with a corresponding tenant policy decision module, e.g., TPDM modules 212-1 to 212-M, at least a portion of which may be configured to execute a suitable consensus protocol engine, e.g., RAFT, with respect to the transactions carried out by the TMS architecture 200. Example TPDM modules 212-1 to 212-M may also be configured to initiate, control and/or manage inter-server communications among the chain servers 202-1 to 202-M via the fabric 250. Further, example TPDM modules 212-1 to 212-M may also be configured to handle and respond to requests from one or more leaf servers 204-1 to 204-K with respect to tenants' access to resources and services, and coordinate the execution of the smart contracts in conjunction with a smart contract virtual machine (VM) 214-1 through 214-M associated with respective chain servers. One skilled in the art will recognize that a smart contract VM in the context of the present patent application does not refer to an Operating System (OS) image executed along with other images on a server. Rather, a smart contract VM may be embodied as a system process that executes the bytecode generated from a language used for creating/coding a program, specifically, a smart contract program. In general, bytecode is programming code that, once compiled, may be executed on a virtual machine instead of directly on a computer processor platform. Using this approach, the source code of a smart contract can be run on any platform once it has been compiled and run through the VM. For purposes of the present patent application, a smart contract may be a specific computer protocol generated from a tenant's service agreement or clauses therein that can be rendered partially or fully self-executing, self-enforcing, or both, wherein the protocol is operative to facilitate, verify, or enforce the negotiation and/or performance of a clause. It should be appreciated that a tenant management system based on smart contracts as set forth herein is not only operable to provide security that is superior to traditional contract law management, but it can also advantageously reduce transaction costs of enforcement.
In one example embodiment, a smart contract can be implemented in Solidity, a contract-oriented, high-level language whose syntax is similar to that of JavaScript and which is designed to interoperate with the Ethereum Virtual Machine (EVM) technology. Solidity is statically typed, and may be configured to support inheritance, libraries and complex user-defined types, among other features. A smart contract as implemented by Solidity may therefore be embodied in one arrangement as a collection of code (its functions) and data (its state) that resides at a specific address on an Ethereum-based blockchain. A smart contract virtual machine or engine 214-1 through 214-M operating under the control of the respective chain server's TPDM 212-1 to 212-M may accordingly be configured to execute the smart contract bytecode for each tenant's management contract(s) in association with the state machine implementation for executing smart contracts provided thereon.
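By way of a non-limiting illustration, the following minimal Solidity sketch shows the general shape of such a contract, i.e., a small amount of data (its state variables) and code (its functions) that, once compiled to EVM bytecode, resides at a specific address and is executed by the smart contract VM. The contract name, state variables and functions shown here are hypothetical and do not form part of any particular tenant management contract described herein.

    pragma solidity ^0.4.21;

    // Illustrative only: a contract is code (its functions) plus data (its state)
    // deployed at a specific address and executed as compiled bytecode by the VM.
    contract CreditAccount {
        address public operator;   // state: the deploying operator (e.g., a TPDM)
        uint256 public credit;     // state: remaining credit, in arbitrary units

        // old-style (pre-0.5.0) constructor named after the contract
        function CreditAccount() public {
            operator = msg.sender;
        }

        // add credit to the account; restricted to the operator
        function deposit(uint256 amount) public {
            require(msg.sender == operator);
            credit += amount;
        }

        // deduct a charge if sufficient credit remains; returns true on success
        function charge(uint256 amount) public returns (bool) {
            require(msg.sender == operator);
            if (credit < amount) {
                return false;
            }
            credit -= amount;
            return true;
        }
    }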
As noted above, each leaf server node 204-1 to 204-K is operative to execute a tenant policy enforcement module (e.g., TPEM 210), which coordinates and processes access requests to resources and services on behalf of each of the tenants served by the leaf server. Further, TPEM 210 may also be configured to execute and facilitate tenant life cycle management functionalities, e.g., enrollment, removal, service look-up, etc., in association with the TPDM entities 212-1 to 212-M of the system 200, as will be set forth in additional detail further below.
Persistent data structures 216-1 through 216-M may each be provided as a replica of the blockchain in respective storage devices 206-1 through 206-M for holding the tenant records in a distributed digital ledger. Although a blockchain structure is exemplified herein for implementing the tenant record distributed ledger (e.g., as consensus-based replicated, shared and synchronized digital data secured using cryptography), other implementations of a distributed ledger (e.g., based on directed acyclic graphs) may also be used in an additional or alternative embodiment of the present invention. Generally, each record may be configured, at a low level, to include the compiled bytecode for a smart contract for each tenant as well as each tenant's state variables describing the current state of such tenant's account. In addition, the following values may be included in an example record to support blockchain navigation and the TPDM functionality of the TMS architecture 200: (a) a timestamp, giving the last time the record was modified; and (b) the hash value of the previous block in the chain, which acts as a pointer to the rest of the chain.
Taking reference to
In an example embodiment involving a Solidity-based smart contract implementation, a single Solidity contract object may be provided in a block of a blockchain along with other objects that may have been recorded into the blockchain, at least some of which may or may not belong to the same tenant. From the tenant and service perspective, however, a tenant's contract may comprise a number of Solidity contract objects whose mapping to the actual storage may vary depending upon how a blockchain structure is organized. For instance, they could all be bundled into a small number of blocks (including, as an extreme example, a single block), or they could be spread across multiple blocks. At the level of a Solidity contract, the logic just sees the addresses of the contract objects in one implementation. Accordingly, in such an implementation, it is not critical as to how the contract objects are stored or partitioned among the blocks. By way of a further arrangement, each block of a blockchain may be configured to contain a single transaction, where a blockchain validator may be configured to act as a transaction processor. A transaction may have any number of items or objects in it, not just a single tenant record, and a transaction may be recorded or recognized each time something is written into the blockchain. Accordingly, it should be appreciated that there can be a number of ways to partition transactions among the blocks, depending on how a particular blockchain structure is implemented by a data center operator.
Regardless of a specific blockchain implementation, an example embodiment of the present invention may involve a permission-based or private blockchain arrangement, where only verified and authorized data center nodes or agents are allowed to access and modify the blockchain (i.e., a private chain). As such, the term “blockchain” may be applied within the context of an example embodiment of the present patent application to a data structure that batches data into time-stamped blocks and prohibits two or more transactions from concurrently modifying an object in the database. Irrespective of whether permissionless or permissioned structures are used, a blockchain may be implemented as a continuously expanding list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a hash pointer as a link to a previous block, a timestamp and transaction data. In this manner, a blockchain resists modification of its underlying data. Functionally, a blockchain is a distributed ledger (private or open) that can record transactions between two parties efficiently and in a verifiable and permanent way. A distributed ledger, or blockchain, of an embodiment of the present invention may be managed by a peer-to-peer network involving blockchain logic modules executing on the chain servers, which may be configured to use the same protocol to validate new blocks. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks. As can be seen, this would require significant collusion, which makes a blockchain-based tenant records management system as set forth herein inherently secure.
Taking reference to
Based on the foregoing, it should be appreciated that a blockchain-based TMS according to an embodiment of the present invention is inherently secure by design, and may be implemented as a distributed computing system with high Byzantine fault tolerance, while still having decentralized consensus. This set of features makes a TMS blockchain ideally suited for the recording of events and records pertaining to a large number of tenants, with potentially unlimited scalability. Whereas consensus is a fundamental problem in fault-tolerant distributed systems, consensus involving multiple servers such as TPDM chain servers may be achieved using a number of suitable consensus protocols such as RAFT, as noted previously. RAFT is disclosed in the document “In Search of an Understandable Consensus Algorithm”, D. Ongaro and J. Ousterhout, Proceedings of USENIX ATC '14: 2014 USENIX Annual Technical Conference, June 2014, pp. 305-319, incorporated by reference herein.
In general, consensus involves multiple servers agreeing on values, and once they reach a decision on a value, that decision may be treated as final. Typical consensus algorithms make progress when any majority of the servers of a distributed system is available. For example, a cluster of five servers can continue to operate even if two servers fail. If more servers fail, the cluster may stop making progress but will never return an incorrect result. Skilled artisans will recognize that by applying a consensus protocol among multiple TPDM nodes, a tenant management policy may be rendered directly executable. Although the RAFT consensus protocol has been exemplified herein, it should be appreciated that other consensus protocols may be applied in additional or alternative embodiments of a TMS architecture according to the teachings of the present patent disclosure. It is noted, however, that an example TMS architecture embodiment using RAFT may employ a stronger form of leadership than other consensus algorithms. For example, log entries may be configured to only flow from the leader to other servers in one arrangement, which may simplify the management of the replicated log and make RAFT easier to understand. Further, a TMS architecture embodiment using RAFT may employ randomized timers to elect leaders, which may add only a small amount of resources/overhead to the heartbeats already required for any consensus algorithm, while resolving conflicts simply and rapidly. In a still further arrangement, RAFT's mechanism for changing the set of servers in the cluster may use a joint consensus approach where the majorities of two different configurations overlap during transitions. This may allow the cluster to continue operating normally during configuration changes. Whereas RAFT is one of a number of high-performance consensus algorithms exemplified herein, additional/alternative embodiments may involve other consensus protocols as noted previously. One such example consensus protocol is Proof of Elapsed Time (PoET), which is used in the Hyperledger Sawtooth blockchain. Still further example consensus protocols for purposes of an embodiment of the present invention are: Practical Byzantine Fault Tolerance (PBFT), Proof of Work (PoW), Proof of Stake (PoS), Delegated PoS, etc. One skilled in the art will therefore appreciate that the embodiments described herein are not dependent on the details of a particular consensus algorithm so long as the performance is sufficient such that a transaction can complete in approximately 50 milliseconds or less.
In the context of the multiple TPDM-based chain servers, consensus typically arises in connection with replicated state machines executing thereon, which is a general approach to building a fault-tolerant distributed TMS system. Thus, in one arrangement, each server may be provided with a state machine and a log, wherein it is desired that the state machine component (such as a hash table) be rendered fault-tolerant. In such an arrangement, it will therefore appear to clients that they are interacting with a single, reliable state machine, even if a minority of the servers in the cluster fail. Each state machine takes its input commands from its log, and a consensus algorithm is executed to agree on the commands in the servers' logs.
Various sets of steps, acts, or functionalities, as well as associated components, of an embodiment of the foregoing TMS architecture 200 may comprise one or more processes, sub-processes, or sub-systems that may be grouped into a plurality of blocks associated with a tenant service management functional model 300 as exemplified in
In a further or alternative arrangement, the chain servers may be configured to find each other using the DNS SRV record process, which involves an SRV record having data defining the location, e.g., the hostname and port number, of the servers for specified services, as set forth in RFC 2782, incorporated by reference herein. The chain servers managing the same blockchain may all be configured to use an SRV record of type “_TADMIN_BLOCK_CS”. In a scenario involving load balancing, an example embodiment may use DNS for passive load balancing or an active load balancer. In one arrangement, all chain servers maintaining a tenant ledger may be required to record their DNS names in the _TADMIN_BLOCK_CS SRV record for the data center DNS domain.
In a still further or alternative arrangement, the leaf servers may also similarly use the DNS SRV record “_TADMIN_BLOCK” to find a chain server. If DNS load balancing is used, this record may include the names of all chain servers maintaining the blockchain, together with priorities and weights. If load balancing is implemented using an active load balancer, this SRV record may contain the name of a load balancing server, which may be configured to select a chain server upon first contact. In still further or alternative arrangements, an embodiment of the present invention may include one or more mechanisms for HTTP service discovery using suitable tools for discovering and configuring services in an infrastructure, e.g., including Consul, as previously noted.
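Purely by way of illustration, the two SRV record types discussed above might appear in a data center DNS zone substantially as follows, using the RFC 2782 layout of priority, weight, port and target; the domain names, port numbers, priorities and weights shown are hypothetical values chosen only to show the format.

    ; hypothetical SRV entries for a data center DNS domain "dc.example.com"
    ; _service._proto.name                TTL  class type priority weight port target
    _TADMIN_BLOCK_CS._tcp.dc.example.com. 3600 IN    SRV  10       50     7050 chain1.dc.example.com.
    _TADMIN_BLOCK_CS._tcp.dc.example.com. 3600 IN    SRV  10       50     7050 chain2.dc.example.com.
    _TADMIN_BLOCK._tcp.dc.example.com.    3600 IN    SRV  10       60     7051 chain1.dc.example.com.
    _TADMIN_BLOCK._tcp.dc.example.com.    3600 IN    SRV  10       40     7051 chain2.dc.example.com.

In the _TADMIN_BLOCK records of this hypothetical zone, the differing weights would steer proportionally more leaf server queries toward chain1 when passive DNS load balancing is in effect.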
In a still further or alternative arrangement, additional steps, blocks and components implementing steps relate to chain server enrollment, e.g., as part of block 304 of the service functional model 300 depicted in
(1) generating a public/private key pair for communication between chain servers and with leaf servers using a public key crypto-algorithm such as EC, as noted previously. Elliptic curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC requires smaller keys compared to non-ECC cryptography (based on plain Galois fields) to provide equivalent security. Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. They can also be used in integer factorization algorithms based on elliptic curves that have applications in cryptography, such as Lenstra elliptic curve factorization. Using any or a combination of the foregoing techniques, communication among the chain servers as well as between the chain and leaf servers may be encrypted;
(2) sending a message to each of the other chain servers' TPDMs listed in the DNS SRV record informing them that it has arrived and is ready to participate in consensus;
(3) when responses have been received from all servers in the DNS SRV record, opening the blockchain ledger in its attached storage and performing any caching or other actions necessary to initialize its access to the blockchain;
(4) if the storage is empty or not up to date, requesting a copy from one of the other servers participating in consensus and downloading it. The newly booted chain server determines if its blockchain is up to date by requesting the currently active record from one of the other chain servers and comparing the date to the date on the current record of its copy from storage; and
(5) updating the load balancer (if necessary and/or where implemented) with a message informing the load balancer that the server is up and ready to take transactions, or, if DNS load balancing is being used, updating the _TADMIN_BLOCK SRV record with its address, weight and priority, the weight and priority being obtained from a configuration file. The newly booted chain server determines which of these procedures to use based on a configuration file.
In a still further or alternative arrangement, additional steps, blocks and components implementing steps relate to leaf server enrollment, e.g., as part of block 306 depicted in
(1) when a leaf server is booted, generating by the TPEM executing thereon, a public/private key pair with a suitable public key crypto-algorithm such as EC or variants thereof. All messages between the TPEM and one or more TPDMs may then be encrypted using the public key;
(2) requesting, by the TPEM, a chain server through a DNS SRV record for the _TADMIN_BLOCK service, with the server either selected from the record if DNS load balancing is used, or obtained through the load balancer otherwise;
(3) contacting, by the leaf server, the chain server and requesting the chain server's public key which it uses to encrypt further communication. All communications between the leaf server and chain server, including the initial contact, are accordingly encrypted.
Further steps and acts, and blocks/components required to implement the steps, of an embodiment of the present invention relate to a tenant's life cycle management 308 as noted above, which may be effectuated in accordance with the following steps in an example implementation:
(1) enrolling, by a tenant, in the cloud through a publicly accessible web portal offered by a cloud service provider. The web portal server may be configured as a leaf server operating to run the tenant policy enforcement agent/module (block 402);
(2) connecting, by or via the portal, directly to the tenant policy enforcement agent/module, which may be either built into the program and accessible via a graphical user interface, or accessed through inter-process communication (IPC) (block 404);
(3) providing, by the tenant, various pieces of information to the cloud provider through the web portal, inter alia (blocks 406-410):
(a) a public key generated using a crypto-algorithm so that communication from the tenant can be decrypted by the tenant management system;
(b) a tenant name and password. The password can be obscured using a suitable hashing or encryption algorithm as it is entered to avoid it appearing in clear text. The tenant name can act as the identifier for the tenant account. It should be noted that other means of identification can be used, for example, a public key, requiring a tenant certificate;
(c) credentials sufficient for the cloud provider to maintain the tenant's credit, for example, a credit card number;
(d) an initial amount of credit that should be charged to the tenant's account; and
(e) one or more service types for which the tenant desires to enroll;
(4) selecting, by the tenant policy enforcement agent/module, a service contract type based on the tenant's choice and parameterizing the selected service contract type; and communicating the parameters to the tenant policy decision/management agent/module on a chain server via a smart contract remote procedure call (RPC), such as a REST call, to the Tenant_Management contract in an enroll() RPC call (blocks 412 and 414);
(5) enrolling, by the tenant policy management/decision module, e.g., via the enroll() method, the tenant into the data center by creating a services contract for the tenant and charging the initial amount of credit to the tenant's charging credentials by calling the external charging provider to obtain credit card authorization (block 416); and
(6) installing, by the enroll() method, the service management contract into a mapping, such as a Solidity hash table, with the key being the account identifier, such as a user name, and the value being the contract (block 418).
Removal of a tenant from the TMS may be effectuated in a similar manner in accordance with the following steps in an example implementation:
(1) calling, by the tenant policy enforcement agent/module, the remove() method; and
(2) nulling out the hash mapping for the account identifier, returning the remaining credit to the tenant's external account, and deleting the contract by calling the Service kill() method, thereby effectively removing the tenant from the system. It should be noted that, depending on the services for which the tenant is authorized, some cleanup action(s) and/or processes might be required, such as deleting the tenant's remaining files on storage.
Lookup of a tenant on the TMS may be effectuated in accordance with the following steps in an example implementation:
(1) fetching, when a tenant logs into the data center, by the tenant policy enforcement agent/module on the server assigned to the tenant, the tenant contract using a lookup() method with the account identifier as the key; and
(2) returning, by the method, a contract of type Service, an abstract super type from which all tenant contracts inherit.
With respect to tenant management contracts, a TPDM of the present invention may be configured to select a contract type based on the service type selected by the tenant at the time it enrolls. In one example implementation, each of the options provided by the web portal may correspond to a predefined smart contract type, which the TPDM may create and return to the serving TPEM. Further, the smart contract may be inserted into the blockchain as an encrypted block along with the tenant name, hashed password, and the payment credentials, as noted elsewhere in the present patent application. The block may be encrypted and a suitable consensus protocol engine may then be executed in conjunction with other chain servers to insert the block into the chain. Once consensus is achieved, the contract becomes the basis of the tenant's service agreement pursuant to which the tenant may receive resources and/or services upon authentication and authorization.
Set forth below is an example Solidity pseudocode block illustrating a tenant lifecycle, including a contract as well as tenant enrollment/removal processes:
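The following sketch is illustrative and non-limiting; the contract, function and parameter names (e.g., TenantManagement, the services mapping, the import paths, and the choice of BasicLogin as the concrete service type) are exemplary assumptions rather than a definitive implementation.

    pragma solidity ^0.4.21;

    import "./Service.sol";      // abstract root contract type (interface sketched below)
    import "./BasicLogin.sol";   // one concrete service contract type (sketched further below)

    // Illustrative tenant lifecycle contract; all names are exemplary.
    contract TenantManagement {
        // account identifier (e.g., tenant name) => tenant service contract
        mapping (string => Service) private services;

        // enroll a tenant: create a service contract parameterized from the web
        // portal input and record it under the account identifier; charging the
        // initial credit via the external charging provider is assumed to be
        // triggered from here as well
        function enroll(string name, bytes32 hashedPassword,
                        string paymentCredentials, uint initialCredit,
                        uint diskQuota, uint networkQuota) public {
            services[name] = new BasicLogin(name, hashedPassword,
                                            paymentCredentials, initialCredit,
                                            diskQuota, networkQuota);
        }

        // remove a tenant: let the service contract return any remaining credit
        // and delete itself, then null out the mapping entry
        function remove(string name) public {
            Service svc = services[name];
            svc.kill();
            delete services[name];
        }

        // look up a tenant's contract by account identifier; returns the abstract
        // Service supertype, which the caller may typesafe-cast to a concrete type
        function lookup(string name) public view returns (Service) {
            return services[name];
        }
    }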
With respect to example contract type interfaces for purposes of the present patent disclosure, a Service contract may be defined as the root type for a tenant management contract, as set forth in an illustrative pseudocode portion set forth below. It should be noted that the below illustrative pseudocode exemplifies contract type interfaces for building specific tenant management contracts. In one arrangement, a Service contract may contain one or more data structures for managing tenant information and for handling type safe casts. The tenant structure type defines a tenant record, and the owner variable contains information on the tenant that owns the contract. Tenant information from the tenant structure may comprise an example tenant record shown in
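An illustrative, non-limiting sketch of such a Service root contract type is set forth below; the Tenant structure fields, the isA mapping used to support type-safe casts, and the abstract method signatures are exemplary assumptions consistent with the foregoing description rather than a definitive interface.

    pragma solidity ^0.4.21;

    // Illustrative sketch of the Service root contract type; names are exemplary.
    contract Service {
        // tenant record structure holding per-tenant account information
        struct Tenant {
            string  name;               // account identifier (e.g., tenant name)
            bytes32 hashedPassword;     // obscured password
            string  paymentCredentials; // e.g., reference to an external charging account
            uint    credit;             // remaining credit
        }

        Tenant internal owner;                   // the tenant that owns this contract
        mapping (bytes32 => bool) internal isA;  // supported type names, for type-safe casts

        // constructor fills in the tenant record and registers the root type
        function Service(string name, bytes32 hashedPassword,
                         string paymentCredentials, uint credit) internal {
            owner = Tenant(name, hashedPassword, paymentCredentials, credit);
            isA[keccak256("Service")] = true;
        }

        // returns true if this contract supports the named type (type-safe upcast)
        function supportsType(bytes32 typeName) public view returns (bool) {
            return isA[typeName];
        }

        // abstract interface implemented by concrete tenant management contracts
        function authorize() public returns (bytes32);     // return an authorization token
        function revoke() public;                           // revoke the tenant's authorization
        function charge(uint amount) public returns (bool); // charge against the tenant's credit
        function kill() public;                             // return remaining credit, delete contract
    }

A caller holding a Service reference may, for example, consult supportsType() before performing the typesafe cast to a concrete contract type described elsewhere herein.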
A tenant management contract may combine Service with other types, as may be exemplified by the pseudocode portion provided below. As illustrated, the pseudocode portion provides a definition for the BasicLogin contract type, a type that gives the tenant authorization to log into the data center using a remote shell in an example implementation of the present invention. The BasicLogin contract may be provided with two state variables, one each for recording the disk and network quota, and an additional state variable containing the authorization token. The BasicLogin() constructor sets the disk and network quota, calls the Service() constructor to fill in the tenant information, and then records the types it supports for typesafe upcast. The Service contract type method authorize() is implemented by returning the authorization token, because the BasicLogin contract requires a user to log in before being authorized. The revoke() method, in contrast, calls the logout() method to remove the tenant authorization. The Service charge() method charges for login time. Charges for monthly disk quota may be handled separately. The login() method checks whether the user name and hashed password provided as parameters match the user name and password on the contract and, if so, generates an authorization token. The logout() method returns any remaining credit to the external credit provider and invalidates the authorization token.
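One possible rendering of such a BasicLogin contract, consistent with the foregoing description, is sketched below; the quota units, the token generation scheme, the import path, and the treatment of credit as a simple state variable (with any refund to the external credit provider assumed to occur off-chain) are illustrative assumptions only.

    pragma solidity ^0.4.21;

    import "./Service.sol";   // root contract type from the preceding sketch (path illustrative)

    // Illustrative sketch of a concrete tenant management contract; names are exemplary.
    contract BasicLogin is Service {
        uint    public diskQuota;      // state: disk quota (e.g., in megabytes)
        uint    public networkQuota;   // state: network quota (e.g., in megabytes)
        bytes32 private authToken;     // state: current authorization token (0 if not logged in)

        // constructor sets the quotas, fills in tenant information via the Service
        // constructor, and records the types supported for type-safe upcast
        function BasicLogin(string name, bytes32 hashedPassword,
                            string paymentCredentials, uint credit,
                            uint _diskQuota, uint _networkQuota)
            public
            Service(name, hashedPassword, paymentCredentials, credit)
        {
            diskQuota = _diskQuota;
            networkQuota = _networkQuota;
            isA[keccak256("BasicLogin")] = true;
        }

        // authorization requires a prior login; simply return the current token
        function authorize() public returns (bytes32) {
            return authToken;
        }

        // revoke the tenant's authorization by logging the tenant out
        function revoke() public {
            logout();
        }

        // charge for login time against the tenant's credit; quota charges handled separately
        function charge(uint amount) public returns (bool) {
            if (owner.credit < amount) {
                return false;
            }
            owner.credit -= amount;
            return true;
        }

        // check the supplied credentials and, if they match, generate an authorization token
        function login(string name, bytes32 hashedPassword) public returns (bytes32) {
            if (keccak256(name) == keccak256(owner.name)
                && hashedPassword == owner.hashedPassword) {
                // illustrative token generation only; not a source of secure randomness
                authToken = keccak256(name, hashedPassword, block.timestamp);
            }
            return authToken;
        }

        // invalidate the token; returning remaining credit to the external
        // credit provider is assumed to be handled off-chain
        function logout() public {
            authToken = bytes32(0);
        }

        // return any remaining credit (via logout) and delete the contract
        function kill() public {
            logout();
            selfdestruct(msg.sender);
        }
    }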
Referring now to
As noted previously, coherency and consistency among the multiple blockchain instances may be maintained by executing a suitable consensus protocol (e.g., upon every transaction in a blockchain replica, after a new block is created, upon boot-up, or upon recovery from a failure, etc.). In one arrangement, causal disconnectivity among the multiple blockchain instances may be maintained or enforced while maintaining coherency/consensus, whereby failure or malfunction of one blockchain instance is restricted from propagating to other blockchain instances (block 712).
(1) sending, by the user's remote shell, a request 812 for shell access, to TPEM 806 on the leaf server. The message includes the user name and hashed password, which may be suitably encrypted;
(2) calling, by TPEM 806, as noted at message flow path 814, the lookup() method on the TenantManagement contract as managed by TPDM 808-1 with the user name. The Service contract for the tenant is then accessed;
(3) fetching, by TPDM 808-1, the tenant's Service contract as noted at block 816;
(4) returning, by TPDM 808-1, a reference to the Service contract to TPEM 806 as noted at message flow path 818;
(5) typesafe casting, by TPEM 806, the Service contract to BasicLogin as noted at block 819;
(6) invoking, by TPEM 806, the login() method on the BasicLogin contract and passing in the user name and hashed password, as noted at message flow path 820;
(7) checking, by TPDM 808-1, the login credentials and generating an authorization token as noted at block 822;
(8) running, by TPDM 808-1, a consensus protocol (e.g., RAFT) across the plurality of TPDMs 808-1 to 808-K, as noted at block 824;
(9) returning, by TPDM 808-1, the authorization token to TPEM 806 as noted at message flow path 825;
(10) returning, by TPEM 806 to the tenant's remote shell 802, an indication that the login was successful, resulting in access being granted, as noted at message flow path 826;
(11) passing, by TPEM 806, control to the data center's shell server agent 810 on the same or another data center server, as noted at block 828;
(12) establishing secure access path 830 between the remote shell 802 and the data center shell server 810; and
(13) consuming/receiving services or resources and charging therefor as noted at service session 832.
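Expressed in Solidity terms purely for illustration (the helper contract, the supportsType() guard and the explicit address conversion are assumptions carried over from the interface sketches above, whereas in practice the TPEM performs these steps via RPC from the leaf server), the lookup, typesafe cast and login of steps (2) through (7) might be captured as follows:

    pragma solidity ^0.4.21;

    import "./TenantManagement.sol";   // brings in the sketched Service and BasicLogin types

    // Illustrative helper showing the lookup, typesafe cast and login of the
    // above message flow expressed against the sketched contracts.
    contract LoginHelper {
        function loginTenant(TenantManagement tm, string name, bytes32 hashedPassword)
            public
            returns (bytes32)
        {
            Service svc = tm.lookup(name);                        // steps (2)-(4): fetch contract
            require(svc.supportsType(keccak256("BasicLogin")));   // guard the cast
            BasicLogin bl = BasicLogin(address(svc));             // step (5): typesafe cast
            return bl.login(name, hashedPassword);                // steps (6)-(7): token, or 0 on failure
        }
    }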
Skilled artisans will recognize that additional and/or alternative services may be provided by writing smart contracts that extend the Service contract type to suit different tenants' requirements, constraints, policies, etc.
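By way of a purely hypothetical illustration, the contract below derives a streaming-video service of the kind mentioned earlier from the BasicLogin type sketched above; the contract name, constructor parameters and per-minute rate are assumptions for the sketch only.

    pragma solidity ^0.4.21;

    import "./BasicLogin.sol";   // reuse the login-based contract sketched earlier (path illustrative)

    // Illustrative extension: a streaming-video service charging per minute viewed.
    contract StreamingVideo is BasicLogin {
        uint public ratePerMinute;   // credit units charged per minute of video

        function StreamingVideo(string name, bytes32 hashedPassword,
                                string paymentCredentials, uint credit,
                                uint _diskQuota, uint _networkQuota, uint _rate)
            public
            BasicLogin(name, hashedPassword, paymentCredentials, credit,
                       _diskQuota, _networkQuota)
        {
            ratePerMinute = _rate;
            isA[keccak256("StreamingVideo")] = true;
        }

        // charge for a viewing session; returns false (deny) if credit is insufficient
        function chargeForViewing(uint minutesViewed) public returns (bool) {
            return charge(minutesViewed * ratePerMinute);
        }
    }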
Turning to
Two of the exemplary ND implementations in
The special-purpose network device 1002 includes appropriate hardware 1010 (e.g., custom or application-specific hardware) comprising compute resource(s) 1012 (which typically include a set of one or more processors), forwarding resource(s) 1014 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 1016 (sometimes called physical ports), as well as non-transitory machine readable storage media 1018 having stored therein suitable application-specific software or program instructions 1020 (e.g., switching, routing, call processing, etc). A physical NI is a piece of hardware in an ND through which a network connection (e.g., wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC)) is made, such as those shown by the connectivity between NDs 1000A-H. During operation, the application software 1020 may be executed by the hardware 1010 to instantiate a set of one or more application-specific or custom software instance(s) 1022. Each of the custom software instance(s) 1022, and that part of the hardware 1010 that executes that application software instance (be it hardware dedicated to that application software instance and/or time slices of hardware temporally shared by that application software instance with others of the application software instance(s) 1022), form a separate virtual network element 1030A-R. Each of the virtual network element(s) (VNEs) 1030A-R includes a control communication and configuration module 1032A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 1034A-R with respect to suitable application/service instances 1033A-R, such that a given virtual network element (e.g., 1030A) includes the control communication and configuration module (e.g., 1032A), a set of one or more forwarding table(s) (e.g., 1034A), and that portion of the application hardware 1010 that executes the virtual network element (e.g., 1030A) for supporting one or more suitable application instances 1033A, e.g., tenant enrollment, TPDM and/or TPEM functionality, blockchain logic, consensus protocols, smart contracts execution, and the like in relation to an TMS architecture/subsystem virtualization.
In an example implementation, the special-purpose network device 1002 is often physically and/or logically considered to include: (1) a ND control plane 1024 (sometimes referred to as a control plane) comprising the compute resource(s) 1012 that execute the control communication and configuration module(s) 1032A-R; and (2) a ND forwarding plane 1026 (sometimes referred to as a forwarding plane, a data plane, or a bearer plane) comprising the forwarding resource(s) 1014 that utilize the forwarding or destination table(s) 1034A-R and the physical NIs 1016. By way of example, where the ND is a data center resource node, the ND control plane 1024 (the compute resource(s) 1012 executing the control communication and configuration module(s) 1032A-R) is typically responsible for participating in controlling how bearer traffic (e.g., voice/data/video) is to be routed. Likewise, the ND forwarding plane 1026 is responsible for receiving that data on the physical NIs 1016 (e.g., similar to I/Fs 912 and 914 described hereinabove) and forwarding that data out the appropriate ones of the physical NIs 1016 based on the forwarding table(s) 1034A-R.
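As a purely illustrative sketch of the control-plane/forwarding-plane split described above, the minimal Python model below shows a control plane programming entries into a forwarding table that the forwarding plane then consults to select an outgoing physical NI. The class and attribute names are assumptions for illustration, not the specific ND implementation, and longest-prefix matching is simplified to an exact lookup.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class ForwardingPlane:
    """Simplified forwarding plane: maps destination prefixes to physical NIs."""
    forwarding_table: Dict[str, str] = field(default_factory=dict)  # prefix -> outgoing NI

    def forward(self, destination: str) -> Optional[str]:
        # Real hardware would perform longest-prefix matching; an exact lookup
        # suffices to illustrate the separation of concerns.
        return self.forwarding_table.get(destination)


@dataclass
class ControlPlane:
    """Simplified control plane that decides routing and programs the forwarding plane."""
    forwarding_plane: ForwardingPlane

    def program_route(self, destination: str, out_ni: str) -> None:
        # Control communication/configuration logic pushes its decision downward.
        self.forwarding_plane.forwarding_table[destination] = out_ni


# Usage: the control plane decides how traffic is routed; the forwarding plane moves it.
fp = ForwardingPlane()
cp = ControlPlane(fp)
cp.program_route("10.0.1.0/24", "NI-a")
assert fp.forward("10.0.1.0/24") == "NI-a"
```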
Returning to the general purpose network device implementation, such a COTS-based device includes hardware 1040 having one or more NIC(s) 1044, on which a virtualization layer 1054 and software containers 1062A-R may be implemented to execute one or more sets of one or more applications 1064A-R.
The instantiation of the one or more sets of one or more applications 1064A-R, as well as the virtualization layer 1054 and software containers 1062A-R if implemented, are collectively referred to as software instance(s) 1052. Each set of applications 1064A-R, corresponding software container 1062A-R if implemented, and that part of the hardware 1040 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared by software containers 1062A-R), forms a separate virtual network element(s) 1060A-R.
The virtual network element(s) 1060A-R perform similar functionality to the virtual network element(s) 1030A-R, e.g., similar to the control communication and configuration module(s) 1032A and forwarding table(s) 1034A (this virtualization of the hardware 1040 is sometimes referred to as NFV architecture, as mentioned above). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and customer premise equipment (CPE). However, different embodiments of the invention may implement one or more of the software container(s) 1062A-R differently. For example, while embodiments of the invention may be practiced in an arrangement wherein each software container 1062A-R corresponds to one VNE 1060A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of software containers 1062A-R to VNEs also apply to embodiments where such a finer level of granularity is used.
In certain embodiments, the virtualization layer 1054 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between software containers 1062A-R and the NIC(s) 1044, as well as optionally between the software containers 1062A-R. In addition, this virtual switch may enforce network isolation between the VNEs 1060A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).
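The following minimal sketch, with assumed names and a deliberately simplified membership model rather than an actual virtual switch implementation, illustrates how such a virtual switch might honor VLAN membership to enforce isolation between VNEs:

```python
from typing import Dict, Set


class VirtualSwitch:
    """Toy virtual switch that forwards frames only between ports in the same VLAN."""

    def __init__(self) -> None:
        self.vlan_membership: Dict[str, Set[int]] = {}  # port (container/NIC) -> VLAN IDs

    def attach(self, port: str, vlans: Set[int]) -> None:
        self.vlan_membership[port] = vlans

    def may_forward(self, src_port: str, dst_port: str, vlan_id: int) -> bool:
        # Forward only if both endpoints are members of the frame's VLAN,
        # thereby enforcing isolation between VNEs by policy.
        return (vlan_id in self.vlan_membership.get(src_port, set())
                and vlan_id in self.vlan_membership.get(dst_port, set()))


vswitch = VirtualSwitch()
vswitch.attach("container-A", {100})
vswitch.attach("container-B", {200})
vswitch.attach("nic-uplink", {100, 200})
assert vswitch.may_forward("container-A", "nic-uplink", 100) is True
assert vswitch.may_forward("container-A", "container-B", 100) is False
```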
The third exemplary ND implementation is a hybrid network device 1006, which combines aspects of both the special-purpose network device 1002 and the general purpose (COTS) network device within a single ND.
Regardless of the above exemplary implementations of an ND, when a single one of multiple VNEs implemented by an ND is being considered (e.g., only one of the VNEs is part of a given virtual network) or where only a single VNE is currently being implemented by an ND, the shortened term network element (NE) is sometimes used to refer to that VNE. Also in all of the above exemplary implementations, each of the VNEs (e.g., VNE(s) 1030A-R, VNEs 1060A-R, and those in the hybrid network device 1006) receives data on the physical NIs (e.g., 1016, 1046) and forwards that data out the appropriate ones of the physical NIs (e.g., 1016, 1046).
Furthermore, an example NFV implementation such as the one described above may also be integrated or otherwise associated with a metrics/charging system component 1055, at least parts of which may be interfaced to various components, e.g., TMS 1033A, compute resources 1012, virtualization layers 1054, etc., depending on whether special purpose or COTS network devices are used.
It will be recognized that communication latencies between the data centers 1102-1 to 1102-K may determine whether real-time charging transactions can be processed. In one implementation, the distributed data center environment 1100 may be architected such that communication plus processing latencies are under a preconfigured timeout (e.g., 20-second TCP timeout) for effectuating real-time charging.
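As a simple illustration of the latency-budget consideration noted above, a deployment check might verify that the worst-case inter-data-center round trip plus processing time stays under the configured timeout before enabling real-time charging. The function name and the example figures below are hypothetical; only the 20-second timeout mirrors the example given above.

```python
def real_time_charging_feasible(rtt_ms: float, processing_ms: float,
                                timeout_ms: float = 20_000.0) -> bool:
    """Return True if communication plus processing latency fits within the timeout.

    The 20-second default mirrors the example TCP timeout mentioned above;
    actual budgets would be configured per deployment.
    """
    return (rtt_ms + processing_ms) < timeout_ms


# Example: 150 ms WAN round trip between data centers plus 400 ms of
# consensus/contract-execution time easily fits a 20 s budget.
print(real_time_charging_feasible(rtt_ms=150.0, processing_ms=400.0))  # True
```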
It will be apparent upon reference hereto that an embodiment of a tenant management scheme comprising the smart contract framework in conjunction with a distributed digital ledger as disclosed herein can be extended to other data center services, including higher level services such as media access, VoIP, etc. To do so, a contract type may be written, e.g., inheriting from the Service contract type, with additional service interface contract types being written where necessary, such as the Login service contract type. A concrete contract type may then implement the new methods relative to the new services on the contract.
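For instance, a new higher-level service could be added by writing a contract type that inherits from the Service type and, where needed, a service interface contract type analogous to the Login interface. The sketch below shows the general shape; the MediaAccess names, methods and charging rule are hypothetical, and the illustrative Service base from the earlier sketch is restated so the example is self-contained.

```python
from typing import Dict, Set


class Service:
    """Illustrative Service contract base type (same assumption as the earlier sketch)."""
    def authorize(self, tenant_id: str) -> bool:
        raise NotImplementedError


class MediaAccessInterface(Service):
    """Hypothetical service interface contract type for a media/VoIP service."""
    def start_session(self, tenant_id: str, codec: str) -> str:
        raise NotImplementedError

    def stop_session(self, session_id: str) -> int:
        raise NotImplementedError


class BasicMediaAccess(MediaAccessInterface):
    """Concrete contract type implementing the new methods for the new service."""

    def __init__(self, authorized_tenants: Set[str], rate_per_minute: int) -> None:
        self.authorized_tenants = authorized_tenants
        self.rate_per_minute = rate_per_minute
        self.sessions: Dict[str, Dict[str, object]] = {}

    def authorize(self, tenant_id: str) -> bool:
        return tenant_id in self.authorized_tenants

    def start_session(self, tenant_id: str, codec: str) -> str:
        session_id = f"{tenant_id}:{len(self.sessions) + 1}"
        self.sessions[session_id] = {"codec": codec, "minutes": 0}
        return session_id

    def stop_session(self, session_id: str) -> int:
        # Returns the charge for the session; charging transactions would be
        # recorded on the blockchain ledger after consensus.
        session = self.sessions.pop(session_id)
        return int(session["minutes"]) * self.rate_per_minute
```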
As noted above, various hardware and software blocks configured for effectuating a TMS architecture for a localized data center or a distributed collection of data centers may be embodied in NDs, NEs, NFs, VNE/VNF/VND, virtual appliances, virtual machines, and the like, as well as electronic devices and machine-readable media, which may be configured as any of the apparatuses described herein (e.g., without limitation, the example network devices and TMS nodes described hereinabove).
An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals—such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors (e.g., wherein a processor is a microprocessor, controller, microcontroller, central processing unit, digital signal processor, application specific integrated circuit, field programmable gate array, other electronic circuitry, a combination of one or more of the preceding) coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) (NI(s)) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices. For example, the set of physical NIs (or the set of physical NI(s) in combination with the set of processors executing code) may perform any formatting, coding, or translating to allow the electronic device to send and receive data whether over a wired and/or a wireless connection. In some embodiments, a physical NI may comprise radio circuitry capable of receiving data from other electronic devices over a wireless connection or channel and/or sending data out to other devices via a wireless connection or channel. This radio circuitry may include transmitter(s), receiver(s), and/or transceiver(s) suitable for radiofrequency communication. The radio circuitry may convert digital data into a radio signal having the appropriate parameters (e.g., frequency, timing, channel, bandwidth, etc.). The radio signal may then be transmitted via antennas to the appropriate recipient(s).
In some embodiments, the set of physical NI(s) may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, or local area network (LAN) adapter. The NIC(s) may facilitate in connecting the electronic device to other electronic devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC. One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.
A network device (ND) or network element (NE) as set forth hereinabove is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices, etc.). Some network devices are “multiple services network devices” that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video). The apparatus, and method performed thereby, of the present invention may be embodied in one or more ND/NE nodes that may be, in some embodiments, communicatively connected to other electronic devices on the network (e.g., other network devices, servers, nodes, terminals, etc.). The example NE/ND node may comprise processor resources, memory resources, and at least one interface. These components may work together to provide various TMS functionalities as disclosed herein.
Memory may store code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using non-transitory machine-readable (e.g., computer-readable) media, such as machine-readable storage media (e.g., magnetic disks, optical disks, solid state drives, ROM, flash memory devices, phase change memory) and machine-readable transmission media (e.g., electrical, optical, radio, acoustical or other form of propagated signals—such as carrier waves, infrared signals). For instance, memory may comprise non-volatile memory containing code to be executed by processor. Where memory is non-volatile, the code and/or data stored therein can persist even when the network device is turned off (when power is removed). In some instances, while network device is turned on that part of the code that is to be executed by the processor(s) may be copied from non-volatile memory into volatile memory of network device.
The at least one interface may be used in the wired and/or wireless communication of signaling and/or data to or from network device. For example, interface may perform any formatting, coding, or translating to allow network device to send and receive data whether over a wired and/or a wireless connection. In some embodiments, interface may comprise radio circuitry capable of receiving data from other devices in the network over a wireless connection and/or sending data out to other devices via a wireless connection. In some embodiments, interface may comprise network interface controller(s) (NICs), also known as a network interface card, network adapter, local area network (LAN) adapter or physical network interface. The NIC(s) may facilitate in connecting the network device to other devices allowing them to communicate via wire through plugging in a cable to a physical port connected to a NIC. As explained above, in particular embodiments, the processor may represent part of interface, and some or all of the functionality described as being provided by interface may be provided more specifically by processor.
The components of the network device are each depicted as separate boxes located within a single larger box for reasons of simplicity in describing certain aspects and features of the network device disclosed herein. In practice, however, one or more of the components illustrated in the example network device may comprise multiple different physical elements.
One or more embodiments described herein may be implemented in the network device by means of a computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the actions according to any of the invention's features and embodiments, where appropriate. While the modules are illustrated as being implemented in software stored in memory, other embodiments implement part or all of each of these modules in hardware.
In one embodiment, the software implements the modules described with regard to the Figures herein. During operation, the software may be executed by the hardware to instantiate a set of one or more software instance(s). Each of the software instance(s), and that part of the hardware that executes that software instance (be it hardware dedicated to that software instance, hardware in which a portion of available physical resources (e.g., a processor core) is used, and/or time slices of hardware temporally shared by that software instance with others of the software instance(s)), form a separate virtual network element. Thus, in the case where there are multiple virtual network elements, each operates as one of the network devices.
Some of the described embodiments may also be used where various levels or degrees of virtualization have been implemented. In certain embodiments, one, some or all of the applications relating to a TMS architecture may be implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application. As a unikernel can be implemented to run directly on hardware, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by the virtualization layer, unikernels running within software containers represented by software instances, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).
The instantiation of the one or more sets of one or more applications, as well as virtualization if implemented are collectively referred to as software instance(s). Each set of applications, corresponding virtualization construct if implemented, and that part of the hardware that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared by software containers), forms a separate virtual network element(s).
A virtual network is a logical abstraction of a physical network that provides network services (e.g., L2 and/or L3 services). A virtual network can be implemented as an overlay network (sometimes referred to as a network virtualization overlay) that provides network services (e.g., Layer 2 (L2, data link layer) and/or Layer 3 (L3, network layer) services) over an underlay network (e.g., an L3 network, such as an Internet Protocol (IP) network that uses tunnels (e.g., generic routing encapsulation (GRE), layer 2 tunneling protocol (L2TP), IPSec) to create the overlay network).
A network virtualization edge (NVE) sits at the edge of the underlay network and participates in implementing the network virtualization; the network-facing side of the NVE uses the underlay network to tunnel frames to and from other NVEs; the outward-facing side of the NVE sends and receives data to and from systems outside the network. A virtual network instance (VNI) is a specific instance of a virtual network on a NVE (e.g., a NE/VNE on an ND, a part of a NE/VNE on a ND where that NE/VNE is divided into multiple VNEs through emulation); one or more VNIs can be instantiated on an NVE (e.g., as different VNEs on an ND). A virtual access point (VAP) is a logical connection point on the NVE for connecting external systems to a virtual network; a VAP can be a physical or virtual port identified through a logical interface identifier (e.g., a VLAN ID).
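A minimal data-structure sketch of how an NVE might associate VAPs, identified here by VLAN-ID logical interface identifiers, with VNIs is shown below; the class and field names are assumptions, and a real NVE would additionally perform the tunnel encapsulation toward other NVEs over the underlay network.

```python
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class NVE:
    """Toy network virtualization edge: maps virtual access points to virtual network instances."""
    vap_to_vni: Dict[int, str] = field(default_factory=dict)  # VLAN ID (VAP) -> VNI name

    def add_vap(self, vlan_id: int, vni: str) -> None:
        self.vap_to_vni[vlan_id] = vni

    def classify(self, vlan_id: int) -> str:
        # An ingress frame arriving on a VAP is mapped to its virtual network instance;
        # it would then be tunneled over the underlay toward the destination NVE.
        return self.vap_to_vni[vlan_id]


nve = NVE()
nve.add_vap(vlan_id=100, vni="tenant-a-l2-vni")
nve.add_vap(vlan_id=200, vni="tenant-b-l3-vni")
assert nve.classify(100) == "tenant-a-l2-vni"
```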
Examples of network services also include: 1) an Ethernet LAN emulation service (an Ethernet-based multipoint service similar to an Internet Engineering Task Force (IETF) Multiprotocol Label Switching (MPLS) or Ethernet VPN (EVPN) service) in which external systems are interconnected across the network by a LAN environment over the underlay network (e.g., an NVE provides separate L2 VNIs (virtual switching instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network); and 2) a virtualized IP forwarding service (similar to IETF IP VPN (e.g., Border Gateway Protocol (BGP)/MPLS IPVPN) from a service definition perspective) in which external systems are interconnected across the network by an L3 environment over the underlay network (e.g., an NVE provides separate L3 VNIs (forwarding and routing instances) for different such virtual networks, and L3 (e.g., IP/MPLS) tunneling encapsulation across the underlay network). Example network services that may be hosted by a data center may also include quality of service capabilities (e.g., traffic classification marking, traffic conditioning and scheduling), security capabilities (e.g., filters to protect customer premises from network-originated attacks, to avoid malformed route announcements), and management capabilities (e.g., full detection and processing).
Embodiments of a TMS architecture may involve distributed routing, centralized routing, or a combination thereof. The distributed approach distributes responsibility for generating the reachability and forwarding information across the NEs; in other words, the process of neighbor discovery and topology discovery is distributed. For example, where the network device is a traditional router, the control communication and configuration module(s) of the ND control plane typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE))) that communicate with other NEs to exchange routes, and then selects those routes based on one or more routing metrics. Thus, the NEs perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability within the network and calculating their respective forwarding information. Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane. The ND control plane programs the ND forwarding plane with information (e.g., adjacency and route information) based on the routing structure(s). For example, the ND control plane programs the adjacency and route information into one or more forwarding table(s) (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane. For Layer 2 forwarding, the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device, the same distributed approach can be implemented on a general purpose network device and a hybrid network device, e.g., as exemplified in the embodiments described hereinabove.
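The following sketch captures the distributed-routing flow described above in simplified form: each NE selects among learned routes by metric into a RIB and then programs the resulting next hops into a FIB used by the forwarding plane. The data structures and the lowest-metric selection rule are illustrative assumptions rather than any particular routing protocol's behavior.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class Route:
    prefix: str
    next_hop: str
    out_ni: str
    metric: int


def build_rib(learned_routes: List[Route]) -> Dict[str, Route]:
    """Select the best route per prefix (lowest metric), as a routing protocol might."""
    rib: Dict[str, Route] = {}
    for route in learned_routes:
        best = rib.get(route.prefix)
        if best is None or route.metric < best.metric:
            rib[route.prefix] = route
    return rib


def program_fib(rib: Dict[str, Route]) -> Dict[str, Tuple[str, str]]:
    """Program the forwarding plane: prefix -> (next hop, outgoing physical NI)."""
    return {prefix: (r.next_hop, r.out_ni) for prefix, r in rib.items()}


routes = [
    Route("10.1.0.0/16", "192.0.2.1", "NI-a", metric=20),
    Route("10.1.0.0/16", "192.0.2.9", "NI-b", metric=10),
]
fib = program_fib(build_rib(routes))
assert fib["10.1.0.0/16"] == ("192.0.2.9", "NI-b")
```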
Skilled artisans will further recognize that an example TMS arrangement may also be implemented using various SDN architectures based on known protocols such as, e.g., OpenFlow protocol or Forwarding and Control Element Separation (ForCES) protocol, etc. Regardless of whether distributed or centralized networking is implemented with respect to data center management, some NDs may be configured to include functionality for authentication, authorization, and accounting (AAA) protocols (e.g., RADIUS (Remote Authentication Dial-In User Service), Diameter, and/or TACACS+ (Terminal Access Controller Access Control System Plus)), which may interoperate with TPEM/TPDM functionalities of the TMS. AAA can be provided through a client/server model, where the AAA client is implemented on a ND and the AAA server can be implemented either locally on the ND or on a remote electronic device coupled with the ND. Authentication is the process of identifying and verifying a subscriber. For instance, a subscriber/tenant might be identified by a combination of a username and a password or through a unique key. Authorization determines what a subscriber can do after being authenticated, such as gaining access to certain electronic device information resources (e.g., through the use of access control policies). Accounting is recording user activity. By way of a summary example, end user devices may be coupled (e.g., through an access network) through an edge ND (supporting AAA processing) coupled to core NDs coupled to electronic devices implementing servers of service/content providers. AAA processing is performed to identify for a subscriber the subscriber record stored in the AAA server for that subscriber. A subscriber record includes a set of attributes (e.g., subscriber name, password, authentication information, access control information, rate-limiting information, policing information) used during processing of that subscriber's traffic.
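As a hedged sketch of the AAA model summarized above, a subscriber record and the basic authentication, authorization and accounting checks an edge ND might perform, and which could interoperate with TPEM/TPDM decisions, could be modeled as follows; all identifiers and field choices are hypothetical illustrations rather than any particular AAA server's schema.

```python
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class SubscriberRecord:
    """Attributes consulted while processing a subscriber's traffic."""
    subscriber_name: str
    password_hash: str
    allowed_services: Set[str] = field(default_factory=set)  # access control information
    rate_limit_kbps: int = 0                                  # rate-limiting information
    activity_log: List[str] = field(default_factory=list)    # accounting records


def authenticate(record: SubscriberRecord, name: str, password_hash: str) -> bool:
    """Authentication: identify and verify the subscriber."""
    return record.subscriber_name == name and record.password_hash == password_hash


def authorize(record: SubscriberRecord, service: str) -> bool:
    """Authorization: determine what the subscriber may access once authenticated."""
    return service in record.allowed_services


def account(record: SubscriberRecord, event: str) -> None:
    """Accounting: record user activity for later charging/auditing."""
    record.activity_log.append(event)
```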
Certain NDs (e.g., certain edge NDs) internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits. A subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session. Thus, a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly de-allocates that subscriber circuit when that subscriber disconnects. Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM. A subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, a dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS) or Media Access Control (MAC) address tracking). For example, the point-to-point protocol (PPP) is commonly used for digital subscriber line (DSL) services and requires installation of a PPP client that enables the subscriber to enter a username and a password, which in turn may be used to select a subscriber record. When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided. The use of DHCP and CLIPS on the ND captures the MAC addresses and uses these addresses to distinguish subscribers and access their subscriber records.
Accordingly, one skilled in the art will recognize that various apparatuses and systems with respect to the foregoing embodiments, as well as the underlying network infrastructures set forth above, may be architected in a virtualized environment according to a suitable NFV architecture in additional or alternative embodiments of the present patent disclosure. For instance, various physical resources, databases, services, applications and functions supported in a TMS-based data center set forth hereinabove may be provided as virtual appliances, machines or functions, wherein the resources and applications are virtualized into suitable virtual network functions (VNFs) or virtual network elements (VNEs) via a suitable virtualization layer whose overall management and orchestration functionality may be supported by a virtualized infrastructure manager (VIM) in conjunction with a VNF manager and an NFV orchestrator. An Operation Support System (OSS) and/or Business Support System (BSS) component may typically be provided for handling network-level functionalities such as network management, fault management, configuration management, service management, and subscriber management, etc., which may interface with the VNF layer and NFV orchestration components via suitable interfaces.
Furthermore, skilled artisans will also appreciate that such an example cloud-computing data center environment may comprise one or more of private clouds, public clouds, hybrid clouds, community clouds, distributed clouds, multiclouds and interclouds (e.g., “cloud of clouds”), and the like.
In the above-description of various embodiments of the present disclosure, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
At least some example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. Such computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, so that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s). Additionally, the computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks.
As pointed out previously, tangible, non-transitory computer-readable medium may include an electronic, magnetic, optical, electromagnetic, or semiconductor data storage system, apparatus, or device. More specific examples of the computer-readable medium would include the following: a portable computer diskette, a random access memory (RAM) circuit, a ROM circuit, an erasable programmable read-only memory (EPROM or Flash memory) circuit, a portable compact disc read-only memory (CD-ROM), and a portable digital video disc read-only memory (DVD/Blu-ray). The computer program instructions may also be loaded onto or otherwise downloaded to a computer and/or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer and/or other programmable apparatus to produce a computer-implemented process. Accordingly, embodiments of the present invention may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor or controller, which may collectively be referred to as “circuitry,” “a module” or variants thereof. Further, an example processing unit may include, by way of illustration, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine. As can be appreciated, an example processor unit may employ distributed processing in certain embodiments.
Further, in at least some additional or alternative implementations, the functions/acts described in the blocks may occur out of the order shown in the flowcharts. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Moreover, the functionality of a given block of the flowcharts and/or block diagrams may be separated into multiple blocks and/or the functionality of two or more blocks of the flowcharts and/or block diagrams may be at least partially integrated. Furthermore, although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction relative to the depicted arrows. Finally, other blocks may be added/inserted between the blocks that are illustrated.
It should therefore be clearly understood that the order or sequence of the acts, steps, functions, components or blocks illustrated in any of the flowcharts depicted in the drawing Figures of the present disclosure may be modified, altered, replaced, customized or otherwise rearranged within a particular flowchart, including deletion or omission of a particular act, step, function, component or block. Moreover, the acts, steps, functions, components or blocks illustrated in a particular flowchart may be inter-mixed or otherwise inter-arranged or rearranged with the acts, steps, functions, components or blocks illustrated in another flowchart in order to effectuate additional variations, modifications and configurations with respect to one or more processes for purposes of practicing the teachings of the present patent disclosure.
Although various embodiments have been shown and described in detail, the claims are not limited to any particular embodiment or example. None of the above Detailed Description should be read as implying that any particular component, element, step, act, or function is essential such that it must be included in the scope of the claims. Reference to an element in the singular is not intended to mean “one and only one” unless explicitly so stated, but rather “one or more.” All structural and functional equivalents to the elements of the above-described embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Accordingly, those skilled in the art will recognize that the exemplary embodiments described herein can be practiced with various modifications and alterations within the spirit and scope of the claims appended below.
This nonprovisional application claims priority based upon the following prior United States provisional patent application(s): (i) “APPARATUS AND METHOD FOR MANAGING TENANT ACCOUNTING POLICY AND RECORDS IN A CLOUD EXECUTION ENVIRONMENT,” Application No. 62/546,225, filed Aug. 16, 2017, in the name(s) of James Kempf, Joacim Halen and Tomas Mecklin; each of which is hereby incorporated by reference in its entirety.