This application claims priority from European Application No. 16305801.9, entitled “METHOD AND DEVICE FOR PROCESSING, AT A NETWORK EQUIPMENT, A PROCESSING REQUEST FROM A TERMINAL,” filed on Jun. 30, 2016, the contents of which are hereby incorporated by reference in their entirety.
The present disclosure generally relates to network virtualization, and more particularly to network virtualization services and functions associated with cloud commodity computing hardware.
This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
A residential or corporate gateway is a network equipment interfacing a LAN (Local Area Network) to the Internet. Such equipment may usually provide—in addition to being a cable, DSL (Digital Subscriber Line) or fiber modem—different features such as router, DNS (Domain Name System) proxy, local DHCP (Dynamic Host Configuration Protocol) server, wireless access point, firewall, DynDNS, bridge, etc.
The development of cloud technologies (such as the virtualization of network functions) allows the emergence of a new architecture for Internet access, wherein services running in the residential gateway are moved into the NSP's (Network Service Provider) datacenter. By reducing the complexity of the residential gateway, NSPs hope to shorten the time to market for deploying new services and to ease troubleshooting operations.
Network Function Virtualization (NFV) enables the provision of network functions for home or corporate gateways directly from the NSP's facility in a cloud provisioning manner. Virtual Customer Premise Equipment (VCPE) is part of the so-called Network Function Virtualization paradigm, which is about executing network functions (e.g. router, Deep Packet Inspection, DNS server, firewall) on commoditized hardware hosting a virtual machine infrastructure (e.g. a private or public cloud infrastructure) instead of requiring specific purpose-built hardware. To that end, the home gateway acts as a bridge (BRG) and needs to connect to a virtual gateway (VG) in the cloud to reach the hosts where the network functions are provisioned and run, even for basic functions such as DHCP, firewall, DNS and UI (User Interface).
Nevertheless, the virtualization of network functions (such as the DHCP function) raises some constraints, because such functions, unlike applicative functions, were not originally designed to fully scale with cloud infrastructure. Consequently, virtual network function (VNF) processing should also conform to a stateless paradigm to allow vertical and horizontal scaling at any time.
In particular, as already known, DHCP functions can be threefold:
Besides, DHCP processing is defined by a set of constraints:
When the renew time T1 has elapsed, the client can send a unicast DHCP renewal request to the previous server having provided an IP address. When the previous server does not respond, the client waits for the rebind time T2 and then sends a broadcast DHCP Request to reach a DHCP server on the network. The DHCP server should save the leases until and after expiration of the rebind time. In addition:
—the same client, when it requests or renews a lease, expects to get the same IP address previously allocated to it;
—the DHCP server can offer IP addresses from the same subnet for any customer, and the IP address leases must belong to each particular customer's lease pool.
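The renewal timeline described above can be sketched as follows. The text does not fix the positions of T1 and T2 within the lease time; the values used here (T1 = 0.5 and T2 = 0.875 of the lease duration) are the conventional RFC 2131 defaults and are an assumption of this sketch.

```python
# Sketch of the DHCP renewal timeline described above.
# T1/T2 defaults (0.5 and 0.875 of the lease time) follow RFC 2131
# conventions; the text itself does not specify these values.

def lease_deadlines(start: float, lease_time: float):
    """Return (t1, t2, expiry) absolute times for a lease."""
    t1 = start + 0.5 * lease_time       # unicast renewal to the previous server
    t2 = start + 0.875 * lease_time     # broadcast rebind to any server
    expiry = start + lease_time
    return t1, t2, expiry

def next_action(now: float, start: float, lease_time: float) -> str:
    """Which message the client should send at time `now`."""
    t1, t2, expiry = lease_deadlines(start, lease_time)
    if now < t1:
        return "none"               # lease still fresh
    if now < t2:
        return "unicast-renew"      # DHCPREQUEST to the previous server
    if now < expiry:
        return "broadcast-rebind"   # DHCPREQUEST broadcast on the network
    return "restart-discover"       # lease expired, back to DHCPDISCOVER
```

For a lease of 100 seconds starting at t=0, the client stays silent until t=50, unicasts a renewal until t=87.5, broadcasts a rebind until t=100, and then starts over with a DISCOVER.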
Thus, the basic architecture to provide a mutualized DHCP service would be to create a particular DHCP virtual network function per customer and then to deploy the resulting list of VNFs on hosts or clusters (e.g. the VNFs of customers #1 to #100 on Host #1, then those of customers #101 to #200 on Host #2, and so on). This way of scaling (usually called “sharding”) is intensively used for scaling some types of databases.
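The range-based sharding just described can be sketched as a simple mapping from customer id to host, packing 100 customers per host as in the example (the constant and the host naming are taken from the example, not mandated by the text):

```python
# Sketch of the range-based sharding described above: per-customer DHCP
# VNFs are packed 100 per host (customers #1-#100 on Host #1,
# customers #101-#200 on Host #2, and so on).

CUSTOMERS_PER_HOST = 100  # taken from the example in the text

def host_for_customer(customer_id: int) -> str:
    """Map a 1-based customer id to the host running its DHCP VNF."""
    host_index = (customer_id - 1) // CUSTOMERS_PER_HOST + 1
    return f"Host #{host_index}"
```

This static placement is what makes the approach rigid: moving a customer to another host requires reconfiguring the mapping, which is one motivation for the stateless design presented later.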
For instance, KEA (http://kea.isc.org/wiki) is a current DHCP service that is able to serve DHCP leases from a lease database server. For performance reasons, each new lease, including a renewed lease, corresponds to a new row in the lease database. An automatic lease-file cleanup service is then required to purge the database of old lease rows that are no longer used.
Nevertheless, none of the above-mentioned solutions can provide a fully stateless mutualized DHCP function, at least for the following reasons:
Therefore, there is a need to provide a stateless DHCP service on a per-customer basis.
The disclosure concerns a method for processing, at a network equipment, a processing request from a terminal configured to be associated with a network to which the network equipment can be connected. In particular, said method comprises, at the network equipment:
Thus, each processing request can be served by any processing unit on a message basis. A message that is part of a processing request from a given terminal can be processed by one processing unit, while a subsequent message of said processing request can be processed by another processing unit.
In an embodiment, said method can further comprise an update of the state of the processing request in the database unit, after processing of the received message.
In an embodiment, the processing can comprise sending a response to the terminal, said response depending on the state of the processing request.
In an embodiment, when a discrepancy is detected between the received message and the corresponding state of the processing request, the received message can be dropped.
In an embodiment, said method can further comprise, when an update of the state of the processing request in the database unit fails due to a concurrent update by another processing unit of the network equipment:
In an embodiment, the processing request can be a DHCP request configured to obtain an IP address.
In this embodiment, the received message can be either a DISCOVER message or a REQUEST message.
In an embodiment, the network identification information can specify a VxLAN identification number associated with said network.
In an embodiment, the retrieved context information can further comprise at least one of the following information:
The present disclosure further concerns a network equipment for processing a processing request from a terminal configured to be associated with a network to which the network equipment can be connected. Said network equipment can comprise:
In an embodiment, the database unit can be configured to update the state of the processing request, after the processing of the received message.
In an embodiment, the processing unit can be further configured to send a response to the terminal, said response depending on the state of the processing request.
In an embodiment, when a discrepancy is detected between the received message and the corresponding state of the processing request, the received message can be dropped.
The present disclosure also concerns a network equipment for processing a processing request from a terminal configured to be associated with a network to which the network equipment can be connected. Said network equipment can comprise at least one memory and at least one processing circuitry configured to perform, at the network equipment:
Besides, the present disclosure is further directed to a non-transitory program storage device, readable by a computer, tangibly embodying a program of instructions executable by the computer to perform a method for processing, at a network equipment, a processing request from a terminal configured to be associated with a network to which the network equipment can be connected, said method comprising:
The present disclosure also concerns a computer program product which is stored on a non-transitory computer readable medium and comprises program code instructions executable by a processor for implementing a method for processing, at a network equipment, a processing request from a terminal configured to be associated with a network to which the network equipment can be connected, said method comprising:
The method according to the disclosure may be implemented in software on a programmable apparatus, solely in hardware, or in a combination of both.
Some processes implemented by elements of the present disclosure may be computer implemented. Accordingly, such elements may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as “circuit”, “module” or “system”. Furthermore, such elements may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since elements of the present disclosure can be implemented in software, the present disclosure can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device and the like.
The disclosure thus provides a computer-readable program comprising computer-executable instructions to enable a computer to perform the method for processing, at a network equipment, a processing request according to the disclosure.
Certain aspects commensurate in scope with the disclosed embodiments are set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of certain forms the disclosure might take and that these aspects are not intended to limit the scope of the disclosure. Indeed, the disclosure may encompass a variety of aspects that may not be set forth below.
The disclosure will be better understood and illustrated by means of the following embodiments and execution examples, in no way limitative, with reference to the appended figures, in which:
Wherever possible, the same reference numerals will be used throughout the figures to refer to the same or like parts.
The following description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its scope.
All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read only memory (ROM) for storing software, random access memory (RAM), and nonvolatile storage.
In the claims hereof, any element expressed as a means and/or module for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
In addition, it is to be understood that the figures and descriptions of the present disclosure have been simplified to illustrate elements that are relevant for a clear understanding of the present disclosure, while eliminating, for purposes of clarity, many other elements found in typical digital multimedia content delivery methods, devices and systems. However, because such elements are well known in the art, a detailed discussion of such elements is not provided herein. The disclosure herein is directed to all such variations and modifications known to those skilled in the art.
The left-hand side (represented by the broadband residential gateway 104 (BRG)) can be considered, in one embodiment, to be at the customer's premises, whereas the right-hand side (represented by the virtual gateway 100) can be located in a datacenter hosted, for instance, by a network operator. This datacenter can be distributed across multiple locations. In one embodiment, virtualized gateway functions can be mutualized to facilitate scaling and maintenance. The bridge 104 can be connected to a home (or business) network 105 (e.g. a private network) such as a LAN (Local Area Network) or a WAN (Wide Area Network).
Virtual gateway deployment can be managed by a service orchestrator (not shown in the Figures) which coordinates the compute and networking configuration from the broadband residential gateway 104 to the datacenter so as to manage virtual gateway migration, service addition/removal or adjustment of QoS policies.
As shown in
The virtual gateway 100 can be connected to one or several broadband residential gateways 104, each BRG 104 being further adapted to be connected to a LAN 105 comprising one or several terminals 106.
As shown in
Configuring the switching according to rules derived from customer-specific preferences can allow building a service chain, connecting the inputs and outputs of services. At a high level, gateway-directed traffic can be switched to the adequate services (DHCP, DNS), whereas WAN-directed traffic is switched toward NAT or firewall services.
Each service can be mutualized and parametrized with customer settings or implemented as a dedicated customer-specific service. In such a manner, new services can be trialed with a subset of customers before being fully deployed and mutualized. For computing-based mutualized services, using the VxLAN id as a derivation of the customer id (identifier) can allow retrieving the customer's context from a central database. For networking-based services, the orchestrator is able to configure per-customer differentiated services, which can rely on the VxLAN id for flow differentiation. Finally, different overlays can be built for each VxLAN id, resulting in customer-tailored service chaining. It is to be understood that the customer identifier can be carried in a specific field of the encapsulation header of the overlay network, such as a VXLAN/VXLAN-GPE identifier, a GRE identifier and so forth.
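The lookup described above can be sketched as follows. The text only says the VxLAN id is "a derivation" of the customer id; the identity mapping and the in-memory stand-in for the central database are illustrative assumptions of this sketch.

```python
# Sketch of using the VxLAN id carried in the overlay header as the key
# to the customer's context in a central database. The identity mapping
# and the in-memory "database" below are illustrative assumptions.

CONTEXT_DB = {
    42: {"customer": "cust-42", "subnet": "192.168.42.0/24", "lease_time": 3600},
}

def customer_id_from_vxlan(vxlan_id: int) -> int:
    # The text only states that the VxLAN id is a derivation of the
    # customer id; a 1:1 mapping is assumed here for simplicity.
    return vxlan_id

def get_customer_context(vxlan_id: int) -> dict:
    """Retrieve the customer's context from the central database."""
    return CONTEXT_DB[customer_id_from_vxlan(vxlan_id)]
```

Because every mutualized service can perform this same lookup, no service needs to keep per-customer state locally.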
The BRG 104 can be built around its own switching fabric that interconnects the different network ports. The BRG 104 can implement in particular the logic for enabling Generic Routing Encapsulation (GRE) tunneling between the BRG 104 and a VCPE host (not shown). The GRE tunneling can be configured through an existing procedure (following a provisioning operation performed by the operator), such as the one specified by the Broadband Forum. Once the BRG 104 is connected to the access network, after the physical attachment (e.g. xDSL), the BRG 104 can broadcast a DHCP request. This request is caught by the DHCP server residing in the first upstream IP-enabled device, which may be the Broadband Network Gateway (BNG, formerly the BRAS). The DHCP server (more exactly its associated AAA server) can authenticate the BRG 104 thanks to its MAC (Media Access Control) address and return the corresponding configuration. This configuration can include the BRG IP address and the GRE tunnel endpoint IP address, which is the IP address of the virtual machine hosting the tunnel endpoint (TEP) virtual function of the virtual gateway 100.
Once the network configuration is obtained, the BRG 104 can be configured to provide tunnel access (here is a GRE tunnel interface) while the virtual gateway 100 can be automatically configured in the virtual gateway host infrastructure.
In addition, as shown in the embodiment of
In one embodiment, the virtual gateway 100 can further comprise a DHCP load balancer 222, a cluster manager 223, one or several DHCP processing servers 224 (in the non-limitative example of
The DHCP load balancer 222 can compute or obtain from the processing unit manager 223 (also called cluster manager 223) workload information associated with DHCP processing servers 224 (i.e. the current workload of each processing server) and can decide to distribute the DHCP messages to balance the workload among the available DHCP processing servers 224. The load balancer 222 can redirect each DHCP message (e.g. DHCP Discover or DHCP Request) to one targeted DHCP processing server instance 224 (e.g. IP@10.10.10.4 to 10.10.10.6 in the example of
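The dispatch decision described above can be sketched as a least-loaded selection. The server addresses mirror the 10.10.10.4 to 10.10.10.6 example; the workload figures are illustrative, and in practice the workload information would come from the cluster manager 223 rather than a local dictionary.

```python
# Sketch of the workload-based dispatch performed by the DHCP load
# balancer 222: each DHCP message is redirected to the processing
# server with the lowest current workload, as reported by the cluster
# manager. Addresses follow the 10.10.10.4-10.10.10.6 example; the
# workload numbers are illustrative.

def pick_server(workloads: dict) -> str:
    """Return the address of the least-loaded DHCP processing server."""
    return min(workloads, key=workloads.get)

# Example: three processing server instances and their current loads.
servers = {"10.10.10.4": 7, "10.10.10.5": 2, "10.10.10.6": 5}
target = pick_server(servers)  # the next DHCP message is relayed here
```

Because the servers are stateless, this choice can be made independently for every message, not once per client.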
In the example shown in
A DHCP client can have different states depending on the messages which have been exchanged with a server. In particular, a database server can keep a status of the transaction step and a status of the IP address lease associated with the DHCP client, in order to reply coherently to the different messages coming from this DHCP client. In one embodiment, the state for a client can be:
In addition, the scalable database server 225 can be configured to maintain a consistent DHCP status (or context) for each customer. In particular, the database server 225 can be adapted to reply to specific requests (or commands) used by the DHCP processing servers 224 such as GetContext (Customer, MAC address) or SaveContext (Customer, MAC address). Such specific requests can further be used by other services (than the DHCP processing servers 224) such as monitoring, troubleshooting and scalability features (e.g. network provisioning).
In particular, the GetContext request can send back the DHCP context (or configuration) of a terminal 106, such as network information, IP address range, current leases, and also the state of the DHCP request for that particular terminal 106 (i.e. MAC address). The data returned by the GetContext request can further be tagged with a revision number allowing ties to be resolved in case of concurrent updates.
The SaveContext request—which can include the revision number—can indicate the lease that has been offered by a DHCP processing server 224 for that particular terminal (i.e. MAC address).
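The GetContext/SaveContext exchange with revision numbers can be sketched as an optimistic-concurrency store: a SaveContext succeeds only if the caller's revision still matches the stored one, which is how a concurrent update by another processing server is detected. The class, storage layout and field names below are illustrative, not the database server's actual interface.

```python
# Sketch of the GetContext / SaveContext requests with revision
# numbers. SaveContext only succeeds when the caller's revision still
# matches the stored one (optimistic concurrency); otherwise the
# caller must redo GetContext. Names and storage are illustrative.

class ConflictError(Exception):
    """Raised when another processing server updated the context first."""

class ContextStore:
    def __init__(self):
        self._rows = {}  # (customer, mac) -> (revision, context dict)

    def get_context(self, customer, mac):
        """GetContext(Customer, MAC address) -> (revision, context)."""
        revision, context = self._rows.get((customer, mac), (0, {}))
        return revision, dict(context)

    def save_context(self, customer, mac, revision, context):
        """SaveContext(Customer, MAC address); fails on stale revision."""
        current, _ = self._rows.get((customer, mac), (0, {}))
        if revision != current:
            raise ConflictError("concurrent update, retry from GetContext")
        self._rows[(customer, mac)] = (current + 1, dict(context))
        return current + 1
```

A processing server that loses the race simply gets a ConflictError and retries from GetContext, which is exactly the behaviour required at step 540 of the method described below.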
As previously stated,
The scalable database server 225 can be further configured to store terminal configurations and each particular DHCP request session from a MAC address. Depending on the backend model, the database server 225 can clean up the unused database entries. It can further implement a redundancy mechanism.
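The cleanup mentioned above can be sketched as a purge of entries whose lease expired more than a grace period ago; the grace period reflects the earlier constraint that leases be kept until and after the rebind time. The field names and the default grace value are illustrative assumptions.

```python
# Sketch of the database cleanup mentioned above: entries whose lease
# expired more than a grace period ago are purged. The grace period
# honours the constraint that leases be kept until and after the
# rebind time; field names and the default value are illustrative.

def cleanup(entries: dict, now: float, grace: float = 3600.0) -> dict:
    """Keep only entries whose expiry is within `grace` of `now`."""
    return {mac: entry for mac, entry in entries.items()
            if entry["expiry"] + grace >= now}
```

Run periodically, this keeps the lease table bounded without discarding leases a returning client may still legitimately reclaim.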
At step 510, the virtual gateway 100 (e.g. thanks to a multiplexing function 210) can receive, from the requesting terminal 106, one DHCP message (e.g. DHCP DISCOVER or DHCP REQUEST) forming part of the DHCP request. The received DHCP message can comprise, in particular, the MAC address of the requesting terminal 106.
At step 520, the virtual gateway 100 (e.g. thanks to a DHCP relay agent 221) can insert network identification information (such as a VxLAN id) into the DHCP message received from the requesting terminal 106.
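Step 520 can be sketched as tagging the message before it is load-balanced. The text only says the VxLAN id is inserted into the message; carrying it as relay-agent-style metadata (in the spirit of DHCP option 82), and the dictionary message representation, are assumptions of this sketch.

```python
# Sketch of step 520: the relay agent tags the client's DHCP message
# with the network identification before load balancing. Representing
# the message as a dict and the id as relay-agent-style metadata (in
# the spirit of DHCP option 82) are illustrative assumptions.

def insert_network_id(dhcp_message: dict, vxlan_id: int) -> dict:
    """Return a copy of the message tagged with the network id."""
    tagged = dict(dhcp_message)      # leave the original message intact
    tagged["network_id"] = vxlan_id  # lets any server find the customer
    return tagged

# Example: a DISCOVER from a terminal, tagged with its network's id.
msg = {"type": "DISCOVER", "mac": "aa:bb:cc:dd:ee:ff"}
tagged = insert_network_id(msg, 42)
```

The tag is what makes step 540 possible: whichever processing server receives the message can extract the network id and fetch the right customer context.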
At step 530, the received DHCP message, as modified, can be forwarded (e.g. by the load balancer 222 of the virtual gateway 100) to one of the DHCP processing servers 224, depending on the workload information associated with the DHCP processing servers 224 that the processing unit manager 223 provides to the load balancer 222.
At step 540, the virtual gateway 100 (e.g. thanks to an appropriate request sent by a DHCP processing server) can retrieve, based on the network identification information extracted from the received message, context information from a database server 225 shared between the DHCP processing servers 224.
At step 550, the virtual gateway 100 (e.g. by the DHCP processing server 224) can process the received DHCP message depending on the state of the processing request, that state being retrieved from the context information.
At step 560, the virtual gateway 100 (e.g. thanks to an appropriate request sent by a DHCP processing server 224) can update the state of the DHCP request in the database of the database server 225, after the processing of the received DHCP message.
At step 570, the virtual gateway 100 (e.g. thanks to the DHCP processing server 224) can send a response message to the requesting terminal 106, which depends on the state of the DHCP request. When a discrepancy is detected between the received DHCP message and the corresponding state of the DHCP request, the received message can be dropped.
In addition, in one embodiment, when an update of the state of the processing request in the database server 225 fails due to a concurrent update by another DHCP processing server 224, the virtual gateway 100 can cancel the processing of the received DHCP message and can start over from the retrieving step 540. In this way, any DHCP processing request (e.g. DHCP DISCOVER, DHCP REQUEST) can be served by any DHCP processing server 224 on a DHCP message basis.
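The retry behaviour just described can be sketched as a loop over steps 540 to 570: on a conflicting update the processing is cancelled and restarted from the retrieval step. The store below is a minimal illustrative stand-in for the shared database server 225, and the `process` callback stands in for the message-specific DHCP logic.

```python
# Sketch of the retry loop described above: when the state update
# (step 560) fails because another DHCP processing server committed a
# concurrent update, processing restarts from the retrieval step 540.
# The store and the `process` callback are illustrative stand-ins.

class ConflictError(Exception):
    """Raised when another processing server updated the state first."""

class Store:
    """Minimal stand-in for the shared database server 225."""
    def __init__(self):
        self.revision, self.context = 0, {}

    def get(self):                      # step 540: GetContext
        return self.revision, dict(self.context)

    def save(self, revision, context):  # step 560: SaveContext
        if revision != self.revision:
            raise ConflictError
        self.revision, self.context = revision + 1, dict(context)

def handle_message(store, message, process):
    """Steps 540-570 for one DHCP message, retried on conflicts."""
    while True:
        revision, context = store.get()                 # 540: retrieve
        reply, new_context = process(message, context)  # 550: process
        try:
            store.save(revision, new_context)           # 560: update
            return reply                                # 570: respond
        except ConflictError:
            continue  # concurrent update: start over from step 540
```

Since all state lives in the store, any server instance can run `handle_message` for any message of any client, which is the stateless property claimed below.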
Furthermore, a DHCP DISCOVER message from a given terminal 106 can be processed by one DHCP processing server 224 (e.g. called Host #1) while the subsequent DHCP REQUEST message of the two-step handshake may be processed by another DHCP processing server 224 (e.g. called Host #2). Consequently, the provided DHCP service is stateless on a per-customer basis, since the DHCP processing servers 224 do not create or open any objects that track information regarding the processing requests sent by terminals. No information on the processing requests is retained at the DHCP processing servers 224.
References disclosed in the description, the claims and the drawings might be provided independently or in any appropriate combination. Features may be, where appropriate, implemented in hardware, software, or a combination of the two.
Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one implementation of the method and device described. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments necessarily mutually exclusive of other embodiments.
Reference numerals appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
Although only certain embodiments of the disclosure have been described herein, it will be understood by any person skilled in the art that other modifications and variations of the disclosure are possible. Such modifications and variations are therefore to be considered as falling within the spirit and scope of the disclosure and hence as forming part of the disclosure as herein described and/or exemplified.
The flowchart and/or block diagrams in the Figures illustrate the configuration, operation and functionality of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, or blocks may be executed in an alternative order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of the blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. While not explicitly described, the present embodiments may be employed in any combination or sub-combination.
Number | Date | Country | Kind |
---|---|---|---
16305801.9 | Jun 2016 | EP | regional |