Embodiments of the present invention generally relate to track allocation and more specifically, to a method and system for dynamic track allocation in a network.
Mesh networks consist of network nodes that connect directly, dynamically, and non-hierarchically to other nodes and cooperate with one another to efficiently route data through the network. Some mesh network nodes can dynamically serve as a router for every other node. In that way, even in the event of a failure of some nodes, the remaining nodes may continue to communicate with each other and if necessary, serve as downlinks/uplinks for other nodes.
Currently, the Internet Engineering Task Force (IETF) 6TiSCH Operation sublayer (6TOP) protocol allows for allocation of resources in a network; however, IETF 6TOP uses a three or four-way handshake, which increases the traffic within the network. Furthermore, network resources need to be allocated in advance to allow their use by the routed IP traffic. This static allocation of resources is inefficient in networks for which traffic flow and routing paths are constantly changing.
Today, some applications targeted for mesh network deployments, for example distribution automation, have specific real-time requirements, such as strict latency requirements. Furthermore, sharing of a common mesh network infrastructure between multiple applications with different latency and bandwidth requirements may also be required. As such, in a resource constrained network, it is important to be able to quickly and efficiently allocate and release network resources in the course of normal messages without the need to send explicit configuration messages.
Therefore, there is a need for a method and apparatus for dynamic track allocation in a network.
An apparatus and/or method is provided for dynamic track allocation in a network substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
These and other features and advantages of the present disclosure may be appreciated from a review of the following detailed description of the present disclosure, along with the accompanying figures in which like reference numerals refer to like parts throughout.
While the method and apparatus are described herein by way of example for several embodiments and illustrative drawings, those skilled in the art will recognize that the method and apparatus for dynamic track allocation in a network are not limited to the embodiments or drawings described. It should be understood that the drawings and detailed description thereto are not intended to limit embodiments to the particular form disclosed. Rather, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the method and apparatus for dynamic track allocation in a network defined by the appended claims. Any headings used herein are for organizational purposes only and are not meant to limit the scope of the description or the claims. As used herein, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to. Although the word “node” is used herein, those skilled in the art will appreciate that the disclosed invention may be implemented on any network device.
Embodiments of the invention provide a system and method for dynamic track allocation in a mesh network. Centralized management of network resources is provided by a path computation element (PCE), which can allocate a new track to be used to transfer a message from a source node in the network to a destination node. The source node appends to the message a track identifier, an expiration time, and, for each node within the path, one or more link resources allocated by the PCE to that node for downstream transmission, as well as link resources to be allocated for upstream transmission. Multiple link resources may be assigned to the same track to increase the amount of bandwidth allocated for transmission. Alternately, instead of allocating unused link resources to a new track, the PCE may assign already allocated link resources to share bandwidth between multiple tracks. In some embodiments, a channel offset is specified to be used for transmission within these link resources. Once this information is appended to the message, the PCE hands the message off for transmission. At each node, the assigned link resources (e.g., timeslot, channel offset, transmit vs. receive, and the destination address of nodes that are configured to listen) are configured based on the information appended to the message by the source node. During the transfer of this first message, each node also records the track identifier in conjunction with the addresses of the next and previous nodes. This information is then used to forward subsequent messages on this track using, for example, Multiprotocol Label Switching (MPLS).
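By way of illustration only, the per-hop installation described above can be pictured with the following minimal sketch in Python. All names (LinkResource, TrackHeader, Node, install_track) are assumptions made for this example and are not identifiers from the disclosure; the sketch shows one plausible way an intermediate node could configure the appended link resources and record the track identifier for later label-switched forwarding.

```python
# Minimal illustrative sketch (assumed names, not the claimed implementation)
# of how a node might install the per-hop state carried in the first message
# of a track and later forward subsequent messages by track identifier.
from dataclasses import dataclass, field


@dataclass
class LinkResource:
    timeslot: int          # timeslot offset within the slotframe
    channel_offset: int    # channel offset used for the transmission
    is_transmit: bool      # True for a transmit cell, False for a receive cell
    peer_address: str      # address of the node configured to listen/transmit


@dataclass
class TrackHeader:
    track_id: int
    expires_at: float      # track expiration time (epoch seconds)
    downstream: dict       # node address -> list[LinkResource]
    upstream: dict         # node address -> list[LinkResource]


@dataclass
class Node:
    address: str
    # track_id -> (previous hop, next hop); used for MPLS-like forwarding
    forwarding_table: dict = field(default_factory=dict)
    installed_cells: list = field(default_factory=list)

    def install_track(self, header: TrackHeader, prev_hop: str, next_hop: str) -> None:
        """Configure the link resources assigned to this node and record the
        track identifier so later messages can be label-switched."""
        for cell in header.downstream.get(self.address, []):
            self.installed_cells.append(cell)
        for cell in header.upstream.get(self.address, []):
            self.installed_cells.append(cell)
        self.forwarding_table[header.track_id] = (prev_hop, next_hop)

    def next_hop_for(self, track_id: int) -> str:
        """Forward a subsequent message on an already-installed track."""
        return self.forwarding_table[track_id][1]
```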
Various embodiments of a method and apparatus for dynamic track allocation in a network are described. In the following detailed description, numerous specific details are set forth to provide a thorough understanding of claimed subject matter. However, it will be understood by those skilled in the art that claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.
Some portions of the detailed description that follow are presented in terms of algorithms or symbolic representations of operations on digital signals stored within a memory of a specific apparatus or special purpose computing device or platform. In the context of this specification, the term specific apparatus or the like includes a general-purpose computer once it is programmed to perform particular functions pursuant to instructions from program software. Algorithmic descriptions or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing or related arts to convey the substance of their work to others skilled in the art. An algorithm is here, and is generally, considered to be a self-consistent sequence of operations or similar signal processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals or the like. It should be understood, however, that all of these or similar terms are to be associated with appropriate physical quantities and are merely convenient labels. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic computing device. In the context of this specification, therefore, a special purpose computer or a similar special purpose electronic computing device can manipulate or transform signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special purpose computer or similar special purpose electronic computing device. As used herein, the term device may include a mesh device, a routed device, or any device or node in a network.
The present invention is a method and apparatus for dynamic track allocation in a network 100. In one embodiment of the invention, a source node 102x (where x is an integer), for example, node 1021, generates a message for transmission to a destination node 102y (where y is an integer), for example, node 1026. In some embodiments, the source node 1021 may be a path computation element (PCE) that computes paths through the network on behalf of the nodes. In some embodiments, the PCE may be an entity running at the boundary of or outside the managed network. The PCE allocates a set of link resources from the source node 1021 to the destination node 1026 that defines a track from the source node 1021 to the destination node 1026. The defined track is bidirectional. The PCE assigns an identifier to the track and a track expiration time. The source node 1021 then appends the defined track and track identifier to the message. In some embodiments, the PCE allocates a second set of link resources from the destination node 1026 to the source node 1021. In such embodiments, the second set of link resources is also appended to the message.
In some embodiments, node 1021 is a border router that acts as an access point to network 100, routing messages to and from the network. The message is then transmitted from the source node 1021 to the destination node 1026. This process is described in further detail below.
The node 102 comprises a CPU 202, support circuits 206, memory 204 and a network interface 208. The CPU 202 may comprise one or more readily available microprocessors or microcontrollers. The support circuits 206 are well known circuits that are used to support the operation of the CPU and may comprise one or more of cache, power supplies, input/output circuits, network interface cards, clock circuits, and the like. Memory 204 may comprise random access memory, read only memory, removable disk memory, flash memory, optical memory or various combinations of these types of memory. The memory 204 is sometimes referred to as main memory and may, in part, be used as cache memory or buffer memory. The memory 204 stores various forms of software and files, such as an operating system (OS) 210, communication software 212, and the optional path computation element (PCE) 214 code implemented by one of the nodes. The operating system 210 may be one of several well-known operating systems or real time operating systems such as FreeRTOS, Contiki, LINUX, WINDOWS, and the like.
The network interface 208 connects the node 102 to the network 100. The network interface 208 may facilitate a wired or wireless connection to other nodes. In some embodiments, for example when node 102 is a border router, node 102 may have multiple interfaces 208 for routing within the network 100 as well as to connect to another network.
Link resources, or a subset of the link resources, are managed by a centralized entity (i.e., a path computation element (PCE) (not shown)), which may be remote from the network nodes. In one embodiment of the invention, the PCE increases the efficiency of the network by quickly assigning tracks, through the track allocation process, in the course of normal messages and without the need to send explicit configuration messages. To allocate a track, the PCE computes the route between the source node 1021 and the destination node 1026 and then allocates to the source, intermediate, and destination nodes one or multiple link resources for downstream and for upstream transmission.
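A minimal sketch, under assumed names (PCE, allocate_track, cells_per_hop) that are not part of the disclosure, of how such a centralized allocation could be structured follows. It shows one plausible bookkeeping scheme: each hop receives one or more (timeslot, channel offset) cells for downstream and upstream transmission, and already-allocated cells may be shared between tracks when no free cells remain.

```python
# Illustrative sketch of a PCE allocating link resources along a computed
# route.  This is not the disclosed implementation; names and constants are
# assumptions for the example.
import itertools
import time


class PCE:
    def __init__(self, num_timeslots=101, num_channels=16, track_lifetime=300):
        self.cells = list(itertools.product(range(num_timeslots), range(num_channels)))
        self.allocated = {}            # (timeslot, channel_offset) -> set of track ids
        self.next_track_id = 1
        self.track_lifetime = track_lifetime

    def _pick_cell(self, allow_sharing):
        for cell in self.cells:
            if cell not in self.allocated:
                return cell
        if allow_sharing and self.allocated:
            # Share the least loaded, already-allocated cell between tracks.
            return min(self.allocated, key=lambda c: len(self.allocated[c]))
        return None

    def allocate_track(self, route, cells_per_hop=1, allow_sharing=True):
        """route is an ordered list of node addresses from source to destination."""
        track_id = self.next_track_id
        self.next_track_id += 1
        downstream, upstream = {}, {}
        for node in route[:-1]:        # every hop except the last transmits toward the destination
            for direction in (downstream, upstream):
                chosen = []
                for _ in range(cells_per_hop):
                    cell = self._pick_cell(allow_sharing)
                    if cell is None:
                        # No resources: caller queues the message (a full
                        # implementation would also release cells reserved so far).
                        return None
                    self.allocated.setdefault(cell, set()).add(track_id)
                    chosen.append(cell)
                direction[node] = chosen
        expires_at = time.time() + self.track_lifetime
        return track_id, expires_at, downstream, upstream
```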
At step 304, a message is determined to be ready to be forwarded to the network. The message may have been received by the current node or generated by the current node. The message is to be routed to a target node in the managed network on its way to a destination node. At step 306, the route to the target node within the managed network is retrieved from the routing protocol. A route is the series of nodes traversed to get from the source node to the destination node. Each step from a first node to a second node is referred to as a hop. A track follows the route and specifies the link resources used at each hop.
At step 308, it is determined whether a track already exists for this destination. If a track already exists for the destination, the method proceeds to step 316. If it is determined that a track does not currently exist for the destination, then the method proceeds to step 310, where quality of service (QOS) rules associated with this traffic are retrieved. The retrieval of QOS rules may be performed by the PCE, locally by the node, or a combination of both.
At step 312, network resources are retrieved. The node retrieves the network resources and a unique track identifier from the PCE. The track is based on the maximum bandwidth and maximum channel sharing allowed by the QOS rules. At step 314, it is determined whether the retrieved network resources are currently available to route the message. If it is determined that all network resources are currently in use and the sharing of link resources is not allowed or possible, then the method proceeds to step 320, where the message is temporarily queued until network resources are freed. Queued messages can be prioritized to avoid timeouts or to apply quality of service (QOS) rules. Queued messages are also assigned a limited queuing lifetime to avoid transmitting stale messages. From step 320, the method proceeds to step 322, where the method ends.
However, if at step 314 it is determined that link resources are available, then at step 315 the link resources and track expiration time are appended to the message and the method proceeds to step 316.
At step 316, the assigned track identifier is appended to the message. At step 318, the message is transmitted and the method proceeds to step 322, where the method 300 ends.
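The source-node side of method 300 can be summarized with the following sketch, which assumes the hypothetical PCE interface sketched earlier, a message represented as a dict, and illustrative helper names (track_table, qos_rules, transmit) that do not come from the disclosure.

```python
# Illustrative sketch of the source-node flow of method 300 (steps 304-322),
# under the assumption of a PCE object like the one sketched above.
import time
from collections import deque


class SourceNode:
    def __init__(self, pce, qos_rules):
        self.pce = pce
        self.qos_rules = qos_rules     # destination -> {"bandwidth": ..., "sharing": ...}
        self.track_table = {}          # destination -> (track_id, expires_at)
        self.pending = deque()         # messages queued until resources free up

    def send(self, message, destination, route):
        # Step 308: reuse an existing, unexpired track for this destination.
        track = self.track_table.get(destination)
        if track and track[1] > time.time():
            message["track_id"] = track[0]                        # step 316
            return self.transmit(message)                         # step 318

        # Steps 310-312: retrieve QOS rules and request resources from the PCE.
        rules = self.qos_rules.get(destination, {"bandwidth": 1, "sharing": True})
        allocation = self.pce.allocate_track(route,
                                             cells_per_hop=rules["bandwidth"],
                                             allow_sharing=rules["sharing"])
        if allocation is None:                                    # step 314 -> 320
            message["queued_until"] = time.time() + 30            # limited queuing lifetime
            self.pending.append(message)
            return False                                          # step 322: method ends

        # Steps 315-318: append link resources, expiration time and track id, then transmit.
        track_id, expires_at, downstream, upstream = allocation
        message.update(track_id=track_id, expires_at=expires_at,
                       downstream=downstream, upstream=upstream)
        self.track_table[destination] = (track_id, expires_at)
        return self.transmit(message)

    def transmit(self, message):
        # Placeholder for handing the message to the MAC layer.
        return True
```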
At step 404, a track expiration notification is received. Link resources are freed up by the expiration of a track. In some embodiments, the notification may be received from the PCE. At step 406, the node selects a message from one of the queues based on their current sizes and the number of consecutive selections. The intent is to give more opportunity to high priority messages without starving low priority ones. If, at step 406, a maximum number of consecutive high priority messages has been reached, then the method proceeds to step 410 and retrieves a low priority message from one of the queues; however, if, at step 406, the maximum number of consecutive high priority messages has not been reached, then the method proceeds to step 408 and retrieves a high priority message from the high priority queue.
At step 412, the node retrieves the resources from the PCE and creates a unique identifier for the track used to send the retrieved message. At step 414, the node verifies whether network resources are available to route the message. If all network resources are currently in use and the sharing of link resources is not allowed or possible, then at step 416, the method attempts to allocate a track for a different destination, up to a pre-defined maximum number of lookups. If, at step 416, the maximum number of lookups has not been reached, the method proceeds to step 418, where the message is placed back in the associated priority queue, and the method proceeds to step 406 to retrieve a next message from a queue. However, if at step 414 network resources are available, the method proceeds to step 420, where the link resources and track expiration time are appended to the message. At step 422, the track identifier is appended to the message. At step 424, the message is transmitted to the next node. The method then proceeds to step 406 to investigate whether tracks for other messages can be formed. If no messages are left on any queue, the method 400 ends.
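The queue-draining behavior of method 400 might look like the following sketch. The selection constants and the allocate_and_send callback are assumptions for illustration; the callback stands in for steps 412-424 (request a track, append the link resources, expiration time and track identifier, and transmit), returning False when no resources are available.

```python
# Illustrative sketch of method 400: when a track expires and frees link
# resources, queued messages are drained, favoring high priority traffic
# while bounding consecutive high priority selections so low priority
# messages are not starved.  MAX_CONSECUTIVE_HIGH and MAX_LOOKUPS are
# assumed values, not figures from the disclosure.
from collections import deque

MAX_CONSECUTIVE_HIGH = 3
MAX_LOOKUPS = 5


def drain_queues(high_queue, low_queue, allocate_and_send):
    """high_queue and low_queue are deques of queued messages;
    allocate_and_send(message) returns True when a track was allocated and
    the message was transmitted, False when resources are unavailable."""
    consecutive_high = 0
    lookups = 0
    while high_queue or low_queue:
        # Steps 406-410: select the next message to service.
        if high_queue and consecutive_high < MAX_CONSECUTIVE_HIGH:
            queue = high_queue
            consecutive_high += 1
        else:
            queue = low_queue if low_queue else high_queue
            consecutive_high = 0
        message = queue.popleft()

        if allocate_and_send(message):
            lookups = 0                # steps 420-424 succeeded; move on
            continue

        # Steps 416-418: no resources for this destination; put the message
        # back in its priority queue and try a different one, up to a
        # pre-defined maximum number of lookups.
        queue.append(message)
        lookups += 1
        if lookups >= MAX_LOOKUPS:
            break


if __name__ == "__main__":
    sent = []
    always_ok = lambda m: (sent.append(m) or True)   # pretend every allocation succeeds
    drain_queues(deque(["h1", "h2"]), deque(["l1"]), always_ok)
    print(sent)                                      # ['h1', 'h2', 'l1']
```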
The foregoing description, for purposes of explanation, has been presented with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the present disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as may be suited to the particular use contemplated.
The methods described herein may be implemented in software, hardware, or a combination thereof, in different embodiments. In addition, the order of methods may be changed, and various elements may be added, reordered, combined, omitted, modified, etc. All examples described herein are presented in a non-limiting manner. Various modifications and changes may be made as would be obvious to a person skilled in the art having benefit of this disclosure. Realizations in accordance with embodiments have been described in the context of particular embodiments. These embodiments are meant to be illustrative and not limiting. Many variations, modifications, additions, and improvements are possible. Accordingly, plural instances may be provided for components described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of claims that follow. Finally, structures and functionality presented as discrete components in the example configurations may be implemented as a combined structure or component. These and other variations, modifications, additions, and improvements may fall within the scope of embodiments as defined in the claims that follow.
This application claims priority to U.S. Provisional Patent Application Ser. No. 62/757,343 filed Nov. 8, 2018, which is incorporated by reference herein in its entirety.