This disclosure relates generally to communication networks and more specifically, but not exclusively, to routing for a communication network.
Conventional communication networks, such as optical communication networks, are composed of a series of switches or routers called nodes or network elements interconnected by transmission mediums called links or paths (e.g. optical fibers) that allow the transmission of data between the nodes. Traffic Engineering (TE) is a technology that is concerned with performance optimization of communication networks. In general, Traffic Engineering includes a set of applications, mechanisms, tools, and scientific principles that allow for measuring, modeling, characterizing and controlling data traffic within the network in order to achieve specific performance objectives of the network. Data traffic signifies data exchanged between two nodes, such as an originating node (e.g. a source node) and a terminating node (e.g. a destination or sink node). Within the network, data traffic can be transported between any two locations using predefined connections specifying particular links and/or switch nodes for conveying the data traffic.
The switch nodes in the network are each provided with a control module. The control modules of the switch nodes function together to aid in the control and management of the switched networks. The control modules can run a variety of protocols for conducting the control and management of the switched networks. One prominent protocol is referred to in the art as “Generalized Multiprotocol Label Switching (GMPLS)”.
Generalized Multiprotocol Label Switching (GMPLS) is a type of protocol which extends multiprotocol label switching to encompass network schemes based upon time-division multiplexing (e.g. SONET/SDH, PDH, G.709), wavelength multiplexing, and spatial switching (e.g. incoming port or fiber to outgoing port or fiber). Multiplexing, such as time-division multiplexing, is a technique in which two or more signals or bit streams are transferred over a common channel.
Generalized Multiprotocol Label Switching includes multiple types of label switched paths, including protection and recovery mechanisms that specify predefined (1) working connections within a network having multiple nodes and communication links for transmitting data between a source node and a destination node; and (2) protecting connections specifying a different group of nodes and/or communication links for transmitting data from the source node to the destination node in the event that one or more of the working connections fail. Working connections may also be referred to as working paths. Protecting connections may also be referred to as protecting paths and/or protection paths. A first node of a path may be referred to as a source node. A last node of a path may be referred to as an end node or destination node. Data is initially transmitted over the working connection (such as an optical channel data unit label switched path) and then, when the working connection fails, the source node or end node activates one of the protecting connections to redirect data within the network.
The set up and activation of the protecting connections may be referred to as restoration or shared protection. For example, Shared Mesh Protection (SMP) is a common protection and recovery mechanism in transport networks where multiple paths can share the same set of network resources for protection purposes. Resources such as nodes and communication links in protecting connections are typically shared by multiple working connections that are not affected by the same failure, thus increasing efficient use of network resources.
However, current systems inefficiently utilize the provisioned capacity of a network when determining protecting connections, especially for large-scale networks. For example, the working path may not be divided into the optimal segments, where the optimal segment choice would provide the least costly alternate path through the network in case of a failure in the working path. Systems and methods are needed that determine preferred segments of the working path for segment protection, based on the network condition and topology, and that implement such protection, in order to make efficient use of network capacity and of knowledge of failure locations.
Accordingly, there is a need for systems, apparatus, and methods that improve upon conventional approaches including the improved methods, system and apparatus provided hereby.
The following presents a simplified summary relating to one or more aspects and/or examples associated with the apparatus and methods disclosed herein. As such, the following summary should not be considered an extensive overview relating to all contemplated aspects and/or examples, nor should the following summary be regarded to identify key or critical elements relating to all contemplated aspects and/or examples or to delineate the scope associated with any particular aspect and/or example. Accordingly, the following summary has the sole purpose to present certain concepts relating to one or more aspects and/or examples relating to the apparatus and methods disclosed herein in a simplified form to precede the detailed description presented below.
In one aspect, a method for communication includes: retrieving information of a topology of the communication network, the information including a plurality of connections between each of a plurality of devices in the communication network and latency information of each connection of the plurality of connections; determining a first path from a first device of the plurality of devices to a second device of the plurality of devices, the first path including a first portion of the plurality of connections with a lowest total latency between the first device and the second device; determining a second path from the first device to the second device, the second path including a second portion of the plurality of connections with a second lowest total latency between the first device and the second device; selecting the second path as one of a work path or a protect path; removing the second portion of the plurality of connections from the second path from the information of the topology of the communication network; determining a third path from the first device to the second device, the third path including a third portion of the plurality of connections with a third total latency between the first device and the second device; determining a fourth path from the first device to the second device, the fourth path including a fourth portion of the plurality of connections with a fourth total latency between the first device and the second device; determining a first latency differential between the third total latency and the second lowest total latency; determining a second latency differential between the fourth total latency and the second lowest total latency; selecting the third path as one of the work path or the protect path if the first latency differential is less than the second latency differential; and selecting the fourth path as one of the work path or the protect path if the first latency differential is greater than the second latency differential.
In another aspect, an apparatus includes: means for retrieving information of a topology of the communication network, the information including a plurality of connections between each of a plurality of devices in the communication network and latency information of each connection of the plurality of connections; means for determining a first path from a first device of the plurality of devices to a second device of the plurality of devices, the first path including a first portion of the plurality of connections with a lowest total latency between the first device and the second device; means for determining a second path from the first device to the second device, the second path including a second portion of the plurality of connections with a second lowest total latency between the first device and the second device; means for selecting the second path as one of a work path or a protect path; means for removing the second portion of the plurality of connections from the second path from the information of the topology of the communication network; means for determining a third path from the first device to the second device, the third path including a third portion of the plurality of connections with a third total latency between the first device and the second device; means for determining a fourth path from the first device to the second device, the fourth path including a fourth portion of the plurality of connections with a fourth total latency between the first device and the second device; means for determining a first latency differential between the third total latency and the second lowest total latency; means for determining a second latency differential between the fourth total latency and the second lowest total latency; means for selecting the third path as one of the work path or the protect path if the first latency differential is less than the second latency differential; and means for selecting the fourth path as one of the work path or the protect path if the first latency differential is greater than the second latency differential.
In still another aspect, a non-transient computer readable medium containing program instructions for causing a processor to perform a process including: retrieving information of a topology of the communication network, the information including a plurality of connections between each of a plurality of devices in the communication network and latency information of each connection of the plurality of connections; determining a first path from a first device of the plurality of devices to a second device of the plurality of devices, the first path including a first portion of the plurality of connections with a lowest total latency between the first device and the second device; determining a second path from the first device to the second device, the second path including a second portion of the plurality of connections with a second lowest total latency between the first device and the second device; selecting the second path as one of a work path or a protect path; removing the second portion of the plurality of connections from the second path from the information of the topology of the communication network; determining a third path from the first device to the second device, the third path including a third portion of the plurality of connections with a third total latency between the first device and the second device; determining a fourth path from the first device to the second device, the fourth path including a fourth portion of the plurality of connections with a fourth total latency between the first device and the second device; determining a first latency differential between the third total latency and the second lowest total latency; determining a second latency differential between the fourth total latency and the second lowest total latency; selecting the third path as one of the work path or the protect path if the first latency differential is less than the second latency differential; and selecting the fourth path as one of the work path or the protect path if the first latency differential is greater than the second latency differential.
Other features and advantages associated with the apparatus and methods disclosed herein will be apparent to those skilled in the art based on the accompanying drawings and detailed description.
A more complete appreciation of aspects of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings which are presented solely for illustration and not limitation of the disclosure, and in which:
In accordance with common practice, the features depicted by the drawings may not be drawn to scale. Accordingly, the dimensions of the depicted features may be arbitrarily expanded or reduced for clarity. In accordance with common practice, some of the drawings are simplified for clarity. Thus, the drawings may not depict all components of a particular apparatus or method. Further, like reference numerals denote like features throughout the specification and figures.
The exemplary methods, apparatus, and systems disclosed herein advantageously address the industry needs, as well as other previously unidentified needs, and mitigate shortcomings of the conventional methods, apparatus, and systems. For example, a pair of disjoint or unrelated paths through a network may be determined by selecting the two paths with the least differential latency between them, instead of choosing the path with the least latency and then determining the path with the second least latency. This may be accomplished by selecting the path with the second least latency and removing this path from the possible paths considered. The latency of each remaining path is then computed until a path with the same latency as the removed path is found, or until the path with the smallest difference in latency is found. These two paths form the new working and protection paths for this pair, with the lower-latency path of the two selected as the working path and the other as the protection path. In certain applications, the differential latency between the work and protect paths is more important than the absolute latency of the work path. Such applications may include voice communication sessions, video streaming, and other application types where the latency difference between the work and protect paths is important.
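As a non-limiting illustration, the following Python sketch shows one way this selection could be expressed; the function and variable names are hypothetical, the candidate paths are assumed to be mutually disjoint, and their total latencies are assumed to have been computed already:

```python
# A minimal, non-limiting sketch of the selection described above. The
# candidate paths are assumed to be mutually disjoint and annotated with
# their total latencies; names are hypothetical.

def pick_pair_by_differential_latency(candidates):
    """candidates: list of (path, total_latency) tuples, mutually disjoint."""
    ranked = sorted(candidates, key=lambda c: c[1])       # lowest latency first
    anchor_path, anchor_latency = ranked[1]               # second-least-latency path
    remaining = ranked[:1] + ranked[2:]                   # every other candidate
    # Find the remaining path whose latency is closest to the anchor's latency.
    partner_path, partner_latency = min(
        remaining, key=lambda c: abs(c[1] - anchor_latency))
    # The lower-latency member of the pair becomes the working path.
    if partner_latency <= anchor_latency:
        return partner_path, anchor_path                  # (work, protect)
    return anchor_path, partner_path

# Hypothetical usage with illustrative latencies (in milliseconds):
candidates = [(["s", "e", "t"], 4.0),
              (["s", "a", "b", "t"], 6.0),
              (["s", "d", "c", "t"], 6.5)]
work_path, protect_path = pick_pair_by_differential_latency(candidates)
```

With the illustrative latencies above, the sketch returns s-a-b-t as the work path and s-d-c-t as the protect path, since those two latencies differ the least.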
Wavelength division multiplexed (WDM) optical communication systems (referred to as “WDM systems”) are systems in which multiple optical signals, each having a different wavelength, are combined onto a single optical fiber using an optical multiplexer circuit (referred to as a “multiplexer”). Such systems may include a transmitter circuit, such as a transmitter (Tx) PIC having a transmitter component to provide a laser associated with each wavelength, a modulator configured to modulate the output of the laser, and a multiplexer to combine each of the modulated outputs (e.g., to form a combined output).
A PIC is a device that integrates multiple photonic functions on a single integrated device. PICs may be fabricated in a manner similar to electronic integrated circuits but, depending on the type of PIC, may be fabricated using one or more of a variety of types of materials, including silica on silicon, silicon on insulator, and various polymers and semiconductor materials which are used to make semiconductor lasers, such as GaAs, InP and their alloys.
A WDM system may also include a receiver circuit having a receiver (Rx) PIC and an optical demultiplexer circuit (referred to as a “demultiplexer”) configured to receive the combined output and demultiplex the combined output into individual optical signals. Additionally, the receiver circuit may include receiver components to convert the optical signals into electrical signals, and output the data carried by those electrical signals.
The transmitter (Tx) and receiver (Rx) PICs, in an optical communication system, may support communications over a number of wavelength channels. For example, a pair of Tx/Rx PICs may support ten channels, each spaced by, for example, 50 GHz. The set of channels supported by the Tx and Rx PICs can be referred to as the channel “grid” for the PICs. Channel grids for Tx/Rx PICs may be aligned to standardized frequencies, such as those published by the Telecommunication Standardization Sector (ITU-T). The set of channels supported by the Tx and Rx PICs may be referred to as the ITU frequency grid for the Tx/Rx PICs.
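Purely for illustration (the anchor frequency of 193.1 THz follows the ITU-T grid convention, while the channel count and spacing are the example values above), the channel frequencies of such a grid can be computed directly from the spacing:

```python
# Illustrative only: frequencies of a ten-channel grid spaced 50 GHz,
# anchored at the ITU-T reference frequency of 193.1 THz.
ANCHOR_THZ = 193.1
SPACING_GHZ = 50.0
NUM_CHANNELS = 10

grid_thz = [ANCHOR_THZ + n * SPACING_GHZ / 1000.0 for n in range(NUM_CHANNELS)]
# -> [193.1, 193.15, 193.2, ..., 193.55] THz
```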
In WDM systems, the demultiplexer may be capable of receiving first and second optical signals associated with the combined output in order to increase data rates associated with the WDM system. In order to further increase the data rates associated with a WDM system, additional WDM components are sometimes incorporated. For example, some WDM systems may include a polarization beam splitter (PBS) to receive the combined output and output first and second optical signals, to increase the data rates associated with the WDM system. The first optical signal may include components having a first polarization and the second optical signal may include components having a second polarization. Some WDM systems may further include a rotator to rotate the polarization of the components associated with the first optical signal such that the components have the second polarization, thereby allowing the demultiplexer to process optical signals associated with one polarization (e.g., the second polarization).
WDM systems are sometimes constructed from discrete components (e.g., a transmitter component, a multiplexer, a demultiplexer, a PBS, a rotator, and/or a receiver component). For example, demultiplexers and receiver components may be packaged separately and provided on a printed circuit board. Alternatively, WDM components are sometimes integrated onto a single chip, also referred to as a photonic integrated circuit (PIC). For example, a PBS and a rotator are provided on the same PIC as a demultiplexer.
Transmitter module 110 may include a number of optical transmitters 112-1 through 112-N (where N is greater than or equal to one), waveguides 113, optical multiplexer 114, polarizers 120, and/or polarization beam combiner (PBC) 121. Each optical transmitter 112 may receive a data channel (TxCh1 through TxChN), modulate the data channel with an optical signal, and transmit the data channel as an optical signal. In one implementation, transmitter module 110 may include 5, 10, 20, 50, 100, or some other number of optical transmitters 112. Each optical transmitter 112 may be tuned to use an optical carrier of a designated wavelength. It may be desirable that the grid of wavelengths emitted by optical transmitters 112 conform to a known standard, such as a standard published by the Telecommunication Standardization Sector (ITU-T).
In some implementations, each of optical transmitters 112 may include a laser, a modulator, a semiconductor optical amplifier (SOA), and/or some other components. The laser, modulator, and/or SOA may be coupled with a tuning element that can be used to tune the wavelength of the optical signal channel output by the laser, modulator, or SOA. In some implementations, a single laser may be shared by multiple optical transmitters 112.
Waveguides 113 may include an optical link or some other link to transmit modulated outputs (referred to as “signal channels”) of optical transmitters 112. In some implementations, each optical transmitter 112 may connect to one waveguide 113 or to multiple waveguides 113 to transmit signal channels of optical transmitters 112 to optical multiplexer 114.
Optical multiplexer 114 may include an arrayed waveguide grating (AWG) or some other multiplexing device. In some implementations, optical multiplexer 114 may combine multiple signal channels, associated with optical transmitters 112, into wave division multiplexed (WDM) signals, such as optical signals 115 and 116. In some implementations, optical multiplexer 114 may include an input, (e.g., a first slab to receive signal channels) and an output (e.g., a second slab to supply WDM signals, such as optical signals 115 and 116, associated with input signal channels). Optical multiplexer 114 may also include waveguides connecting the input and the output. In some implementations, the first slab and the second slab may each act as an input and an output. For example, the first slab and the second slab may each receive multiple signal channels. The first slab may supply a single WDM signal corresponding to the signal channels received by the second slab. The second slab may supply a single optical signal (e.g., a WDM signal) corresponding to the signal channels received by the first slab. As shown in
Rotator 119 may include an optical device or a collection of optical devices. In some implementations, rotator 119 may receive an optical signal with components having a first polarization (e.g., a TM polarization), rotate the polarization of the components, associated with the optical signal, and supply an optical signal with rotated components having a second polarization (e.g., a TE polarization). In some implementations, rotator 119 may be associated with transmitter module 110. Rotator 119 may receive components associated with optical signal 115 having a first polarization (e.g., a TM polarization), and supply optical signal 117 with rotated components having a second polarization (e.g., a TE polarization). As shown in
Additionally, or alternatively, rotator 119 may be associated with receiver module 150 and may receive components associated with optical signal 116 having a first polarization (e.g., a TM polarization), and supply optical signal 118 with rotated components having a second polarization (e.g., a TE polarization). As shown in
As described above, rotator 119 may be capable of receiving multiple sets of components associated with multiple optical signals and supplying multiple sets of rotated components associated with the received components. As shown in
Polarizer 120 may include an optical device, or a collection of optical devices. In some implementations, polarizer 120 may receive an optical signal, and may absorb components of the optical signal having a particular polarization such as a first polarization (e.g., a TM polarization) or a second polarization (e.g., a TE polarization). In some implementations, polarizers 120 may be associated with transmit module 110 and may receive optical signal 115 supplied by optical multiplexer 114 and/or optical signal 117 supplied by rotator 119.
In some implementations, polarizers 120 may absorb residual components of optical signal 117 having the first polarization. For example, as described above rotator 119 may rotate components associated with optical signal 115 having the first polarization, to supply optical signal 117 with components having the second polarization. Optical signal 117 may include residual components associated with the first polarization. Polarizer 120 may be connected along a path associated with optical signal 117 to absorb the residual components associated with the first polarization, thereby absorbing components having an undesirable polarization. Similarly, polarizer 120 may be connected along a path associated with optical signal 115 to absorb components having an undesirable polarization.
Additionally, or alternatively, polarizers 120 may be associated with receiver module 150 and may receive optical signal 117 supplied by PBS 140 and/or optical signal 118 supplied by rotator 119. In a similar manner as described above, polarizers 120 may absorb components of optical signal 118 having the first polarization (e.g., residual components of optical signal 118 having the first polarization when rotator 119 supplies optical signal 118). Similarly, polarizer 120 may be connected along a path associated with optical signal 117 to absorb components having an undesirable polarization.
PBC 121 may include an optical device, or a collection of optical devices. In some implementations, PBC 121 may receive multiple optical signals and supply a combined optical signal (e.g., a WDM signal, or some other type of optical signal). For example, as shown in
PBS 140 may include an optical device or a collection of optical devices. In some implementations, PBS 140 may receive an input optical signal (e.g., optical signal 125 or some other signal), and supply output components associated with the input optical signal (e.g., via a first output and/or a second output of PBS 140). As shown in
Waveguides 152 may include optical links or some other links to transmit outputs of optical demultiplexer 151 to optical receivers 153. In some implementations, each optical receiver 153 may receive outputs via a single waveguide 152 or via multiple waveguides 152.
Optical receivers 153 may each operate to convert the input optical signal to an electrical signal that represents the transmitted data. In some implementations, optical receivers 153 may each include one or more photodetectors and/or related devices to receive respective input optical signals outputted by optical demultiplexer 151 and a local oscillator, convert the signals to a photocurrent, and provide a voltage output to function as an electrical signal representation of the original input signal.
A path set up request for communication between, for example, the first device 210 and the fourth device 240 may be initiated. The path set up request may be initiated by a centralized device (not shown) in communication with the various devices within the network 200, by a requesting device (not shown) from another network or client, or by one of the devices within the network 200, such as the first device 210. The path set up request may be for a path between a source node (e.g. the first device 210) and a destination node (e.g. the fourth device 240). With the pair of devices established for the path, the least-latency pair of disjoint paths (work and protect) can be computed. First, for the given node pair (the first device 210 (s) and the fourth device 240 (t)), compute a set ‘S’ of all possible disjoint path pairs through the network 200 between these two nodes. In this example, Step 1 is to use a k-shortest paths algorithm (as part of a brute-force enumeration over the network connections), with latencies as the ‘distance’ metric, to find successive paths of increasing latency, each of which serves as the first candidate path of a pair.
For each candidate path obtained in Step 1, Step 2 is performed: remove the path found in Step 1 from the graph and compute successive k-shortest latency paths in the residual graph. Each pair, where the first path is from Step 1 and the second path is from a single iteration of Step 2, goes into the set ‘S’. Then, put back the removed path and continue to the next iteration of Step 1. Next, pick the two paths from ‘S’ that have the least differential latency between them. Last, pick the path with the lower (absolute) latency as the work path from the source node to the destination node.
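A rough sketch of Steps 1 and 2 and the final selection is shown below. It assumes the networkx library, a graph whose edges carry a 'latency' attribute, and a cap k on the number of candidates examined at each step; it illustrates the brute-force construction of the set S rather than any particular implementation:

```python
# Brute-force construction of the pair set S (illustrative sketch).
import itertools
import networkx as nx

def path_latency(G, path):
    return sum(G[u][v]["latency"] for u, v in zip(path, path[1:]))

def build_pair_set(G, s, t, k=10):
    """Step 1: successive least-latency paths; Step 2: KSP on the residual graph."""
    S = []
    step1 = itertools.islice(
        nx.shortest_simple_paths(G, s, t, weight="latency"), k)
    for p1 in step1:
        residual = G.copy()                     # copying leaves G intact ("put back")
        residual.remove_edges_from(zip(p1, p1[1:]))   # removing links gives link-disjoint partners
        try:
            step2 = itertools.islice(
                nx.shortest_simple_paths(residual, s, t, weight="latency"), k)
            for p2 in step2:
                S.append((p1, p2))
        except nx.NetworkXNoPath:               # no disjoint partner for this p1
            continue
    return S

def pick_work_and_protect(G, S):
    """Least differential latency wins; the lower-latency path is the work path."""
    p1, p2 = min(S, key=lambda pair: abs(path_latency(G, pair[0]) -
                                         path_latency(G, pair[1])))
    return (p1, p2) if path_latency(G, p1) <= path_latency(G, p2) else (p2, p1)
```

Copying the graph on each Step 1 iteration plays the role of "putting back" the removed path before the next iteration.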
For example, assume that each connection or link in the network has equal latency.
Then, set S will contain the following unique pairs:
P1: s-e-t (from the first step), P2: s-a-b-t (a candidate path from the second step);
P1: s-e-t (the first step again), P2: s-d-c-t (a candidate path from the second step);
P1: s-a-b-t (the first step again, without s-e-t), P2: s-d-c-t (a candidate path from the second step).
The last pair will be picked since its two paths have matching latencies, i.e. a differential latency of zero. This is a brute-force search for the optimum and has several drawbacks. For instance, evaluating the solution as described here could take time that is exponential in the size of the input network, making it difficult to implement for practical applications.
In this example, the topology of the network 300, including the connections of each of the devices in the network and the latency information, is first retrieved. Then, the determination starts with computing a minimum-total cycle (MTC) for the first device 310 and seventh device 370 [s,t] pair. The MTC may be calculated using, for example, Dijkstra's shortest path computation, the Floyd-Warshall algorithm, or a similar approach. This gives two paths, P1 and P2.
Next, the longer path P2 is removed from the graph, which results in a residual graph.
The KSP on the residual graph is guaranteed to start with a path at least as expensive as P1, due to the MTC process that determined P1 and P2. Therefore, it is not necessary to iterate over shorter paths before reaching a path that costs as much as P1 (which is a solution we already have). This gives a lower bound on the solution at Dijkstra's (shortest path computation) complexity, which may be very close to the next solution if the min-total cycle spans a cut-set of the graph. P2 is selected as one of the final paths, and the iterative approach starts finding candidates for a solution pair with P2. The other path is guaranteed to be diverse from P2 and to have a latency just below, just above, or equal to that of P2. Additionally, heuristics may be employed during the iterations to cap the process based on a number of iterations, solution quality, gap, etc. In this example, the complexity is O(E+E log V)+O(V)+O(KV(E+V log V)), where K is the number of iterations spent on the residual graph. The first term is the MTC computation time, the second term is the path removal time, and the third term is the KSP computation time on the residual graph.
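The iterative stage just described can be sketched as follows, again assuming networkx and hypothetical names; P1 and P2 are taken as given from a prior min-total-cycle computation, P2 is removed from the graph, and the k-shortest paths of the residual graph are scanned for the candidate whose latency is closest to that of P2, subject to an iteration cap:

```python
# Iterative stage of the MTC-based heuristic (illustrative sketch).
import itertools
import networkx as nx

def pair_with_p2(G, s, t, p1, p2, max_iterations=20):
    """p1, p2: disjoint paths from a prior MTC computation, p2 being the longer one."""
    def lat(path):
        return sum(G[u][v]["latency"] for u, v in zip(path, path[1:]))

    target = lat(p2)                                  # P2 is kept as one final path
    residual = G.copy()
    residual.remove_edges_from(zip(p2, p2[1:]))       # P1 survives in the residual graph

    best, best_gap = p1, abs(lat(p1) - target)
    candidates = itertools.islice(
        nx.shortest_simple_paths(residual, s, t, weight="latency"),
        max_iterations)                               # cap on K iterations
    for candidate in candidates:                      # KSP on the residual graph
        gap = abs(lat(candidate) - target)
        if gap < best_gap:
            best, best_gap = candidate, gap
        if best_gap == 0:                             # latency equal to P2: stop early
            break
    # The lower-latency member of the pair is the work path.
    return (best, p2) if lat(best) <= target else (p2, best)
```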
Another example of a heuristic approach searches for bottlenecks in the solutions. For example, U.S. Pat. No. 8,891,360, expressly incorporated by reference in its entirety, details an umbrella algorithm for use in a shared mesh network. The approach detailed therein may be modified to have an objective of lowering latency differentials. First, an auxiliary graph is constructed (not shown) by assigning to every link/connection a weight that corresponds to the latency differential of that protect path with respect to the work path. Next, the ‘narrowest’ path in the auxiliary graph is computed. This may be done in real time by computing a minimum spanning tree (MST) and determining paths from the source node (s) to the destination node (t).
This minimizes the maximum differential latency among the candidate solutions that the umbrella algorithm considers. Finally, the narrowest path on the auxiliary graph is evaluated to pick protect paths (each link in the auxiliary graph corresponds to one protect path). Optionally, paths that backtrack on the auxiliary graph from ‘s’ to ‘t’ may be pruned, since coverage of the shared risk link groups (SRLGs) is guaranteed when these are removed from consideration. For this approach, the complexity analysis indicates that if the working path has ‘m’ hops, then the umbrella algorithm has mC2 iterations of Dijkstra. Additionally, to meet the differential latency objective, building the auxiliary graph based on the differential latencies takes O(V) per path, and mC2·O(V) in the worst case. Another O(m) may be used to evaluate the MST to get the narrowest path. So, the total time is mC2×O(E+V log V)+mC2·O(V)+O(m), where m can be V in the worst case and V is the number of vertices in the graph.
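The umbrella algorithm itself is detailed in the incorporated patent and is not reproduced here. Purely to illustrate the 'narrowest path' step, the sketch below assumes an auxiliary graph whose edge attribute 'diff' holds the differential latency of the corresponding protect path, and relies on the standard result that the unique path between two vertices of a minimum spanning tree minimizes the maximum edge weight along the path:

```python
# Narrowest (minimax) path on an assumed auxiliary graph (illustrative sketch).
import networkx as nx

def narrowest_path(aux, s, t):
    """aux: undirected graph; edge attribute 'diff' is a differential latency."""
    mst = nx.minimum_spanning_tree(aux, weight="diff")
    path = nx.shortest_path(mst, s, t)        # unique s-t path in the tree
    bottleneck = max(aux[u][v]["diff"] for u, v in zip(path, path[1:]))
    return path, bottleneck                   # links along 'path' name the protect paths
```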
The heuristics described herein may be used in a network planning system, such as Infinera's NPS, to compute differential latency optimized paths during network planning, for example. This would give an estimate of the resources and equipment required to support these features for a subset of, or all of, the services that need to be planned. The same system can also employ these heuristics to do a what-if/failure analysis by choosing this differential latency feature for some services. The heuristics may also be used in a network element such as Infinera's DTN-X, an L2 device, an L3 device such as a router, or a centralized software defined network (SDN) controller. Any network element or processing entity that is aware of the network topology and the resources in use can apply these heuristics to compute paths that are optimized for achieving differential latency for a service to be provisioned. As an example, they may be used to set up 1+1 dedicated protection paths. The heuristics may also be used to compute Infinera Fast SMP paths, provided sharing of bandwidth is also accounted for as part of the process. In addition, the heuristics may be employed at a higher layer. If they are required at a higher layer (besides the fiber or physical layer), for example the OTN (L1) layer or the packet (L2/L3) layer, then the work and protect paths may need to be SRLG diverse, in addition to having minimum differential latency, to avoid underlying single failures such as a fiber cut. In such fiber cut cases, the extensions need to be used to achieve the same or similar results. Alternatively, if diversity is required only within a layer, then the heuristics as described may be used without changes.
In any process by which a stream of data is partitioned across multiple connections, it is important to control the differential latency between the connections, such as between working and protection paths. If there is a very high differential latency between two connections in an aggregate group, then reconstituting the original stream can require a great deal of memory at the receiver, and it can add latency to the traffic flow. This leads to a costly and poor overall solution. The differential latency between the two connections of a pair is affected by a number of factors, including the line rate of each line and the effective transmission distance of each line of the pair. For example, Ethernet bonding places a restriction on the amount of differential latency that can be tolerated in an aggregate group. Examples of use cases for latency based routing include latency sensitive applications (e.g. stock trading, online gaming, video conferencing), applications that may have real-time requirements, and differential latency sensitive applications that are susceptible to failures (e.g. machine-to-machine interactions, video on demand, load balancing applications). It may not be critical to achieve the least latency on a given path in these applications, but it might be critical to achieve the least possible differential latency between the working and protection paths. These may include objectives such as: a latency-optimized work path and a diverse protection path, for latency sensitive applications; or a differential-latency-optimized [work+protect] path pair, for differential latency sensitive applications.
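As a rough, purely illustrative calculation (the figures below are assumptions, not values from the disclosure), the realignment buffer needed at the receiver grows as the product of the differential latency and the line rate:

```python
# Illustrative buffer sizing: differential latency x line rate.
line_rate_gbps = 100.0           # assumed per-connection line rate
differential_latency_ms = 5.0    # assumed latency gap between the two paths

buffer_bits = line_rate_gbps * 1e9 * differential_latency_ms * 1e-3
buffer_megabytes = buffer_bits / 8 / 1e6
# 100 Gb/s x 5 ms -> 500 Mb -> 62.5 MB of realignment buffer per flow
```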
Extensions to the heuristics described above may be used with shared risks that could be user configured, reconfigurable optical add drop multiplexers (ROADMs), optical expresses, etc. For example, computing an SRLG disjoint path pair is NP-complete by itself, so a first heuristic may be used for that computation. Then, the process may perform several iterations to get the best total cost for the solution path. Next, the heuristic described with reference to
Examples of the network devices mentioned above (e.g. devices 210-280) may include routers or switches, such as Infinera's DTN-X platform, that may have multiple functionalities such as L0 wavelength division multiplexing (WDM) transport capabilities, L1 digital OTN switching capabilities, and L2 packet switching capabilities. The network 100 may be optimized by enabling the packet switching feature in network devices using protocols such as MPLS-TP and switching LSPs, and packet switching in the network core can be performed by the devices.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any details described herein as “exemplary” are not necessarily to be construed as preferred or advantageous over other examples. Likewise, the term “examples” does not require that all examples include the discussed feature, advantage or mode of operation. Use of the terms “in one example,” “an example,” “in one feature,” and/or “a feature” in this specification does not necessarily refer to the same feature and/or example. Furthermore, a particular feature and/or structure can be combined with one or more other features and/or structures. Moreover, at least a portion of the apparatus described hereby can be configured to perform at least a portion of a method described hereby.
The terminology used herein is for the purpose of describing particular examples only and is not intended to be limiting of examples of the disclosure. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should be noted that the terms “connected,” “coupled,” or any variant thereof, mean any connection or coupling, either direct or indirect, between elements, and can encompass a presence of an intermediate element between two elements that are “connected” or “coupled” together via the intermediate element.
Any reference herein to an element using a designation such as “first,” “second,” and so forth does not limit the quantity and/or order of those elements. Rather, these designations are used as a convenient method of distinguishing between two or more elements and/or instances of an element. Thus, a reference to first and second elements does not mean that only two elements can be employed, or that the first element must necessarily precede the second element. Also, unless stated otherwise, a set of elements can comprise one or more elements.
Further, many examples are described in terms of sequences of actions to be performed by, for example, elements of a computing device. It will be recognized that various actions described herein can be performed by specific circuits (e.g., application specific integrated circuits (ASICs)), by program instructions being executed by one or more processors, or by a combination of both. Additionally, these sequences of actions described herein can be considered to be embodied entirely within any form of computer readable storage medium having stored therein a corresponding set of computer instructions that upon execution would cause an associated processor to perform the functionality described herein. Thus, the various aspects of the disclosure may be embodied in a number of different forms, all of which have been contemplated to be within the scope of the claimed subject matter. In addition, for each of the examples described herein, the corresponding form of any such examples may be described herein as, for example, “logic configured to” perform the described action.
Nothing stated or illustrated in this application is intended to dedicate any component, step, feature, benefit, advantage, or equivalent to the public, regardless of whether the component, step, feature, benefit, advantage, or the equivalent is recited in the claims.
Further, those of skill in the art will appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The methods, sequences and/or algorithms described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).
Although some aspects have been described in connection with a device, it goes without saying that these aspects also constitute a description of the corresponding method, and so a block or a component of a device should also be understood as a corresponding method step or as a feature of a method step. Analogously thereto, aspects described in connection with or as a method step also constitute a description of a corresponding block or detail or feature of a corresponding device. Some or all of the method steps can be performed by a hardware apparatus (or using a hardware apparatus), such as, for example, a microprocessor, a programmable computer or an electronic circuit. In some examples, some or a plurality of the most important method steps can be performed by such an apparatus.
In the detailed description above it can be seen that different features are grouped together in examples. This manner of disclosure should not be understood as an intention that the claimed examples require more features than are explicitly mentioned in the respective claim. Rather, the situation is such that inventive content may reside in fewer than all features of an individual example disclosed. Therefore, the following claims should hereby be deemed to be incorporated in the description, wherein each claim by itself can stand as a separate example. Although each claim by itself can stand as a separate example, it should be noted that, although a dependent claim can refer in the claims to a specific combination with one or a plurality of claims, other examples can also encompass or include a combination of said dependent claim with the subject matter of any other dependent claim, or a combination of any feature with other dependent and independent claims. Such combinations are proposed herein, unless it is explicitly expressed that a specific combination is not intended. Furthermore, it is also intended that features of a claim can be included in any other independent claim, even if said claim is not directly dependent on the independent claim.
It should furthermore be noted that methods disclosed in the description or in the claims can be implemented by a device comprising means for performing the respective steps or actions of this method.
Furthermore, in some examples, an individual step/action can be subdivided into a plurality of sub-steps or contain a plurality of sub-steps. Such sub-steps can be contained in the disclosure of the individual step and be part of the disclosure of the individual step.
While the foregoing disclosure shows illustrative examples of the disclosure, it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the examples of the disclosure described herein need not be performed in any particular order. Additionally, well-known elements will not be described in detail or may be omitted so as to not obscure the relevant details of the aspects and examples disclosed herein. Furthermore, although elements of the disclosure may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
The present Application for Patent claims priority to Provisional Application No. 62/233,387 entitled “Latency Based Routing” filed Sep. 27, 2015, and assigned to the assignee hereof and hereby expressly incorporated by reference herein.