The present invention relates generally to wireless ad-hoc networks and, more particularly, to systems and methods for minimizing energy consumption associated with communication in such networks.
Large distributed sensing and communication environments often do not have established communication infrastructures. In such environments, wireless ad-hoc networks are used to regulate communication among devices, often over a shared medium that may only accommodate a limited number of simultaneous transmissions at any given time. Wireless ad-hoc networks in such a shared medium may implement functionality at each device for allocating access to the medium so as to minimize the amount of data lost due to network limitations. In particular, transport protocols are used by wireless ad-hoc networks to specify the manner in which data is transmitted between devices. Typically, these transport protocols are designed to enhance transmission quality without considering energy efficiency or the varying reliability requirements of different types of applications.
Hence, there is a need for transport protocols capable of minimizing energy expenditure while overcoming various network limitations to meet the requirements of different applications.
According to one aspect, the invention relates to a method of setting transmission parameters at a first node for a second node in an ad hoc network, based on information transmitted from the second node. In this method, the first node transmits a plurality of packets to the second node along a path. Each packet collects path quality measurements, for example, in its header, as it traverses the path. Path quality measurements include, for example, the amount of energy required to transmit the packet along the path and a minimum availability rate of nodes along the path. The second node, upon receipt of the packets, aggregates the path quality measurements collected by the packets. Based on the aggregated data, the second node adjusts a feedback schedule it uses to send transmission parameters back to the first node. In one implementation, the feedback schedule is periodic in nature.
The second node sets a transmission parameter for the first node to use in future transmissions to the second node and transmits the parameter to the first node in a feedback message. Illustrative transmission parameters include an energy budget and a data transmission rate for the first node. The energy budget is determined based on the end-to-end energy expended in transmitting received packets to the second node. The data transmission rate is determined based on the minimum availability of nodes along the transmission path. In one implementation, the transmission parameters are set based on data collected in packets transmitted as part of initiating a connection between the first and second nodes. In one implementation, the transmission parameters are adjusted by the first node based on reliability requirements of the application with which a packet is associated.
The feedback message is transmitted according to the adjusted schedule. The second node adjusts the feedback schedule by sending feedback messages to the first node prior to a subsequent scheduled periodic message in response to detecting a significant and persistent change in the path between the first and second nodes. In one particular implementation, the node detects the significant and persistent change in the connection path using a flip-flop filter.
According to another aspect, the invention relates to a method of forwarding a packet based on a per-node loss tolerance associated with the packet. The method includes receiving a packet with a per-node loss tolerance at a first node and forwarding it to a next hop node. In one implementation, the node maintains copies of received packets in a cache implemented, for example, as an array of packet lists indexed by a hashing function.
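By way of a non-limiting illustration, the following sketch shows one way such a cache could be organized; the class and method names are hypothetical and are not drawn from this description.

```python
# Minimal sketch of an in-network packet cache: an array of packet lists
# indexed by a hashing function. Names (PacketCache, packet_id) are
# illustrative assumptions, not taken from the specification.
class PacketCache:
    def __init__(self, num_buckets=64, capacity=256):
        self.buckets = [[] for _ in range(num_buckets)]
        self.capacity = capacity
        self.size = 0

    def _bucket(self, packet_id):
        # Hashing function mapping a packet ID to a bucket index.
        return hash(packet_id) % len(self.buckets)

    def insert(self, packet_id, packet):
        if self.size >= self.capacity:
            self._evict_one()  # apply the cache replacement policy
        self.buckets[self._bucket(packet_id)].append((packet_id, packet))
        self.size += 1

    def lookup(self, packet_id):
        for pid, pkt in self.buckets[self._bucket(packet_id)]:
            if pid == packet_id:
                return pkt
        return None

    def remove(self, packet_id):
        bucket = self.buckets[self._bucket(packet_id)]
        for i, (pid, _) in enumerate(bucket):
            if pid == packet_id:
                del bucket[i]
                self.size -= 1
                return True
        return False

    def _evict_one(self):
        # Simple replacement policy for illustration: drop the oldest entry of
        # the first non-empty bucket. A real policy might prefer acknowledged
        # or energy-exhausted packets.
        for bucket in self.buckets:
            if bucket:
                bucket.pop(0)
                self.size -= 1
                return
```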
The node then determines whether the packet failed to reach its destination. If the packet fails to reach its destination, the node determines, based on the per-node loss tolerance associated with the packet, whether to retransmit the packet, and acts accordingly. The determination, in one implementation, is based in part on a per-packet energy budget. If, after determining that the next hop node has failed to receive the packet, the node also determines that the next hop node is unresponsive, it may attempt to transmit the packet to a second next hop node.
According to a third aspect, the invention relates to a stack architecture. The stack includes an interface between a transport layer and an application layer that maps data from an application executed at the application layer into packets at the transport layer. The stack also includes an interface between the transport layer and the link layer and/or the physical layer, that bypasses the intervening network layer. Via these interfaces, the link layer provides the transport layer characteristics of network links and the physical layer provides the transport layer information about packet transmission energy requirements. More particularly, over the interface between the transport layer and the link layer (also referred to as the data-link layer), the transport layer instructs the link layer to transmit a packet according to a number of transmission attempts computed based on a per-node loss tolerance parameter associated with the packet. The transport layer, in various implementations, is also configured to obtain characteristic information about nodes and links from a neighbor discovery module of the link layer via the interface. For example, the transport layer may use the interface to obtain path loss and packet loss rates.
According to a fourth aspect, the invention relates to a transport protocol for an ad hoc network. The transport protocol includes at least one module implemented on intermediate nodes of a network and at least one other module implemented at least at the end nodes of the network. The at least one intermediate node module is configured to forward received packets, limit retransmission of received packets based on a per-node loss tolerance associated with respective received packets, and update forwarded packets to reflect the amount of energy the intermediate node expended in forwarding the respective packets. In one implementation, the at least one module implemented on intermediate nodes is configured to limit retransmissions of the received packets failing to reach their destination according to per-packet energy budgets of the respective packets. The at least one module implemented on intermediate nodes may also be configured to cache a received packet until receipt of the packet by a destination node is acknowledged, the energy budget for the packet is expended, or a cache replacement policy implemented on the intermediate node requires the packet's deletion from the cache to make room for other received packets.
The at least one end node module is configured to set per-node loss tolerances for transmitted packets based on reliability requirements of applications associated with the respective transmitted packets, and transmit path characteristic messages to other end nodes of the network indicating characteristics of paths through the network derived from data obtained from headers of packets received from the respective other end nodes. In one implementation of the protocol, the path characteristic messages include a transmission sending rate for another node to use in transmitting packets to the end node transmitting the path characteristic message. The rate is determined based on availability data aggregated in headers of packets received by the end node over the path. In another implementation, the at least one module implemented on end nodes of the network is configured to set the per-node loss tolerances of respective packets based on reliability requirements of applications associated with respective packets.
The invention may be better understood from the following illustrative description with reference to the following drawings.
In one embodiment, the transport protocol of the present invention employs a variable destination-controlled feedback mechanism to set parameters for specifying transmission criteria of a packet. Exemplary transmission parameters include an energy budget, a sending data rate, and one or more retransmission requests in the case that the packet is missing or lost. According to this feedback mechanism, a destination node, such as node 108 of
The feedback mechanism of steps 208, 210 and 212 may be implemented using an adaptive flip-flop filter that switches between two exponentially-weighted moving average (EWMA) filters depending on the noisiness of the collected measurements that are reflective of path conditions. In general, a current sample mean
where α is a constant that determines the filter's reactivity with respect to the sample mean and β is a constant that determines the filter's reactivity with respect to the measured variance. When α is small, the corresponding filter is slow to change and is therefore stable. Alternatively, if α is large, the corresponding filter tends to be agile and is able to detect changes quickly. In addition, one or more control limits may be defined around sample mean
where d2 estimates the standard deviation of the sample in view of its range
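The following illustrative sketch shows one possible arrangement of such a filter, assuming the conventional flip-flop rule in which the agile estimate is used while it stays within control limits derived from the stable estimate; the constants and the switching rule are assumptions rather than the specific formulation referenced above.

```python
# Illustrative sketch of an adaptive flip-flop filter built from two EWMA
# filters, one stable (small alpha) and one agile (large alpha). The
# switching rule and constants below are assumptions for illustration.
class FlipFlopFilter:
    D2 = 1.128  # control-chart constant relating a 2-sample moving range to sigma

    def __init__(self, alpha_stable=0.1, alpha_agile=0.9, beta=0.25):
        self.a_stable = alpha_stable
        self.a_agile = alpha_agile
        self.beta = beta
        self.mean_stable = None
        self.mean_agile = None
        self.avg_range = 0.0
        self.prev_sample = None

    def update(self, sample):
        if self.mean_stable is None:  # first sample initializes both filters
            self.mean_stable = self.mean_agile = sample
            self.prev_sample = sample
            return sample
        # EWMA updates for the stable and agile filters.
        self.mean_stable += self.a_stable * (sample - self.mean_stable)
        self.mean_agile += self.a_agile * (sample - self.mean_agile)
        # Track the moving range; d2 converts it to a standard-deviation estimate.
        self.avg_range += self.beta * (abs(sample - self.prev_sample) - self.avg_range)
        self.prev_sample = sample
        sigma = self.avg_range / self.D2
        # Control limits around the stable mean: use the agile estimate while
        # measurements are quiet, fall back to the stable estimate when noisy.
        if abs(self.mean_agile - self.mean_stable) <= 3 * sigma:
            return self.mean_agile
        return self.mean_stable
```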
An availability rate is determined based on an aggregation of availability data attached to the header of a packet as it traverses the network. The availability rate is the minimum of all the available rates measured for the path, and each available rate represents a node's current available reception capacity as determined by its current rate of idle receive-wakeup slots. Let Â be this measured minimum available rate. At step 306, if it is determined that Â>β, where β is a configurable parameter proportional to the current sending rate, for example, between 1.01 and 2.00 times the current sending rate, then the sending data rate for the next packet transmission, i.e., r(i+1), is increased at step 310. For example, the future sending rate may be increased in proportion to the current available capacity, Â, as well as in inverse proportion to the current sending data rate, i.e., r(i), so as to improve fairness among competing flows. This principle is mathematically expressed as:
where δ is a configurable parameter setting how aggressively sending rates should be increased. However, if it is determined at step 312 that there is little available rate associated with the path, i.e., Â<α, where α<β, then the source node decreases its current sending data rate multiplicatively, such that:
r(i+1)=θr(i), 0<θ<1.
Otherwise, the sending data rate remains unchanged. In other examples, additional node-level information such as queuing delays or energy expended per successfully delivered bit may also be used in the determination of a sending data rate associated with a particular path. At step 314, the transport protocol applies the updated sending data rate and energy budget to the transmission of new packets or packets that need retransmission so as to minimize overall energy expenditure while accounting for changes in path conditions as well as satisfying delivery reliability requirements of different applications.
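By way of illustration, the following sketch captures this rate-update logic; the additive increase term is an assumption consistent with the description above (the precise increase formula is not reproduced in this text), and the threshold factors are hypothetical example values.

```python
# Sketch of the sending-rate update described above. The additive term
# delta * a_hat / r is an assumed form of the increase ("in proportion to the
# available capacity and in inverse proportion to the current rate"); the
# threshold factors are illustrative. Assumes r > 0.
def next_sending_rate(r, a_hat, alpha_factor=1.2, beta_factor=1.5,
                      delta=0.1, theta=0.5):
    alpha = alpha_factor * r  # lower threshold, proportional to the current rate
    beta = beta_factor * r    # upper threshold, 1.01-2.00 times the current rate
    if a_hat > beta:
        return r + delta * a_hat / r  # additive, fairness-aware increase
    if a_hat < alpha:
        return theta * r              # multiplicative decrease, 0 < theta < 1
    return r                          # otherwise, leave the rate unchanged
```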
In another aspect of the present invention, the variable destination-based feedback control mechanism of the transport protocol as described above with reference to
An energy budget, in contrast to a time-to-live parameter utilized in many routing protocols, not only takes into account a raw number of packet transmissions and retransmissions, but also takes into account an energy-related weighting associated with each transmission or retransmission. For example, the energy budget may be equal, proportional, or otherwise related to a total number of joules or other unit of energy available for use in forwarding the packet to its ultimate destination. Alternatively, the energy budget may simply weight a transmission by the distance between the transmitting and receiving nodes.
The energy needed to transmit a packet from one node to another varies based on a number of factors, including, for example, distance, channel conditions, and the hardware of the respective nodes. In this implementation, each node, when transmitting or retransmitting a packet, obtains information from the radio layer of the node as to the amount of energy needed to transmit the packet to its next hop, and decrements the energy budget accordingly.
In more sophisticated implementations, nodes evaluate packet energy budgets based on estimates or knowledge of the remainder of the path a packet must traverse in reaching its destination. For example, if a packet at a node must pass through three additional nodes to reach its destination, the node need not wait until the energy budget is fully expended before dropping the packet. It need only attempt to retransmit the packet until the remaining energy budget would be insufficient to enable the remaining three hops to be made.
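By way of illustration, the following sketch shows such per-hop bookkeeping together with the early-drop check; the function and parameter names are hypothetical, and the per-hop energy estimate is assumed to be supplied by the radio layer.

```python
# Sketch of per-hop energy budget bookkeeping with the early-drop check
# described above. The per-hop transmit energy would come from the radio
# layer; here it is passed in as an argument (an assumption).
def forward_or_drop(remaining_budget, tx_energy_this_hop, est_energy_remaining_path):
    """Return (should_transmit, new_remaining_budget)."""
    # Early drop: even if some budget remains, give up when it cannot cover
    # both this transmission and the hops still ahead of the packet.
    if remaining_budget - tx_energy_this_hop < est_energy_remaining_path:
        return False, remaining_budget
    # Otherwise transmit and decrement the budget by the energy this hop costs.
    return True, remaining_budget - tx_energy_this_hop
```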
In one implementation, the energy budget for a packet along a connection is based on the total energy expended in transmitting a connection establishment packet along a path from the source to the destination. In this case, the energy budget is set to a combination of the total energy, a reliability factor, and/or a volatility factor (to account for a likelihood of changing network topology). The energy budget may then optionally be updated as more information about the connection between the source and destination is obtained, for example, from acknowledgement messages.
As indicated above, in addition to, or instead of, utilizing a total path energy budget, in various implementations, the transport protocol utilizes a loss tolerance parameter corresponding to a particular reliability requirement to limit energy expenditure along a path. In such implementations, packets originating from applications requiring higher reliability are granted a lower loss tolerance. Packets originating from applications having lower reliability requirements, for example, VOIP, are granted a higher loss tolerance.
Furthermore, let pi be the link loss probability over the link from node i to the next hop. If process 400 determines at step 408 that pi≦(1−q), then process 400 is adapted to only attempt to transmit the packet once from node i at step 410. Otherwise, the number of transmission attempts ti from node i is calculated at step 412 as:
At step 414, before the packet is forwarded from node i to the next hop in accordance with the calculated transmission attempts, process 400 adjusts the loss tolerance carried in the header of the packet to ensure that any remaining retransmission attempts calculated for node i are not used by downstream nodes. In particular, process 400 may adjust the loss tolerance encoded in the packet header as follows:
This energy-update approach 400 illustrated in
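The formulas referenced at steps 412 and 414 are not reproduced in this text; the following sketch is therefore only an assumed form of the attempt computation, interpreting q as the per-node delivery probability implied by the packet's loss tolerance.

```python
import math

# Sketch of steps 408-412 above under an assumed interpretation: q is the
# per-node delivery probability implied by the packet's loss tolerance
# (0 < q < 1), and p_i is the link loss probability to the next hop. The
# attempt count is the smallest t_i for which the chance that all t_i
# link-layer attempts fail, p_i ** t_i, is no more than 1 - q.
def transmission_attempts(p_i, q):
    if p_i <= (1.0 - q):
        return 1  # step 410: a single attempt already meets the requirement
    return math.ceil(math.log(1.0 - q) / math.log(p_i))  # step 412 (assumed form)
```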
In another aspect of the invention, the transport protocol implements an in-network caching scheme to support the in-network energy control mechanism described above with reference to
Other features of a transport protocol include a receiving-wakeup controller configured to adjust the probability of a node waking up to receive packet transmissions from other nodes. This adjustment may be made based on a current utilization level of the wakeup slots associated with the node. Hence, the node needs to be able to estimate its own resources such as rate of energy consumption and available energy. Exemplary types of a receiving-wakeup controller include a multiple-input-multiple-output (MIMO) control for simultaneously measuring and regulating multiple resources of a node and a stochastic control for taking into consideration probabilistic disturbances and noises at a node.
In yet another aspect of the present invention, an in-network deflection routing mechanism is employed by the transport protocol to recover from a short-term local delivery error at an intermediate node. In certain examples, the deflection routing mechanism is initiated based on a next hop being temporarily down or non-responsive or an occurrence of buffer overflow at the next hop. The scope of the deflection may comprise a single hop or multiple hops. In a single-hop deflection scheme, a current node may choose an immediate neighboring node to re-route a packet if the new next hop from the current node to the neighboring node has a lower path weight than the original next hop. However, if no such neighboring node exists, the current node is adapted to send a signal to its predecessor node to reroute a copy of the packet from the cache of the predecessor node. In a multi-hop deflection scheme, a loose-source routing technique is performed that allows a current node to traverse its neighborhood of nodes, and the scope of nodes that are candidates for such deflection routing may be controlled by the stability of the neighborhood.
The first field 504 of the transport layer packet header section 502 contains a 16-bit source port number of a source node. The second field, the destination port number field 505, stores a 16-bit port for the destination node associated with the packet 500. The transport layer packet header section 502 also includes two energy-related fields, fields 506 and 507. Field 506 stores a total energy budget for the packet, and field 507 stores the total energy used to date in attempting to transmit the packet 500 to its destination. Field 508 stores a packet ID number, field 509 stores a minimum availability rate of the nodes traversed along the path, and field 510 stores a loss tolerance parameter for the packet 500. Field 511 stores a packet type identifier (e.g., data, acknowledgement or connection establishment), and flag field 512 stores flags for various management functions. In addition, the transport layer packet header section 502 includes a deadline field 513 that indicates a real-time expiration time for the packet, which, if passed, results in the packet being dropped even if the packet has energy remaining in its budget.
The last field of the transport layer packet header section 502, a feedback field 520, is configured to carry all cumulative positive acknowledgments, selective negative acknowledgements, and IDs of packets that have been retransmitted by one or more intermediate nodes and, therefore, do not need to be retransmitted. Furthermore, the feedback field 520 includes bit vectors encoding contiguous blocks of successfully and/or unsuccessfully transmitted packets, bit vectors encoding missing packets, a current feedback-reporting period used by the destination node, and the sending data rate and per-packet energy computed by the destination node. In some implementations, when the feedback field 520 includes a bit vector indicating data packets that were not successfully received, it is assumed that all packets not included in the bit vector have been successfully received.
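By way of a non-limiting illustration, the sketch below packs the fixed fields 504-513 described above; the field widths other than the 16-bit port numbers are assumptions, as is the representation of the variable-length feedback field 520 as trailing raw bytes.

```python
import struct

# Illustrative packing of the fixed header fields 504-513. Widths are assumed:
# 32-bit energy, packet ID, availability-rate, and deadline fields; 8-bit loss
# tolerance, packet type, and flags. The feedback field 520 is appended as
# opaque bytes.
HEADER_FMT = "!HHIIIIBBBI"  # network byte order

def pack_header(src_port, dst_port, energy_budget, energy_used,
                packet_id, min_avail_rate, loss_tolerance,
                packet_type, flags, deadline, feedback=b""):
    fixed = struct.pack(HEADER_FMT, src_port, dst_port, energy_budget,
                        energy_used, packet_id, min_avail_rate,
                        loss_tolerance, packet_type, flags, deadline)
    return fixed + feedback
```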
This stack architecture may be implemented at any node in a wireless network, such as in network 100 of
For example, one type of cross-layer interaction implemented in the stack architecture 600 that skips an intervening layer of the stack is between the transport layer 602 and the physical layer 604 (also referred to as the radio layer in wireless nodes) of stack 600. The physical layer 604 is generally configured to deliver data bits between adjacent nodes in a network environment, and it achieves such data delivery using, for example, two types of radios: a low-data-rate, energy-optimized hail radio 612 and a high-data-rate, frequency-hopping data radio 614. In operation, the hail radio 612 wakes up the data radio 614 for packet delivery only when necessary. The hail radio 612 also establishes and maintains time synchronization of the data radio 614. In alternative implementations, the physical layer 604 may employ a single one-mode or multi-mode radio. By closely interacting with the physical layer 604, the transport layer 602 is able to obtain packet-level transmission quality information such as link path loss or received signal strength indication (RSSI). The transport layer 602 is also able to use the received information to compute packet-level transmission parameters such as per-packet transmit energy, which allows the transport protocol to budget an appropriate level of power for reliable one-hop transmission, in addition to keeping track of energy consumption.
Another type of cross layer interaction implemented in various implementations of the stack architecture that bypasses an intervening stack layer is an interaction between transport layer 602 and the data-link layer 606 of stack 600. In general, the data link layer 606 is adapted to generate reports regarding characteristics of various links from a local node to its neighboring nodes, herein referred to as “link metrics,” as well as characteristics of the local node itself, herein referred to as “node metrics.” Exemplary node metrics include an available receiving bandwidth. Exemplary link metrics include path loss measured for each link and a packet loss rate measured based on the fraction of unsuccessful link-layer transmissions to each neighbor. The packet loss rate may be used by the transport protocol to compute, for each packet, a number of link-layer transmission attempts needed to meet an application's reliability requirement. In one implementation, the link metrics are computed at a link characterization module 616 of the data-link layer 606. In one implementation, the metric reports, including the link metrics computed at the link characterization module 616, are provided to the transport layer 602 via a neighbor discovery module 618 of the data link layer 606 and a routing and path management module 626 of the network layer 608.
Furthermore, the data-link layer 606 is configured to support multiple transmission attempts at the local node, where the number of transmission attempts is calculated through the interaction between the transport layer 602 and a DLL module 628 of the data link layer 606. For instance, before transmitting a packet, the DLL module 628 computes the energy that is to be expended for the packet transmission and subsequently subtracts this energy from the total energy budget of the packet. The DLL module 628 computes this allowable per-hop energy expenditure based on a size of the packet and transmission power of the packet which are stored in a radio profile of the packet along with other transmission parameters. Moreover, in order for the transport layer 602 to make sophisticated choices about packets, the transport layer 602 needs to know the fate of each packet after transmission. To this end, the DLL module 628 notifies the transport layer 602 of the transmission status of each packet, such as whether the packet is dropped or transmitted successfully. In addition, the transport layer 602 may instruct the data-link layer 606 to drop a packet when the remaining energy budget for the packet is not enough for another transmission. For example, in the case that a transmission attempt of a packet is not successful, the DLL module 628 checks with the radio profile of the packet to see if any transmission attempts remain or if the packet should be dropped. If there are remaining transmission attempts, the DLL module 628 proceeds to check if there is enough energy for another transmission. If not, the packet is dropped.
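The following sketch illustrates this interaction between the DLL module and the radio, including the per-attempt energy check; the object, field, and method names are hypothetical and serve only to show the control flow described above.

```python
# Sketch of the DLL module's retry loop described above: each attempt's
# energy cost is charged against the packet's budget, the attempt counter
# from the radio profile bounds retries, and the transport layer is told the
# final outcome. Names are illustrative assumptions.
def dll_transmit(packet, radio, notify_transport):
    attempts_left = packet.radio_profile.max_attempts
    while attempts_left > 0:
        cost = radio.estimate_tx_energy(packet.size, packet.radio_profile.tx_power)
        if cost > packet.energy_budget_remaining:
            notify_transport(packet, "dropped")  # budget cannot cover another attempt
            return False
        packet.energy_budget_remaining -= cost   # charge this attempt to the budget
        packet.energy_used += cost
        if radio.send(packet):                   # successful link-layer delivery
            notify_transport(packet, "delivered")
            return True
        attempts_left -= 1                       # attempt failed; retry if allowed
    notify_transport(packet, "dropped")          # attempts exhausted
    return False
```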
In certain embodiments, to deliver data packets from a local node to a neighboring node, the data link layer 606 uses a slotted probabilistic protocol that employs pseudo-random codes to implement uncorrelated, but predictable, schedules for the hail radio of the physical layer 604 to wake up the neighboring node. For example, when the data-link layer 606 associated with the local node predicts that the hail radio of its neighboring node is on, the local node uses its own hail radio 612 to request the neighboring node to wake up its data radio for data reception. One suitable scheduling method is described in U.S. patent application Ser. No. 11/078,257, entitled, “Methods and Apparatus for Reduced Energy Communication in an Ad Hoc Network,” the entirety of which is incorporated herein by reference.
A third type of cross-layer interaction is defined between the transport layer 602 and the network layer 608 of stack 600. The network layer 608 is configured to collect link-state information from neighboring nodes using, for example, a hazy-sighted scoping technique such that more frequent link-state updates are received from closer neighboring nodes than from those that are further away. One suitable technique is described further in U.S. patent application Ser. No. 11/347,963, entitled, “Methods and Apparatus for Improved Efficiency Communication,” the entirety of which is incorporated herein by reference. In addition, the network layer 608 uses knowledge of the transmission power at the neighboring nodes to build a connection set that is reflective of current link-state dissemination. Based on such link-energy topology, the network layer 608 is able to compute minimum link-weight paths to destinations and compile the computed information in a forwarding table. Each link weight of the forwarding table may be computed based on the energy needed to execute a reliable one-hop transmission. In certain examples, forwarding tables for all known destinations are stored in the routing and path management module 626 of the networking layer 608. Hence, through its interaction with the network layer 608, the transport layer 602 is able to use the forwarding tables to accurately transmit packets to their destinations.
Furthermore, a forwarding module 620 of the network layer 608 allows the transport protocol to influence transmission parameters used by the data link layer 606 for transmitting packets. Exemplary transmission parameters of a packet that are adjustable by the transport protocol include transmission power, number of link access attempts, number of data transmissions, and packet priority. These transmission parameters are stored in a radio profile of the packet which is registered with the forwarding module 620 of the network layer 608 whenever a transmission parameter is changed by the transport layer 602.
A fourth type of cross layer interaction is defined between the transport layer 602 and the application layer 610. An application 622 in the application layer is adapted to interface with the transport layer through an API 623 that directs messages to the appropriate transport protocol. For example, the API 623 may direct packets to the JTP module 624 to take advantage of the energy efficiency provided by the systems and methods described herein, or they may be directed to the standard transport protocol modules, such as a UDP module 625 or a TCP module 627. The JTP module 624 maps application-level data to and from individual packets. For example, after detecting a delivery requirement of an application in the application layer 610, the transport layer 602 is able to instruct the lower layers in the stack architecture 600 to translate the delivery requirement into specific energy demands or budgets for individual packets, where each energy budget governs the manner in which the corresponding packet is transmitted in an ad-hoc network. Thus, the transport layer 602 serves as an energy-conscious interface between the application layer 610 and the lower layers. This arrangement allows the transport layer 602 to determine variability in delivery service requirements for different applications and, in response, provide suitable levels of packet transmission reliability corresponding to the application-level data. Hence, instead of providing different transport protocols for different applications, the stack architecture 600 only needs to provide a single protocol that offers a range of reliability levels adaptable to different application requirements.
With continued reference to
The second category of modules 704 is only implemented on end nodes, i.e., source and destination nodes of a wireless network. Transfer module 710 is an example of such a module. Transfer module 710 is responsible for performing numerous tasks such as managing connections, handling timeouts, implementing one or more congestion avoidance mechanisms, and controlling feedback rates of packet retransmissions. The transfer module 710 further includes two sub-modules, a port manager 712 and a connection manager 714. The port manager 712 is configured to assign and register ports to applications in the application layer 610. For example, an application may send a request to the port manager 712 for a specific port assignment or let the port manager assign to it a free port. The connection manager 714 is configured to maintain a registry of all connections, in addition to maintaining a registry for “listening” applications (i.e., applications configured to identify and accept new connection requests) and a separate registry for established connections. Statistics gathered by the transfer module 710 regarding each connection are also stored in the respective registries. The connection manager 714 further categorizes each entry in the registry of established connections into an incoming connection, an outgoing connection, or both, depending on whether the connection is unidirectional or bidirectional. The connection manager 714 is also responsible for properly terminating each connection when appropriate, regardless of whether the connection is terminated due to timeouts or at the request of an application when the transfer is complete. Following a termination, the connection manager 714 releases all pertinent buffers, cancels any set timers, and, in the case of a normal termination, ensures that the transfer is fully complete. Otherwise, the transfer module 710 informs the application of an abnormal termination.
In operation, for each received packet, the transfer module 710 stores information in the header of the packet in the connection registry of the connection manager 714 and uses the information to dynamically adjust feedback rates and transmission parameters so as to avoid congestion, achieve fairness and adapt to changes in network conditions. At a source node, the transfer module 710 has the additional responsibility of responding to retransmission requests made by a destination node. In particular, a transfer module 710 implemented at a source node ensures that all requested packets are retransmitted and such in-network recovery does not affect fair rate resource allocation in the network.
At each end node, a queuing module 717 is implemented for managing queues of packets associated with incoming and outgoing connections. Since buffer management is different at source and destination nodes, the queuing module 717 is able to adapt its functionality to the underlying node type. For example, to process incoming packets at a destination node, the queuing module 717 stores received packets in a queue until the packetization module 718 requests them. The queuing module 717 is also able to provide a list of missing packets to the packetization module 718, remove packets from the queue upon receiving a request from the packetization module 718, remove duplicated packets, and inform the transfer module 710 whenever the queue becomes full so that the transfer module 710 applies flow control to the source node. Furthermore, in the case that a missing packet is not essential for meeting QoS requirements, the queuing module 717 is able to “fake” the reception of packets when instructed to do so by the packetization module 718. Alternatively, to process outgoing packets from an application of a source node, the queuing module 717 stores the packets in two queues, a ready queue and a pending queue, where the ready queue is used to store packets that are ready to be sent and the pending queue is used to store packets that have been sent, but are not yet acknowledged by the destination node. Upon receiving packets from the packetization module 718 and the transfer module 710, the queuing module 717 is responsible for inserting the packets into the ready queue and the pending queue, respectively. In the case that the ready queue is full, the queuing module 717 notifies the packetization module 718 to stop sending packets and, in the case that the pending queue is full, the queuing module 717 notifies the transfer module 710 to stop sending packets. The queuing module 717 is also adapted to remove from both queues packets whose receptions have been acknowledged by the destination node. In addition, the queuing module 717 moves all packets for which retransmission is requested to the head of the ready queue and moves packets that have been retransmitted by intermediate nodes to the pending queue.
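By way of a non-limiting illustration, the sketch below shows one possible organization of the source-side ready and pending queues described above; the class and method names are hypothetical.

```python
from collections import deque

# Sketch of the source-side queues described above: a ready queue for packets
# waiting to be sent and a pending queue for packets sent but not yet
# acknowledged. Names are illustrative assumptions.
class SourceQueues:
    def __init__(self, max_ready=128, max_pending=128):
        self.ready = deque()
        self.pending = deque()
        self.max_ready = max_ready
        self.max_pending = max_pending

    def enqueue_new(self, packet):
        """Packet handed down by the packetization module."""
        self.ready.append(packet)
        return len(self.ready) < self.max_ready  # False asks the caller to pause

    def mark_sent(self, packet):
        """Packet handed over by the transfer module after transmission."""
        self.pending.append(packet)
        return len(self.pending) < self.max_pending

    def acknowledge(self, packet_ids):
        """Remove acknowledged packets from both queues."""
        ids = set(packet_ids)
        self.ready = deque(p for p in self.ready if p.packet_id not in ids)
        self.pending = deque(p for p in self.pending if p.packet_id not in ids)

    def request_retransmission(self, packet_ids):
        """Move packets the destination asks for to the head of the ready queue."""
        ids = set(packet_ids)
        requested = [p for p in self.pending if p.packet_id in ids]
        self.pending = deque(p for p in self.pending if p.packet_id not in ids)
        self.ready.extendleft(reversed(requested))  # preserve order at the head
```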
Furthermore, at each end node, one or more packetization modules 718 are implemented to meet reliability demands of different applications or types of applications corresponding to each module. Each packetization module 718 is responsible for informing an application of a connection error as well as initiating, establishing and terminating a connection on behalf of the application. Similar to the queuing module 717, a packetization module 718 has varied functionalities depending on the underlying node type. At a source node, the packetization module 718 is responsible for receiving data frames from an application and transforming the data frames into valid data packets before sending them to the queuing module 717. The packetization module is also responsible for assigning a loss tolerance to each packet based on the QoS requirements of an application corresponding to the packet. At a destination node, the packetization module 718 is responsible for transforming data packets received from the queuing module 717 to application-level data frames and delivering the frames to the corresponding application. The packetization module 718 is also adapted to specify an energy budget for a packet, terminate a connection when requested by an application, and create a NACK portion of a feedback that is forwarded to the transfer module 710.
In certain implementations, portions of the JTP modules 700 or 750 fitting into the first category of modules 702 are implemented at the link layer 606 in the stack architecture 600, for example in the DLL module 628, as opposed to at the transport layer 602. These portions, however, may maintain direct communication links with portions of the JTP modules 700 and 750 implemented at the transport layer.
The modules described above may be implemented as hardware circuits comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for instance, be organized as an object, procedure or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
Indeed, a module of executable code could be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. The executable code may be stored on one or more computer readable media, such as magnetic disks, optical disks, holographic disks, or integrated circuit-based memory, such as flash memory.
If the node is not the destination node, but is a node on the path to the destination node, the node determines, using its forwarding table, whether a next hop node on the way to the destination node is within the radio range of the node (decision block 807). For example, while the node may originally have been on a path to the destination node, a subsequent intermediate node in the path may have moved out of radio range since the transmission of a previous packet. If no next hop is available, the node transmits a NACK message back to the source (step 808) indicating that the prior path is no longer viable, referred to herein as a “bad path NACK” or “BP NACK”. The node then stores the data packet in its cache (decision block 810, and steps 812 and 814). At decision block 810, the node determines whether its cache is full. If the cache is full, the node applies its cache replacement policy to remove a packet from the cache (step 812). After removing a packet (step 812), or if the cache is determined to have room (at decision block 810), the received packet is stored in the cache (step 814), and the node begins processing the next packet (step 816).
If, at decision block 807, the node determines that a next hop is available, the node proceeds to determine whether the packet has sufficient energy left in its budget to forward it (decision block 818). If forwarding the packet would result in the energy budget of the packet being exceeded, the method proceeds to decision block 810 to store the packet in the cache.
If, at decision block 818, the packet has sufficient energy left in its budget to be forwarded, the node checks whether the packet's deadline has passed (decision block 820). If the deadline has passed, the packet is dropped (step 822). Otherwise, the node determines whether the packet's loss tolerance parameter allows for its retransmission; if the packet has already been transmitted a maximum number of times at the link layer, as determined by the packet's loss tolerance parameter (decision block 824), the packet is not transmitted again.
Finally, if the packet has a next hop available (decision block 807), has sufficient energy left in its energy budget (decision block 818), has not passed its deadline (decision block 820), and has not already been retransmitted a maximum number of times as determined based on its loss tolerance requirements (decision block 824), the node will update the header of the packet to adjust its energy expended and loss tolerance data fields (step 826), and the node will transmit the packet to its next hop (step 828). Unless the node later receives a BP NACK indicating the packet was not received because of path failure, the node places the packet in its cache, beginning with decision block 810. If the node receives a BP NACK, the next hop node is removed from the node's forwarding table (step 832) and the method returns to step 806 to determine whether the packet should be retransmitted.
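A condensed sketch of this forwarding decision is shown below; the helper objects and attribute names are hypothetical, and the handling of a packet that has exhausted its link-layer attempts (retention in the cache) is an assumption.

```python
# Condensed sketch of the forwarding decision of method 800 above
# (decision blocks 807, 818, 820, 824 and steps 826/828). Helper names
# (forwarding_table, cache, send_bp_nack, now) are illustrative assumptions.
def forward_packet(node, packet):
    next_hop = node.forwarding_table.next_hop(packet.destination)
    if next_hop is None:                                   # block 807: no viable next hop
        node.send_bp_nack(packet.source, packet.packet_id) # step 808
        node.cache.insert(packet.packet_id, packet)        # blocks 810-814
        return
    cost = node.radio.estimate_tx_energy(packet.size, packet.tx_power)
    if packet.energy_used + cost > packet.energy_budget:   # block 818: budget exceeded
        node.cache.insert(packet.packet_id, packet)
        return
    if packet.deadline < node.now():                       # block 820: deadline passed
        return                                             # step 822: drop the packet
    if packet.link_attempts_used >= packet.max_link_attempts:  # block 824
        node.cache.insert(packet.packet_id, packet)        # not transmitted again (assumed: cached)
        return
    packet.energy_used += cost                             # step 826: update header fields
    node.radio.send(packet, next_hop)                      # step 828: forward to next hop
    node.cache.insert(packet.packet_id, packet)            # keep a copy pending feedback
```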
The method 850 begins with the node receiving a feedback packet (step 852). If the receiving node is determined to be the destination of the feedback packet, i.e., the source of messages for which path feedback is being provided, at decision block 854, the packet is passed up the stack (step 856). Otherwise, the packet is analyzed to extract packet acknowledgement information. The acknowledgement information may be in the form of a bit vector identifying received packets, or a bit vector identifying packets for which retransmission is requested. In the former case, the node assumes that the destination node (i.e., the source of the feedback packet) is requesting retransmission of all packets not identified in the bit vector. In the latter case, the node assumes the destination node successfully received all packets not identified in the bit vector. In either case, successfully received packets, whether specifically identified or assumed based on omission from a retransmission request, are removed from the node's cache (step 858). All packets for which retransmission is explicitly or implicitly requested are then slated for retransmission according to method 800, beginning at decision block 807.
The feedback packet, in addition to explicitly or implicitly identifying packets for which retransmission is requested, includes a list, referred to as the recovered bit vector, of which of the identified packets have been retransmitted by nodes along the path back from the destination node to the source node. The node processing the feedback packet updates the recovered bit vector in the feedback packet based on which requested packets remain in its cache and are capable of retransmission in accordance with the cached packets' energy budgets, deadlines, and loss tolerance parameters (step 862).
After the recovered bit vector is updated (step 862), the node determines whether a next hop node is available for the feedback packet (decision block 864). If no next hop node is available, the node sends a BP NACK back to the destination node (i.e., the feedback packet source) (step 866) and drops the feedback packet (step 868). If a next hop node is available, the node transmits the feedback packet to that node (step 870).
After the feedback packet is forwarded (step 870), the node waits for a NACK message. If no NACK is received (decision block 872), the node drops the feedback packet (step 868) assuming its transmission was successful. If a NACK is received (decision block 872), the NACK is analyzed to determine its type. If the NACK is a BP NACK, the next hop node is removed from the forwarding table (step 876), and the node determines whether another next hop node is available by returning to decision block 864. If the NACK merely indicates the feedback packet was not successfully received, for example, it was corrupted during transmission, the method 850 returns directly to step 864.
The invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative, rather than limiting of the invention.
This application claims priority from U.S. Provisional Application No. 60/840,417, filed Aug. 25, 2006, the disclosure of which is incorporated herein by reference in its entirety.
The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Contract No. NBCHC050053 awarded by DARPA ATO.