Different communications across a network can have different levels of priority depending on the purpose of the communication. Typically, user operations such as file access, email access and web services are considered of higher priority than background operations such as downloading program updates, synchronizing application data and backing up local files. The user operations are considered higher priority because delays in the traffic are noticeable to the user and can lead to pauses in operations, whilst the background operations require little or no user interaction and therefore a user is unlikely to notice any delays.
In one example of a background operation, program updates can be downloaded to the user's system in the ‘background’, while the user works normally on other tasks in the ‘foreground’. After the program updates are downloaded, the user can be notified of the presence of the program updates on his or her system and prompted for authorization to install the new updates. The higher priority communications are referred to herein as ‘foreground transfers’ whilst the lower priority communications are referred to as ‘background transfers’.
When a network becomes busy or a foreground transfer requests more bandwidth, it is beneficial if the background transfers back-off and reduce their bandwidth so that the foreground transfers are given priority. Ideally the background operations do not impact the foreground transfers.
Some approaches have been proposed for managing background transfers; however, these typically require specific intelligence in the network to achieve bandwidth management of the background transfer. This requirement limits the applicability of such approaches to those networks where the specific intelligence is available.
Other approaches have been proposed that operate without intelligence in the network, by controlling the rate at which background transfers are sent to a receiving node. Typically, such approaches are implemented at the sending node, as this node can directly control the data that is sent. However, whilst implementation at the sending node is simpler, the sending node cannot directly observe the effect of the network and its conditions on the transfer. Furthermore, these techniques are limited in their responsiveness to changes in network conditions, and are not adaptable to different types of networks with different characteristics. In addition, they do not comply with standardized transport protocols.
The embodiments described below are not limited to implementations which solve any or all of the disadvantages of known background data transfer techniques.
The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
Control of background data transfers is described. In an embodiment, a background data transfer is controlled at a receiver node by measuring a time period taken to receive from a sender node a data sequence of the same size as a receive window. The time period is used to evaluate available network capacity, and the network capacity used to calculate a new window size. The new window size is applied and communicated to the sender node. In another embodiment, a background data transfer is controlled at a receiver node by measuring a quantity of data received from a sender node during a first control interval. The measured quantity is used to evaluate available network capacity, and the network capacity used to calculate a new receive window size and a second control interval duration. The new window size is applied for the second control interval, and communicated to the sender node.
Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
Although the present examples are described and illustrated herein as being implemented in a transport control protocol (TCP) system, the system described is provided as an example and not a limitation. As those skilled in the art will appreciate, the present examples are suitable for application in a variety of different types of networking systems using different protocols.
TCP is a connection-oriented protocol; that is, two nodes in a communications network establish a TCP connection with one another before they exchange data using TCP. Each end of a TCP connection has a finite amount of buffer space. A receiving TCP node only allows the other end to send as much data as the receiver has buffers for. This prevents a fast host from overflowing the buffers on a slow host. TCP's flow control is provided by each end advertising a window size. This is the number of bytes, starting with the one specified by the acknowledgement number field, that the receiver is willing to accept. The window size therefore defines the amount of unacknowledged TCP data that can be in flight to the receiver node at the same time.
A sliding window method is used to enable the flow control mechanism. In high level terms, this window can be thought of as sliding along as data is received, acknowledged, and read from the receiver buffer. The window is considered to close or shrink (i.e. get smaller) as data is received (but not read from the buffer), and open (i.e. get larger) when the receiving process reads data, freeing up space in its TCP receive buffer. When the window shrinks to become a zero window then the sender is prevented from transmitting any data. In that case, the buffers at the receiver are full; the receiver advertises a zero window and the sender stops transmitting more data. When the receiver later empties its buffers (at least partially) a window update (which is a type of ACK message) is sent advertising that the receiver can now receive a specified number of bytes. This window update message does not acknowledge any new data; rather, it just opens the window.
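By way of illustration only, the flow-control accounting described above can be sketched as follows (in Python); the class and method names are illustrative and do not correspond to any particular TCP implementation:
# Sketch of receive-window accounting: the advertised window is the
# buffer space not yet occupied by data that has been received but not
# yet read by the receiving process.
class ReceiveWindow:
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size   # total receive buffer, in bytes
        self.unread = 0                  # received but not yet read

    def on_data_received(self, nbytes):
        # The window 'closes' as data arrives and sits in the buffer.
        self.unread += nbytes

    def on_data_read(self, nbytes):
        # The window 'opens' as the receiving process reads the data.
        self.unread -= nbytes

    def advertised_window(self):
        # A zero window is advertised when the buffer is full, and the
        # sender then stops transmitting until a window update is sent.
        return max(0, self.buffer_size - self.unread)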
Described herein are examples of a technique to control a background data transfer by controlling the size of the receive window at the receiver node. The technique can be enabled on a per-connection basis to permit the connection to behave as low-priority (i.e. background) with respect to other traffic on the network. This works by using the round trip time (RTT) and/or the recent bandwidth achieved, to adjust the advertised receive window on the TCP connection. By implementing the technique at the receiver node (rather than the sender node), the effect of the network conditions on the transfer can be directly measured and the background transfer adapted accordingly.
Regular TCP in normal operation has a full receive window's worth of data in flight in the network at any given time. When a network has a bottleneck link, this results in the receive window's worth of data, minus the bandwidth delay product, being queued at the bottleneck link (where the bandwidth delay product is the product of the link capacity and its end-to-end delay). The background transfer technique described herein finely tunes the advertised receive window so that the sender node uses the maximum available data rate, utilizing idle bandwidth without introducing queues at the bottleneck.
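By way of a numerical illustration (the figures below are illustrative and do not appear in the description above), a 10 Mbit/s bottleneck link with a 50 ms end-to-end delay has a bandwidth delay product of 62,500 bytes, so a full 128 kB receive window would leave 68,572 bytes queued at the bottleneck:
# Illustrative calculation of how much of a full receive window ends
# up queued at a bottleneck link (example figures only).
link_capacity_bps = 10_000_000      # 10 Mbit/s bottleneck link
end_to_end_delay_s = 0.05           # 50 ms end-to-end delay

bdp_bytes = (link_capacity_bps / 8) * end_to_end_delay_s   # 62,500 bytes
receive_window_bytes = 128 * 1024                          # 131,072 bytes

queued_at_bottleneck = max(0, receive_window_bytes - bdp_bytes)
print(f"BDP: {bdp_bytes:.0f} bytes, queued: {queued_at_bottleneck:.0f} bytes")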
Tuning the window in this way keeps the latency of interactive traffic close to the latency that would be observed if the background transfer connection were not present. In addition, the background transfer technique uses the change in the bandwidth obtained on its connection to infer the presence of packets from other connections, and when such contention is detected the background transfer technique reduces the receive window still further so that a regular TCP connection obtains most of the bandwidth on the bottleneck link.
Reference is first made to
By way of example, the receiver node 101, which can be a PC, is to receive a background transfer (e.g. a program update) from the sender node 102, which can be a network server. At the same time as the background transfer of data from the sender node 102 to the receiver node 101, there can be many other operations, both foreground and background transfers, between the receiver node 101, the sender node 102, and the plurality of other nodes 103. The receiver node 101 includes a receive window 104 associated with the background connection.
Reference is now made to
In both
Previous background transfer control techniques attempted to set the size of the receive window by measuring the throughput of a connection over the course of a fixed control time-interval, and determining whether to increase or decrease the receive window at the end of the control interval. However, using a fixed control interval is problematic because at least one round trip time is needed to observe a change in throughput generated by a window change. Because various connections and networks have different RTTs, a control interval is needed that is at least one RTT, and the selected control interval must therefore be sufficiently large for the largest RTT expected to be encountered. The use of an excessively small control interval can cause a background transfer to back off without reason for one control interval, and to increase the window too much in the subsequent control interval, causing large oscillations in the receive window size (this is known as the ‘drunk window syndrome’).
As the window can only be changed at the end of a fixed control interval, the responsiveness of such a technique is limited by the length of the control interval, which, as stated, can be excessively long when a fixed interval is used. Furthermore, because the changes to the receive window size occur at the end of the control interval, they are most likely asynchronous with data arrival at the receiver node. A sudden decrease in the window size at the end of the control interval means that the receiver reneges on window sizes that were advertised in previous acknowledgement messages, and going back on the advertised window contravenes RFC recommendations.
Reference is now made to
The process of
Whilst the timer is running, the receiver node receives 402 data from the sender node 102. The data is received, acknowledged and read in accordance with the TCP flow control mechanisms. In other words, the rate at which data is received is controlled by the receive window size advertised by the receiver node in acknowledgement packets and the sliding window mechanism. The first time that the process of
As the data is received and acknowledged by the receiver node 101, a number of parameters are measured 403. The receiver node 101 keeps track of the amount of data received during the control interval (e.g. the number of segments received during this interval). In addition, the receiver node 101 keeps track of an estimate of the current round trip time (RTT), which can be based on an observed delay with which each receive window of data is received during the control interval.
Once it is determined 404 that the timer has expired, and the control interval has completed, then the receiver node calculates 405 the throughput observed for the control interval. The observed throughput is calculated as:
Xn=Bn/Tn
Where Xn is the observed throughput for the nth control interval, Bn is the number of bytes received during the control interval, and Tn is the duration of the control interval in seconds. As an alternative to the number of bytes Bn, the calculation can use Nn×MSS, where Nn is the number of segments received during the nth control interval and MSS is the maximum segment size.
The observed throughput is then used to evaluate 406 the available network capacity. For example, the observed throughput can be used to determine whether the connection is contending with other traffic on the link. This can be achieved by comparing the observed throughput to an expected throughput that would be attained in the absence of contention on the link (i.e. a throughput expected if the rate is limited by the receive window). In other words, with reference to
The expected throughput can be calculated simply as:
Xe=Wn/RTTmin
Where Xe is the expected throughput, Wn is the window size for the nth control interval, and RTTmin is an estimate of the minimum RTT for the connection (described hereinafter). In other words, if the throughput is limited by the window, then it would be expected that a full window of bytes is received in one RTT. The gradient 200 in
However, in practice, changes in the advertised window size cannot be observed immediately in the throughput. For example, in this scenario, the throughput does not change until the next round trip time after the window size is changed at the end of a control interval, but rather remains at the previous throughput. Therefore, the expected throughput is adjusted to take this into account. The expected throughput can therefore be calculated as:
Xe=Wn/RTTmin−2|Wn−Wn−1|/Tn
In other words, the expected throughput is adjusted by a correction throughput given by twice the magnitude of the change in the window size (between the nth and (n−1)th intervals) for the nth control interval duration, due to the lag in the window size affecting the throughput.
The observed throughput is compared to the expected throughput by determining whether the observed throughput is within a certain threshold of the expected throughput. This is determined using the following Boolean test:
Xn≧(1−ε)Xe
Where ε is a tolerance threshold. This test is denoted ‘test 1’ hereinafter. In one example, ε is 0.1. If the result is true, then this indicates that the connection is not contending with other traffic for bandwidth (i.e. the operation is on, or near to, the gradient 200 in
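By way of illustration only, the comparison described above can be sketched as follows (in Python, using the symbols defined in this section); the window-change correction term described above is applied with a sign that is an assumption consistent with the lag it compensates for:
# Sketch of 'test 1': is the observed throughput close enough to the
# throughput expected when the rate is limited only by the window?
EPSILON = 0.1   # tolerance threshold epsilon, from the example above

def observed_throughput(bytes_received, interval_s):
    # Xn = Bn / Tn
    return bytes_received / interval_s

def expected_throughput(window_now, window_prev, rtt_min_s, interval_s):
    # Xe = Wn / RTTmin, corrected for the lag with which a change in
    # the window size shows up in the throughput (sign assumed).
    correction = 2 * abs(window_now - window_prev) / interval_s
    return window_now / rtt_min_s - correction

def test_1(x_observed, x_expected, epsilon=EPSILON):
    # True  -> no contention detected (the window can be increased)
    # False -> contention detected (the window should be decreased)
    return x_observed >= (1 - epsilon) * x_expected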
The above ‘test 1’ is then used to calculate 407 a new target receive window size. As mentioned, the aim of the background transfer control process is to control the receive window size to be the maximum value that it can be without contending with other traffic. In other words, the desired receive window size is the maximum value whilst still on the gradient 200 of
If ‘test 1’ holds true, then the receive window is increased in size, i.e. Wn+1>Wn. Conversely, if the test fails, then the receive window is decreased in size, i.e. Wn+1<Wn. The optimum window size to advertise is found and tracked dynamically using a form of binary search. The binary search process maintains a window size search-range between a minimum value Wmin and a maximum value Wmax, and the target window size lies between these values. These values can be initialized to appropriate values; in one example, Wmin is initialized to 2×MSS and Wmax is initialized to 12×MSS. The values of Wmin or Wmax are adjusted such that the search range homes in on the optimum receive window size over subsequent control intervals.
The values for Wmin or Wmax are set using the above ‘test 1’ result and a window granularity parameter GW. In one example GW has a value of 2×MSS. The result of ‘test 1’ is used to perform a binary division of the range Wmin to Wmax to home in on the optimum window size value. However, since the optimum window size value can change over time, the search also uses a binary expansion to increase the size of the search range if the range becomes small. This leads to four cases, combining the binary division and the binary expansion, as sketched below.
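By way of illustration only, the four cases can be realized as in the following sketch (in Python). Since the exact case definitions are not set out here, the case structure and the MSS value below are assumptions, chosen only to be consistent with the binary division, binary expansion and granularity parameter GW described in this section:
# One plausible realization of the binary search over the range
# [Wmin, Wmax]; the case structure is an assumption, not a verbatim
# reproduction of the four cases referred to above.
MSS = 1460            # illustrative maximum segment size, in bytes
G_W = 2 * MSS         # window granularity parameter GW

def update_search_range(w_min, w_max, test_1_ok):
    if w_max - w_min > G_W:
        # Binary division: narrow the range towards the optimum.
        mid = (w_min + w_max) / 2
        if test_1_ok:
            w_min = mid            # no contention: search higher
        else:
            w_max = mid            # contention: search lower
    else:
        # Binary expansion: the range has become small, so widen it
        # in the direction suggested by the test result.
        if test_1_ok:
            w_max = w_max + G_W
        else:
            w_min = max(G_W, w_min - G_W)
    return w_min, w_max

def next_window_size(w_min, w_max):
    # New window is the mid-point of the updated range, but never less
    # than 2*MSS, to avoid the silly window syndrome described below.
    return max(2 * MSS, (w_min + w_max) / 2)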
Note that the binary search above is performed when the operation of the process is in the steady state. When the process is first started, an initialization phase is used, in which ‘test 1’ is run at the end of each control interval. If the test succeeds, Wmax is doubled, RTTmin is updated to Wn/Xn, and the process remains in the initialization phase. If ‘test 1’ fails, Wmax is set to the maximum of GW and Wmax/2, and the process progresses to the steady-state phase, as outlined above.
Once the change in the search space has been calculated (either by the binary search or in the initialization phase), the new value of the receive window size for the forthcoming control interval can be calculated as:
Wn+1=(Wmin+Wmax)/2
In other words, the new value for the receive window size is the mid-point of the updated search space. However, the receive window size is not given a value less than 2×MSS, to avoid the ‘silly window syndrome’ (SWS) which arises where a receiver advertises, and a sender responds to, a very small receive window size. Therefore, if the calculated window size is less than 2×MSS, then Wn+1 is set to 2×MSS (although note that a further enhancement can be used in the case where the calculated window is less than 2×MSS, as described below with reference to
The value for the minimum round trip time, RTTmin, is updated 408, if appropriate. If ‘test 1’ holds true (i.e. the connection is not congested), then the value for the minimum RTT is updated as follows:
RTTmin=(1−δ)RTTmin+δ(Wn/Xn)
Where δ is a smoothing parameter used when tracking the minimum RTT. In one example, δ has a value of 0.1. This updates the value of the minimum RTT to include the current value of the RTT (given by Wn/Xn) weighted by δ.
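By way of illustration, the update just described can be written compactly as follows (a sketch in Python; the names are illustrative):
DELTA = 0.1   # smoothing parameter delta, from the example above

def update_rtt_min(rtt_min, window, throughput, delta=DELTA):
    # Blend the current RTT estimate (Wn / Xn) into the tracked
    # minimum RTT, weighted by delta, as described above.
    rtt_current = window / throughput
    return (1 - delta) * rtt_min + delta * rtt_current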
The duration of the subsequent control interval is then calculated 409. As mentioned hereinabove, an excessively large control interval reduces the responsiveness of the background transfer technique, whereas too small a control interval can cause the drunk window syndrome. To avoid this, the value of the control interval used for the subsequent interval is based on the current measured value of the RTT. In one example, the control interval can be set to a multiple of the current RTT; for example, the subsequent control interval can be set to three times the current RTT. However, making the control interval a linear function of the RTT favors background flows with the largest RTT.
More preferably, a plurality of selectable control interval values of various durations can be defined. For example, specific selectable control intervals can be 100 ms, 500 ms, 1 s, 3 s etc. The process then selects the smallest control interval that is greater than a predefined multiple of the current RTT. In one example, the smallest selectable control interval value is chosen that is greater than three times the current RTT. The use of specific selectable control interval values ensures that fairness between background flows is preserved, because background flows with similar RTT values use the same control interval value.
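By way of illustration, this selection can be sketched as follows (in Python, using the example interval values and the three-times-RTT multiple given above):
# Choose the smallest selectable control interval that exceeds a
# predefined multiple of the current RTT.
SELECTABLE_INTERVALS_S = [0.1, 0.5, 1.0, 3.0]   # 100 ms, 500 ms, 1 s, 3 s
RTT_MULTIPLE = 3

def next_control_interval(rtt_current_s):
    target = RTT_MULTIPLE * rtt_current_s
    for interval in SELECTABLE_INTERVALS_S:
        if interval > target:
            return interval
    return SELECTABLE_INTERVALS_S[-1]   # fall back to the largest value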
The new calculated control interval duration is then stored. The new window size is updated 410, such that acknowledgements begin advertising the updated window size. The process in
Therefore, in summary, the process in
The process in
However, the process of
The flowchart in
Note that, in the illustrative example of
The receiver node 101 then calculates 501 the edge of the current window. In other words, the receiver node 101 calculates the sequence number of the last segment that is to be received for the forthcoming window. This is illustrated in
The receiver node 101 then receives 502 a number of segments, and tracks the sequence numbers until it is determined 503 that the previously-noted window edge sequence number is received (i.e. X+6 in
The times recorded (T1 and T2) are then used to evaluate 505 the available network capacity. For example, the recorded times are used to determine whether the connection is contending with other traffic on the link. This can be achieved by determining whether the connection is operating in the portion of the graph having the gradient 200 in
As outlined above with reference to
The following test is therefore used to determine whether the link is congested (denoted herein as ‘test 2’):
RTTcurrent≦(1+ε)RTTmin
Where RTTcurrent is the current estimate of the RTT (e.g. given by T2-T1), RTTmin is the minimum value for the RTT on the connection (as outlined above with reference to
In one example, ε has a fixed value, e.g. 0.1. In another example, ε is dynamic, and changes to improve the fairness between background flows competing for bandwidth. When a background transfer connection is competing with a regular TCP connection, the background transfer backs off into a mode where it only occasionally sends packets in order to continue to monitor for the end of the competing flow (as described in more detail below). However, when a background transfer connection is competing, not with regular TCP, but with another background transfer connection, then it is preferable for it to get a fair share and not back off completely. This can be achieved by making the background transfer more sensitive to potential competition as the achieved rate increases; even a slight increase in sensitivity causes the competing background flows to stabilize at the fair-share value. The value for ε is therefore dynamically changed in dependence on the current window size using the following equation:
Where εinit is an initialization value for ε (e.g. 0.1 in one example) and Wcurrent is the current window size.
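By way of illustration only, and since the specific equation is not set out here, the sketch below (in Python) uses one possible form in which ε shrinks as the current window grows; this form is an assumption chosen solely to illustrate the stated behaviour, and it is shown together with ‘test 2’:
MSS = 1460              # illustrative maximum segment size, in bytes
EPSILON_INIT = 0.1      # initialization value for epsilon (example above)

def dynamic_epsilon(w_current, eps_init=EPSILON_INIT):
    # Assumed form only: epsilon decreases as the window grows, making
    # the transfer more sensitive to competition at higher rates.
    return eps_init * (2 * MSS) / w_current

def test_2(rtt_current, rtt_min, epsilon):
    # True  -> link not congested (the window can be increased)
    # False -> link congested (the window should be decreased)
    return rtt_current <= (1 + epsilon) * rtt_min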
The above ‘test 2’ is then used to calculate 506 a new target receive window size. As described hereinabove, the aim of the background transfer control process is to control the receive window size to be the maximum value whilst still on the gradient 200 of
If ‘test 2’ holds true, then the receive window is increased in size. Conversely, if the test fails, then the receive window is decreased in size. The optimum window size to advertise is found and tracked dynamically using a binary search of the same type as that described above with reference to
The values for Wmin or Wmax in the binary search are set using the above ‘test 2’ result and the window granularity parameter GW. In one example GW has a value of 2×MSS. The result of ‘test 2’ is used to perform a binary division of the range Wmin to Wmax to home in on the optimum window size value. However, since the optimum window size value can change over time, the search also uses a binary expansion to increase the size of the search range if the range becomes small. This leads to the same four cases as described above for the control-interval scheme.
Once the change in the search space has been calculated, the new value of the receive window size can be calculated as the mid-point of the updated search space, as follows:
Wnew=(Wmin+Wmax)/2
The receive window size is not given a value less than 2×MSS, to avoid the ‘silly window syndrome’ (SWS). Therefore, if the calculated window size is less than 2×MSS, then the new window size can be set to 2×MSS (although note that a further enhancement can be used in the case where the calculated window is less than 2×MSS, as described below with reference to
Once the new receive window size has been calculated, the receive window is updated 507. The calculation of the new receive window is performed in the time period between receiving the last segment in the previous window (T2 in
The new receive window size is communicated 508 to the sender node, starting with the acknowledgement to the last segment in the previous window. The change in the advertised window size is controlled so as not to be excessively large, such that the receiver does not renege on a previously advertised window size. To avoid this occurring, the window-edge tracking scheme is arranged to change the window size progressively in stages, by one MSS per acknowledgement message. This can be seen occurring in the example of
If the advertised window were decreased from 4×MSS to 2×MSS in one step, then the receiver would renege on the window. For example, if ACK=X+7 in
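By way of illustration, the progressive advertisement described above can be sketched as follows (in Python; the function name and MSS value are illustrative):
MSS = 1460   # illustrative maximum segment size, in bytes

def advertise_towards(target_window, last_advertised):
    # Move the advertised window towards the target by at most one MSS
    # per acknowledgement, so that a sudden decrease never reneges on a
    # window advertised in an earlier acknowledgement.
    if target_window > last_advertised:
        return min(target_window, last_advertised + MSS)
    if target_window < last_advertised:
        return max(target_window, last_advertised - MSS)
    return last_advertised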
The receiver node calculates 509 the sequence number of the segment that is to be received responsive to the sender node receiving the acknowledgement comprising the new window size. This is performed so that the receiver node knows when the sender has received and is acting on the new window size (and hence can start accurately monitoring the RTT again). For example, with reference to
The receiver node 101 then receives 510 data segments and tracks the sequence numbers until it is determined 511 that the calculated sequence number has been received (e.g. sequence number X+10). At this point, the process repeats. For example, with reference to
The process illustrated in
The receiver node does not renege on advertised windows in the window-edge tracking scheme, as the window size is not decreased by more than one MSS per acknowledgement. This scheme is also very responsive to changes, as it operates on a per-segment basis, and does not require control intervals of the order of a large number of RTTs. In addition, the accuracy of the RTT estimates is high, as they directly relate to the time taken to transfer a window of data. Furthermore, the window-edge tracking scheme does not require the use of timers (only the recording of arrival times), which simplifies implementation.
The two schemes described with reference to
However, if the receive window size is less than 2×MSS, then the process enters rate limiting mode 702. The rate limiting mode sometimes advertises a window of 2×MSS and sometimes advertises a window of zero, as described in more detail with reference to
The first portion of the flowchart in
Once the random sleep time is selected, a sleep timer having the random value is started 802, and the receiver node waits 803 until the timer expires. Upon expiry of the random sleep time, the process enters the second portion of the flowchart in
The receiver node 101 opens 804 the receive window to a size of 2×MSS. This is communicated to the sender node 102 in an acknowledgement message (which is a window update, and not sent responsive to any received data). The time at which the updated window size is sent to the sender node 102 is recorded 805. Upon receiving the updated window size, the sender node 102 immediately sends two segments to the receiver node. The receiver node 101 records 806 the time that it receives the first of these two segments. The receiver node closes the receive window after opening it, such that only the two segments are received.
An accurate measurement of the current RTT can then be made, by calculating 807 the time delay between sending the acknowledgement to open the receive window, and receiving the first segment in response to this. This is found by subtracting the time of sending the window update from the time of arrival of the first segment.
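By way of illustration only, the probe just described can be sketched as follows (in Python). The connection object, its advertise_window and wait_for_segment methods, and the sleep bounds are hypothetical and are not taken from any real transport interface:
import random
import time

MSS = 1460   # illustrative maximum segment size, in bytes

def rtt_probe(connection, min_sleep_s=1.0, max_sleep_s=5.0):
    # Sleep for a randomly selected interval, then briefly open the
    # window to 2*MSS and measure the delay until the first of the two
    # resulting segments arrives; this delay is the current RTT sample.
    time.sleep(random.uniform(min_sleep_s, max_sleep_s))

    t_sent = time.monotonic()
    connection.advertise_window(2 * MSS)   # window update (an ACK)
    connection.wait_for_segment()          # first of the two segments
    t_received = time.monotonic()

    connection.advertise_window(0)         # close the window again
    return t_received - t_sent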
It is then determined whether the current RTT just measured is within a threshold of the minimum RTT (the same minimum RTT used in the processes of
RTTcurrent<(1+ε)RTTmin
If the current RTT is sufficiently close to the minimum RTT, then this indicates that there is no congestion on the link. In order to determine whether to exit the rate limiting mode, a moving average of the results of this Boolean test is maintained. Therefore, if the current RTT is sufficiently close to the minimum RTT, then the result 809 is true, and the moving average is updated 811 with a value of one. Conversely, if the current RTT is not sufficiently close to the minimum RTT, then the result 810 is false, and the moving average is updated 811 with a value of zero. In one example, the moving average is an exponentially weighted moving average. The exponentially weighted moving average can, for example, have a smoothing factor of 0.1.
The updated moving average is then compared 812 to an upper threshold value. If the updated moving average value exceeds the upper threshold, then this indicates that there is a high confidence that there is no competing traffic, and the rate limiting mode is exited 813. The operation then returns to the window searching mode 700, as shown in
If the updated moving average value does not exceed the upper threshold, then it is determined 814 whether the moving average value exceeds a lower threshold. In one example, the lower threshold value is 0.65. If the updated moving average value exceeds the lower threshold, then this indicates that there is a reasonable likelihood that there is no competing traffic, and another sample of the RTT is taken without delay. The process then reopens the window to 2×MSS and takes another RTT sample (i.e. the process in block 703 in
If the updated moving average value does not exceed the lower threshold, then it is possible that congestion remains on the link. However, it is also possible that the situation has recently changed, such that congestion is now no longer present on the link, but the moving average has not yet changed sufficiently to reflect this. Therefore, if the updated moving average value does not exceed the lower threshold, but the current RTT is sufficiently close to the minimum RTT (i.e. it is determined 815 that the result 809 is true) then another sample of the RTT is taken without delay. This ensures that the exit from rate limiting mode is highly responsive to the end of congestion. If, however, the current RTT is not sufficiently close to the minimum RTT, the receiver node remains in rate limiting mode, and goes back to sleep for a randomly selected interval as described above.
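By way of illustration, the exit logic described above can be sketched as follows (in Python). The lower threshold and smoothing factor use the example values given above; the upper threshold value is not given above, so it is left as a parameter:
SMOOTHING = 0.1          # smoothing factor of the moving average (above)
LOWER_THRESHOLD = 0.65   # lower threshold value, from the example above

def update_average(average, rtt_ok):
    # Exponentially weighted moving average of the Boolean RTT test.
    sample = 1.0 if rtt_ok else 0.0
    return (1 - SMOOTHING) * average + SMOOTHING * sample

def rate_limiting_decision(average, rtt_ok, upper_threshold):
    # Decide what to do after each RTT probe in rate limiting mode.
    if average > upper_threshold:
        return 'exit_rate_limiting'        # high confidence of no competition
    if average > LOWER_THRESHOLD or rtt_ok:
        return 'probe_again_immediately'   # likely clear, or just cleared
    return 'sleep_then_probe'              # congestion probably remains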
The process shown in
Each of the different schemes described above with reference to
In order to obtain an accurate initial estimate of the minimum RTT, the background transfer process can be arranged to start initially in the rate limiting mode described above with reference to
If the background transfer is started in rate limiting mode, then the first RTT sample taken is used as the value for the minimum RTT. Whilst in rate limiting mode, any RTT samples that are lower than the current minimum RTT can update the value of the minimum RTT. If there is no competition on the link, then the process switches rapidly from rate limiting mode to window searching mode after obtaining an accurate minimum RTT sample.
If the real minimum RTT changes upwards as a result of a route change in the network, then the background transfer control process interprets the higher RTTs obtained as indicative of congestion. To avoid this, a minimum RTT change can be distinguished from long-lasting contention using the shape of the distribution of the RTT values. This can be used to trigger the discarding of the old minimum RTT value, and the measurement of a new one. For example, if one or more measured round trip times differ from a mean value maintained for the RTT by more than a predetermined multiple of the standard deviation, then this can indicate a change in the minimum RTT.
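By way of illustration, one way to implement this check is sketched below (in Python); the multiple of the standard deviation and the use of a batch of recent samples are illustrative choices:
import statistics

K = 3   # illustrative multiple of the standard deviation

def minimum_rtt_changed(tracked_rtts, recent_rtts, k=K):
    # If the recent RTT samples all sit more than k standard deviations
    # away from the mean of the tracked RTT distribution, treat this as
    # a route change rather than contention, and re-measure RTTmin.
    mean = statistics.mean(tracked_rtts)
    stdev = statistics.pstdev(tracked_rtts)
    if stdev == 0:
        return False
    return all(abs(rtt - mean) > k * stdev for rtt in recent_rtts)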
In the above-described examples, the receive window size that is selected by the background transfer control process should reflect that advertised in the acknowledgements. However, in some circumstances this is not necessarily the case, for example if TCP window scaling is enabled. TCP window scaling is an option to increase the TCP receive window size above its maximum value of 65,535 bytes by applying a scaling factor. If TCP window scaling is enabled, then the background transfer control process ensures that the advertised window it sets is the smallest multiple of the scaling factor that is not less than the calculated window size.
Note that, in the above described examples, the receive window size can be adjusted by adjusting the receive buffer size at the application level of the receiver node (e.g. at the socket layer). However, in other implementations, the receive window size can be adjusted directly (e.g. through the transport level) or through other indirect means. By at least these various means, the receive window size can be adjusted and communicated to the sender node, which can adjust its send window size in accordance with the adjusted receive window size.
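By way of illustration only, adjusting the receive buffer at the socket layer can be sketched as follows (in Python); whether the operating system honours the requested buffer size, and how that size maps to the advertised window, is implementation dependent, and the rounding to the scaling factor follows the discussion above:
import socket

def set_receive_window(sock, window_bytes, window_scale_shift=0):
    # Round the requested window up to a multiple of the scaling factor
    # when TCP window scaling is in use, then resize the receive buffer
    # at the socket layer; the transport derives the advertised window
    # from this buffer.
    scale = 1 << window_scale_shift
    rounded = max(scale, ((window_bytes + scale - 1) // scale) * scale)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rounded)
    # Some systems adjust the requested value; read it back to confirm.
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)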
The computing-based device 900 comprises a communication interface 901 arranged to transmit and receive data over the network 100. The computing-based device 900 also comprises one or more inputs 902 which are of any suitable type for receiving, for example, user inputs, voice data, media content, etc.
Computing-based device 900 also comprises one or more processors 903 which can be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to control background data transfers. Platform software comprising an operating system 904 or any other suitable platform software can be provided at the computing-based device to enable application software 905 to be executed on the device. Also executed by the processor 903 is a background transfer module 906, which implements the above-described background transfer control processes. The background transfer module 906 can be implemented as part of the operating system 904, as part of the application software 905, embedded within software on the communication interface 901, or implemented as a stand-alone module. A data store 907 can be provided to store data relating to the background transfer module 906 or other software.
The computer executable instructions can be provided using any computer-readable media, such as memory 908. The memory is of any suitable type such as random access memory (RAM), a disk storage device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM can also be used.
An output is also provided such as an audio and/or video output to a display system 909 integral with or in communication with the computing-based device. The display system can provide a graphical user interface, or other user interface of any suitable type although this is not essential.
The term ‘computer’ is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term ‘computer’ includes PCs, servers, mobile telephones, personal digital assistants and many other devices.
The methods described herein may be performed by software in machine readable form on a tangible storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention. Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.