This invention relates generally to network communication. In particular, the invention relates to a method for assigning a priority to a data transfer in a network, and a network node using the method.
In networks such as Distributed Storage Systems (DSS), a data transfer can be understood as a task to be done. Data transfers are often responses to requests or tasks. A task may be e.g. a search task or a data transfer task, with a characteristic flow of messages taking place between the nodes that are involved in the task. Usually, several (data transfer) tasks may run in parallel. This may lead to conflicts or bottleneck situations due to limited capacity in terms of bandwidth, storage space or other parameters.
Different nodes in a peer-to-peer based network, e.g. an OwnerZone as described in the European Patent Application EP 1 427 141, may try to allocate resources of another node, such as storage space or transfer rate. If the available resources are not sufficient to serve all requests, smart ways may be found to get around such bottlenecks or conflicts. This shall be done automatically, i.e. without user interaction. In some cases, however, it would be desirable if the user or an application had the possibility to modify an automatically found solution.
Conflict and bottleneck management implies communication between the nodes, based on a number of control messages. These control messages may also be part of a language, e.g. a Distributed Storage Communication and Control Language.
The present invention provides a possibility to manage such conflicts and bottlenecks automatically, and simultaneously provides a user or an application with means to modify the automatically achieved results. It is based on the definition of a dual-layer priority system, comprising a first layer of so-called implicit priorities and a second layer of so-called explicit priorities, wherein implicit priorities generally overrule explicit priorities. Therefore the explicit priority layer is only exploited in case of identical implicit priority of tasks. Each of the two layers may be subdivided into different levels.
Advantageously, the present invention requires only little communication effort in the network. Further, it may improve data throughput in the network, exploit storage capacity better and improve availability of data.
According to the invention, conflicts and bottlenecks in terms of storage space, transfer rate, node availability etc. are managed or avoided by using a set of priorities and rules applied by the nodes in the network. While the rules are inherent in the nodes, the priorities are calculated in two steps, as dual-layer priorities. The first layer consists of so-called implicit priorities, which are defined in terms of rules or relations that all involved nodes comply with. The second-layer priorities are called explicit priorities and are user- or application-defined.
The two-stage priority concept has the advantage that it uses task- and/or node-inherent priorities, which are called “implicit priorities” here and which need not be defined by a user or application, while the additional explicit priorities involve the assignment of priority levels as information that can be exchanged and altered by the user or by an application. In other words, implicit priorities can be generated automatically without user input. A user or application can assign or alter explicit priority levels when considered appropriate.
An advantage of the present invention is that conflicts and bottlenecks, e.g. in a DSS implemented as an OwnerZone, can be properly managed or avoided, thus improving data throughput, better exploiting storage capacity, improving data availability, and preventing network blockings.
The method according to the invention is a method for assigning a priority to a data transfer in a network, the data transfer comprising a first node sending out a first request indicating a particular data unit or particular type of data units, at least a second node receiving and analysing the first request, the second node detecting that it may provide the requested data unit, and sending to the first node a first message indicating that it may provide the requested data unit, the first node receiving and selecting the first message and sending a second request to the second node, requesting transfer of the particular data unit, and the second node transmitting the particular data unit upon reception of the second request. Said method comprises in a first step the first node assigning an identifier to the first request or the second request or both, the identifier corresponding to a first priority, in a second step the second node evaluating the identifier corresponding to the first priority and, based on the identifier, calculating a second priority, and in a third step the second node transferring the particular requested data unit, wherein the calculated second priority is assigned to the transfer. It should be noted that the transfer of the requested data unit need not necessarily be directed to the first node that launched the requests. It is also possible that a third node is the receiver of the transferred data unit, and the first node is only the initiating node, e.g. because it has a user interface, schedule manager etc. In this case it will be useful for the first node to send at least the second request also to said third node.
A corresponding device contains respective means for executing each of the method steps.
The above-mentioned particular data unit or particular type of data units may be e.g. video data of a movie with a defined title, or video data of all available movies in which a particular defined actor is involved, or the like. This information can be associated with the data units, e.g. as a metadata mark, and can be e.g. in XML format.
Advantageous embodiments of the invention are disclosed in the dependent claims, the following description and the figures.
Exemplary embodiments of the invention are described with reference to the accompanying drawings, which show in
The invention is described exemplarily for an OwnerZone, which is a peer-to-peer based network structure, wherein the nodes have individual node identifiers and a common peer-group identifier, and wherein the nodes that belong to the peer-group may freely communicate with each other, exchange messages and other data etc. It may also be applied to other types of networks, and it is particularly advantageous for networks whose nodes organize themselves quite autonomously.
1. Priority Concept
The present invention introduces the notion of a two-stage concept involving the distinction between first-layer and second-layer priorities: first-layer or implicit priorities are relative priorities, or priority relations that are complied with by the included nodes, e.g. the peers in the OwnerZone. They have no explicit value, e.g. numerical priority level or number, associated with them. The set of implicit priorities thus represents an inherent “knowledge” of the nodes, i.e. depends on a set of rules they comply with. Advantageously, implicit priorities can be generated automatically, so that a user or application need not define them. Second-layer or explicit priorities involve the assignment of priority levels, e.g. numbers or other identifiers, as a piece of information that can be modified or removed. A user or application can do the assignment or modification if considered appropriate. Explicit priority levels may be relative, e.g. “high” and “low”, or integer numbers, or generally any ranked terms. The explicit priority level is assigned to a task, and can be compared to the explicit priority of another task to derive a decision if necessary, e.g. when deciding which of the two tasks gets higher priority for hardware access, memory space, processing power or similar.
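Purely as an illustration of this two-stage concept, and not as part of any message set or parameter defined herein, a task carrying both layers could be represented roughly as in the following sketch; all names (Task, task_init_time, explicit_level etc.) are hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Task:
    task_id: str                          # TaskID
    task_init_time: float                 # TaskInitTime set by the initiating node
    is_streaming: bool = False            # real-time/streaming vs. non-real-time/file,
    is_recording: bool = False            # recording vs. playback (see section 1.1 below)
    explicit_level: Optional[str] = None  # second-layer level, e.g. "high"/"low";
                                          # optional, settable by a user or application
```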
1.1 Implicit Priorities
Nodes are implemented compliant with the following implicit priority rules or relations, in order to help manage transfers smoothly and to avoid conflicts and bottlenecks among the nodes and their actions in an OwnerZone.
The fundamental rule is: “First come, first served.” It is implemented by evaluating e.g. the TaskInitTime parameter, which is defined by the node that sets up a task and establishes the start time of the task. A task may be e.g. a search task or a data transfer task, and has a characteristic flow of messages taking place between the nodes that are involved in the task. Every node in the OwnerZone takes care in all its actions that a task initiated at an earlier time has priority over a task initiated at a later time. A message received at an earlier time usually has priority over a message received at a later time. That means that a node generally responds to the requests it has received in the sequence of their initiation, given by their TaskInitTime parameter. A common time base existing in all involved nodes is therefore helpful.
One aspect of the invention is that, as an exception from this rule, a data transfer task may to a certain extent inherit the priority of a preceding search task that it relates to. This is useful because a search task is in general launched with the intention of setting up a data transfer task for the piece of content found. For this purpose, the node makes sure that a transfer of a piece of content relating to an earlier search request has, within a granted time period Twft (“wait for transfer” time, e.g. 5 seconds) after the TaskInitTime of the search request, priority over a transfer of a piece of content related to a later search request. However, other tasks may still have higher priority, e.g. the node may make an exception to this deviation in case of a necessary instantaneous start of the transfer, e.g. for a task of recording a live stream.
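The Twft window could be evaluated as in the following minimal sketch, assuming a hypothetical helper and an exemplary value for Twft; a transfer requested within Twft after the TaskInitTime of its preceding search inherits that earlier time for ordering purposes:

```python
from typing import Optional

TWFT_SECONDS = 5.0  # exemplary "wait for transfer" period Twft

def effective_init_time(transfer_init_time: float,
                        search_init_time: Optional[float]) -> float:
    """Time used for "first come, first served" ordering of a transfer task.

    A transfer requested within Twft after the TaskInitTime of the search task
    it relates to inherits that earlier time; otherwise its own TaskInitTime
    counts. (Other tasks, e.g. recording a live stream, may still override.)
    """
    if search_init_time is not None and \
       transfer_init_time <= search_init_time + TWFT_SECONDS:
        return search_init_time
    return transfer_init_time
```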
As a second rule, a task or data transfer is allowed to be started only if the resources that it needs are available, considering all other running or scheduled transfers that involve the respective nodes. That means that a node, before initiating a task, first checks the resources of the nodes that it intends to involve in the task, or maybe of all nodes in the OwnerZone to get an overview. It initiates a transfer for a particular time and includes only those nodes which have, at that time, sufficient storage capacity and transfer capacity, i.e. rate and number of possible transfers, available. This refers to both source and destination nodes. If necessary, the node delays the intended transfer until a later time when the transfer is possible. The nodes involved in the transfer allocate respective resources. They can be de-allocated e.g. by cancelling the task. Thus, a situation where two tasks block each other, and thus the whole network, is prevented.
As a third rule, running transfers should not be interrupted, unless they are explicitly cancelled by the node that initiated them. That means a node may not cancel running transfers from other nodes for getting resources to set up its own transfer. Only the node that initiated a transfer is permitted to cancel it. Then it can set up another transfer if necessary.
As a fourth rule, a transfer is only allowed to be scheduled for a time when the resources it occupies will be available, i.e. after a running transfer has been or will be completed, considering all other running or scheduled transfers involving the respective nodes. That means that a node first checks the availability of the resources it may involve in a data transfer task for a particular time. It initiates a transfer only for those nodes and for that time when sufficient storage capacity on the destination node is available and sufficient transfer capacity, i.e. rate and number of transfers, on both source and destination nodes is available. Then the involved nodes allocate the respective resources for the time when the transfer shall take place. Resources can be de-allocated by cancelling the transfer task at any time, whether the transfer has started already or not. Therefore each node that may provide its resources to others may have a timetable, to control when the resources are “booked”, and by which other node or for which purpose.
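A node's timetable for “booked” resources could be sketched as follows; the names (Booking, ResourceTimetable, can_allocate) are hypothetical, only the transfer rate is modelled, and storage capacity would be checked analogously:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Booking:
    start: float   # begin of the reserved time window
    end: float     # end of the reserved time window
    rate: float    # transfer rate reserved during that window

@dataclass
class ResourceTimetable:
    max_rate: float                                        # total transfer rate of the node
    bookings: List[Booking] = field(default_factory=list)  # running or scheduled transfers

    def can_allocate(self, start: float, end: float, rate: float) -> bool:
        # Rules 2 and 4: admit a transfer only if the rate it needs is free for
        # the whole requested window; overlapping bookings are counted in full
        # (a conservative simplification).
        used = sum(b.rate for b in self.bookings if b.start < end and start < b.end)
        return used + rate <= self.max_rate

    def allocate(self, start: float, end: float, rate: float) -> bool:
        if not self.can_allocate(start, end, rate):
            return False                                   # delay or reschedule instead
        self.bookings.append(Booking(start, end, rate))    # resources are "booked"
        return True
```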
As a fifth rule, real-time or streaming transfer has higher priority than non-real-time or file transfer. In a more generalized view, real-time data are data whose source data rate cannot be reduced without reducing the reproduction quality. The idea is that a file transfer can in general take place at any bit rate and over any duration feasible according to network resources, while a real-time or streaming transfer, e.g. of audio and/or video data, is required to take place with accurate timing, and may involve the necessity of reproducing the content for being consumed, e.g. watched or listened to, by a user. A node may slow down or accelerate a running non-real-time/file transfer by changing both bit rate and transfer duration, e.g. using a certain request message like ‘ModifyTransferRequest (“modify”)’. The product of transfer rate and transfer duration is the file size and thus remains unchanged. One possibility for the node that initiated a task to prohibit this is to introduce a task-related parameter such as AllowTransferSpeedChange and set it to “false”.
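Since the product of rate and duration is the file size, slowing down or speeding up a file transfer only trades one against the other. A minimal sketch of this relation, assuming a hypothetical helper and that the AllowTransferSpeedChange parameter is tracked per task:

```python
def modified_duration(file_size_bits: float, new_rate_bps: float,
                      allow_speed_change: bool = True) -> float:
    """Recompute the duration of a non-real-time/file transfer after a rate change.

    Transfer rate multiplied by transfer duration equals the file size, so the
    data volume is unaffected by the modification. A task whose (assumed)
    AllowTransferSpeedChange parameter is "false" refuses the change.
    """
    if not allow_speed_change:
        raise PermissionError("task prohibits transfer speed changes")
    return file_size_bits / new_rate_bps

# e.g. halving the rate of a 4 Gbit file doubles the duration:
# modified_duration(4e9, 5e6) == 800.0 seconds, vs. 400.0 s at 10 Mbit/s
```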
A sixth rule is that transfers for recording always have a higher priority than transfers for playback. This rule is subordinate to the previous one, i.e. a file transfer always has lower priority than a streaming transfer. It may be assumed that there is a time limitation for recording a piece of content, since it may be available now but not later, while playback of a piece of content could also be done at a later time. Therefore, if a recording task competes with a playback task, the node will preferably assign resources to the recording task. It may even cancel a playback task for enabling a recording task. This may happen on the application or user level, or automatically if generally permitted by the application or user. E.g. if a playback transfer has been scheduled for a certain time and an application intends to record another piece of content during the same time while the resources would not allow this, the application may cancel the scheduled playback transfer and schedule the new recording transfer instead.
This situation may occur e.g. in a home network with two recording devices, a playback device, a receiver and a display device. While the user watches on the display device a movie that is played back from the playback device, one of the recording devices is recording a video stream coming from the receiver. Assuming that the storage of the recording device is full after a while, and further assuming that the network and the recording devices are able to continue the recording seamlessly on the second recording device, then probably the traffic on the network will be higher during the switch from the first to the second recording device. This additional traffic is however necessary for recording, and thus has higher priority than the playback data. In this situation, it is acceptable if the playback is shortly interrupted in order to have the recorded data consistent.
1.2 Explicit Priorities
In addition to the above relative implicit priorities, the present invention uses optional explicit priority levels such as “low” and “high” or integer numbers, or any ranked terms in general, based on an explicit Priority parameter that can be associated with a task. The explicit Priority parameter can optionally be assigned to a task e.g. by the node that initiates the task, or by a user. It may also be regarded as a matter of an application to make use of explicit priority levels. A node is able to modify the Priority parameter, and thus the explicit priority of a task, by sending a request message (e.g. ‘ModifyTransferRequest(“modify”)’) to the respective other nodes involved in the task.
In any case, implicit or first-layer priorities overrule explicit priorities. Consequently, explicit priority levels are exploited only when tasks have identical implicit priorities. If a device shall run more than one task at a time, it rates these tasks according to their implicit priorities and, in case of identical implicit priority, according to their explicit priority levels if these have been assigned, and provides its resources according to this rating.
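Building on the hypothetical Task sketch above, the first-layer ranking a node could apply when rating concurrent tasks might look as follows; how ties between identical implicit ranks are then broken by the explicit level is sketched after the default-value rule below:

```python
def implicit_rank(task) -> tuple:
    """First-layer (implicit) rank of a task; smaller tuples mean higher priority.

    Derived purely from task-inherent properties of the Task sketch above, so
    no user input is needed: streaming before file transfer (rule 5), recording
    before playback (rule 6), earlier before later initiation (rule 1).
    """
    return (not task.is_streaming, not task.is_recording, task.task_init_time)
```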
A node may only be allowed to modify explicit priority levels of a task that it has not initiated itself, if the associated user or application running on that node has provided it with the correct UseKey. This is a parameter associated with the respective piece of content, which has optionally been defined by a user for this purpose and may relate e.g. to a particular interest group of users. An explicit priority level may further be modified through the node that runs the application that initiated the task, or in one embodiment through any node in an OwnerZone. In this case anybody in the OwnerZone can modify the explicit priority level of any task that has no associated UseKey parameter.
The following is an example in which two explicit priority levels “low” and “high” are defined, but it can be applied to any scheme of priority levels. If no explicit priority level has been defined for a task A, the following rule shall be applied for treating its undefined (or default) value:
if another, maybe competing, task B with identical implicit priority has an explicit priority level being “high”, then the undefined (or default) explicit priority of task A shall be regarded as “low”;
if another, maybe competing, task B with identical implicit priority has an explicit priority level being “low”, then the undefined (or default) explicit priority of task A shall be regarded as “high”.
This means that an explicit or second-layer “high” priority is assigned to a task only if, and with the intention that, it shall be treated as more important than other tasks of identical implicit priority, and vice versa for a “low” level priority.
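A sketch of this default handling, together with the resulting two-stage decision between two competing tasks, is given below; it assumes the two-level example and the hypothetical helpers from the earlier sketches:

```python
def explicit_beats(level_a, level_b) -> bool:
    """True if explicit level A outranks level B (two-level example).

    An unset (None) level is regarded as the opposite of the competing task's
    explicitly assigned level: "low" against a "high" competitor, "high"
    against a "low" competitor; if both are unset, neither outranks the other.
    """
    if level_a is None and level_b is None:
        return False
    if level_a is None:
        level_a = "low" if level_b == "high" else "high"
    if level_b is None:
        level_b = "low" if level_a == "high" else "high"
    return level_a == "high" and level_b == "low"

def serve_first(a, b):
    """Implicit priorities always decide; explicit levels only break ties."""
    if implicit_rank(a) != implicit_rank(b):
        return a if implicit_rank(a) < implicit_rank(b) else b
    # identical implicit priority: consult the explicit levels (b otherwise)
    return a if explicit_beats(a.explicit_level, b.explicit_level) else b
```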
If possible, a task with a higher implicit or explicit priority than others must be implemented such that its requirements, in terms of storage capacity, transfer rate etc., are satisfied better than those of other tasks. A task with a lower explicit priority should be implemented with the remaining capabilities, after the higher-priority tasks have been processed.
1.3 Implementation of Priority Rules
For implementing the above priority rules, each node may store all running and/or scheduled tasks in which it is involved in a “Task and Schedule Database”. The tasks are stored in serial order according to the time when they were initiated (according to their TaskInitTime), and identified by their respective task identifiers TaskID. A task is removed from the database upon its completion. Each node applies the above-described priority related rules when initiating or serving requests.
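A minimal sketch of such a database follows; the class and method names are hypothetical, and a real node would additionally persist the task parameters:

```python
class TaskScheduleDatabase:
    """Hypothetical "Task and Schedule Database" of a node.

    Running and scheduled tasks are kept in serial order of their TaskInitTime
    and identified by their TaskID; a task is removed upon completion (or
    cancellation).
    """

    def __init__(self):
        self._tasks = []   # list of (task_init_time, task_id, task)

    def add(self, task_id: str, task_init_time: float, task) -> None:
        self._tasks.append((task_init_time, task_id, task))
        self._tasks.sort(key=lambda entry: (entry[0], entry[1]))

    def complete(self, task_id: str) -> None:
        self._tasks = [entry for entry in self._tasks if entry[1] != task_id]

    def in_order(self):
        # Requests are served in the sequence of their initiation (rule 1).
        return [entry[2] for entry in self._tasks]
```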
The second transfer Tr2 may start at tSRQ2+Twft2 because the available data rate or bandwidth Bmax is higher than the sum of required data rates R1+R2. The transfer request at tTRQ1 may also come later than Twft1 after the search request tSRQ1.
Though the described basic mechanisms are shown exemplarily for only two transfers, they can be used for any number of transfers, and they can be combined. It is e.g. possible that in
A similar situation is shown in
2. Conflicts and Bottlenecks and their Management, and Approaches to Avoidance
A conflict occurs where two or more operations compete with and exclude each other, so that not all of them can be performed. E.g. a first application may try to delete a piece of content while another application is reading it. Hence, the term “conflict” refers to a systematic conflict in the network system, e.g. DSS, and describes a situation where an intended task cannot be performed. However, there may be ways to overcome the conflict. As a possibility in the above example, the deletion task can be performed after the reading task, or the reading task can be cancelled so that the deletion task can follow.
A bottleneck is a physical constraint, e.g. low throughput rate or storage capacity, high delay etc. It is therefore a limiting factor for a process or task to take place. Hence, within this application the term “bottleneck” refers to a situation where an intended task can be performed, but only with a limitation. Unlike a conflict, a bottleneck does not block or prevent a task.
The following sections describe a number of conflicts and bottlenecks and their management. Also approaches towards their avoidance are given.
2.1 Conflicts and their Management
Conflicts may occur e.g. with respect to:
Messages and control metadata can be used to overcome conflicts in storage capacity. E.g. in order to overcome a storage space conflict, an application or user may decide to delete or move pieces of content of less interest or importance. This may be decided e.g. according to user preferences. Thus, room for new recordings is made. In order to overcome a conflict in transfer rate, data transfers can be performed in succession.
Managing resources can be done continuously as a precaution or only in urgent cases. Resources in a node are allocated as soon as the node receives or launches a respective request, e.g. to be involved in the transfer of content. At this stage, search requests do not yet imply the allocation of resources, as the intention and decision of the user or application is in general not yet known; e.g. several matches may be found and a choice will have to be made. It is however probable that a data transfer will follow. Therefore it is an object of the present invention that an earlier search request leads to a higher priority for the transfer of the search result. This is explained in more detail in the section on priorities. The time of initiation of a search request, i.e. when the TaskID is defined, is communicated to the other nodes involved in the task.
In order to improve availability, important pieces of content may be copied and stored redundantly on two or more nodes. Thus, a piece of content that is stored on a certain node that is currently not available can be accessed from another node. This is an issue for the Application Layer or Intermediate Control Layer. E.g. the system may learn or ask what genres a user of an OwnerZone is interested in, and automatically create copies of respective pieces of content. The system could also duplicate pieces of content known to be recorded on removable media, and store them on stationary media that are available in the OwnerZone. For this purpose, software needs to keep track of the times of availability of nodes, and of what users regard as important.
If identical pieces of content are available redundantly on different nodes, they may also be used to overcome certain access or transfer rate conflicts. E.g. if two nodes try to access the same piece of content on a third node, one of them may be redirected to an identical piece of content on another node. If a node has found identical content on different nodes, it can select the node that can provide the highest transfer rate.
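The selection among redundant copies could be as simple as the following sketch; the candidate structure is a hypothetical illustration of the capability information the nodes report:

```python
def choose_source(candidates):
    """Select the source node for an identical piece of content held redundantly.

    'candidates' is a hypothetical list of (node_id, free_transfer_rate) pairs
    reported by nodes that hold the content; the node offering the highest
    free transfer rate is chosen.
    """
    return max(candidates, key=lambda c: c[1])[0] if candidates else None

# e.g. choose_source([("S1", 2e6), ("S2", 8e6)]) returns "S2"
```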
If a node that is not the source or destination of a task becomes unavailable while the task is running, this is usually not an issue.
If a source or destination node becomes unavailable while a transfer is running, e.g. due to power-off or unplugging, the transfer cannot be completed successfully. Generally, with some exceptions however, the involved nodes shall regard the task as being cancelled and delete the task and its parameters from their task memory as soon as possible. There are different situations and possibilities:
In cases (a) and (b), the destination node and the node that initiated the task then delete the task and its parameters from their task memories; the same holds for the source node when it becomes available again. In case (c), the destination shall keep trying to contact the source node, and as soon as it becomes available again, resume the transfer from the point where it has been interrupted, and inform the node that initiated the task (using a message like TransferStatusInformation(“resumed”)); if the source node does not become available within a given time period Twua (“wait until available” time, e.g. a week), the destination node and the node that initiated the task shall behave like in case (b).
A transfer may also be scheduled for a specified time. If a node is not available while a scheduled transfer should start, the following situations are possible:
Depending on the available resources, the initiating node may (a) wait for the destination node to become available again and then start the transfer, or (b) send a cancellation request. In case (b), it may select another destination node. In case (a), the source node and the initiating node keep the task and its parameters in their task memories for a given time period Twua and delete it afterwards. The same holds for the destination node when it is available again. If the destination node is available again within Twua, it requests the source node to forward the data. If the transfer can be started successfully, the usual message flow is used. If now the source node is unavailable, the destination node shall behave as specified above where the source node becomes unavailable.
In any case, any node shall delete any task that is overdue for more than a specified time Twua from its task memory, including its related parameters.
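The waiting-and-timeout behaviour around Twua could be sketched as follows; all names are hypothetical, an exemplary value is assumed for Twua, and the sketch only illustrates the general pattern of resuming, cancelling or continuing to wait:

```python
import time
from typing import Optional

TWUA_SECONDS = 7 * 24 * 3600   # exemplary "wait until available" period Twua (one week)

def handle_unreachable_peer(task, peer_available: bool,
                            now: Optional[float] = None) -> None:
    """Treatment of a transfer whose source or destination node is unreachable.

    'task' is assumed to carry 'interrupted_at' (when the peer became
    unavailable) plus 'resume' and 'cancel' callables; all names are
    hypothetical illustrations of the behaviour described above.
    """
    now = time.time() if now is None else now
    if peer_available:
        task.resume()            # e.g. announced via TransferStatusInformation("resumed")
    elif now - task.interrupted_at > TWUA_SECONDS:
        task.cancel()            # regard as cancelled; delete task and its parameters
    # otherwise: keep the task and its parameters and keep trying to reach the peer
```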
2.2 Bottlenecks and their Management
Bottlenecks may occur, e.g., with respect to:
Messages and Control Metadata are available to overcome bottlenecks in storage capacity and/or transfer rate. In order to overcome a bottleneck in transfer rate, the application or user may decide to transfer a piece of content (whether it be real-time streaming content or non-real-time file content) in non-real time as a file at a lower bit rate, so that the transfer time will be longer. As soon as resources become available again, the bit rate can be increased again and the transfer time shortened. Means are available to adjust the bit rate of a file transfer as necessary.
When searching for real-time streaming content in order to transfer it at a low transfer rate, e.g. to a portable or mobile device, a maximum bit rate can be included in the search request. Only devices that hold the required piece of content and that match the bit rate will answer the request. If, in case of a bottleneck in terms of processing power/time, a storage node is not able to perform all received search requests simultaneously or in due time, it communicates periodically that it is still searching. It may manage all of the search requests anyhow, if necessary sequentially.
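How a storage device could filter such a rate-limited search request is sketched below; the content structure and the treatment of the maximum bit rate as a plain number are assumptions for illustration only:

```python
def answers_search(local_content: dict, title: str, max_bit_rate: float) -> bool:
    """Whether a storage device answers a search that includes a maximum bit rate.

    'local_content' is a hypothetical mapping from content titles to the bit
    rate at which the device can provide them; only a device that holds the
    piece of content and matches the bit rate answers the request.
    """
    rate = local_content.get(title)
    return rate is not None and rate <= max_bit_rate
```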
There are further possibilities mainly on the Application Layer and essentially beyond the scope of the Messages and Control Metadata to overcome bottlenecks. E.g. in case of a bottleneck in terms of transfer rate or storage capacity, an intended real-time streaming transfer for playback or recording purposes may be performed at a decreased bit rate, and therefore degraded in quality, if the node has the ability to do so.
2.3 Towards Avoiding Conflicts and Bottlenecks
It need not always come to a situation where a conflict or bottleneck occurs. For example, the following steps may be taken in advance in order to avoid, or reduce the number of, bottlenecks and conflicts.
When a record request is scheduled, the content stored on a node or in the OwnerZone may be analysed, and the user or the application may be notified if the same or similar content is already stored. The analysis should consider whether the already stored content is complete and of sufficient quality. Then the application may suggest not to perform the new recording, or to delete the other versions, e.g. if they have low quality or are incomplete.
The following is a simple scenario describing an application of the invention in a Distributed Storage System, and the Control Language used for distributed storage management, including associated Messages and Control Metadata. Different messages or tasks are used along with specific Control Metadata contained in them as message parameters or arguments. For ease of writing, messages are represented by message name and arguments, e.g.:
Though every message has its own MessageID, the MessageID is omitted for simplicity. The scenario is based on an example network (OwnerZone) for distributed storage shown in
The user utilises device S0 to search for a desired piece of content: device S0 sends a search request message to all devices in the network. Device P receives the message, detects that it holds the content and replies to S0. In a variation to this scenario, however, device P could be used instead of S0 to initiate the tasks of searching and copying content. In this case, node P would not send a reply about matching content to itself; it would just get the corresponding information from its content database.
Since the user wants to store the content on any stationary storage device, device S0 is used to ask devices S1, S2 and S3 for their storage and transfer capabilities. S1, S2 and S3 inform S0 about their device capabilities, namely that they all have sufficient free transfer rate available. Limitation in free storage capacity is observed for device S1, while S3 offers the highest amount of free capacity. Device S0 requests P to transfer the content to S3 accordingly, thus making use of the storage capacity available in the network in a well-balanced way. After finishing the associated data transfer, P notifies S3 with a message. After recording the content, S3 informs S0 about the successful completion.
Well-balanced usage of storage capacity in a network, i.e. managing storage space between the nodes, may mean e.g. to record a piece of content on the node offering the highest free transfer rate, or highest absolute or relative free storage capacity as in this scenario. The storage devices in the network can be regarded as one “monolithic block” where the user does not need to distinguish between them. The well-balanced usage of storage capacity, however, is only one possible way for managing the storage capacity in the network. Other strategies could be applied as well when copying content, e.g. in case of capacity limitation.
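The selection logic applied by S0 in this scenario could be sketched as follows; the capability structure is a hypothetical stand-in for the information carried in DeviceCapabilitiesInformation messages, and choosing the highest free capacity is only one of several possible balancing strategies:

```python
def select_destination(capabilities: dict, required_rate: float,
                       content_size: float):
    """Well-balanced choice of a recording node, as in the scenario above.

    'capabilities' is a hypothetical mapping node_id -> (free_capacity,
    free_transfer_rate). Nodes lacking capacity or transfer rate are skipped;
    among the remaining ones the node with the highest free capacity is picked.
    """
    eligible = {node: cap for node, (cap, rate) in capabilities.items()
                if cap >= content_size and rate >= required_rate}
    return max(eligible, key=eligible.get) if eligible else None

# e.g. select_destination({"S1": (1e9, 10e6), "S2": (80e9, 10e6),
#                          "S3": (500e9, 10e6)}, 8e6, 4e9) returns "S3"
```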
The following sequence of exemplary messages occurs in this scenario: All messages contain identifiers for the sender and the receiver, and parameters specific to the respective message type.
It is assumed that the user wants to search for a certain piece or type of content, e.g. a movie with the title “Octopussy”. As a result of his input, the S0 device sends the following search request to all devices; since S0 has some pre-knowledge about S2 or is especially interested in S2, S0 sends the message specifically to S2:
All devices store the association of the TaskID and the task-related parameters temporarily and search their databases. P finds the requested piece of content and therefore sends back the following message to S0:
Since "all" receivers have been addressed in the ContentInfoRequest("search") message, there is no need for a receiver to respond to the request unless it finds content matching the request; S2, however, is mentioned explicitly as a receiver and must respond whether it holds the desired content or not. S2 needs some time to search its database and sends the following message to S0 when it begins to search:
Device S2 does not find the requested piece of content. Because S2 has been addressed as a “must respond” receiver in the ContentInfoRequest(“search”) message, it sends back the following message to device S0, although the desired content was not found in S2:
The user may find the content he is searching for before the search process of all devices has been completed. He may therefore let S0 cancel the search process using the following message:
After receiving this message, all devices stop their search process. Because S2 has been addressed as a “must respond” receiver in the ContentInfoRequest(“search”) message, it sends back the following message to S0 to confirm the CancelTaskRequest(“search”) request:
After sending the ContentInfoResponse message to S0, nodes P and S2 delete the TaskID and the associated parameters from their temporary memory. The same holds for any device sending a CancelTaskResponse message.
The user is satisfied with the search result, and S0 now sends request messages to S1, S2 and S3 asking for their device capabilities, in order to find out their free storage capacities and transfer rates. Devices S1, S2 and S3 respond by informing S0 about their device capabilities:
Alternatively, S0 can also send the RequestDeviceCapability message to all three nodes as follows:
S0 evaluates the free capacities and transfer rates of S1, S2 and S3. S1 does not have sufficient free storage capacity, while S3 offers the highest amount of capacity. In order to make well-balanced use of the storage capacity of the stationary storage devices in the network, S0 automatically selects S3 for recording the content from P, without the user being required to interact, and requests S3 and P to perform the transfer. In variations to this scenario, one Receiver would be omitted and the message would just start:
In this case, node P is allowed to launch this InitiateTransferRequest only if it has the necessary resources available:
This message requests that the piece of content under the location on node P shall be transferred to node S3 and recorded there. The ContentID is a UUID specifying the location of the piece of content on node P. The TaskID is a UUID and could, e.g., be defined based on the NodeIDs of the devices involved, the location of the content to be transferred, and the time when the task was initiated. If device P and/or S3 were too busy at the moment according to their FreeTransferRate, they would send an InitiateTransferResponse(“denied”) message to S0; the task would then be cancelled by S0 by sending a CancelTaskRequest message to P and S3, answered by them through CancelTaskResponse messages to S0; or recording could be tried again later or scheduled using the After parameter according to the Until obtained from the DeviceCapabilitiesInformation messages. After receiving the message above, S3 and P confirm the request and allocate respective resources. The user wants to grant access to the content copy to a certain group of people he manages under the label “John's James Bond friends” defined by himself, and instructs S0 accordingly:
Since the value of the TransferPurpose parameter is “Record”, the Destination node S3 will control the data forwarding process: S3 then (or later, according to the After parameter) requests P to send the respective content data to it:
Device P receives the request from S3, and sends the following response message to S3 accompanied with the requested content, thus starting to transfer content data from P to S3:
S3 now informs S0 about the start of the recording process so that the user can be notified:
Since S3 controls the transfer (starting it through the ForwardDataRequest message), S3 sends the TransferStatusInformation(“starting”) message to S0. When P finishes the data transfer, it sends the following information message to S3, thus confirming that the complete data have been transferred. If this message would not be received, S3 could use this fact as an indication that the transfer was incomplete due to some reason, e.g. due to forced device unplugging:
S3 finishes the recording and sends the following information message about the successful completion of the recording to S0 so that it can notify the user:
Devices P and S3 deallocate their resources, and S0 now notifies the user about the successful completion of the transfer task.
The invention can be applied to all networking fields where conflicts or bottlenecks may occur and should be limited. Examples are networks based on peer-to-peer technology, such as e.g. OwnerZones, or Universal Plug and Play (UPnP) technology.