This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2014-048196, filed on Mar. 11, 2014, the entire contents of which are incorporated herein by reference.
This invention relates to a scheduling technique of data transmission among nodes.
Various devices that generate data are deployed in a distributed manner. For example, a smart phone generates log data of the Global Positioning System (GPS), pictures, videos and the like, and a sensor device generates monitoring data. Such data is collected through a network including plural nodes, and is used by applications of an end user and/or for control based on the monitoring.
A specific example of such a system is a disaster alert system. In this system, measurement data is constantly collected from a water level sensor of a river, an accelerometer or the like, and when a value of the measurement data exceeds a threshold, an alert is issued. In such a system, it is desirable that a dangerous state, in which the value of the measurement data exceeds the threshold, be recognized promptly and notified to citizens.
Moreover, a certain document discloses a system for delivering an appropriate advertisement according to properties and/or a state of a user, such as a behavioral targeting advertising system. This is a system that displays a recommended advertisement according to a person's preference (e.g. purchase history), the temperature at a location and/or the like. For example, when an advertisement content that corresponds to the preference of the purchaser and/or the temperature is to be displayed by a vending machine that has a display device, an identifier of the purchaser is obtained, and then the advertisement content is delivered from the content delivery server in a short time to cause the display device of the vending machine to display the advertisement. Therefore, a transmission time is calculated based on the arrival time limit at the vending machine, the time required for the data transmission and the like, and the data transmission is performed according to the calculated transmission time. However, when the same transmission time is calculated for a lot of data, congestion occurs.
Moreover, another document discloses a communication system that performs communication through a transmission line between a main apparatus and plural sub-apparatuses. A sub-apparatus transmits, to the main apparatus, an information amount of an information signal to be transmitted through the transmission line, and the main apparatus calculates, based on the information amount received from the sub-apparatus, an information amount of information that the sub-apparatus can transmit, and transmits the calculated information amount to the sub-apparatus. In response to this, when the sub-apparatus receives the information amount of information that it can transmit, the sub-apparatus transmits an information signal having the received information amount.
The main apparatus in such a communication system determines the information amount of the information that the sub-apparatus can transmit based only on the received information amount, and cannot individually determine the contents of the information to be transmitted by the sub-apparatus. Therefore, the transmission to the main apparatus may be delayed.
Patent Document 1: Japanese Laid-open Patent Publication No. 2011-61869
Patent Document 2: Japanese Laid-open Patent Publication No. 2000-270010
Patent Document 3: Japanese Laid-open Patent Publication No. 2013-254311
In other words, there is no conventional art that can sufficiently suppress the delay of the data transmission.
An information communication method relating to a first aspect of this invention includes: (A) first transmitting, by a first information processing apparatus, a first transmission schedule for one or more data blocks to a second information processing apparatus that is a transmission destination of the one or more data blocks; (B) first receiving, by the second information processing apparatus, the first transmission schedule from the first information processing apparatus; (C) determining, by the second information processing apparatus, a second transmission schedule for at least one data block among the one or more data blocks that are defined in the first transmission schedule, based on the first transmission schedule and receiving resources of the second information processing apparatus; (D) second transmitting, by the second information processing apparatus, the second transmission schedule to the first information processing apparatus; (E) second receiving, by the first information processing apparatus, the second transmission schedule from the second information processing apparatus; and (F) third transmitting, by the first information processing apparatus, one or plural data blocks that are defined in the second transmission schedule, based on the second transmission schedule to the second information processing apparatus.
An information processing apparatus relating to a second aspect of this invention includes: a memory; and a processor configured to use the memory and execute a process including: (G) transmitting a first transmission schedule for one or more data blocks to an information processing apparatus that is a transmission destination of the one or more data blocks and determines a second transmission schedule for at least one data block among the one or more data blocks based on the first transmission schedule and receiving resources of the information processing apparatus; (H) receiving the second transmission schedule from the information processing apparatus; and (I) transmitting one or plural data blocks that are defined in the second transmission schedule, based on the second transmission schedule to the information processing apparatus.
An information processing apparatus relating to a third aspect of this invention includes: a memory; and a processor configured to use the memory and execute a process including: (J) receiving a first transmission schedule for one or more data blocks from an information processing apparatus that will transmit the one or more data blocks; (K) determining a second transmission schedule for at least one data block among the one or more data blocks that are defined in the first transmission schedule, based on the first transmission schedule and receiving resources; and (L) transmitting the second transmission schedule to the information processing apparatus.
The object and advantages of the embodiment will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the embodiment, as claimed.
The number of nodes included in the data collection and delivery system relating to this embodiment is not limited to “3”, and the number of stages of nodes provided between the data source and the application is not limited to “2”, and may be 2 or more. In other words, in this embodiment, nodes are connected so that plural stages of the nodes are made.
Here, definitions of variables that will be used later are explained. In order to make the explanation easy to understand, as illustrated in
As illustrated in
Here, when a transfer route of data dj (whose data size is represented as sj) is represented as [La,b, Lb,c], a time limit (called "an arrival time limit" or "a delivery time limit") of the end-to-end transfer from the node Nda to the node Ndc is represented as "tlim,j" in this embodiment. Moreover, the delivery time limit tlim,j,a of the data dj at the node Nda becomes "tlim,j−sum([la,b, lb,c])" ("sum" represents the total sum). Similarly, the delivery time limit tlim,j,b of the data dj at the node Ndb becomes "tlim,j−lb,c".
The bandwidth (bit per second (bps)) of the link La,b is represented as ca,b.
In addition, time slots that will be described below are explained by using
In this embodiment, as illustrated in
When the node C receives the transmission schedules from the nodes A and B, the node C superimposes the transmission schedules as illustrated in
Next,
The data receiver 101 receives messages from other nodes or data sources. When the node itself performs a processing for data included in the message, a previous stage of the data receiver 101 performs the processing in this embodiment. In this embodiment,
In case of the message received from other nodes, as illustrated in
As illustrated in
Moreover, as illustrated in
Moreover, as illustrated in
The first scheduler 102 uses the link data storage unit 103, the data transfer route storage unit 104 and the latency data storage unit 105 to identify a delivery time limit (i.e. arrival time limit) up to the destination for the received message, identify the transmission time limit at this node, and store the identified transmission time limit and the data of the message in the data queue 106.
The data transmitter 107 transmits, for each time slot defined in the data queue 106, messages allocated to the time slot to the destination node or application.
The schedule negotiator 108 has a scheduling requesting unit 1081, a schedule receiver 1082 and a rescheduler 1083.
The scheduling requesting unit 1081 generates a scheduling request including a transmission schedule by using data stored in the data queue 106, and transmits the scheduling request to a node of the message transmission destination. The schedule receiver 1082 receives schedule notification including a scheduling result from a node of the message transmission destination, and outputs the received schedule notification to the rescheduler 1083. The rescheduler 1083 updates contents of the data queue 106 according to the received scheduling result.
Moreover, the second scheduler 109 has a request receiver 1091, a scheduling processing unit 1092 and a notification unit 1093.
The request receiver 1091 receives scheduling requests from other nodes, stores the received scheduling requests in the scheduling data storage unit 111, and outputs the scheduling requests to the scheduling processing unit 1092. The scheduling processing unit 1092 changes a transmission schedule for each node based on the scheduling requests from plural nodes by using data stored in the resource management data storage unit 110.
Data is stored in the resource management data storage unit 110 in data formats illustrated in
Information concerning data blocks thrown into a queue is stored in the queue. However, as illustrated in
Moreover, data is stored in the scheduling data storage unit 111 in a data format as illustrated in
The notification unit 1093 transmits, to each node, the scheduling result generated by the scheduling processing unit 1092.
Next, processing contents of the node will be explained by using
Firstly, processing contents when the message is received will be explained by using
The data receiver 101 receives a message including data (dj) and outputs the message to the first scheduler 102 (step S1). When its own node is the uppermost node connected to the data source (step S3: Yes route), the first scheduler 102 searches the latency data storage unit 105 for the data ID "dj" to read out a latency that is allowed up to the destination, and obtains the delivery time limit tlim,j (step S5). For example, the delivery time limit is calculated by "present time+latency". When the delivery time limit itself is stored in the latency data storage unit 105, it is used. On the other hand, when its own node is not the uppermost node (step S3: No route), the processing shifts to step S9.
Moreover, the first scheduler 102 adds the delivery time limit tlim,j to the received message header (step S7). By this step, a message as illustrated in
Furthermore, the first scheduler 102 searches the data transfer route storage unit 104 for dj to read out a transfer route [Lx,y] (step S9). In this embodiment, the transfer route is array data of link IDs.
Then, the first scheduler 102 searches the latency data storage unit 105 for each link ID in the transfer route [Lx,y], and reads out the latency lx,y of each link (step S11).
After that, the first scheduler 102 calculates a transmission time limit tlim,j,x at this node from the delivery time limit tlim,j and the latency lx,y (step S13). Specifically, "tlim,j−Σlx,y" (the total sum is taken with respect to all links on the transfer route) is calculated.
Then, the first scheduler 102 determines a transmission request time treq,j,x from the transmission time limit tlim,j,x (step S15). "tlim,j,x=treq,j,x" may hold, or "treq,j,x=tlim,j,x−α" may be employed in consideration of a constant margin α. In the following explanation, "the transmission time limit=the transmission request time" holds in order to make the explanation easy.
Then, the first scheduler 102 throws the message and additional data into the time slot of the transmission request time treq,j,x (step S17). Data as illustrated in
The aforementioned processing is performed every time a message is received.
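Purely as an illustration of the steps S5 to S17, the following Python sketch shows one possible way of computing the transmission time limit and throwing a message into a time slot. The names and data structures used here (latency_table, transfer_routes, allowed_latency, data_queue and so on) are assumptions made only for this explanation and are not part of the embodiment.

import math
import time
from collections import defaultdict

SLOT_WIDTH = 1.0  # time slot width (delta t) in seconds; an assumed value

# Assumed stand-ins for the latency data storage unit 105 and
# the data transfer route storage unit 104 of the embodiment.
latency_table = {"La,b": 0.5, "Lb,c": 0.3}   # latency l_x,y of each link
transfer_routes = {"dj": ["La,b", "Lb,c"]}   # transfer route [Lx,y] of each data ID
allowed_latency = {"dj": 5.0}                # latency allowed up to the destination

# The data queue 106: one list of messages per time slot.
data_queue = defaultdict(list)

def on_message_received(data_id, message, is_uppermost_node, now=None):
    now = time.time() if now is None else now
    if is_uppermost_node:
        # Steps S5 and S7: delivery time limit = present time + allowed latency.
        message["t_lim"] = now + allowed_latency[data_id]
    # Steps S9 and S11: read out the transfer route and the latency of each link.
    route = transfer_routes[data_id]
    total_latency = sum(latency_table[link] for link in route)
    # Step S13: transmission time limit at this node.
    t_lim_local = message["t_lim"] - total_latency
    # Step S15: here the transmission request time equals the limit (margin alpha = 0).
    t_req = t_lim_local
    # Step S17: throw the message into the time slot that contains t_req.
    slot = math.floor(t_req / SLOT_WIDTH)
    data_queue[slot].append({"d": data_id, "t_lim_local": t_lim_local, "msg": message})
    return slot

print(on_message_received("dj", {"payload": "..."}, is_uppermost_node=True))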
Next, processing contents of the schedule negotiator 108 will be explained by using
Firstly, the schedule negotiator 108 determines whether or not the present time is an activation timing of a time interval TSR,x (
Then, the scheduling requesting unit 1081 reads out data (except data body itself) within the scheduling window from the data queue 106, and generates a scheduling request (step S25).
For example, when specific values are inputted in the Javascript Object Notation (JSON) format, an example of
After that, the scheduling requesting unit 1081 transmits the scheduling request to a transmission destination of the data (step S27).
Then, the schedule negotiator 108 determines whether or not the end of the processing is instructed (step S29). When the end of the processing is not instructed, the processing returns to the step S21; otherwise, the processing ends.
By transmitting the scheduling request for plural time slots as described above, adjustment of the transmission timing is properly performed.
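As a purely illustrative example, a scheduling request such as the one generated at the step S25 might be built and serialized in the JSON format as follows. Every field name used here (node, window, slots, blocks and so on) is an assumption made for explanation, not a format prescribed by the embodiment.

import json

# One possible shape of a scheduling request: for each time slot in the
# scheduling window, the IDs and time limits of the data blocks requested
# for that slot (the data bodies themselves are excluded).
scheduling_request = {
    "node": "A",
    "window": {"start": 100.0, "slots": 5, "slot_width": 1.0},
    "slots": [
        {"slot": 1, "blocks": [{"d": "d1", "t_lim_node": 101.0, "t_lim": 103.5}]},
        {"slot": 2, "blocks": []},
        {"slot": 3, "blocks": [{"d": "d2", "t_lim_node": 103.0, "t_lim": 104.0},
                               {"d": "d3", "t_lim_node": 103.0, "t_lim": 105.0}]},
    ],
}
print(json.dumps(scheduling_request, indent=2))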
Next, a processing when the scheduling result is received will be explained by using
The schedule receiver 1082 receives schedule notification including the schedule result (
Then, when the rescheduler 1083 receives the schedule notification, the rescheduler 1083 performs a processing to update the time slots into which the messages (i.e. data blocks) in the data queue 106 are thrown, according to the schedule notification (step S33). When the transmission schedule notified by the schedule notification is identical to the transmission schedule in the scheduling request, no special processing is performed. When a data block is moved to a different time slot, the data block is enqueued in a queue for the changed time slot. When there is no data for that time slot, the time slot is generated at this stage.
Thus, a transmission schedule adjusted in the node of the transmission destination can be reflected to the data queue 106.
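A minimal sketch of the step S33, under the assumptions that the data queue is a mapping from a time slot number to a list of data blocks and that the schedule notification is a mapping from a data block ID to the time slot finally assigned by the transmission destination (both shapes are assumptions for illustration):

def apply_schedule_notification(data_queue, notification):
    """Move each data block to the time slot assigned by the destination node.

    data_queue:   dict {slot_number: [block, ...]}, each block a dict with key "d"
    notification: dict {data_id: assigned_slot_number}
    """
    for slot, blocks in list(data_queue.items()):
        kept = []
        for block in blocks:
            assigned = notification.get(block["d"], slot)
            if assigned == slot:
                kept.append(block)        # schedule unchanged: nothing to do
            else:
                # Generate the time slot if it does not exist yet, then enqueue.
                data_queue.setdefault(assigned, []).append(block)
        data_queue[slot] = kept

queue = {3: [{"d": "d1"}, {"d": "d2"}]}
apply_schedule_notification(queue, {"d2": 4})
print(queue)   # {3: [{'d': 'd1'}], 4: [{'d': 'd2'}]}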
Next, processing contents of the data transmitter 107 will be explained by using
The data transmitter 107 determines whether or not the present time becomes an activation timing t, which occurs at intervals of a time slot width Δt (
When it is not possible to read out data of the messages at the step S43 (step S45: No route), a processing for this time slot ends.
On the other hand, when the data of the messages can be read out (step S45: Yes route), the data transmitter 107 determines whether or not its own node is an end node of the transfer route (step S47). In other words, it is determined whether or not its own node is a node that outputs the messages to an application.
Then, when its own node is the end node, the data transmitter 107 deletes the delivery time limit attached to the read message (step S49). On the other hand, when its own node is not the end node, the processing shifts to step S51.
After that, the data transmitter 107 transmits the read messages to the destinations (step S51). Then, the data transmitter 107 determines whether or not the processing is to be ended (step S53). When the processing is not to be ended, the processing returns to the step S41; otherwise, the processing ends.
Thus, the messages can be transmitted according to the transmission schedule determined by the node of the transmission destination. Therefore, data that can be received with the receiving resources of the node of the transmission destination is transmitted, and the delay of the data transmission is suppressed.
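For illustration, the transmission loop of the steps S41 to S51 may be sketched as follows, under the assumption that the data queue is keyed by time slot number and that each message carries its delivery time limit in a header field here called "t_lim" (an assumed name):

def transmit_slot(data_queue, slot_number, is_end_node, send):
    """Transmit every message scheduled for the given time slot."""
    messages = data_queue.pop(slot_number, [])
    if not messages:
        return 0                      # step S45: nothing to send in this slot
    for msg in messages:
        if is_end_node:
            msg.pop("t_lim", None)    # step S49: strip the delivery time limit
        send(msg)                     # step S51: transmit to the destination or application
    return len(messages)

sent = transmit_slot({7: [{"d": "d1", "t_lim": 103.5}]}, 7, is_end_node=True, send=print)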
Next, processing contents of the second scheduler 109 will be explained by using
The request receiver 1091 in the second scheduler 109 receives a scheduling request from each node near the data source, outputs the received scheduling requests to the scheduling processing unit 1092, and stores the received scheduling requests in the scheduling data storage unit 111 (
Then, the scheduling processing unit 1092 of the second scheduler 109 expands the respective scheduling requests for the respective time slots to count the number of messages (i.e. the number of data blocks) for each time slot (step S63). This processing result is stored in the resource management data storage unit 110 as illustrated in
Then, the scheduling processing unit 1092 determines whether or not the number of messages (the number of data blocks) that will be transmitted in each time slot is within a range of the receiving resources (i.e. less than the maximum value) (step S65). When the number of messages that will be transmitted in each time slot is within the range of the receiving resources, the scheduling processing unit 1092 outputs schedule notification including contents of the scheduling request stored in the scheduling data storage unit 111 as they are to the notification unit 1093, and causes the notification unit 1093 to transmit the schedule notification to each requesting source node (step S67). This is because, in such a case, it is possible to receive the messages without changing the transmission schedule of each node.
Then, the scheduling processing unit 1092 stores the contents of the respective schedule notifications in the scheduling data storage unit 111 (step S69). Moreover, the scheduling processing unit 1092 discards the respective schedule requests that were received this time (step S71).
On the other hand, when the number of messages for any of the time slots exceeds the range of the receiving resources, the processing shifts to a processing in
Firstly, the scheduling processing unit 1092 initializes a counter n for the time slot to “1” (step S73). Then, the scheduling processing unit 1092 determines whether or not the number of messages for the n-th time slot exceeds the receiving resources (step S75). When the number of the messages for the n-th time slot is within the receiving resources, the processing shifts to a processing in
On the other hand, when the number of messages for the n-th time slot exceeds the range of the receiving resources, the scheduling processing unit 1092 sorts the messages within the n-th time slot by using, as a first key, the transmission time limit of the transmission source node and by using, as a second key, the delivery time limit (step S77).
A specific example of this step will be explained for the third time slot in
After that, the scheduling processing unit 1092 determines whether or not there is a vacant receiving resource for a time slot before the n-th time slot (step S79). When there is no vacant receiving resource, the processing shifts to the processing in
On the other hand, when there is a vacant receiving resource in the time slot before the n-th time slot, the scheduling processing unit 1092 moves a message from the top in the n-th time slot to the end of the time slot having a vacant receiving resource (step S81).
In an example illustrated in
There is a case where two or more messages exceed the range of the receiving resources. In such a case, as many messages as there are vacant receiving resources in the time slots before the n-th time slot are picked up from the top of the n-th time slot and moved. When three messages exceed the range of the receiving resources but there are only two vacant receiving resources in the previous time slots, only two messages are moved to the previous time slots. A countermeasure for the one remaining message is determined in the following processing.
Then, the scheduling processing unit 1092 determines whether or not messages that exceed the range of the receiving resources are still allocated to the n-th time slot (step S83). When this condition is satisfied, the processing shifts to the processing in
On the other hand, when the messages in the n-th time slot are within the range of the receiving resources, the processing shifts to the processing in
Shifting to the explanation of the processing in
On the other hand, when there is a vacant receiving resource in the time slot after the n-th time slot, the scheduling processing unit 1092 moves the message from the end of the n-th time slot to the top of the time slot having the vacant receiving resource (step S87).
In the example illustrated in
There is a case where two or more messages exceed the range of the receiving resources. In such a case, as many messages as there are vacant receiving resources in the time slots after the n-th time slot are picked up from the end of the n-th time slot and moved. When three messages exceed the range of the receiving resources but there are only two vacant receiving resources in the later time slots, only two messages are moved to the later time slots. The one remaining message will be processed later.
Furthermore, the scheduling processing unit 1092 determines whether or not messages that exceed the range of the receiving resources are still allocated to the n-th time slot (step S89). When such a condition is not satisfied, the processing shifts to step S95.
When such a condition is satisfied, the scheduling processing unit 1092 adds a time slot after the current scheduling window (step S91). Then, the scheduling processing unit 1092 moves messages that exceed the range of the receiving resources at this stage from the end of the n-th time slot to the top of the added time slot (step S93).
By doing so, in each time slot in the scheduling window, the receipt of the messages can be kept within the range of the receiving resources. Therefore, the congestion is suppressed, and the delay of the data transmission is also suppressed.
Then, the scheduling processing unit 1092 determines whether or not a value of the counter n is equal to or greater than the number of time slots w within the scheduling window (step S95). When this condition is not satisfied, the scheduling processing unit 1092 increments n by “1” (step S97), and the processing returns to the step S75 in
Shifting to the explanation of the processing in
As illustrated in
Then, the scheduling processing unit 1092 stores contents of the respective schedule notifications in the scheduling data storage unit 111 (step S101). Moreover, the scheduling processing unit 1092 discards the respective scheduling requests that were received this time (step S103).
By performing such a processing, it becomes possible to receive data from the nodes of the transmission sources within the range of the receiving resources, so the congestion is suppressed and the delay of the data transmission is also suppressed.
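The whole adjustment of the steps S73 to S93 can be pictured with the following sketch. It assumes that each requested time slot is represented as a list of blocks, that each block carries the transmission time limit of the transmission source node and the delivery time limit, and that the receiving resource is expressed as a maximum number of blocks per slot; all of these are assumptions made only for this illustration.

def rebalance(slots, capacity):
    """Move overflowing blocks to earlier slots first, then to later slots,
    and append a new slot after the scheduling window as a last resort.

    slots:    list of lists; slots[n] holds the blocks requested for the n-th
              slot, each block being a dict with keys "t_lim_node" and "t_lim"
    capacity: maximum number of blocks receivable per time slot
    """
    for n in range(len(slots)):
        if len(slots[n]) <= capacity:
            continue
        # Step S77: sort by the transmission time limit of the source node,
        # then by the delivery time limit.
        slots[n].sort(key=lambda b: (b["t_lim_node"], b["t_lim"]))
        # Steps S79 and S81: earlier slots with vacancies take blocks from the
        # top (sending them earlier cannot violate their time limits).
        for m in range(n):
            while len(slots[n]) > capacity and len(slots[m]) < capacity:
                slots[m].append(slots[n].pop(0))
        # Steps S85 and S87: later slots with vacancies take blocks from the end.
        for m in range(n + 1, len(slots)):
            while len(slots[n]) > capacity and len(slots[m]) < capacity:
                slots[m].insert(0, slots[n].pop())
        # Steps S91 and S93: still overflowing, so add a slot after the window.
        if len(slots[n]) > capacity:
            slots.append([])
            while len(slots[n]) > capacity:
                slots[-1].insert(0, slots[n].pop())
    return slots

window = [[], [{"t_lim_node": 2, "t_lim": 5}],
          [{"t_lim_node": 3, "t_lim": 6}, {"t_lim_node": 3, "t_lim": 7},
           {"t_lim_node": 3, "t_lim": 8}]]
print(rebalance(window, capacity=2))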
Although the explanation of the first embodiment was made assuming that the size of each message is identical, the sizes of the messages may be different. In this embodiment, a case will be explained where the processing is performed in consideration of the transmission abilities of the links in addition to the size of each message.
The configuration of the node illustrated in
Data is stored in the link data storage unit 103 relating to this embodiment in a data format as illustrated in
Moreover, data for the time slots in the data queue 106 is identical; however, as for portions of data concerning an individual message (i.e. data block), data is stored in a data format as illustrated in
Furthermore, data is stored in the resource management data storage unit 110 in a data format as illustrated in
Information concerning the messages (i.e. data blocks) that were thrown into a queue is stored in the queue. However, as illustrated in
Moreover, data formats of the scheduling request and schedule notification are as illustrated in
In this embodiment, a latency is calculated by using the data size and the bandwidths of the links. This point will be explained by using
Here, the node Nda receives data dj having the data size sj from the data source, and transmits the data dj to the node Ndd through the node Ndb. Moreover, the node Ndc receives data dk having the data size sk from the data source, and transmits the data dk to the node Ndd.
Here, the latency la,b of the link La,b from the node Nda to the node Ndb is calculated by “the data size sj/the link bandwidth ca,b”. Similarly, the latency lb,d of the link Lb,d from the node Ndb to the node Ndd is calculated by “the data size sj/the link bandwidth cb,d”.
Therefore, the transmission time limit tlim,j,a at the node Nda is calculated as follows:
tlim,j,a = tlim,j − sum([la,b, lb,d]) = tlim,j − (sj/ca,b + sj/cb,d),
where tlim,j is the delivery time limit.
Similarly, the transmission time limit tlim,j,b at the node Ndb is calculated as follows:
tlim,j,b = tlim,j − lb,d = tlim,j − sj/cb,d.
Moreover, the transmission time limit tlim,k,c at the node Ndc is calculated as follows:
tlim,k,c = tlim,k − lc,d = tlim,k − sk/cc,d.
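As a worked example of these formulas, with assumed values sj = 20 Mbit, ca,b = 100 Mbps and cb,d = 50 Mbps:

# Assumed values, for illustration only.
s_j   = 20e6      # data size of dj in bits
c_ab  = 100e6     # bandwidth of link La,b in bit/s
c_bd  = 50e6      # bandwidth of link Lb,d in bit/s
t_lim = 10.0      # end-to-end delivery time limit of dj (seconds on a common clock)

l_ab = s_j / c_ab                  # latency of La,b: 0.2 s
l_bd = s_j / c_bd                  # latency of Lb,d: 0.4 s
t_lim_a = t_lim - (l_ab + l_bd)    # transmission time limit at node Nda: 9.4
t_lim_b = t_lim - l_bd             # transmission time limit at node Ndb: 9.6
print(t_lim_a, t_lim_b)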
Next, processing contents relating to this embodiment will be explained by using
Firstly, processing contents when the message is received will be explained by using
The data receiver 101 receives a message including data dj, and outputs the message to the first scheduler 102 (step S201). When its own node is the uppermost node that is connected with the data source (step S203: Yes route), the first scheduler 102 searches the latency data storage unit 105 for the ID "dj" of the data, reads out the latency that is allowed up to the destination, and obtains the delivery time limit tlim,j (step S205). For example, the delivery time limit is calculated by "the present time+latency". When the delivery time limit itself is stored in the latency data storage unit 105, it is used. On the other hand, when its own node is not the uppermost node (step S203: No route), the processing shifts to step S209.
Moreover, the first scheduler 102 adds the delivery time limit tlim,j to the received message header (step S207). Then, the message as illustrated in
Furthermore, the first scheduler 102 searches the data transfer route storage unit 104 for dj to read out the transfer route [Lx,y] (step S209). The transfer route is array data of link IDs in this embodiment.
Then, the first scheduler 102 searches the latency data storage unit 105 for each link ID of the transfer route [Lx,y] to read out the link bandwidth cx,y for each link (step S211).
After that, the first scheduler 102 calculates the transmission time limit tlim,j,x of this node from the delivery time limit tlim,j, the link bandwidth cx,y, and the data size of the message (step S213). Specifically, the following calculation is performed:
tlim,j−Σsj/cx,y
(total sum with respect to all links on the transfer route).
Then, the first scheduler 102 determines the transmission request time treq,j,x from the transmission time limit tlim,j,x (step S215). "tlim,j,x=treq,j,x" may hold, or "treq,j,x=tlim,j,x−α" may be employed in consideration of a constant margin α. In the following explanation, "the transmission time limit=the transmission request time" is assumed in order to simplify the explanation.
Then, the first scheduler 102 throws the message and additional data to the time slot of the transmission request time treq,j,x (step S217).
The aforementioned processing is performed every time a message is received.
Processing contents of the schedule negotiator 108 are almost the same as those in the first embodiment, and only the data format of the generated scheduling request as illustrated in
Moreover, processing contents of the data transmitter 107 are the same as those illustrated in
Furthermore, processing contents of the second scheduler 109 are similar except for one portion. Specifically, the difference is that whether or not a time slot is vacant is determined based on a data amount instead of the number of messages (i.e. the number of data blocks).
As illustrated in
For example, it is assumed that the time slot width Δt=one second holds and the bandwidth is 100 Mbps. Then, in case of data of 20M bits, it takes 0.2 seconds for the transmission, and it takes 0.1 seconds in case of data of 10M bits. Here, assuming 10M bits correspond to one unit, “ds: 1” represents data of 10M bits, and “ds: 2” represents data of 20M bits.
Here, when the transmission schedules of the nodes A and B are superimposed, the transmission schedules are as illustrated in the right side of
As for a vacant time slot, it is determined whether or not the data block (i.e. message) can be moved, in consideration of the vacant capacity. For example, when the data size of one data block (i.e. message) is large and the vacant capacity of the time slot is insufficient, it is determined that the data block cannot be moved.
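A sketch of this capacity test, assuming that the receiving resource of a time slot is expressed in the same units as the data size field ds (for instance, one unit per 10M bits as in the example above); these representations are assumptions for illustration:

def can_move(block_size, slot_blocks, slot_capacity):
    """A data block can be moved into a time slot only if its size fits
    into the vacant capacity of that slot."""
    used = sum(b["ds"] for b in slot_blocks)
    return used + block_size <= slot_capacity

# Slot capacity of 10 units with 7 units already occupied: a block of size 2
# fits, but a block of size 4 does not.
occupied = [{"ds": 3}, {"ds": 4}]
print(can_move(2, occupied, 10), can_move(4, occupied, 10))   # True False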
Processing contents of the notification unit 1093 are different only in that the data format of the schedule notification is as illustrated in
As described above, it is possible to cope with the case where the messages (i.e. data blocks) have different data sizes.
In the first and second embodiments, the explanation was made assuming the activation intervals TSR,x of the scheduling requesting unit 1081 and the initial activation time are identical in each node.
In this embodiment, in each node, the activation interval TSR,x and the initial activation time of the scheduling requesting unit 1081 are different. Therefore, the scheduling window is different in each node, and is set according to the activation interval TSR,x of that node.
In the first and second embodiments, the scheduling requests are almost simultaneously received, and in response to those, the second scheduler 109 performs a processing. However, in this embodiment, because the timing when the scheduling request is received is different for each transmission source node, the activation interval TTLS-inter,x of the second scheduler 109 is defined as follows:
TTLS-inter,x = min(TSR,a, TSR,b, TSR,c, . . . , TSR,n).
In other words, the shortest activation interval of the transmission source nodes is employed as the maximum value of the activation interval of the second scheduler 109. This is because it is preferable that no node transmits plural scheduling requests within one TTLS-inter,x.
The configuration of the node that copes with such a situation is similar to that in the first embodiment. However, because the processing contents are partially different, the different portions of the processing contents will be explained in the following.
Firstly, processing contents of the request receiver 1091 will be explained by using
In other words, when the request receiver 1091 receives a scheduling request from a transmission source node (step S301), the request receiver 1091 stores data of the scheduling request in the scheduling data storage unit 111 (step S303).
Such a processing is repeated every time a scheduling request is received.
Next, processing contents of the scheduling processing unit 1092 and the like will be explained by using
The scheduling processing unit 1092 determines whether or not the activation interval TTLS-inter,x or a time period that is set in the following processing has elapsed and the present time is the activation timing t (
On the other hand, when the present timing is the activation timing t, the scheduling processing unit 1092 reads out data concerning unprocessed time slots for which the scheduling requests from all of the nodes are collected and which are within the maximum width TTLS-inter,x, from the scheduling data storage unit 111 (step S313).
For example, an initial state as illustrated in
Then, the scheduling processing unit 1092 expands the read data for each time slot, and counts the number of messages (i.e. the number of data blocks) for each time slot (step S315). This processing result is stored in the resource management data storage unit 110 as illustrated in
In an example of
Then, the scheduling processing unit 1092 determines whether or not the number of messages that will be transmitted in each time slot is within the range of the receiving resources (step S317). When the number of messages that will be transmitted in each time slot is within the range of the receiving resources, the scheduling processing unit 1092 outputs, for each requesting source node, schedule notification that includes contents of the scheduling request stored in the scheduling data storage unit 111 as they are to the notification unit 1093, and causes the notification unit 1093 to send the schedule notification to each requesting source node (step S319). This is because, in such a case, it is possible to receive the messages without changing the transmission schedule for each node.
Then, the scheduling processing unit 1092 stores the contents of each schedule notification in the scheduling data storage unit 111 (step S321).
Furthermore, the scheduling processing unit 1092 sets "t+(No. of currently scheduled time slots)*Δt" as the next activation timing (step S323). In this embodiment, the number of time slots to be processed is different for each activation. Therefore, the next activation timing is set at each activation.
On the other hand, when the number of messages for any of the time slots exceeds the range of the receiving resource, the processing shifts to a processing in
Firstly, the scheduling processing unit 1092 initializes a counter n of the time slot to “1” (step S325). Then, the scheduling processing unit 1092 determines whether or not the number of messages in the n-th time slot exceeds the range of the receiving resources (step S327). When the number of messages in the n-th time slot is within the range of the receiving resources, the processing shifts to a processing in
On the other hand, when the number of messages in the n-th time slot exceeds the range of the receiving resources, the scheduling processing unit 1092 sorts the messages within the n-th time slot by using, as a first key, the transmission time limit of the transmission source node and using, as a second key, the delivery time limit (step S329). This processing is the same as that at the step S77.
After that, the scheduling processing unit 1092 determines whether or not the time slot before the n-th time slot (however, within the range of the time slots that are currently processed) has a vacant receiving resource (step S331). When there is no vacant receiving resource, the processing shifts to the processing in
On the other hand, when there is a vacant receiving resource in the time slot before the n-th time slot, the scheduling processing unit 1092 moves the message in the top of the n-th time slot to the end of the time slot that has a vacant receiving resource (step S333).
There is a case where two or more messages exceed the range of the receiving resources. In such a case, only as many messages as there are vacant receiving resources in the time slots before the n-th time slot are picked up from the top of the n-th time slot and moved. When three messages exceed the range of the receiving resources but there are only two vacant receiving resources in the previous time slots, only two messages are moved to the previous time slots. Countermeasures for the one remaining message are determined in the following processing.
Then, the scheduling processing unit 1092 determines whether or not a message that exceeds the range of the receiving resources is still allocated to the n-th time slot (step S335). When such a condition is satisfied, the processing shifts to the processing in
On the other hand, when the number of messages in the n-th time slot falls within the range of the receiving resources, the processing shifts to the processing in
Shifting to the explanation of the processing in
On the other hand, when there is a vacant receiving resource in the time slot after the n-th time slot, the scheduling processing unit 1092 moves the message from the end of the n-th time slot to the top of the time slot that has a vacant receiving resource (step S339).
There is a case where two or more messages exceed the range of the receiving resources. In such a case, as many messages as there are vacant receiving resources in the time slots after the n-th time slot are picked up from the end of the n-th time slot and moved. When three messages exceed the range of the receiving resources but there are only two vacant receiving resources in the later time slots, only two messages are moved to the later time slots. The one remaining message is processed later.
Furthermore, the scheduling processing unit 1092 determines whether or not the message that exceeds the range of the receiving resources is still allocated to the n-th time slot (step S341). When this condition is not satisfied, the processing shifts to step S345.
When this condition is satisfied, a processing to generate the time slot after the scheduling window and allocate the message (i.e. data block) to the top thereof is performed in the first embodiment.
However, in this embodiment, because the time slots already exist and the messages (i.e. data blocks) are allocated to the time slots, a different processing is performed.
In other words, the scheduling processing unit 1092 moves the messages (i.e. data blocks) that could not be moved to other time slots to a time slot after the range of the time slots to be processed presently (step S343).
Thus, in each time slot in the scheduling window, the receipt of the messages can be kept within the range of the receiving resources, so the congestion is suppressed and the delay of the data transmission is also suppressed.
Then, the scheduling processing unit 1092 determines whether or not a value of the counter n is equal to or greater than the number of time slots to be processed presently (step S345). When this condition is not satisfied, the scheduling processing unit 1092 increments n by “1” (step S347), and the processing returns to the step S327 in
Shifting to the explanation of the processing in
Then, the scheduling processing unit 1092 stores the contents of the respective schedule notifications in the scheduling data storage unit 111 (step S351).
Furthermore, the scheduling processing unit 1092 sets “t+(No. of currently scheduled time slots)*Δt” as the next activation timing (step S353). This step is the same as that at the step S323.
Furthermore,
Thus, it becomes possible to cope with the difference between the scheduling windows and gaps in the initial activation timings.
Even when the data size of each message is different as in the second embodiment, this embodiment can be modified in the same manner as the second embodiment.
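The two timing rules particular to this embodiment, the upper bound on the activation interval of the second scheduler 109 and the setting of the next activation timing at the step S323, can be summarized by the following sketch; the function and variable names are assumptions for illustration.

def max_activation_interval(source_intervals):
    """T_TLS-inter,x is at most the shortest activation interval T_SR,x of the
    transmission source nodes, so that no node transmits plural scheduling
    requests within one activation interval of the second scheduler."""
    return min(source_intervals)

def next_activation(t, scheduled_slot_count, slot_width):
    """Step S323: next activation = t + (number of currently scheduled slots) * delta t."""
    return t + scheduled_slot_count * slot_width

print(max_activation_interval([4.0, 2.0, 3.0]))                          # 2.0
print(next_activation(t=100.0, scheduled_slot_count=3, slot_width=1.0))  # 103.0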
In this embodiment, a case is explained where the upper limit value of the transmission resource is considered.
Next, a processing of the data receiver 101 and the first scheduler 102 relating to this embodiment will be explained by using
The data receiver 101 receives a message including data dj, and outputs the message to the first scheduler 102 (
Moreover, the first scheduler 102 adds the delivery time limit tlim,j to the received message header (step S507). Thus, the message as illustrated in
Furthermore, the first scheduler 102 searches the data transfer route storage unit 104 for dj to read out the transfer route [Lx,y] (step S509). The transfer route is assumed to be array data of the link IDs.
Then, the first scheduler 102 searches the latency data storage unit 105 for each link ID in the transfer route [Lx,y], and reads out the latency lx,y for each link (step S511).
After that, the first scheduler 102 calculates the transmission time limit tlim,j,x of this node from the delivery time limit tlim,j and latency lx,y (step S513). Specifically, the transmission time limit is calculated as follows:
tlim,j−Σlx,y
(a total sum with respect to all links on the transfer route).
Then, the first scheduler 102 determines the transmission request time treq,j,x from the transmission time limit tlim,j,x (step S515). tlim,j,x=treq,j,x may hold, and treq,j,x=tlim,j,x−α may be employed when a constant margin α is considered. In the following explanation, the transmission time limit=the transmission request time holds in order to simplify the explanation.
Then, the first scheduler 102 throws the message and additional data into the time slot of the transmission request time treq,j,x (step S517). Data as illustrated in
Shifting to the explanation of the processing in
On the other hand, when the lack of the resources occurs in the time slot to which the message was enqueued, the first scheduler 102 sorts the messages within the time slot to which the message that was received at this time was enqueued by using, as a first key, the transmission time limit of this node and by using, as a second key, the delivery time limit (step S521).
Then, the first scheduler 102 determines whether or not there is a vacant transmission resource in a time slot before the time slot to which the message that was received at this time was enqueued (step S523). When the transmission is moved forward, it is possible to suppress the possibility of the delay of the data transmission. Therefore, firstly, the previous time slot is checked.
When there is a vacant transmission resource in a time slot before the time slot to which the message was enqueued, the first scheduler 102 moves the message from the top of the time slot to which the message that was received at this time was enqueued to the end of the time slot that has a vacant resource (step S525). Then, the processing ends.
On the other hand, when there is no vacant transmission resource in the time slot before the time slot to which the message was enqueued, the first scheduler 102 determines whether or not there is a vacant resource in the time slot after the time slot to which the message that was received at this time was enqueued (step S527).
When there is a vacant resource in the time slot after the time slot to which the message was enqueued, the first scheduler 102 moves the message from the end of the time slot to which the message that was received at this time was enqueued to the top of the time slot that has a vacant resource (step S529). Then, the processing ends.
On the other hand, when there is no vacant resource in the time slot after the time slot to which the message was enqueued, the first scheduler 102 adds a time slot after the time slots that have been defined (step S531), and moves the message that was received at this time from the end of the time slot to which the message that was received at this time was enqueued to the top of the added time slot (step S533).
When such a processing is performed, it is possible to transmit data within the range of the transmission resources, so the congestion is suppressed and the delay of the data transmission is also suppressed.
Because other processing contents are the same as those in the first embodiment, further explanation is omitted.
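The enqueue-time adjustment of the steps S519 to S533 is, in effect, the rebalancing of the first embodiment applied on the transmitting node when a single new message arrives. A sketch, again under the assumption that the transmission resource is a maximum number of messages per time slot:

def enqueue_with_limit(queue, slot, block, capacity):
    """Throw a block into a time slot and, when the transmission resource of
    that slot is exceeded, move one block to a slot with a vacancy (earlier
    slots preferred), adding a new slot at the end if necessary.

    queue: list of lists of blocks, each block a dict with "t_lim_node" and "t_lim"
    """
    queue[slot].append(block)
    if len(queue[slot]) <= capacity:
        return                                    # step S519: no lack of resources
    # Step S521: sort by the transmission time limit at this node, then by
    # the delivery time limit.
    queue[slot].sort(key=lambda b: (b["t_lim_node"], b["t_lim"]))
    # Steps S523 and S525: an earlier slot with a vacancy takes the top block.
    for m in range(slot):
        if len(queue[m]) < capacity:
            queue[m].append(queue[slot].pop(0))
            return
    # Steps S527 and S529: otherwise a later slot with a vacancy takes the end block.
    for m in range(slot + 1, len(queue)):
        if len(queue[m]) < capacity:
            queue[m].insert(0, queue[slot].pop())
            return
    # Steps S531 and S533: otherwise add a time slot after the defined ones.
    queue.append([queue[slot].pop()])

q = [[{"t_lim_node": 1, "t_lim": 2}], [{"t_lim_node": 2, "t_lim": 3}]]
enqueue_with_limit(q, 1, {"t_lim_node": 2, "t_lim": 4}, capacity=1)
print(q)   # the excess block ends up in a newly added third slot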
The fourth embodiment can also be transformed into a form in which the data size is taken into consideration, like the second embodiment.
The node configuration is the same as that illustrated in
Next, the processing contents of the data receiver 101 and the first scheduler 102 will be explained by using
The data receiver 101 receives a message including data dj, and outputs the message to the first scheduler 102 (
When the delivery time limit itself is stored in the latency data storage unit 105, it is employed. On the other hand, when its own node is not the uppermost node (step S603: No route), the processing shifts to the step S609.
Moreover, the first scheduler 102 adds the delivery time limit tlim,j to the received message header (step S607). By doing so, the message as illustrated in
Furthermore, the first scheduler 102 searches the data transfer route storage unit 104 for “dj” to read out the transfer route [Lx,y] (step S609). The transfer route is assumed to be array data of the link IDs.
Then, the first scheduler 102 searches the latency data storage unit 105 for each link ID of the transfer route [Lx,y] to read out the link bandwidth cx,y of each link (step S611).
After that, the first scheduler 102 calculates the transmission time limit tlim,j,x at this node from the delivery time limit tlim,j, link bandwidth cx,y and the data size of the message (step S613). Specifically, the transmission time limit is calculated as follows:
tlim,j−Σsj/cx,y
(a total sum with respect to all links on transfer route).
Then, the first scheduler 102 determines the transmission request time treq,j,x from the transmission time limit tlim,j,x (step S615). “tlim,j,x=treq,j,x” may hold, and “treq,j,x=tlim,j,x−α” may hold when a constant margin α is considered. In the following explanation, it is assumed that “the transmission time limit=transmission request time” holds in order to make the explanation easy.
Then, the first scheduler 102 throws the message (i.e. data block) and additional data into a time slot for the transmission request time treq,j,x (step S617). Then, the processing shifts to a processing in
Shifting to the explanation of the processing in
On the other hand, when the lack of the transmission resources occurs in the time slot to which the message was enqueued, the first scheduler 102 sorts messages within the time slot to which the message that was received at this time was enqueued by using, as a first key, the transmission time limit at this node and using, as a second key, the delivery time limit (step S621).
Then, the first scheduler 102 determines whether or not there is a vacant capacity of the transmission resource in a time slot before the time slot to which the message that was received at this time was enqueued (step S623). When the transmission is moved forward, it is possible to suppress the possibility of the delay of the data transmission. Therefore, firstly, the previous time slot is checked. Whether or not there is a vacant capacity of the transmission resource is determined based on whether or not the vacant capacity is equal to or greater than the data size of the top message (i.e. top data block) in the time slot to which the message that was received at this time was enqueued.
When there is a vacant capacity of the transmission resource in the time slot before the time slot to which the message was enqueued, the first scheduler 102 moves the message from the top of the time slot to which the message that was received at this time was enqueued to the end of the time slot having the vacant transmission resource (step S625). Then, the processing ends.
On the other hand, when there is no vacant transmission resource in the time slot before the time slot to which the message was enqueued, the first scheduler 102 determines whether or not there is a vacant capacity of the transmission resource in a time slot after the time slot to which the message that was received at this time was enqueued (step S627). Similarly to the step S623, whether or not there is a vacant capacity of the transmission resource is determined based on whether or not the vacant capacity is equal to or greater than the data size of the end message (i.e. end data block) in the time slot to which the message that was received at this time was enqueued.
When there is a vacant capacity of the transmission resource in the time slot after the time slot to which the message was enqueued, the first scheduler 102 moves the message from the end of the time slot to which the message that was received at this time was enqueued to the top of the time slot having the vacant capacity of the transmission resource (step S629). Then, the processing ends.
On the other hand, when there is no vacant transmission resource in the time slot after the time slot to which the message was enqueued, the first scheduler 102 adds a time slot after the time slots that have already been defined (step S631), and moves the message from the end of the time slot to which the message that was received at this time was enqueued to the top of the added time slot (step S633).
By performing such a processing, it becomes possible to transmit data within the range of the transmission resources of the data, and the congestion is suppressed, and the delay of the data transmission is also suppressed.
Other processing contents are the same as those in the fourth embodiment. Therefore, further explanation is omitted.
This embodiment handles a case, as illustrated in
In other words, a configuration example of the node is modified as illustrated in
Next, processing contents when the message is received will be explained by using
The data receiver 101 receives a message including data dj and outputs the message to the first scheduler 102 (step S701). When its own node is the uppermost node that is connected with the data source (step S703: Yes route), the first scheduler 102 searches the latency data storage unit 105 for the ID "dj" of the data to read out the latency that is allowed up to the destination, and obtains the delivery time limit tlim,j (step S705). For example, the delivery time limit is calculated by "the present time+latency". When the delivery time limit itself is stored in the latency data storage unit 105, it is employed. On the other hand, when its own node is not the uppermost node (step S703: No route), the processing shifts to step S709.
Moreover, the first scheduler 102 adds the delivery time limit to the received message header (step S707). Thus, the message as illustrated in
Furthermore, the first scheduler 102 searches the data transfer route storage unit 104 for “dj” to read out the transfer route [Lx,y] (step S709). The transfer route is assumed to be array data of the link IDs.
Then, the first scheduler 102 searches the latency data storage unit 105 for each link ID of the transfer route [Lx,y] to read out the latency lx,y of each link (step S711).
After that, the first scheduler 102 calculates the transmission time limit tlim,j,x at this node from the delivery time limit tlim,j and the latency lx,y (step S713). Specifically, the transmission time limit is calculated as follows:
tlim,j−Σlx,y
(a total sum with respect to all links on the transfer route).
Then, the first scheduler 102 determines the transmission request time treq,j,x from the transmission time limit tlim,j,x (step S715). "tlim,j,x=treq,j,x" may hold, or "treq,j,x=tlim,j,x−α" may hold in consideration of a constant margin α. In the following explanation, "the transmission time limit=the transmission request time" is employed in order to simplify the explanation.
Then, the first scheduler 102 throws the message and additional data into the time slot of the transmission request time treq,j,x in the data queue for the transmission destination node (step S717). Data as illustrated in
The aforementioned processing is performed every time a message is received.
The processing contents of the second scheduler 109 are the same as those in the first embodiment. Therefore, further explanation is omitted.
However, processing contents of the schedule negotiator 108 are partially different. In other words, the processing contents of the rescheduler 1083 are changed as illustrated in
In other words, the schedule receiver 1082 receives schedule notification including a schedule result (step S721), and outputs the received schedule notification to the rescheduler 1083. A data format of the schedule notification is a format as illustrated in
Then, when the rescheduler 1083 receives the schedule notification, the rescheduler 1083 identifies the data queue 106a or 106b from the ID of the transmission source node (step S723), and performs a processing to update the time slot to which the message (i.e. data block) in the data queue 106a or 106b is thrown, according to the schedule notification (step S725). When the transmission schedule notified by the schedule notification is identical to the transmission schedule in the scheduling request, no special processing is performed. When the message is moved to a different time slot, the message is enqueued into a queue of that time slot. When there is no data for that time slot, the time slot is generated at this stage.
Thus, the transmission schedule adjusted in the transmission destination node can be reflected to the data queue 106a or 106b for that transmission destination node.
The sixth embodiment may be modified so as to take into consideration the data size like the second embodiment.
In this embodiment, a case will be explained where the transmission resources are taken into consideration in the sixth embodiment. Specifically, in a case where there are plural transmission destination nodes, when the transmission schedules included in the schedule notifications are superimposed, messages (i.e. data blocks) that exceed the transmission resources may be transmitted. Therefore, in this embodiment, the processing of the rescheduler 1083 is changed so that the rescheduling processing is performed in consideration of the transmission resources.
The configuration of the node relating to this embodiment is similar to that in the sixth embodiment except for the data queues 106a and 106b.
In particular, processing contents of the schedule receiver 1082 and the rescheduler 1083 in the schedule negotiator 108 will be explained by using
In other words, the schedule receiver 1082 receives schedule notification including a schedule result (
Then, when the rescheduler 1083 receives the schedule notification, the rescheduler 1083 identifies an ID of that transmission source node (step S803).
Then, the rescheduler 1083 identifies the data queue 106a or 106b from the ID of the transmission source node, and performs a processing to update a time slot to which the message (i.e. data block) in the data queue 106a or 106b is thrown, according to the schedule notification (step S805). When the transmission schedule notified by the schedule notification is identical to the transmission schedule in the scheduling request, no special processing is performed. When the message is moved to a different time slot, the message is enqueued in a queue of that time slot. When there is no data of that time slot, the time slot is generated at this stage.
Furthermore, the rescheduler 1083 identifies one unprocessed time slot among the updated time slots (step S807). When there is no updated time slot, the processing ends. On the other hand, plural updated time slots may exist.
Then, the rescheduler 1083 reads out data of the identified time slot for all transmission destinations from the data queue 106a or 106b, and counts the number of messages (i.e. the number of data blocks) (step S809).
Then, the rescheduler 1083 determines whether or not the number of messages that will be transmitted in the identified time slot is within a range of the transmission resources, which are stored in the resource data storage unit 112 (step S811).
When the number of messages that will be transmitted in the identified time slot is within the range of the transmission resources (i.e. the maximum number of messages), the rescheduler 1083 determines whether or not there is an unprocessed time slot among the updated time slots (step S813). When there is no unprocessed time slot, the processing ends. Moreover, when there is an unprocessed time slot, the processing returns to the step S807.
On the other hand, when the number of messages that will be transmitted in the identified time slot is not within the range of the transmission resources, in other words, when lack of the transmission resources occurs, the processing shifts to a processing in
Shifting to the explanation of the processing in
After that, the rescheduler 1083 counts the number of messages for all transmission destinations for each time slot that is to be processed next time and later for the scheduling requests (step S816).
Then, the rescheduler 1083 determines whether or not there is a vacant transmission resource in any time slot that is to be processed next time or later for the scheduling request (step S817). When there is no vacant transmission resource, the processing shifts to step S821.
On the other hand, when there is a vacant transmission resource, the rescheduler 1083 moves the message from the end of the identified time slot to the top of the time slot having a vacant transmission resource (step S819).
There is a case where two or more messages exceed the range of the transmission resources. In such a case, as many messages as there are vacant transmission resources in the destination time slot are picked up from the end of the identified time slot and moved. For example, when three messages exceed the range of the transmission resources but there are only two vacant transmission resources in the time slot that is to be processed subsequently for the scheduling requests, only two messages are moved to that time slot. A countermeasure for the one remaining message is determined in the following processing.
Then, the rescheduler 1083 determines whether or not messages that exceed the range of the transmission resources are still allocated to the identified time slot (step S821). When this condition is not satisfied, the processing returns to the step S813 in
On the other hand, when this condition is satisfied, the rescheduler 1083 additionally sets a time slot after the time slots that have already been generated, and moves the message (i.e. data block) that could not be moved to any other time slot to the added time slot (step S823).
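The handling of an overloaded time slot in the steps S816 to S823 can be sketched, for example, as follows. This is a minimal sketch in Python in which `slot_queues` (a map from a time slot index to the list of messages for all destinations in that slot) is a flat structure introduced only for this illustration; excess messages are taken from the end of the overloaded slot, moved to the top of a later slot that still has vacant transmission resources, and a new time slot is appended for anything that cannot be moved.

```python
def resolve_overflow(slot_queues, overloaded_slot, max_messages_per_slot):
    """Steps S816 to S823 (sketch)."""
    excess = len(slot_queues[overloaded_slot]) - max_messages_per_slot
    for slot in sorted(s for s in slot_queues if s > overloaded_slot):    # steps S816, S817
        if excess <= 0:
            break
        vacancy = max_messages_per_slot - len(slot_queues[slot])
        for _ in range(min(vacancy, excess)):                             # step S819
            slot_queues[slot].insert(0, slot_queues[overloaded_slot].pop())
            excess -= 1
    if excess > 0:                                                        # steps S821, S823
        new_slot = max(slot_queues) + 1
        slot_queues[new_slot] = [slot_queues[overloaded_slot].pop() for _ in range(excess)]
    return slot_queues
```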
Thus, in each time slot of the scheduling window, the message transmission is kept within the range of the transmission resources, so that the congestion is suppressed and the delay of the data transmission is suppressed.
In the following, a processing explained by using
Firstly, for example, as for the node D,
Then, the data queues 106a to 106c in the node D are updated to a state as illustrated in
Here, according to the aforementioned processing, the number of messages that will be transmitted to the nodes A to C is counted for each time slot as illustrated in the left side of
However, not all of the messages can be transmitted in the third time slot. Therefore, as illustrated in
Thus, while minimizing the influence on the time slots that have already been scheduled, it becomes possible to suppress the delay of the data transmission, which is caused by the congestion and the like.
This embodiment may be modified so as to take into consideration the data size like the second embodiment.
Although the embodiments of this invention were explained above, this invention is not limited to these embodiments.
For example, as for the processing flow, as long as the processing results do not change, the order of the steps may be exchanged or plural steps may be executed in parallel. The functional block configurations are mere examples, and may not correspond to actual program module and file configurations.
Furthermore, the aforementioned embodiments may be combined arbitrarily.
In addition, the aforementioned node is a computer device as illustrated in
The aforementioned embodiments are outlined as follows:
An information communication method relating to the embodiments includes: (A) first transmitting, by a first information processing apparatus, a first transmission schedule for one or more data blocks to a second information processing apparatus that is a transmission destination of the one or more data blocks; (B) first receiving, by the second information processing apparatus, the first transmission schedule from the first information processing apparatus; (C) determining, by the second information processing apparatus, a second transmission schedule for at least one data block among the one or more data blocks that are defined in the first transmission schedule, based on the first transmission schedule and receiving resources of the second information processing apparatus; (D) second transmitting, by the second information processing apparatus, the second transmission schedule to the first information processing apparatus; (E) second receiving, by the first information processing apparatus, the second transmission schedule from the second information processing apparatus; and (F) third transmitting, by the first information processing apparatus, one or plural data blocks that are defined in the second transmission schedule, based on the second transmission schedule to the second information processing apparatus.
By performing the aforementioned processing, it is possible to avoid receiving data blocks (e.g. messages) that exceed the receiving resources in the second information processing apparatus, which is the receiving-side apparatus, and also to suppress the delay of the data transmission.
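The exchange in (A) to (F) can be sketched, for example, as follows. This is a minimal sketch in Python with plain objects standing in for the two apparatuses; the class names Receiver and Sender, the per-slot limit and the schedule representation (a map from a data block ID to a time slot index) are assumptions introduced only for this illustration.

```python
class Receiver:                         # second information processing apparatus
    def __init__(self, max_blocks_per_slot):
        self.max_blocks_per_slot = max_blocks_per_slot   # receiving resources per time slot

    def negotiate(self, first_schedule):
        """(B) receive the first transmission schedule, (C) determine the second
        transmission schedule within the receiving resources, (D) return it."""
        second_schedule, load = {}, {}
        for block_id, slot in sorted(first_schedule.items(), key=lambda kv: kv[1]):
            while load.get(slot, 0) >= self.max_blocks_per_slot:
                slot += 1               # defer blocks that would exceed the receiving resources
            second_schedule[block_id] = slot
            load[slot] = load.get(slot, 0) + 1
        return second_schedule

class Sender:                           # first information processing apparatus
    def send(self, first_schedule, receiver):
        second_schedule = receiver.negotiate(first_schedule)   # (A), (E)
        for block_id, slot in sorted(second_schedule.items(), key=lambda kv: kv[1]):
            print(f"transmit {block_id} in time slot {slot}")  # (F) transmit as rescheduled
```

For instance, Sender().send({"m1": 0, "m2": 0, "m3": 0}, Receiver(max_blocks_per_slot=2)) would transmit two blocks in the time slot 0 and defer one block to the time slot 1.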
The aforementioned first transmission schedule may be a transmission schedule for plural time slots. Moreover, the aforementioned second transmission schedule may be a transmission schedule for at least one time slot among the plural time slots. Thus, by processing plural time slots together, it becomes easy to schedule the data transmission so as to avoid receiving data blocks that exceed the receiving resources.
In addition, the aforementioned determining may include: (c1) prioritizing the one or more data blocks by at least one of a transmission time limit in the first information processing apparatus and an arrival time limit in a destination. Thus, it becomes possible to suppress the delay of the data transmission to the minimum level.
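For example, the prioritizing in (c1) can be sketched as follows; the field names transmission_time_limit and arrival_time_limit are assumptions for this illustration (absolute times, where a smaller value means a more urgent data block).

```python
def prioritize(blocks):
    """(c1) sketch: order data blocks so that the one with the nearest
    transmission time limit comes first; the arrival time limit breaks ties."""
    return sorted(blocks, key=lambda b: (b["transmission_time_limit"], b["arrival_time_limit"]))
```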
Furthermore, the aforementioned determining may include: (c2) determining, for a certain time slot among the plural time slots, whether a number of data blocks or a data amount of data blocks, which are scheduled to be transmitted in the certain time slot, is equal to or less than a threshold; and (c3) upon determining that the number of data blocks or the data amount of data blocks, which are scheduled to be transmitted in the certain time slot, exceeds the threshold, allocating a first data block that is selected based on a priority among the data blocks, which are scheduled to be transmitted in the certain time slot, to a time slot before or after the certain time slot. According to this processing, it becomes possible to surely receive data blocks (e.g. messages) within a range of the receiving resources.
Moreover, the aforementioned allocating may include: (c31) determining whether the first data block (e.g. a data block selected from the top of the priorities) can be allocated to a time slot before the certain time slot; and (c32) upon determining that the first data block cannot be allocated to the time slot before the certain time slot, allocating a data block that is selected based on a priority among the data blocks, which are scheduled to be transmitted in the certain time slot (e.g. a data block selected from the bottom of the priorities), to a time slot after the certain time slot. Thus, it is possible to suppress the delay caused by changing the transmission schedule to the minimum level.
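A minimal sketch of (c2), (c3), (c31) and (c32) is given below, assuming that `schedule` maps a time slot index to the list of data blocks for that slot, ordered from the highest priority to the lowest; this flat structure and the function name are introduced only for this illustration.

```python
def rebalance_slot(schedule, slot, threshold):
    """While the slot exceeds the threshold ((c2)), move the highest-priority
    block to an earlier slot with room ((c31)); otherwise defer the
    lowest-priority block to the next slot ((c32))."""
    while len(schedule[slot]) > threshold:
        earlier = [s for s in schedule if s < slot and len(schedule[s]) < threshold]
        if earlier:
            schedule[max(earlier)].append(schedule[slot].pop(0))      # nearest earlier slot
        else:
            schedule.setdefault(slot + 1, []).insert(0, schedule[slot].pop())
    return schedule
```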
Furthermore, there may be plural first information processing apparatuses, and time slots that are scheduled in the first transmission schedule may be different among the plural first information processing apparatuses. In such a case, the determining may include: (c4) determining, for a certain time slot among time slots that are defined commonly in first transmission schedules received from the plural first information processing apparatuses, whether a number of data blocks or a data amount of data blocks, which are scheduled to be transmitted in the certain time slot, is equal to or less than a threshold; and (c5) upon determining that the number of data blocks or the data amount of data blocks, which are scheduled to be transmitted in the certain time slot, exceeds the threshold, allocating a data block that is selected based on a priority among the data blocks, which are scheduled to be transmitted in the certain time slot, to a time slot before or after the certain time slot.
When the time slots in the first transmission schedules are different among the first information processing apparatuses that are the transmission sources of the data blocks, the second transmission schedules are generated while absorbing the difference in the time slots.
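When the senders use different time slot boundaries, the receiver can, for example, re-index the notified schedules onto its own slot grid before the threshold check, as in the following minimal sketch; the schedule representation (pairs of a slot start time and the blocks in that slot) and the parameter receiver_slot_length are assumptions introduced only for this illustration.

```python
def merge_onto_receiver_slots(schedules, receiver_slot_length):
    """Map the slots of plural senders onto the receiver's own slot indices,
    absorbing the difference in the time slot boundaries."""
    load = {}
    for sender_id, slots in schedules.items():
        for start_time, blocks in slots:
            receiver_slot = int(start_time // receiver_slot_length)
            load.setdefault(receiver_slot, []).extend((sender_id, b) for b in blocks)
    return load   # receiver slot index -> data blocks from all senders, ready for the threshold check
```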
Furthermore, the aforementioned information communication method may further include: (G) generating, by the first information processing apparatus, a first transmission schedule according to a transmission time limit of a received data block.
Moreover, the aforementioned information communication method may further include: (H) identifying, by the first information processing apparatus, a time slot in which a received data block is transmitted according to a transmission time limit of the received data block; (I) determining, by the first information processing apparatus, whether the number of data blocks or a data amount of data blocks, which are to be transmitted in the identified time slot, is equal to or less than a threshold; and (J) upon determining that the number of data blocks or the data amount of data blocks, which are to be transmitted in the identified time slot, exceeds the threshold, allocating a data block that is selected based on a priority among the data blocks, which are to be transmitted in the identified time slot, to a time slot before or after the identified time slot.
By this processing, the transmission of data blocks (i.e. messages) that exceed the transmission resources is suppressed.
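A minimal sketch of (H) to (J) on the sending side is given below; `slot_length`, the field names transmission_time_limit and priority, and the choice of moving the lowest-priority block to an adjacent slot are assumptions introduced only for this illustration.

```python
def enqueue_block(schedule, block, now, slot_length, threshold):
    """(H) put the received block into the slot implied by its transmission time
    limit; (I) check the slot against the transmission resources; (J) if the
    threshold is exceeded, shift a block chosen by priority to a neighbouring slot."""
    slot = int((block["transmission_time_limit"] - now) // slot_length)
    schedule.setdefault(slot, []).append(block)
    if len(schedule[slot]) > threshold:
        moved = min(schedule[slot], key=lambda b: b["priority"])   # block with the lowest priority value
        schedule[slot].remove(moved)
        neighbour = slot - 1 if slot > 0 else slot + 1
        schedule.setdefault(neighbour, []).append(moved)
    return schedule
```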
Furthermore, there may be plural second information processing apparatuses. In such a case, the information communication method may further include: (K) generating a first transmission schedule according to a transmission time limit of a received data block and a destination of the plural second information processing apparatuses. The processing is performed for each transmission destination.
Furthermore, there may be plural second information processing apparatuses. In such a case, the information communication method may further include (L) generating a first transmission schedule according to a transmission time limit of a received data block and a destination of the plural second information processing apparatuses. Moreover, the second receiving may include: (e1) receiving, by the first information processing apparatus, the second transmission schedule from a second information processing apparatus of the plural second information processing apparatuses. In addition, the aforementioned third transmitting may include: (f1) when the received second transmission schedule is different from a first transmission schedule that was transmitted to a second information processing apparatus that is a transmission source of the received second transmission schedule, and when a number of data blocks or a data amount of data blocks, which are to be transmitted in a first time slot in which a difference between the received second transmission schedule and the first transmission schedule is detected, exceeds a threshold, allocating a data block that is selected based on a priority among data blocks that are scheduled to be transmitted in the first time slot to a time slot after the second transmission schedule.
When there are plural second information processing apparatuses, a transmission schedule that causes data transmission exceeding the transmission resources may be defined in the second transmission schedules. Therefore, such a case is properly corrected.
An information processing apparatus that is an upper-side in a data flow has (M) circuitry configured or programmed to transmit a first transmission schedule for one or more data blocks to an information processing apparatus that is a transmission destination of the one or more data blocks and determines a second transmission schedule for at least one data block among the one or more data blocks based on the first transmission schedule and receiving resources of the information processing apparatus; (N) circuitry configured or programmed to receive the second transmission schedule from the information processing apparatus; and (O) circuitry configured or programmed to transmit one or plural data blocks that are defined in the second transmission schedule, based on the second transmission schedule to the information processing apparatus.
An information processing apparatus that is a lower-side on the data flow has (P) circuitry configured or programmed to receive a first transmission schedule for one or more data blocks from an information processing apparatus that will transmit the one or more data blocks; (Q) circuitry configured or programmed to determine a second transmission schedule for at least one data block among the one or more data blocks that are defined in the first transmission schedule, based on the first transmission schedule and receiving resources; and (R) circuitry configured or programmed to transmit the second transmission schedule to the information processing apparatus.
Incidentally, it is possible to create a program causing a computer to execute the aforementioned processing, and such a program is stored in a computer-readable storage medium or storage device such as a flexible disk, a CD-ROM, a DVD-ROM, a magneto-optical disk, a semiconductor memory such as a ROM (Read Only Memory), or a hard disk. In addition, intermediate processing results are temporarily stored in a storage device such as a main memory or the like.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the invention and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although the embodiments of the present inventions have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
Number | Date | Country | Kind |
---|---|---|---|
2014-048196 | Mar 2014 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
6778557 | Yuki et al. | Aug 2004 | B1 |
8837398 | Yuan | Sep 2014 | B2 |
20040032847 | Cain | Feb 2004 | A1 |
20050094642 | Rogers | May 2005 | A1 |
20050286422 | Funato | Dec 2005 | A1 |
20080075006 | Morita | Mar 2008 | A1 |
20090238130 | Nakatsugawa | Sep 2009 | A1 |
20110319072 | Ekici | Dec 2011 | A1 |
20120051316 | Yajima | Mar 2012 | A1 |
20120287913 | Lee | Nov 2012 | A1 |
20130003678 | Quan | Jan 2013 | A1 |
20130294234 | Thapliya et al. | Nov 2013 | A1 |
20130332502 | Amemiya et al. | Dec 2013 | A1 |
20160337167 | Kawato | Nov 2016 | A1 |
Number | Date | Country |
---|---|---|
2000-270010 | Sep 2000 | JP |
2007-523513 | Aug 2007 | JP |
2011-61869 | Mar 2011 | JP |
2013-236182 | Nov 2013 | JP |
2013-254311 | Dec 2013 | JP |
Entry |
---|
Japanese Office Action dated Jul. 4, 2017 in Japanese Patent Application No. 2014-048196 (with unedited computer generated English translation). |
Number | Date | Country | |
---|---|---|---|
20150264713 A1 | Sep 2015 | US |