This application claims the benefit of Korean Patent Application No. 10-2015-0151075, filed on Oct. 29, 2015, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
1. Field
The present inventive concept relates to a method and apparatus for controlling network bandwidth and, more particularly, to a method of controlling the transmission rate of chunks according to the priority of each piece of data when data is divided into chunks and sent through a network, and to an apparatus which performs the method.
2. Description of the Related Art
With the spread of Internet-based solutions such as cloud computing, network bandwidth is emerging as an issue. However, there is as yet no proper solution to this issue.
In the conventional art, to send data through a network more efficiently, the data is divided into a number of chunks, and the chunks are sent. In some cases, the chunks may be sent in a parallel manner in multiple sessions instead of a single session.
However, due to the absence of an apparatus for controlling network bandwidth, there are cases where one client occupies the entire network bandwidth of a server, and cases where data is sent using the same bandwidth regardless of the priority of the data to be sent by each client. This often leads to uncontrolled uploading and downloading without any bandwidth management, which in turn causes frequent freezing or locking of a server solution.
There is, of course, a method of controlling network bandwidth using hardware. However, since such hardware is expensive, restriction methods such as port restriction have to be used instead. In the case of access restriction, in particular, access control itself is difficult, and functions such as quality of service (QoS) cannot be provided.
In this regard, there is a need for a method of controlling network bandwidth using software, the method capable of efficiently solving the problem of network bandwidth in data transmission and securing the stability and reliability of data transmission through QoS management.
Aspects of the inventive concept provide a method and apparatus for controlling network bandwidth.
However, aspects of the inventive concept are not restricted to the one set forth herein. The above and other aspects of the inventive concept will become more apparent to one of ordinary skill in the art to which the inventive concept pertains by referencing the detailed description of the inventive concept given below.
In some embodiments, provided is a method of controlling network bandwidth, the method being performed by a server and comprising: receiving a transmission request for data from a client; calculating a delay time for controlling bandwidth of the client; receiving chunks of the data from the client at intervals of the delay time; and restoring the data by merging the chunks.
In some embodiments, provided is a method of controlling network bandwidth, the method being performed by a server and comprising: receiving a reception request for data from a client; calculating a delay time for controlling bandwidth of the client; and sending chunks of the data to the client at intervals of the delay time.
In some embodiments, provided is an apparatus for controlling network bandwidth, the apparatus comprising: a network interface; one or more processors; a memory which loads a computer program executed by the processors; and a storage device which stores throughput data and priority information, wherein the computer program comprises: an operation of receiving a transmission request for data from a client; an operation of calculating a delay time for controlling bandwidth of the client; an operation of receiving chunks of the data from the client at intervals of the delay time; and an operation of restoring the data by merging the chunks.
In some embodiments, provided is an apparatus for controlling network bandwidth, the apparatus comprising: a network interface; one or more processors; a memory which loads a computer program executed by the processors; and a storage device which stores throughput data and priority information, wherein the computer program comprises: an operation of receiving a reception request for data from a client; an operation of calculating a delay time for controlling bandwidth of the client; and an operation of sending chunks of the data to the client at intervals of the delay time.
These and/or other aspects will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings in which:
In
In the conventional art, there are no restrictions between the send manager 110 of the client 100 and the receive manager 210 of the server 200. The receive manager 210 of the server 200 therefore simply receives whatever chunks the send manager 110 of the client 100 sends. In this case, however, if a particular client 100 occupies the entire throughput of the server 200, other clients cannot access the server 200. In addition, since the particular client 100 occupies system resources of the server 200, there are cases where a solution of the server 200 stops. Furthermore, the network bandwidth of the server 200 is shared equally by the clients 100 regardless of differences in their importance.
This is because data is sent without regard to the fact that the importance of each client 100 can vary according to the type or size of the data to be sent by the client, or according to various other criteria.
Referring to
The process of allocating the throughput of the server 200 to each client 100 may be implemented by setting a delay time between chunks of data sent by each client 100. That is, when sending a number of chunks to the server 200, a client 100 may be controlled to wait for a delay time set by the server 200 after sending a chunk and before sending a next chunk. In this case, it is possible to prevent a particular client 100 from occupying the entire throughput of the server 200. In addition, if a different delay time is set according to priority, a different throughput value can be set for each client 100. Therefore, more important data can be sent first.
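The pacing behavior described above can be sketched in a few lines. The following Python fragment is purely illustrative; the function name, the `transport` callback, and the use of `time.sleep` are editorial assumptions, not part of the original disclosure:

```python
import time

def send_chunks_paced(chunks, delay_seconds, transport):
    """Send each chunk, then wait for the server-assigned delay time
    before sending the next chunk (illustrative sketch only)."""
    for chunk in chunks:
        transport(chunk)           # hypothetical send callback
        time.sleep(delay_seconds)  # pacing gap set by the server

# A client assigned a longer delay consumes less of the server's throughput.
sent = []
send_chunks_paced([b"c1", b"c2", b"c3"], 0.0, sent.append)
```

With this structure, the server controls a client's effective transmission rate purely by choosing `delay_seconds`; a more important client simply receives a smaller value.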
To this end, in the method of controlling network bandwidth according to the embodiment, a client 100 first sends information about the chunks to be sent, together with a priority, to the server 200 (operation 1), unlike in the conventional art in which the client 100 sends the chunks directly to the server 200. The server 200 receives the chunk information and the priority and, using the QoS controller 240, sends to the client 100 a uniform resource identifier (URI) to which the chunks can be sent and a delay time to be used when sending the chunks (operation 2). In addition, the server 200 stores the priority received from the client 100 in the QoS controller 240, so that the priority of the client 100 can be compared with that of another client 100 when that other client 100 makes a data transmission request. The URI and the delay time will be described in greater detail later with reference to
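The server side of operations 1 and 2 above might look as follows. The message shapes, the `qos_table`, the base delay, and the rule that a larger priority number means a longer delay are all illustrative assumptions; the disclosure specifies the exchange, not these details:

```python
def handle_transmission_request(chunk_info, priority, qos_table, base_delay=0.1):
    """Operation 2 sketch: record the client's priority and answer
    with an upload URI and a delay time (all names are hypothetical)."""
    client_id = chunk_info["client_id"]
    qos_table[client_id] = priority  # stored for later priority comparisons
    # Assumed convention: priority 1 is most important, so the delay
    # grows with the priority number.
    delay = base_delay * priority
    return {"uri": f"/upload/{client_id}", "delay": delay}

qos = {}
resp = handle_transmission_request({"client_id": "A", "chunks": 3},
                                   priority=2, qos_table=qos)
```

Storing the priority in `qos_table` mirrors the role of the QoS controller 240, which must compare priorities when another client later requests a connection.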
As illustrated in
In
That is, the delay time is a value that is not only set in the initial process of generating a connection but also changed continuously in real time while the client 100a or 100b is sending chunks to the server 200. The throughput manager 230 of the server 200 therefore measures and updates the transmission rate of each client 100a or 100b in real time. The QoS controller 240 checks whether the throughput of the server 200 has been allocated to each client 100a or 100b properly according to priority, so that each client 100a or 100b has an appropriate transmission rate, and then feeds the adjusted delay time back to each client 100a or 100b in real time. If the actual transmission rate of a client 100a or 100b is smaller than the throughput allocated to that client, the QoS controller 240 may change the delay time allocated in the initial process of generating a connection to a shorter delay time and feed the shorter delay time back to the client 100a or 100b as a new set value. Conversely, if the actual transmission rate of a client 100a or 100b is greater than the throughput allocated to that client, the QoS controller 240 may feed back a delay time longer than the one allocated in the initial process of generating a connection as a new set value.
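The feedback rule just described (shorten the delay when a client runs below its allocation, lengthen it when it runs above) can be sketched as a one-step controller. The multiplicative `step` gain is an assumption; the disclosure states only the direction of each correction:

```python
def adjust_delay(current_delay, measured_rate, allocated_rate, step=0.8):
    """Return the corrected delay time to feed back to a client.
    Slower than allocated -> shorter delay (client may speed up);
    faster than allocated -> longer delay (client is slowed down)."""
    if measured_rate < allocated_rate:
        return current_delay * step   # shorten the pacing gap
    if measured_rate > allocated_rate:
        return current_delay / step   # lengthen the pacing gap
    return current_delay              # on target: leave unchanged
```

In a running system this function would be invoked by the QoS controller 240 each time the throughput manager 230 reports a fresh measurement, and the result fed back to the client as its new set value.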
Setting a delay time in view of the priority of each client 100a or 100b and changing the delay time in view of the actual transmission rate of each client 100a or 100b have been described above. Hereinafter, factors that can be taken into consideration when the server 200 actually allocates its throughput to each client 100a or 100b will be described. First of all, the maximum throughput of the server 200 may be one criterion that determines the delay time. In addition, the number of clients 100 currently connected to the server 200 may be one factor that determines the delay time of each client 100. The delay time may also vary according to the priority of the data sent by the client 100 in the initial process of generating a connection.
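The three factors above (maximum server throughput, number of connected clients, and priority) can be combined into an initial delay in many ways; the disclosure names the factors but no formula, so the weighting below is one plausible assumption for illustration:

```python
def initial_delay(max_throughput_bps, num_clients, priority_weight, chunk_size_bytes):
    """Hypothetical initial delay: start from the client's fair share
    of the server's maximum throughput, scale it by a priority weight
    (>1 = more important = faster), and return the time one chunk must
    be spread over to stay at that target rate."""
    fair_share_bps = max_throughput_bps / num_clients
    target_bps = fair_share_bps * priority_weight
    return chunk_size_bytes * 8 / target_bps  # seconds between chunks

# Greater server throughput or higher priority -> shorter delay;
# more clients or larger chunks -> longer delay.
d = initial_delay(100e6, 10, 1.0, 1_000_000)
```

Note how every monotonic relationship stated in the text is preserved: the delay shrinks with server throughput and priority weight, and grows with the client count and chunk size.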
Referring to
Referring to
Referring to
Referring to
Referring to
Referring to
The method of controlling network bandwidth by setting the delay time of each client 100 has been described above with reference to
In this case, since server A 200a cannot process the request of client A 100a, it may send the request to server B 200b, server C 200c, or server D 200d, whichever can process it. Ideally, server A 200a would know which of server B 200b, server C 200c, and server D 200d has the most available throughput and would send the request to that server. However, even if server A 200a does not know, it is good enough, as long as the servers are connected in a circulation structure, for server A 200a to send the request of client A 100a to the next server, i.e., server B 200b. If server B 200b also cannot process the request of client A 100a, it may simply send the request on to server C 200c. Likewise, if server C 200c cannot process the request, it may send the request on to server D 200d.
When the servers that form the server cluster 300 are connected in the circulation structure, even if each server does not know the throughput state of the other servers, it can find a server which can process a request of a client simply by sending the request to its next server. That is, if the number of servers is n, a server which can process a request of a client can be found after the request is sent (n−1) times in the worst-case scenario. Therefore, when receiving a transmission request from a client 100, each server 200 may identify whether it can process the request of the client 100 before calculating a delay time. When the server 200 cannot process the request, it may send the request of the client 100 to another server. When the server 200 can process the request, it may calculate the throughput to be allocated to the client 100 and a delay time corresponding to the throughput and send a URI and the delay time to the client 100 as a response to the transmission request. This enables load balancing between the servers that form the server cluster 300.
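The circulation-structure hand-off can be sketched as a walk around a ring that stops at the first server with capacity. The `can_process` capacity check and all names are illustrative assumptions; the sketch shows only the forwarding logic and the (n−1)-forward worst case:

```python
def find_capable_server(servers, start, can_process):
    """Walk the ring from `start`, handing the request to the next
    server until one can process it. Returns (server, forwards);
    at most len(servers) - 1 forwards are ever needed."""
    n = len(servers)
    idx = start
    for _ in range(n):
        if can_process(servers[idx]):
            return servers[idx], (idx - start) % n  # forwards so far
        idx = (idx + 1) % n                         # circulate onward
    return None, n  # no server in the cluster has capacity

ring = ["A", "B", "C", "D"]
server, forwards = find_capable_server(ring, 0, lambda s: s == "C")
```

Because each server only ever talks to its successor, no server needs a global view of the cluster's throughput state, which is exactly the property the text relies on.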
In
If the server 200 sends the URI of another server to the client 100, the client 100 should attempt to connect to that server and send a transmission request to it. On the other hand, if the server 200 sends its own URI and a delay time to the client 100, a send manager 110 of the client 100 sequentially sends chunks to the server 200 according to the delay time set by the server 200. Then, a receive manager 210 of the server 200 receives the chunks from the client 100 and sends the received chunks to a data merger 220 to restore the original data from the chunks. The server 200 measures the chunk transmission rate of the client 100 in real time and manages it using a throughput manager 230. A QoS controller 240 of the server 200 calculates the delay time of the client 100 by reflecting, in real time, the transmission rate of the client 100 managed by the throughput manager 230 and feeds the delay time back to the client 100. If conditions have changed since the delay time was calculated in the initial process of generating a connection, or if the actual throughput of the client 100 is larger or smaller than the initially calculated delay time assumed, the delay time is corrected. In so doing, the throughput of the server 200 can be utilized efficiently within the maximum throughput of the server 200.
Referring to
If the server 200 can process the data transmission request of the client 100 by generating an additional connection, it calculates a delay time to be used when the client 100 sends the data (operation S3000). As for the factors that affect the delay time: the delay time of the client 100 is set to a smaller value as the priority of the client 100 is higher; the delay time is set to a smaller value as the maximum throughput of the server 200 is greater; the delay time is set to a larger value as the number of clients 100 connected to the server 200 is larger; and the delay time is set to a larger value as the size of the chunks to be sent by the client 100 is larger. Through this process, a QoS controller 240 calculates the delay time in view of the priority sent by each client 100 together with the data transmission request (operation S3000) and sends the calculated delay time to the client 100 (operation S4000). The client 100 sequentially sends the chunks to the server 200 according to the delay time set by the server 200, and the server 200 sequentially receives the chunks from the client 100 according to the delay time (operation S5000). In this way, the network bandwidth for the data transmission request of each client 100 can be controlled efficiently according to priority within the maximum throughput of the server 200.
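The flow from admission through paced reception can be condensed into one sketch. The capacity check, the delay formula (which here uses chunk size, priority rank, and maximum throughput, omitting the client count for brevity), and all names are editorial assumptions layered on the operations described above:

```python
import time

def serve_transmission_request(request, server_state, receive_chunk):
    """Sketch of operations S3000-S5000: refuse when at capacity,
    otherwise derive a delay from the request and receive the chunks
    one by one at that interval (all names are hypothetical)."""
    if server_state["connections"] >= server_state["max_connections"]:
        return None  # cannot accept another connection
    server_state["connections"] += 1
    # Assumed rank convention: rank 1 = highest priority = smallest delay.
    # Bigger chunks or a lower-priority rank stretch the delay; a larger
    # maximum throughput shrinks it.
    delay = (request["chunk_size"] * request["priority_rank"]
             / server_state["max_throughput"])
    received = []
    for _ in range(request["num_chunks"]):  # paced reception (S5000)
        received.append(receive_chunk())
        time.sleep(delay)
    return received
```

A refusal (`None`) corresponds to the branch in which the server forwards the request onward instead of calculating a delay time.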
In addition, when the factors that can affect the delay time are changed while the server 200 is actually receiving chunks from each client 100, the server 200 may calculate a new delay time and feed the new delay time back to each client 100. Of the factors that can affect the delay time, the priority of each client 100, the maximum throughput of the server 200, and the size of chunks to be sent by each client 100 may be invariable, but the number of clients 100 connected to the server 200 may be variable. Therefore, the throughput of each client 100 may be controlled according to a change in the number of clients 100 connected to the server 200 by changing the delay time in real time in view of the variable number of clients 100 connected to the server 200.
In addition to the above-described factors, there are various factors that can actually affect the throughput of each client 100. If the above-described factors are controllable factors that can be taken into consideration in the calculation of the delay time, there may also be uncontrollable factors that affect the throughput of each client 100. For example, the network state between each client 100 and the server 200 and differences in the performance of the clients 100 can affect the throughput of each client 100. To reduce the effect of these factors on the throughput of each client 100, the QoS controller 240 of the server 200 may correct the delay time based on a real-time transmission rate of each client 100 managed by a throughput manager 230 of the server 200. If the actual transmission rate of a client 100 is smaller than the throughput allocated to the client 100, a delay time shorter than the delay time allocated to the client 100 during initial connection setting may be fed back to the client 100 as a new set value. Conversely, if the actual transmission rate of the client 100 is greater than the throughput allocated to the client 100, a delay time longer than the delay time allocated to the client 100 during the initial connection setting may be fed back to the client 100 as a new set value.
Referring to
The data merger 220 receives the chunks of the data from the receive manager 210. Meta information of each chunk, which was not separately described in the data transmission process because of its small size, may also be received. Then, the data merger 220 may identify how the chunks should be merged to obtain the original data and restore the data by merging the chunks.
The throughput manager 230 is linked in real time to the receive manager 210 to measure a transmission rate when the server 200 receives the chunks from the client 100. The throughput manager 230 may provide the measured transmission rate to the QoS controller 240 so that the QoS controller 240 can correct the delay time by a difference between the measured transmission rate and the throughput actually allocated to the client 100.
When generating an initial connection to the client 100, the QoS controller 240 may determine whether to generate a connection and calculate a delay time to be used by the client 100. In addition, when factors that affect the delay time are changed, the QoS controller 240 may calculate the delay time in real time and feed the delay time back to the client 100. Furthermore, when the real-time throughput of each client 100 which is measured by the throughput manager 230 is different from the throughput intended to be actually allocated to each client 100 through delay time setting, the QoS controller 240 may correct the delay time to reduce the difference and feed the corrected delay time back to the client 100.
Until now, the case where data is uploaded from a client 100 to a server 200 has mainly been described. However, controlling network bandwidth using a delay time can also be applied to the case where data is downloaded from the server 200 to the client 100. That is, the server 200 may receive a download request, together with a priority, from the client 100 and divide the data into a number of chunks in order to send it to the client 100. Then, the QoS controller 240 may calculate a delay time for the client 100 and sequentially send the chunks to the client 100 according to the delay time. The only differences from the former case are that there is no need to inform the client 100 of the delay time, because it is the server 200 that sends the chunks, and that the server 200 can immediately update the delay time for sending data to the client 100 according to the situation. Except for these differences, the basic characteristics of controlling network bandwidth using a delay time are the same in the two cases.
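The download direction differs only in who sleeps: the server paces its own sends and can therefore re-read an updated delay before every chunk, with no feedback message to the client. A minimal sketch, with all names assumed for illustration:

```python
import time

def send_download(chunks, client_send, delay_provider):
    """Download-direction pacing: the server sends each chunk, then
    waits for the current delay. Because `delay_provider` is consulted
    before every sleep, the QoS controller can change the delay
    mid-transfer without notifying the client."""
    for chunk in chunks:
        client_send(chunk)           # hypothetical send-to-client callback
        time.sleep(delay_provider()) # delay may differ for each gap

out = []
delays = iter([0.0, 0.0, 0.0])
send_download([b"d1", b"d2", b"d3"], out.append, lambda: next(delays))
```

This is the structural counterpart of the upload case: the pacing loop is identical, only relocated from the client's send manager into the server.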
Referring to
The processors 510 execute a computer program loaded in the memory 520, and the memory 520 loads the computer program from the storage device 560. The computer program may include a receive management operation 521, a throughput management operation 523, a QoS control operation 525, and a data merge operation 527.
The receive management operation 521 may receive a data transmission request and chunks from a client 100 through a network. The receive management operation 521 may send the data transmission request of the client 100 to the QoS control operation 525 and request the QoS control operation 525 to determine whether to accept the data transmission request and, if determining to accept the data transmission request, calculate a delay time to be used by the client 100. Here, the receive management operation 521 may store priority information 565 received from the client 100 in the storage device 560 through the system bus 550. The stored priority information 565 may be used when the QoS control operation 525 calculates the delay time of each client 100. In addition, chunk data 561 received from the client 100 by the receive management operation 521 may be stored in the storage device 560 through the system bus 550. The receive management operation 521 may send the chunks stored as the chunk data 561 in the storage device 560 to the data merge operation 527 to restore the original data from the chunks.
The throughput management operation 523 may be linked in real time to the receive management operation 521 to measure a transmission rate when the server 200 receives the chunks from the client 100. Then, the throughput management operation 523 may store the measured transmission rate as throughput data 563 in the storage device 560 through the system bus 550. The throughput management operation 523 may provide the stored throughput data 563 to the QoS control operation 525 so that the QoS control operation 525 can correct the delay time by a difference between the measured transmission rate and the throughput actually allocated to the client 100.
When generating an initial connection to the client 100, the QoS control operation 525 may determine whether to generate a connection and calculate a delay time to be used by the client 100. In addition, when factors that affect the delay time are changed, the QoS control operation 525 may calculate the delay time in real time and feed the delay time back to the client 100. Furthermore, when the real-time throughput of each client 100 which is measured by the throughput management operation 523 is different from the throughput intended to be actually allocated to each client 100 through delay time setting, the QoS control operation 525 may correct the delay time to reduce the difference and feed the corrected delay time back to the client 100.
The data merge operation 527 receives the chunk data 561 stored in the storage device 560 from the receive management operation 521 through the system bus 550. Meta information of each chunk, which was not separately described in the data transmission process because of its small size, may also be received. Then, the data merge operation 527 may identify how the chunks should be merged to obtain the original data and restore the data by merging the chunks.
According to the inventive concept described above, a session terminal measures the real-time bandwidth of all data being sent and controls the bandwidth of each session in real time based on the measured real-time bandwidth, thereby preventing a particular session from monopolizing network bandwidth.
In addition, QoS is applied to data by managing the bandwidth of both a server and a client. Therefore, data can be sent efficiently according to priority.
Number | Date | Country | Kind
---|---|---|---
10-2015-0151075 | Oct 2015 | KR | national