This application is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2015-087977, filed on Apr. 22, 2015, the entire contents of which are incorporated herein by reference.
The present embodiment relates to a storage apparatus, a controlling method, and a computer readable medium.
A storage server is available in which stream data (data of an indefinite length that flow on a network and arrive in chronological order) received through a network are accumulated into a mass storage apparatus formed from a disk storage such as a hard disk drive (HDD). The storage server temporarily stores, for example, a series of stream data distributed successively thereto into a buffer in a memory and, when the amount of the data stored in the buffer reaches a fixed amount, writes the data into a storage device such as an HDD. When the writing speed of the storage device such as an HDD is lower than the reception speed of the stream data, the storage server performs, in order to achieve higher-speed access to the storage device, a process of dividing the stream data and writing the divided stream data in parallel into a plurality of HDDs (also called "striping").
As a technology for accumulating stream data, a technology is known that prevents loss of data due to exhaustion of the bandwidth by dynamically managing the bandwidth requirements of a digital recording system in which stream data are stored. Another known technology records and reproduces a plurality of stream data simultaneously while securing the redundancy of the recorded data by recording the stream data in a duplicated fashion into two data recording apparatuses.
As examples of related art documents, Japanese National Publication of International Patent Application No. 2003-533843 and Japanese Laid-open Patent Publication No. 2007-281972 are known.
According to an aspect of the invention, a storage apparatus includes a plurality of disk apparatuses, a memory including a read buffer, and a processor. The processor is configured to perform a writing process, the writing process including writing a plurality of pieces of divided data into the plurality of disk apparatuses, interrupt, when a readout request for reading out a series of data from the plurality of disk apparatuses is received during execution of the writing process, the writing process to a predetermined number of disk apparatuses from among the plurality of disk apparatuses, read out the pieces of data requested by the readout request from the predetermined number of disk apparatuses, store the read out pieces of data into the read buffer, reconstruct the pieces of data stored in the read buffer back into the series of data requested by the readout request, and output the reconstructed data.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
A large-capacity HDD that stores stream data sometimes undergoes control different from that performed for an HDD of a common personal computer (PC). For example, access to the HDD is sometimes controlled directly without generating a file system, or access to the HDD is sometimes controlled by generating a huge file on a file system. Since access to a disk recording medium such as an HDD is performed after a magnetic head is moved to the track on which the data of the access destination are stored, when writing access and reading out access whose access destinations are different from each other are changed over, a delay occurs due to the movement of the magnetic head and so forth. Therefore, if reading out access to an HDD is performed frequently while writing access is being performed for the same HDD, then the writing performance is degraded.
In the case of a storage server in which stream data are accumulated, degradation of the writing performance immediately gives rise to loss of data, and therefore, degradation of the writing performance is unacceptable. Accordingly, a readout request is processed only to such a degree that the processing does not exert an influence on the writing performance, and as a result, the reading out time becomes long.
For example, batch processing or the like for reading out and analyzing stream data received within a certain period of time is sometimes performed while divided data are being written into a plurality of HDDs. In this case, in order not to exert an influence on the writing of the stream data into the HDDs, reading out of data is performed, for example, by utilizing the period of time after the stream data are written into the plurality of HDDs and before the stream data to be written next are accumulated into the buffer in the memory. Then, after the received stream data fill the buffer in the memory, writing into the plurality of HDDs is started again. However, if writing into and reading out from the plurality of HDDs are repeated frequently, then waiting time is generated by the movement of the magnetic heads and so forth, and there is a problem that the reading out time becomes long.
According to one aspect of the present embodiment, it is desirable to improve the reading out performance of a storage apparatus when writing and reading out of stream data compete with each other.
In the following, an embodiment of a storage system is described with reference to the accompanying drawings. The configuration of the embodiment described below is exemplary, and the embodiment is not limited to that including the configuration described below.
The storage apparatus of the present embodiment accumulates stream data received through a network into a mass storage device such as an HDD. Here, in the present embodiment, the stream data are data of an indefinite length which flow on the network and arrive in chronological order. The stream data are not limited to continuous content data of music, a movie or the like and include packet data or the like of an indefinite length flowing on the network.
As an application of the storage apparatus disclosed below, an application is assumed which stores data such as, for example, video data of a security camera or remote sensing sensor data over a fixed period in the past (for example, over several days to several weeks or more) and reads out the data by designating a desired period. Another assumed application stores operational information indicative of the state of the operational environment or the like of equipment of a data center or the like in a time series and, when some trouble occurs, analyzes the operational information retrospectively to find out the cause of the trouble.
The network tap apparatus 5 constantly monitors the network 6 and branches and takes out stream data flowing on the network 6. As an acquisition method of data by the network tap apparatus 5, for example, all data packets of stream data flowing on the network 6 may be taken out, or only particular data packets having a given destination address or transmission source address may be taken out. The network tap apparatus 5 transmits the taken out data packets to the writing client apparatus 3 together with information of the time at which the data packets branched and taken out from the network 6 were acquired.
The writing client apparatus 3 performs division and reconstruction of the data packets received from the network tap apparatus 5 as needed and transmits the divided and reconstructed data packets to the storage server 1 together with a write request through the network 2. If a data packet received from the network tap apparatus 5 carries long data of an indefinite length, then the writing client apparatus 3 may convert or divide the data into data of a fixed length and then transmit the resulting data to the storage server 1.
The reading out client apparatus 4 transmits a readout request to the storage server 1 through the network 2 in accordance with a readout request sent thereto at an arbitrary timing from a terminal (not depicted) of a user of the storage server 1 and then transmits the readout data read out from the storage server 1 back to the terminal of the user of the requesting source. Usually, while write requests from the writing client apparatus 3 are generated successively or at a high frequency, readout requests from the reading out client apparatus 4 are generated sporadically, each designating a particular range of time (year, month, day, hour, minute and second).
The memory 13 includes storage regions for a write buffer 131, a read buffer 132, a writable HDD table 133 and a data management table 134. The write buffer 131 is a storage region for temporarily storing write data sent from the writing client apparatus 3. The read buffer 132 is a storage region for temporarily storing readout data read out from the HDDs 15 to 18 in accordance with a readout request from the reading out client apparatus 4. Details of the writable HDD table 133 and the data management table 134 are described hereinafter.
The memory 13 may be configured so as to include a plurality of write buffers 131. In particular, it is possible to provide, in the memory 13, a write buffer for accumulating received stream data and another write buffer for retaining the data to be written into the HDDs while the process of dividing the received data into units of a stripe and writing the divided data into the plurality of HDDs is being performed. With this configuration, a writing process by striping into the plurality of HDDs 15 to 18 and a process for accumulating received stream data can be performed in parallel.
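As a minimal sketch of this double-buffer arrangement (the class name WriteBufferPair and its methods are illustrative assumptions, not elements of the embodiment):

```python
import threading

class WriteBufferPair:
    """Illustrative double buffer: one buffer accumulates received stream
    data while the other is being drained to the HDDs by the striping
    writer (a sketch, not the embodiment's actual implementation)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.accumulating = bytearray()   # receives incoming stream data
        self.draining = bytearray()       # currently being written to HDDs
        self.lock = threading.Lock()

    def append(self, packet: bytes) -> bool:
        """Store a received packet; return False when there is no space
        (the caller may then ask the writing client to retry)."""
        with self.lock:
            if len(self.accumulating) + len(packet) > self.capacity:
                return False
            self.accumulating += packet
            return True

    def swap_for_writing(self) -> bytes:
        """Hand the filled buffer to the HDD writer and start a fresh one."""
        with self.lock:
            self.accumulating, self.draining = bytearray(), self.accumulating
            return bytes(self.draining)
```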
It is to be noted that the read buffer 132 may be provided not in the memory 13 but in a different storage device such as an HDD dedicated to the buffer, and the write buffer 131 may also be provided in a different high-speed storage device. Further, although a mass storage device of a different type can be used in place of the HDDs 15 to 18, the present embodiment exhibits noticeable effects in the case of a disk storage apparatus of an access type that involves mechanical movement of a head for reading and writing, such as a magneto-optical disk or an optical disk.
The request reception unit 111 receives a write request from the writing client apparatus 3 or a readout request from the reading out client apparatus 4. If the received request is a write request from the writing client apparatus 3, then the request reception unit 111 receives, together with the write request, write data, namely, a data packet of stream data processed by the writing client apparatus 3 or the like. The request reception unit 111 stores the received write data into the write buffer 131. If the request reception unit 111 receives a readout request from the reading out client apparatus 4, then the request reception unit 111 instructs the data reading out unit 114 to perform a reading out process of the received readout request.
After a fixed amount of data is stored into the write buffer 131, the data division unit 112 divides the data stored in the write buffer 131 into data of units of a stripe to be used when the data are written into the plurality of HDDs by striping.
The data writing unit 113 refers to the writable HDD table 133 to specify writable HDDs from among the plurality of HDDs 15 to 18. The data writing unit 113 performs a process for writing the data divided by the data division unit 112 into the specified writable HDDs.
If the data reading out unit 114 receives a readout request from the reading out client apparatus 4, then the data reading out unit 114 refers to the data management table 134 to specify the HDDs in which the data of the reading out target are stored. Then, the data reading out unit 114 designates part of the specified HDDs as a reading out target HDD and performs a reading out process of reading out the data of the reading out target from the designated HDD and storing the readout data into the read buffer 132. The data reading out unit 114 successively changes the HDD for which the reading out process is to be performed and reads out the data of the reading out target from all of the HDDs in which the reading out target data are stored, storing the readout data in the read buffer 132.
The data integration unit 115 rearranges and integrates the data stored in the read buffer 132 on the basis of the time information of the data acquired from the data management table 134 by the data reading out unit 114. In other words, the data integration unit 115 reconstructs the data read out from the plurality of HDDs 15 to 18 into the read buffer 132 so as to restore the state before the striping. The reply transmission unit 116 transmits the data integrated by the data integration unit 115 to the reading out client apparatus 4 that is the request source of the readout request.
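The chronological reconstruction performed by the data integration unit 115 can be sketched as follows, assuming for illustration that each stripe unit placed in the read buffer is tagged with its reception time (the function name integrate and the tuple format are assumptions, not part of the embodiment):

```python
def integrate(read_buffer_entries):
    """Reassemble stripe units read from several HDDs into a single
    chronological byte sequence, using the reception time recorded in
    the data management table for each stripe unit."""
    ordered = sorted(read_buffer_entries, key=lambda entry: entry[0])
    return b"".join(data for _, data in ordered)

# Stripe units arrive in the read buffer out of time order.
stripes = [(10, b"C"), (0, b"A"), (15, b"D"), (5, b"B")]
assert integrate(stripes) == b"ABCD"   # restored to the pre-striping order
```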
In a reading out process in which an influence on the writing process is avoided, a surplus time period from the time at which the writing of the data stored in the write buffer 131 into the plurality of HDDs 15 to 18 is completed until the time at which the write buffer 131 is filled with subsequently received stream data is utilized, and data are read out from the HDDs only within this surplus time period.
Generally, since a write address and a read address are different from each other, when the surplus time period described above starts, the magnetic head of each HDD is moved from the write position to the read position before a reading out process is performed, and time for returning the magnetic head to the write position before the end of the surplus time period is also required. Since such time for moving the magnetic head and so forth is required, substantially only part of the surplus time period can be used for the reading out process. Accordingly, a reading out process that utilizes a time period shorter than the surplus time period is repeated many times, and a long period of time is required for processing the readout request.
It is to be noted that, also in a case in which data are read out in parallel from all of the HDDs 15 to 18 in the reading out process described above, only part of the surplus time period can be used similarly because the magnetic head of every HDD moves between the write position and the read position, and the reading out time becomes long.
In the reading out process according to the present embodiment, a reading out process is performed for only part of the plurality of HDDs while the writing process is continued for the remaining HDDs, as in the following example.
In particular, while the writing process for the HDD 15 is interrupted and a reading out process is executed for the HDD 15, the writing of data into the remaining three HDDs 16 to 18 is continued. When the reading out process of a fixed amount of data from the HDD 15 is completed, the HDD 15 returns to the writing process, and the HDD 16 in turn interrupts its writing process and executes a reading out process. In this manner, the HDD from which data are read out is changed successively, one HDD at a time, and a similar reading out process is performed. Since the reading out process from each HDD can be performed continuously, the frequent movements of the magnetic heads described above can be suppressed.
In the writable HDD table 133, a flag is registered for each HDD number of the HDDs 15 to 18. A flag value of "1" indicates that the corresponding HDD is writable, and a flag value of "0" indicates that writing into the corresponding HDD is suppressed.
In the data management table 134, the write start address of data is registered for each combination of a reception time of the data and an HDD number. A field containing "—" indicates that no data of the corresponding reception time are stored in that HDD.
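As a concrete illustration of the two tables, the following sketch shows one possible in-memory representation (an assumption, not the specification's own data structures); the data-management entries reuse values that appear in the examples below:

```python
# Writable HDD table 133: flag 1 = writable, flag 0 = writing suppressed
# while the HDD is serving a reading out process.
writable_hdd_table = {1: 1, 2: 1, 3: 1, 4: 1}

# Data management table 134: for each reception time (in seconds), the write
# start address in each HDD, with None standing for the "-" (not stored) entry.
data_management_table = {
    10:  {1: None, 2: None, 3: 0,    4: None},  # 10th-14th second in HDD 3
    15:  {1: None, 2: None, 3: None, 4: 0},     # 15th-19th second in HDD 4
    100: {1: None, 2: 300,  3: None, 4: None},  # 100th second in HDD 2
}
```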
It is to be noted that, if writing into an HDD reaches the full capacity of the HDD, then rewriting is thereafter performed beginning with the top address, and the information regarding the old data at the overwritten addresses may be deleted. Therefore, the data in the data management table 134 do not continue to increase indefinitely.
The format in which the address information and the information of the data reception time are stored in the data management table 134 is not limited to the example described above.
For example, at the stage when stream data for 20 seconds received through the network 2 have been written into the write buffer 131, the data division unit 112 executed by the CPU 11 partitions the stream data for the 20 seconds stored in the write buffer 131 into segments of five seconds each. The divisional data partitioned for each five seconds are the data of a unit of a stripe used when the data are written into the plurality of HDDs by striping. Then, the data writing unit 113 executed by the CPU 11 writes the divisional data of a unit of a stripe, partitioned for each five seconds, into the four HDDs.
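A minimal sketch of this division into stripe units, assuming for illustration that one byte stands in for one second of received data (the function name divide_into_stripes is an assumption):

```python
def divide_into_stripes(buffer_data: bytes, bytes_per_stripe: int):
    """Split the contents of the write buffer into stripe units."""
    return [buffer_data[i:i + bytes_per_stripe]
            for i in range(0, len(buffer_data), bytes_per_stripe)]

# 20 "seconds" accumulated in the write buffer, 5-second stripe units.
buffer_data = bytes(range(20))
stripes = divide_into_stripes(buffer_data, 5)
assert len(stripes) == 4   # one stripe unit for each of the HDDs 1 to 4
```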
The data reading out unit 114 executed by the CPU 11 refers, when a readout request for data is received from the reading out client apparatus 4, to the data management table 134 to specify the HDDs in which the data of the reading out target are stored.
After the plurality of HDDs including storage regions in which the data of the target of the readout request are stored are specified, the data reading out unit 114 selects part of the specified HDDs and performs a reading out process for the selected HDD.
For a period of time from the 100th second to time immediately before the 115th second indicated by a broken line frame 81, data for a period from the 0th second to the 4th second of the reception time written in the addresses following the address "0" of the HDD 1 are read out from the HDD 1. Thereafter, for a period from the 115th second to time immediately before the 130th second indicated by a broken line frame 82, data within a period from the 5th second to the 9th second of the reception time written in the addresses following the address "0" of the HDD 2 are read out from the HDD 2.
Then, for a period of time from the 130th second to time immediately before the 145th second indicated by a broken line frame 83, data for a period from the 10th second to the 14th second of the reception time written in the addresses following the address “0” of the HDD 3 are read out from the HDD 3. Thereafter, for a period from the 145th second to time immediately before the 160th second indicated by a broken line frame 84, data within a period from the 15th second to the 19th second of the reception time written in the addresses following the address “0” of the HDD 4 are read out from the HDD 4.
Data read out from the HDDs by the data reading out unit 114 are temporarily stored into the read buffer 132 of the memory 13. Then, the readout data stored in the read buffer 132 are reconstructed in chronological order of the reception time by the data integration unit 115. The reply transmission unit 116 transmits the reconstructed data to the reading out client apparatus 4.
While the data reading out unit 114 is performing the reading out process within the period of the broken line frame 81, data cannot be written into the HDD 1; therefore, the write start address of the HDD 1 at the point of time of the 115th second is the 300th address, at which the HDD 1 had been scheduled to start writing at the point of time of the 100th second. Similarly, in the HDD 2, the write start address at the point of time of the 130th second is the 360th address, at which the HDD 2 had been scheduled to start writing at the point of time of the 115th second. The same applies to the HDD 3 and the HDD 4.
It is to be noted that, in the present embodiment, while reading out is performed for one HDD, the writing process is performed for the remaining three HDDs. In the present embodiment, since the writing performance of each HDD is 4 [MB/s], which is one third of the maximum throughput of 12 [MB/s] of the received data, it is possible to write the stream data into the three HDDs without loss while a reading out process from one HDD is performed.
Next, a case is considered in which data received within a period from the 100th second to time immediately before the 160th second of the reception time are stored, as a result of the writing process interleaved with the reading out process described above, in the regions from the 300th address to the address immediately before the 480th address of the HDDs 1 to 4. This storage region is indicated by a broken line frame 90.
If a readout request for the data in the region surrounded by the broken line frame 90 is received from the reading out client apparatus 4 at the point of time of the 200th second, then the data reading out unit 114 refers to the data management table 134 to specify the storage locations of the data of the reading out target. In particular, the data reading out unit 114 first reads out and checks the information in the column of the data management table 134 for the 100th second of the reception time, examining the HDDs in ascending order of the HDD number beginning with the HDD 1.
Since the information in the field for the HDD 1 at the 100th second of the reception time is "—," the data reading out unit 114 decides that the data at the reception time of the 100th second are not stored in the HDD 1, and then checks the information in the field for the HDD 2. Since the address value "300" is stored in the field of the HDD 2 in the column of the reception time of the 100th second, the data reading out unit 114 decides that the data at the reception time of the 100th second are stored in a region beginning at the address "300" of the HDD 2.
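The lookup just described might be sketched as follows (the helper locate is an illustrative assumption operating on the dictionary form of the data management table 134 sketched earlier):

```python
def locate(data_management_table, reception_time):
    """Return (hdd_number, start_address) for the data of the given
    reception time, examining HDDs in ascending order of the HDD number
    and skipping "-" (None) entries."""
    row = data_management_table.get(reception_time, {})
    for hdd_number in sorted(row):
        address = row[hdd_number]
        if address is not None:        # "-" means not stored in this HDD
            return hdd_number, address
    return None                        # no HDD holds data for this time

# Data received at the 100th second are found at address 300 of the HDD 2.
table = {100: {1: None, 2: 300, 3: None, 4: None}}
assert locate(table, 100) == (2, 300)
```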
In the reading out process by the data reading out unit 114, data from the 300th address to an address immediately before the 480th address of the HDD 1 are read out for 45 seconds from the 200th second to time immediately before the 245th second as indicated by a broken line frame 91. The reading out process for the HDD 1 within the period of the broken line frame 91 corresponds to a reading out process of data within time periods from the 115th second to the 119th second, from the 130th second to the 134th second and from the 145th second to the 149th second of the reception time written in the HDD 1 within the period of the broken line frame 90.
Then, the data reading out unit 114 reads out data from the 300th address to the address immediately before the 480th address of the HDD 2 for 45 seconds from the 245th second to time immediately before the 290th second as indicated by a broken line frame 92. The reading out process for the HDD 2 within the period of the broken line frame 92 corresponds to a reading out process of data within time periods from the 100th second to the 104th second, from the 135th second to the 139th second and from the 150th second to the 154th second of the reception time, written in the HDD 2 within the period of the broken line frame 90.
Then, the data reading out unit 114 reads out data from the 300th address to the address immediately before the 480th address of the HDD 3 for 45 seconds from the 290th second to time immediately before the 335th second as indicated by a broken line frame 93. The reading out process for the HDD 3 within the period of the broken line frame 93 corresponds to a reading out process of data within time periods from the 105th second to the 109th second, from the 120th second to the 124th second and from the 155th second to the 159th second of the reception time, written in the HDD 3 within the period of the broken line frame 90.
Then, the data reading out unit 114 reads out data from the 300th address to the address immediately before the 480th address of the HDD 4 for 45 seconds from the 335th second to time immediately before the 380th second as indicated by a broken line frame 94. The reading out process for the HDD 4 within the period of the broken line frame 94 corresponds to a reading out process of data within time periods from the 110th second to the 114th second, from the 125th second to the 129th second and from the 140th second to the 144th second of the reception time, written in the HDD 4 within the period of the broken line frame 90.
The readout data read out within the periods of the broken line frames 91 to 94 by the data reading out unit 114 are temporarily stored into the read buffer 132 in the memory 13. Then, the readout data stored in the read buffer 132 are reconstructed in chronological order of the reception time by the data integration unit 115. The reconstructed data for the period from the 100th second to time immediately before the 160th second are transmitted to the reading out client apparatus 4 by the reply transmission unit 116.
In the present embodiment, if a readout request for data written into the plurality of HDDs in the past is received during a writing process into the plurality of HDDs, then part of the plurality of HDDs is selected as an HDD for reading out. While the data of the reading out target are read out from the selected HDD for reading out, the writing of stream data into the remaining HDDs is continued. Since reading out from part of the HDDs is performed while writing of the received data is continued into the number of HDDs required to accumulate the stream data without loss, it is possible to reduce the number of times of movement of the head of a disk apparatus such as an HDD and to improve the performance of the reading out process of the storage server.
In the example of the storage server 1 described above, the four HDDs 15 to 18 are used; however, the number of HDDs is not limited to four.
Here, it is assumed that the number of HDDs included in the storage server is M and that, of the M HDDs, the number of surplus HDDs other than those necessary for accumulating the stream data without loss is N (M and N are natural numbers). In this case, if a readout request for data written in the past is received while stream data are being written into the plurality of HDDs included in the storage server, then a reading out process can be performed from the N HDDs while the divided stream data are written into the (M−N) HDDs.
Although the number N of surplus HDDs can be determined from the relationship between the flow rate of the stream data and the writing performance of the HDDs, it can generally be predicted roughly at the time of system design of the storage server 1. Here, it is assumed that the maximum flow rate (bandwidth) of the stream data that flow in the network 6 is B [MB/s], the number of HDDs is M, the maximum writing performance of each HDD is b [MB/s], and the number of HDDs that can perform a reading out process simultaneously is N. In this case, it is only necessary for the system configuration of the storage server 1 to satisfy the condition B <= (M−N)*b, and if the values of B, M and b are determined, then the value of N can be determined.
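A small sketch of this sizing rule, using the illustrative function name surplus_hdd_count and the figures of the present embodiment (B = 12 MB/s, M = 4, b = 4 MB/s):

```python
import math

def surplus_hdd_count(stream_bandwidth_mb_s: float,
                      hdd_count: int,
                      hdd_write_mb_s: float) -> int:
    """Largest N that still satisfies B <= (M - N) * b, that is, the number
    of HDDs that may be reading while the rest absorb the full stream."""
    needed_for_writing = math.ceil(stream_bandwidth_mb_s / hdd_write_mb_s)
    return max(hdd_count - needed_for_writing, 0)

assert surplus_hdd_count(12, 4, 4) == 1   # one HDD is free for reading out
```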
In a large-scale storage server, several tens to several hundreds of HDDs are sometimes used, and in this case, even if only several percent of the writing performance is reserved as a surplus, the system can be designed such that a plurality of (N) HDDs are assured for use in a reading out process during data writing. It is to be noted that, if it is known that the maximum flow rate (bandwidth) of the stream data varies, then the number N may be changed dynamically in accordance with the variation of the flow rate of the stream data.
Now, a program process for executing a writing process and a reading out process by the functional blocks (111 to 116) described above is described.
At step S103, it is decided whether or not the write buffer 131 is full or a given amount of data (for example, an amount of data that becomes a target for division into units of a stripe) has been written into the write buffer 131. If the decision at step S103 is YES, then the request reception unit 111 switches the write buffer that is the target of writing and activates a writing process into the HDDs.
If the decision at step S103 is NO, then it is decided at step S105 whether or not data that have not yet been written into a write buffer remain in the data of the received write request. If such data remain (YES at step S105), then the processing returns to step S102. If it is decided at step S105 that writing of all the data is completed (NO at step S105), then the processing is ended at step S106. The amount of data in one write request is indefinite and may be smaller than the capacity of the write buffer 131. In this case, the storage server waits for data of a next write request to arrive until the write buffer 131 is filled with data. Conversely, if the writing client apparatus 3 transmits write data without taking the capacity of the write buffer 131 of the storage server 1 into consideration, then data exceeding the buffer capacity are sometimes received on the basis of one write request. The decision process at step S105 is provided for this reason.
If the write buffer 131 is full and does not allow all of the received data to be written, then the request reception unit 111 can return a response to the writing client apparatus 3, which is the transmission source of the write request, requesting a retry of the write request. With this response, the data relating to the retransmitted write request can be stored into the write buffer 131 after the data in the write buffer 131 have been written into the HDDs and free space has been generated in the write buffer 131. It is to be noted that the number of retries can also be reduced by providing a certain margin in the capacity of the write buffer 131.
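A sketch of this write-request handling under the assumptions above (the helper handle_write_request and the start_hdd_write callback are illustrative; it reuses the WriteBufferPair sketch shown earlier and, as a simplification, activates a writing process only when the buffer becomes exactly full):

```python
RETRY = "retry"   # response asking the writing client apparatus to retry
OK = "ok"

def handle_write_request(buffer_pair, packets, start_hdd_write):
    """Append received packets to the accumulation buffer, activate a
    writing process into the HDDs whenever a full stripe group has been
    accumulated, and ask the client to retry when no space remains."""
    for packet in packets:
        if not buffer_pair.append(packet):
            # Write buffer full: the remaining data are re-sent by the
            # client after a writing process frees space in the buffer.
            return RETRY
        if len(buffer_pair.accumulating) == buffer_pair.capacity:
            start_hdd_write(buffer_pair.swap_for_writing())
    return OK
```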
After the HDDs of the writing destination of the data are determined at step S113, the data writing unit 113 performs a writing process in units of a stripe in parallel into the writable HDDs (step S114). When the writing of all of the data stored in the write buffer 131 into the plurality of HDDs comes to an end, the data writing unit 113 updates the data management table 134 described hereinabove at step S115 and ends the processing at step S116.
Then, the data reading out unit 114 repeats the processes from step S124 to step S127, in a loop from step S123 to step S128, for all of the listed up HDDs. Steps S123 and S128 signify that the processes between them are to be repeated. At step S124, N HDDs to be made a reading out target are selected from among the listed up HDDs. Then, for the N selected HDDs, the flags of the corresponding HDD numbers in the writable HDD table 133 are reset to zero to establish a writing suppression state. In other words, the writing process of stream data into the selected HDDs is interrupted. It is to be noted that the N HDDs of the reading out target may be selected arbitrarily as long as the number of writing target HDDs required to store the stream data without loss is secured, and the selection order may be determined in advance or may be determined at random.
At step S125, it is decided whether or not the writing process that has been performed up to that point of time for the N HDDs of the reading out target selected at step S124 is completed. If the writing process is still being executed (NO at step S125), then the data reading out unit 114 waits until the writing process is completed.
If the writing into the N selected HDDs is completed (YES at step S125), then the processing advances to step S126, at which a reading out process of the readout request target data from the N selected HDDs is performed and the read out data are stored into the read buffer 132. When the reading out process from the N selected HDDs is completed, the processing advances to step S127, at which the flags of those HDDs in the writable HDD table 133 are returned to "1" and the HDDs to be made the next reading out target are selected. Thereafter, similar processes are repeated from step S124.
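The loop of steps S123 to S128 might be sketched as follows (the callables wait_for_write_completion and read_stripes are assumptions standing in for the completion check at step S125 and the actual HDD access; only the flag handling follows the description above):

```python
def read_requested_data(target_hdds, n, writable_hdd_table,
                        wait_for_write_completion, read_stripes, read_buffer):
    """Process the listed up HDDs N at a time: suppress writing to the
    selected HDDs via the writable HDD table, read their target data into
    the read buffer, then make them writable again."""
    for i in range(0, len(target_hdds), n):
        selected = target_hdds[i:i + n]
        for hdd in selected:
            writable_hdd_table[hdd] = 0        # interrupt writing (step S124)
        for hdd in selected:
            wait_for_write_completion(hdd)     # step S125
            read_buffer.extend(read_stripes(hdd))   # step S126
        for hdd in selected:
            writable_hdd_table[hdd] = 1        # allow writing again (step S127)
```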
As regards the amount of data to be read out at step S126, when the capacity of the read buffer 132 is greater than the size of the data that are the reading out target of the readout request, all of the data of the reading out target can be read out from the N HDDs. On the other hand, when the capacity of the read buffer 132 is smaller than the data size of the reading out target of the readout request, only part of the data, within a fixed range of the reception time, are read out from the N HDDs.
After data have been read out from all of the reading out target HDDs into the read buffer 132 by the processes at steps S123 to S127, the data integration unit 115 reconstructs the data stored in the read buffer 132 into data in chronological order at step S129. Then, the reply transmission unit 116 transmits the data reconstructed by the data integration unit 115 to the reading out client apparatus 4 (step S130). If all of the reading out target data based on the readout request have been read out (YES at step S131), then the reading out process is completed (step S132).
If data that have not been read out from the plurality of HDDs remain among the reading out target data of the readout request (NO at step S131), then the processing returns to step S123, and the processes at steps S123 to S130 are performed again for the data that have not yet been read out. Similar operation is thereafter repeated until all of the reading out target data are read out. In this case, the reply transmission unit 116 transmits the read out data to the reading out client apparatus 4 in a plurality of separate transmissions, or temporarily waits until the data to be transmitted next are read out from an HDD, until all of the data of the reading out target have been transmitted.
As described above, with the working example disclosed herein, if a writing process and a reading out process into and from a plurality of HDDs compete with each other in the storage server 1 that includes a plurality of disk storages such as HDDs, then the reading out process is performed from only part of the HDDs. With such a reading out process, the frequent movements of the magnetic heads that occur when the writing process and the reading out process are repeated alternately for all of the HDDs can be suppressed to a minimum.
When a writing process and a reading out process are switched between different addresses, a waiting period occurs, such as the seek time during which the head of the HDD moves and the search time during which the magnetic disk rotates to the position at which the data to be accessed after the movement of the head are stored. The size of the write buffer 131 has an upper limit depending upon the limitation of the memory capacity and so forth, and it is assumed, for example, that the time for which data are accumulated into the write buffer 131 is 500 milliseconds. Further, it is assumed that the sum of the seek time and the search time of an HDD when switching from a writing operation to a reading out operation and then back from the reading out operation to the writing operation is 50 milliseconds. In this case, if the currently available processing method of frequently switching between a writing process and a reading out process for all HDDs is applied, then 10% of the time is consumed every time the switching between the writing operation and the reading out operation is performed for each HDD.
It is assumed that the currently available processing method, in which switching between a writing process and a reading out process is performed frequently for all HDDs, is applied to the four HDDs 15 to 18 described above.
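As a quick check of the figures above (500 milliseconds of accumulation per write buffer and 50 milliseconds of seek and search time per switch), the share of time lost when every HDD switches between writing and reading once per cycle is:

```python
accumulation_ms = 500      # time to fill the write buffer 131
switch_overhead_ms = 50    # seek time plus search time per write/read switch
assert switch_overhead_ms / accumulation_ms == 0.10   # 10% of each HDD's time
```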
In contrast, with the technology disclosed herein, while the number of HDDs required for the writing process to accumulate the stream data without loss is secured, the frequent movements of the heads that perform access can be suppressed by reading out data from the part of the HDDs that is not performing a writing process. As a result, the reading out performance when writing and reading out of stream data compete with each other in the storage server 1 can be improved.
While the preferred working example of the present embodiment has been described, the present embodiment is not limited to the particular working example, and various modifications and alterations are possible. For example, when the data reading out unit 114 determines the N HDDs that are to become the target of a reading out process, the number N of HDDs may be changed dynamically in accordance with the amount of write data received from the writing client apparatus 3 per fixed period of time. Further, a plurality of regions may be provided for the read buffer 132 as well as for the write buffer 131 such that a data transmission process to the reading out client apparatus 4 by the reply transmission unit 116 and a data reading out process from the HDDs by the data reading out unit 114 can be performed in parallel with each other.
It is to be noted that a computer program that causes the CPU 11 to execute the functions of the functional blocks (111 to 116) described above may be stored in a computer readable recording medium.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.