The present invention relates to an information processing system, and more particularly, to efficient access to a storage, particularly, a non-volatile memory.
With the development of communication techniques such as the Internet and the improvement of storage techniques, recording density has increased and the amount of data handled by companies and individuals has grown significantly; as a result, analyzing the connections (also referred to as a "network") of large-scale data has recently become important. In particular, among connections of data occurring in the natural world, many graphs have a characteristic called scale free, and analyzing a large-scale graph having the scale-free characteristic has become important (Patent Document 1).
The graph is configured with a vertex and an edge as illustrated in
As a representative graph analysis technique, there is a graph process using a bulk synchronous parallel (BSP) model (Non-Patent Document 1). In this technique, each vertex performs a calculation based on the value of its own vertex, the value of a connected edge, the vertex connected by the edge, and a message transmitted to the vertex, and transmits a message to another vertex according to the calculation result. The process is delimited in synchronization with the calculations of the vertices, and each delimited unit is referred to as a "superstep." The process is performed by repeating supersteps.
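As a minimal in-memory sketch of the BSP model described above (the per-vertex calculation shown, propagating the minimum value, is only an illustrative example and not the specific technique of Non-Patent Document 1):

```python
# Minimal sketch of a BSP-style graph process (hypothetical names; the per-vertex
# calculation is an illustrative example, not the technique of Non-Patent Document 1).
from collections import defaultdict

class Vertex:
    def __init__(self, vid, value, edges):
        self.vid = vid          # vertex ID
        self.value = value      # value of the vertex
        self.edges = edges      # list of (destination vertex ID, edge value)

def compute(vertex, messages):
    """Per-vertex calculation: returns (new value, list of (destination, message))."""
    new_value = min([vertex.value] + messages)
    outgoing = [(dst, new_value) for dst, _ in vertex.edges] if new_value != vertex.value else []
    return new_value, outgoing

def run_bsp(vertices, num_supersteps):
    inbox = defaultdict(list)               # messages received in the previous superstep
    for _ in range(num_supersteps):         # one iteration = one superstep
        outbox = defaultdict(list)
        for v in vertices.values():
            v.value, outgoing = compute(v, inbox[v.vid])
            for dst, msg in outgoing:
                outbox[dst].append(msg)     # messages are used in the *next* superstep
        inbox = outbox                      # synchronization barrier between supersteps
    return vertices
```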
In the graph process using the BSP model, it is necessary to store a message transmitted at a certain superstep until the message is used for a calculation in the next superstep. The message and the graph data (for example, data in which a vertex ID, a value, the vertex ID of a vertex connected by an edge, and the value of the edge are associated as illustrated in
The present invention was made in light of the foregoing, and it is an object of the present invention to provide an information processing system and a data processing method in which random access having fine granularity does not occur frequently.
In order to solve the above problem and achieve the above object, an information processing system according to the present invention causes an information processing apparatus, including a main storage unit and a storage unit capable of reading and writing data including an identifier in predetermined units, to collect the data by a predetermined amount and process it. The information processing apparatus includes: a preprocessing unit that allocates the identifier to one or more collected groups; the main storage unit, which includes, for each group, a buffer having the size of the predetermined unit; the storage unit, which stores the data written in the buffer for each predetermined unit and each group; a write processing unit that acquires the data allocated to each group, writes the acquired data in the buffer, determines whether or not data of the predetermined unit has been written in the buffer, and causes the storage unit to store the data written in the buffer when the data of the predetermined unit is determined to have been written; and a read processing unit that reads the stored data out to the main storage unit for each group, extracts the read data, and executes the process.
Further, the present invention provides a data processing method performed by the information processing system.
According to the present invention, it is possible to provide an information processing system and a data processing method in which random access having fine granularity does not occur frequently.
Hereinafter, an embodiment of an information processing system and a data processing method according to the present invention will be described in detail with reference to the appended drawings. Hereinafter, there are cases in which a non-volatile memory is abbreviated as an NVM.
The preprocessing unit 3091 decides a server device that performs the graph process for each of all vertices, groups the vertices by server device, further allocates and defines a plurality of collected sub groups so that a predetermined amount of vertices is collected and calculated together, and associates a local vertex ID with each vertex within each sub group. The graph data read processing unit 3092 reads the graph data from the shared storage device 306, and transmits the data of each vertex of the graph data through the communication unit 3096 to the server device decided by the preprocessing unit 3091 to perform the process. The graph data write processing unit 3093 receives the transmitted graph data through the communication unit 3096, and writes the graph data in the non-volatile memory 311.
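A minimal sketch of this preprocessing, assuming a simple modulo assignment of vertices to server devices and fixed-size sub groups (the assignment scheme and the names are assumptions for illustration, not the method prescribed by the embodiment):

```python
# Sketch of the preprocessing performed by the preprocessing unit 3091, assuming a
# simple modulo assignment of vertices to server devices and fixed-size sub groups.
def preprocess(vertex_ids, num_servers, sub_group_size):
    assignment = {}                 # vertex ID -> (server, sub group ID, local vertex ID)
    next_local = {}                 # (server, sub group ID) -> next local vertex ID
    counts = {}                     # server -> number of vertices assigned so far
    for vid in vertex_ids:
        server = vid % num_servers              # server device that processes this vertex
        idx = counts.get(server, 0)
        counts[server] = idx + 1
        sub_group = idx // sub_group_size       # collect a predetermined amount per sub group
        key = (server, sub_group)
        local_id = next_local.get(key, 0)       # local vertex ID within the sub group
        next_local[key] = local_id + 1
        assignment[vid] = (server, sub_group, local_id)
    return assignment
```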
The message read processing unit 3094 performs the process of a superstep for the vertices (identifiers) allocated to its own server device, and transmits messages to other vertices through the communication unit 3096 according to the results. The message write processing unit 3095 receives, through the communication unit 3096, the messages transmitted from the server devices to which other vertices are allocated, and writes the messages in the non-volatile memory 311. The communication unit 3096 performs transmission and reception of various kinds of information such as the messages and the graph data with the other server devices. Specific processes performed by the above components will be described later using flowcharts.
Then, upon receiving the graph data, the graph data write processing unit 3093 executes the graph data NVM writing process (step S1003).
For example, as illustrated in
Further, the graph data write processing unit 3093 of each server device generates non-volatile memory write buffers (a vertex value non-volatile memory write buffer and an adjacency information non-volatile memory write buffer) and write data amount counters (a vertex value write data amount counter and an adjacency information write data amount counter) for the vertex value and the adjacency information, and initializes the write data amount counters to zero (step S1102).
For example, as illustrated in
The graph data write processing unit 3093 generates vertex value write data amount counters 4031, one for each sub group, each of which is counted up by the amount of written data. Similarly, the graph data write processing unit 3093 generates adjacency information write data amount counters 4032, one for each sub group, each of which is counted up by the amount of written data.
Thereafter, the graph data write processing unit 3093 of each server device receives the vertex ID, the vertex value, and the adjacency information corresponding to one vertex from the transmitted graph data (step S1103), and calculates, from the vertex ID, the group ID of the sub group to which the received vertex ID, vertex value, and adjacency information belong (step S1104).
The graph data write processing unit 3093 of each server device adds an entry for the vertex ID to the vertex value data start position table 4041 and writes therein the value of the vertex value write data amount counter 4031 of the calculated sub group ID (step S1105), and writes the vertex value in the vertex value non-volatile memory write buffer 4021 of the calculated sub group ID (step S1106). In
Then, the graph data write processing unit 3093 of each server device determines whether or not the vertex value non-volatile memory write buffer 4021 has been fully filled (step S1107). When the vertex value non-volatile memory write buffer 4021 is determined to have been fully filled (Yes in step S1107), the graph data write processing unit 3093 writes the data of the vertex value non-volatile memory write buffer 4021 at that time in the non-volatile memory 405, and adds the written address of the non-volatile memory 405 to the entry of the sub group ID in the vertex value data address table 4061 (step S1108). On the other hand, when the graph data write processing unit 3093 of each server device determines that the vertex value non-volatile memory write buffer has not been fully filled (No in step S1107), the graph data write processing unit 3093 leaves the vertex value in the vertex value non-volatile memory write buffer without writing it in the non-volatile memory 405, and the process proceeds to step S1111. In
Thereafter, the graph data write processing unit 3093 of each server device clears the vertex value non-volatile memory write buffer 4021 of the sub group ID (step S1109), further writes the remainder in the vertex value non-volatile memory write buffer of the sub group ID (step S1110), and increments the value of the vertex value write data amount counter of the sub group ID by the data size of the vertex value (step S1111).
The graph data write processing unit 3093 of each server device performs the same process as steps S1105 to S1111 on the adjacency information. Specifically, the graph data write processing unit 3093 of each server device adds an entry for the vertex ID to the adjacency information data start position table 4042 and writes therein the value of the adjacency information write data amount counter 4032 of the calculated sub group ID (step S1112), and writes the adjacency information in the adjacency information non-volatile memory write buffer 4022 of the calculated sub group ID (step S1113). In
Then, the graph data write processing unit 3093 of each server device determines whether or not the adjacency information non-volatile memory write buffer 4022 has been fully filled (step S1114), and when the adjacency information non-volatile memory write buffer 4022 is determined to have been fully filled (Yes in step S1114), the graph data write processing unit 3093 writes data of the adjacency information non-volatile memory write buffer 4022 at that time in the non-volatile memory 405, and adds a written address of the non-volatile memory 405 to the entry of the sub group ID of the adjacency information data address table 4062 (step S1115). On the other hand, when the graph data write processing unit 3093 of each server device determines that the adjacency information non-volatile memory write buffer 4022 has not been fully filled (No in step S1114), the process proceeds to step S1118.
Thereafter, the graph data write processing unit 3093 of each server device clears the adjacency information non-volatile memory write buffer 4022 of the sub group ID (step S1116), further writes the remainder in the adjacency information non-volatile memory write buffer 4022 of the sub group ID (step S1117), and increments the value of the adjacency information write data amount counter 4032 of the sub group ID by the data size of the adjacency information (step S1118). It is possible to execute the process of steps S1105 to S1111 and the process of steps S1112 to S1118 in parallel as illustrated in
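The buffer handling of steps S1105 to S1118 can be sketched as follows for one kind of data (here the vertex value); the page size, the FakeNVM stand-in, and the method names are assumptions for illustration, not the embodiment's actual interfaces:

```python
# Sketch of the per-sub-group buffered write of steps S1105 to S1118: data is
# appended to a page-sized buffer, and a full page is written to the NVM while
# the remainder stays in the buffer. The NVM interface below is hypothetical.
PAGE_SIZE = 4096   # assumed NVM page size in bytes

class FakeNVM:
    """Hypothetical append-only stand-in for the non-volatile memory 405."""
    def __init__(self):
        self.pages = []
    def append(self, data):
        self.pages.append(data)
        return (len(self.pages) - 1) * PAGE_SIZE     # address at which the data was written

class SubGroupWriter:
    def __init__(self, nvm):
        self.nvm = nvm
        self.buffer = bytearray()      # non-volatile memory write buffer (in main storage)
        self.counter = 0               # write data amount counter
        self.addresses = []            # entries for the data address table of this sub group
        self.start_positions = {}      # vertex ID -> start position (data start position table)

    def write(self, vertex_id, data):
        self.start_positions[vertex_id] = self.counter          # start position of this vertex's data
        self.buffer += data
        while len(self.buffer) >= PAGE_SIZE:                    # buffer fully filled
            page, self.buffer = self.buffer[:PAGE_SIZE], self.buffer[PAGE_SIZE:]
            self.addresses.append(self.nvm.append(bytes(page))) # write one page, record its address
        self.counter += len(data)                               # count up by the data size
```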
The graph data write processing unit 3093 of each server device determines whether or not reception of the graph data of all vertices allocated to the server devices has been completed (step S1119), and when the reception of the graph data of all vertices allocated to the server devices is determined to have not been completed (No in step S1119), the process returns to step S1103, and the subsequent process is repeated.
Meanwhile, when the reception of the graph data of all vertices allocated to the server devices is determined to have been completed (Yes in step S1119), the graph data write processing unit 3093 of each server device executes the following process on all the sub groups of the server devices (steps S1120 and S1123).
The graph data write processing unit 3093 of each server device generates a sub group vertex value data start position table 4051 in which the entries of the vertex IDs included in the vertex value data start position table 4041 are collected for each sub group (step S1121). Then, the graph data write processing unit 3093 of each server device writes the sub group vertex value data start position table 4051 in the non-volatile memory 405, and adds the address at which the sub group vertex value data start position table 4051 of each sub group ID is written to the entry of the sub group ID of the vertex value data address table (step S1122). In
Further, similarly to the case of the vertex value, the graph data write processing unit 3093 of each server device executes the same process as steps S1120 to S1123 on the adjacency information (steps S1124 and S1127). Specifically, the graph data write processing unit 3093 of each server device generates a sub group adjacency information data start position table 4052 in which the entries of the vertex IDs included in the adjacency information data start position table are collected for each sub group (step S1125). Then, the graph data write processing unit 3093 of each server device writes the sub group adjacency information data start position table in the non-volatile memory 405, and adds the address at which the sub group adjacency information data start position table 4052 of each sub group ID is written to the entry of the sub group ID of the adjacency information data address table 4062 (step S1126). When the process of steps S1123 and S1127 ends, the graph data NVM writing process illustrated in
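Steps S1120 to S1127 can be sketched as follows, reusing the hypothetical SubGroupWriter and FakeNVM from the sketch above; the serialization of the table is an assumption for illustration:

```python
# Sketch of steps S1120 to S1127: collect the start position entries for each
# sub group, write the resulting table to the NVM, and record its address in the
# data address table entry of the sub group.
def finalize_sub_groups(writers, nvm):
    data_address_table = {}   # sub group ID -> [page addresses..., address of start position table]
    for sub_group_id, w in writers.items():
        table_bytes = repr(sorted(w.start_positions.items())).encode()  # assumed serialization
        table_address = nvm.append(table_bytes)          # write the sub group start position table
        data_address_table[sub_group_id] = w.addresses + [table_address]
    return data_address_table
```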
In the graph process illustrated in
First, the message read processing unit 3094 of each server device executes a non-volatile memory reading process of step S1005.
Then, the message read processing unit 3094 of each server device reads the adjacency information and the sub group adjacency information data start position table 4052 from the non-volatile memory 405 to the main storage 310 with reference to the address list of the sub group ID in the adjacency information data address table (step S1202).
The message read processing unit 3094 of each server device reads the messages of the sub group ID from the non-volatile memory 405 to the main storage 310 with reference to the address list of the sub group ID in a previous superstep message data address table 505, which was used for writing the messages processed in the previous superstep to the non-volatile memory 405 (step S1203). As will be described later, since each message and the local vertex ID within the sub group are associated and stored in a message non-volatile memory write buffer 5021 and the non-volatile memory 405, when the message is read from the non-volatile memory 405, the local vertex ID corresponding to the message can be known.
Since a message received in a certain superstep is used for a calculation in the next superstep, the current superstep message data address table 504 records the addresses at which messages are written in the superstep in which they are received, whereas the previous superstep message data address table 505 stores the addresses at which the messages received in the previous superstep (that is, the immediately previous superstep), which are read for the calculation, are written. Each time a superstep is switched, the message read processing unit 3094 clears the content of the previous superstep message data address table 505, and then replaces the previous superstep message data address table 505 with the current superstep message data address table 504. When the process of step S1203 ends, the process of step S1005 in
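This switching of the two tables between supersteps is essentially a double-buffer swap; a minimal sketch, assuming both tables are dictionaries mapping a sub group ID to an address list (a hypothetical representation):

```python
# Sketch of switching the current/previous superstep message data address tables
# (504 and 505): messages received in superstep k are read in superstep k+1.
def advance_superstep(tables):
    """tables = {'current': {...}, 'previous': {...}}, each mapping sub group ID -> address list."""
    tables['previous'].clear()                                    # clear the previous superstep table
    tables['previous'], tables['current'] = tables['current'], {} # then take over the current table
    return tables
```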
In the graph process illustrated in
For example, as illustrated in
The message read processing unit 3094 of each server device performs the process of step S1303 on all message data of the sub group ID read from the non-volatile memory 405 to the main storage 310 (steps S1302 and S1304). In step S1303, the message read processing unit 3094 of each server device counts up the number of messages corresponding to the local vertex ID in the generated message count table 702 (step S1303).
When the numbers of messages of the local vertex IDs have been counted up, the message read processing unit 3094 of each server device generates a message write index table 703, and initializes the write index of local vertex ID n to 0 when n is 0, and to the sum of the numbers of messages of local vertex IDs 0 to n−1 in the message count table generated in step S1301 when n is 1 or more (step S1305). The message write index table 703 is a table for deciding the write position of a message of each local vertex ID in the main storage 310, and the initial value indicates the start position of each local vertex ID when the messages of the sub group are sorted according to the local vertex ID.
Thereafter, the message read processing unit 3094 of each server device generates a sorted message region 704, having a size corresponding to all the messages of the sub group, for sorting the message data according to the local vertex ID for each sub group (step S1306), and performs the process of steps S1308 to S1309 on all the message data of the sub group ID read from the non-volatile memory 405 to the main storage 310 (steps S1307 and S1310).
The message read processing unit 3094 of each server device writes the message at the position of the write index corresponding to the local vertex ID in the message write index table 703 in the generated sorted message region 704 (step S1308), and counts up the write index corresponding to the local vertex ID in the message write index table 703 (step S1309). For example, as illustrated in
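The message count table 702, the message write index table 703, and the sorted message region 704 together amount to a counting sort keyed by the local vertex ID; a minimal sketch, assuming the messages of one sub group are already in the main storage as (local vertex ID, message) pairs:

```python
# Sketch of the sorting of steps S1301 to S1310: a counting sort of the messages
# of one sub group keyed by the local vertex ID.
def sort_messages(messages, num_local_vertices):
    """messages: list of (local vertex ID, message) read from the NVM for one sub group."""
    counts = [0] * num_local_vertices                    # message count table 702
    for local_id, _ in messages:
        counts[local_id] += 1
    write_index = [0] * num_local_vertices               # message write index table 703
    for n in range(1, num_local_vertices):
        write_index[n] = write_index[n - 1] + counts[n - 1]   # start position of local vertex ID n
    start = list(write_index)                            # keep the start positions for later reading
    sorted_region = [None] * len(messages)               # sorted message region 704
    for local_id, msg in messages:
        sorted_region[write_index[local_id]] = msg       # write at the current write index
        write_index[local_id] += 1                       # count up the write index
    return sorted_region, start, counts
```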
In the graph process illustrated in
Then, the message read processing unit 3094 of each server device extracts the vertex value with reference to the vertex value data start position table 4041 using the vertex ID as a key (step S1402), similarly extracts the adjacency information with reference to the adjacency information data start position table 4042 using the vertex ID as a key (step S1403), calculates the local vertex ID from the vertex ID, and extracts the messages destined for the vertex with reference to the messages sorted for each local vertex ID in the sorting process of
The message read processing unit 3094 of each server device performs the graph process calculation using the extracted vertex value, adjacency information, and messages, for example, through the technique disclosed in Non-Patent Document 1 (step S1405), and determines whether or not there is a message to be transmitted (step S1406). When it is determined that there is such a message (Yes in step S1406), the message read processing unit 3094 calculates the server device of the transmission destination from the vertex ID of the destination (step S1407), adds the destination vertex ID to the message, and transmits the resulting message to the server device of the transmission destination (step S1408). Further, when the server device or the local vertex ID is calculated from the vertex ID, the message read processing unit 3094 calculates the server device or the local vertex ID based on the correspondence relation of the vertex ID, the server device, and the local vertex ID illustrated in
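A minimal sketch of steps S1407 to S1408, assuming the modulo assignment from the earlier preprocessing sketch; `Comm` stands in for the communication unit 3096 and is hypothetical:

```python
# Sketch of steps S1407 to S1408: derive the destination server device from the
# destination vertex ID and prepend the ID to the message before transmission.
class Comm:
    """Hypothetical stand-in for the communication unit 3096."""
    def send(self, dest_server, payload):
        print(f"to server {dest_server}: {payload}")        # placeholder for actual transmission

def send_message(dest_vertex_id, message, num_servers, comm):
    dest_server = dest_vertex_id % num_servers              # server device of the transmission destination
    comm.send(dest_server, (dest_vertex_id, message))       # destination vertex ID added to the message
```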
Meanwhile, when it is determined that there is no message to be transmitted (No in step S1406), the message read processing unit 3094 of each server device reflects the vertex value updated by the graph process calculation in the vertex value read from the non-volatile memory 405 to the main storage 310 in step S1201 (step S1409). When the process of step S1409 ends, the process of step S1007 in
Then, the message read processing unit 3094 of each server device writes the vertex value, which was read from the non-volatile memory 405 to the main storage 310 in step S1201, back to the non-volatile memory 405 (step S1008), and notifies all the server devices 302 to 305 that the process in one superstep has ended (step S1010). Upon receiving this notification from all the server devices 302 to 305, the CPU 309 of each of the server devices 302 to 305 clears the content of the previous superstep message data address table 505, replaces the previous superstep message data address table 505 with the current superstep message data address table 504, determines whether or not the graph process has ended as all the supersteps have ended (step S1011), and ends the process without change when the graph process is determined to have ended (Yes in step S1011). On the other hand, when the graph process is determined to have not ended (No in step S1011), the process returns to step S1004, and the subsequent process is repeated.
Meanwhile, when the process of step S1003 ends, the message write processing unit 3095 of each server device also executes the process of step S1012 (the message reception/NVM writing process).
Further, the message write processing unit 3095 of each server device generates message non-volatile memory write buffers (the message non-volatile memory write buffers 5021), one for each sub group (step S1502), and receives a set of the destination vertex ID and the message (step S1503). The message write processing unit 3095 of each server device calculates the sub group ID and the local vertex ID from the destination vertex ID based on the correspondence relation of the vertex ID, the server device, and the local vertex ID illustrated in
The message write processing unit 3095 of each server device determines whether or not the message non-volatile memory write buffer 5021 of the sub group ID has been fully filled (step S1506). When the message non-volatile memory write buffer 5021 of the sub group ID is determined to have been fully filled (Yes in step S1506), the message write processing unit 3095 of each server device writes the data of the message non-volatile memory write buffer 5021 at that time in the non-volatile memory 405, and adds the written address of the non-volatile memory 405 to the entry of the sub group ID included in the current superstep message data address table 504 (step S1507). On the other hand, when the message non-volatile memory write buffer 5021 is determined to have not been fully filled (No in step S1506), the process proceeds to step S1510. For example, in
Thereafter, the message write processing unit 3095 of each server device clears the message non-volatile memory write buffer 5021 of the sub group ID (step S1508), and writes the remainder in the message non-volatile memory write buffer 5021 of the sub group ID (step S1509). Then, the message write processing unit 3095 of each server device determines whether or not the process in the superstep has ended in all the server devices (step S1510), and when the process in the superstep is determined to have not ended in any one of the server devices (No in step S1510), the process returns to step S1503, and the subsequent process is repeated. On the other hand, when the process in the superstep is determined to have ended in all the server devices (Yes in step S1510), the message write processing unit 3095 of each server device ends the message reception/NVM writing process illustrated in
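Steps S1503 to S1509 can be sketched as follows, reusing PAGE_SIZE, the FakeNVM stand-in, and the assignment mapping from the earlier sketches (all of them assumptions for illustration); the local vertex ID is stored in front of the message so that the destination vertex is known when the message is read back in the next superstep:

```python
# Sketch of the message reception/NVM writing process (steps S1503 to S1509),
# assuming the modulo assignment and per-sub-group page-sized buffers.
def receive_and_buffer(dest_vertex_id, message, assignment, buffers, nvm, addr_table):
    """buffers: sub group ID -> bytearray; addr_table: current superstep message data address table 504.
    `message` is assumed to be bytes."""
    _, sub_group_id, local_vertex_id = assignment[dest_vertex_id]
    # Associate the local vertex ID with the message (see step S1203).
    record = local_vertex_id.to_bytes(4, 'little') + message
    buf = buffers[sub_group_id]
    buf += record
    while len(buf) >= PAGE_SIZE:                              # buffer fully filled
        page = bytes(buf[:PAGE_SIZE])
        del buf[:PAGE_SIZE]                                   # keep the remainder in the buffer
        addr_table.setdefault(sub_group_id, []).append(nvm.append(page))
```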
When the process of step S1012 of
As the graph process using the BSP model is performed as described above, it is possible to perform all access to the non-volatile memory in units of page sizes while storing the graph data and the messages in the non-volatile memory, and it is possible to access the non-volatile memory efficiently without causing random access having fine granularity to occur frequently. In other words, the data necessary for a calculation in the graph process is the graph data (the value of the vertex and the adjacency information, that is, the ID of a vertex connected with the vertex by an edge and the value of the edge) and the data of the messages destined for the vertex. Since the data around one vertex is smaller than the page size, a plurality of vertices are collectively defined as a vertex group, the calculation of the graph process is performed in units of vertex groups, and the vertex group is defined so that the data necessary for the calculation of one vertex group is sufficiently larger than the page size. Thus, it is possible to arrange the data necessary for the calculation of the vertex group on the non-volatile memory in units of page sizes, to use the page size as the granularity of access to the non-volatile memory, and to suppress a decrease in the performance of the non-volatile memory access.
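As a worked example with assumed numbers (not values given in the embodiment), the number of vertices per vertex group can be chosen so that the group's data well exceeds one page:

```python
# Worked example with assumed numbers: how many vertices to collect per vertex
# group so that its data is sufficiently larger than the NVM page size.
PAGE_SIZE = 4096            # assumed NVM page size in bytes
BYTES_PER_VERTEX = 100      # assumed data around one vertex (value + adjacency + message)
TARGET_PAGES = 25           # make the group data sufficiently larger than one page

vertices_per_group = (TARGET_PAGES * PAGE_SIZE + BYTES_PER_VERTEX - 1) // BYTES_PER_VERTEX
print(vertices_per_group)   # -> 1024 vertices per group, about 100 KiB of data
```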
Further, the present embodiment has been described in connection with the example in which the message is transmitted in the graph process using the BSP model, but the present invention can also be applied to a shuffling phase and a sorting phase in MapReduce in which the total amount of the key-value pairs generated in a mapping phase per one of the server devices 302 to 305 is larger than the size of the main storage 310.
In the shuffling phase, the message read processing unit 3094 of each server device receives the key-value pair generated in the mapping phase, and writes the key-value pair in the non-volatile memory 405. A method of writing the key-value pair in the non-volatile memory 405 is illustrated in
Similarly to the example illustrated in
A method of reading the key-value pair from the non-volatile memory in the sorting phase is illustrated in
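A minimal sketch of the shuffling and sorting phases under this scheme, assuming the key-value pairs are routed to sub groups by a hash of the key and that each sub group fits in the main storage 310 when read back (the routing function and the in-memory stand-in for the NVM pages are assumptions):

```python
# Sketch of the shuffling/sorting application: key-value pairs generated in the
# mapping phase are grouped into sub groups by a hash of the key, buffered, and
# later read back one sub group at a time and sorted in the main storage.
from collections import defaultdict

def shuffle(pairs, num_sub_groups):
    """pairs: iterable of (key, value). Returns sub group ID -> list of pairs (stand-in for NVM pages)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[hash(key) % num_sub_groups].append((key, value))   # route by key
    return groups

def sort_phase(groups):
    """Read back one sub group at a time and sort it; each sub group fits in the main storage."""
    for sub_group_id in sorted(groups):
        yield sub_group_id, sorted(groups[sub_group_id], key=lambda kv: kv[0])
```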
As the shuffling phase and the sorting phase are performed as described above, when a total amount of the key-value pair generated in the mapping phase per one of the server devices 302 to 305 is larger than the size of the main storage 310, the shuffling process and the sorting process using the non-volatile memory can be performed while performing all access to the non-volatile memory in units of page sizes.
The present invention is not limited to the above embodiment and includes various modified examples. For example, the above embodiment has been described in detail in order to facilitate understanding of the present invention, and the present invention is not necessarily limited to a configuration including all of the components described above. Further, some components of a certain embodiment may be replaced with components of another embodiment, and components of another embodiment may be added to the components of a certain embodiment. In addition, other components can be added to, deleted from, or replace some components of each embodiment.
| Filing Document | Filing Date | Country | Kind |
| --- | --- | --- | --- |
| PCT/JP2013/065696 | 6/6/2013 | WO | 00 |