The present application claims priority to and incorporates by reference the entire contents of Japanese Patent Application No. 2012-029674 filed in Japan on Feb. 14, 2012 and Japanese Patent Application No. 2013-023797 filed in Japan on Feb. 8, 2013.
1. Field of the Invention
The present invention relates to a multicore processor including a plurality of cores.
2. Description of the Related Art
Conventionally, a tightly-coupled multicore processor system is known in which a plurality of cores share a main memory. As an example of such a multicore processor system, as described in Japanese Patent Application Laid-open No. 57-161962, a configuration has been employed in which a main memory is provided with message exchange buffers for respective cores, and data is exchanged via the exchange buffers.
Specifically, a core on the transmitting side sets data in a message exchange buffer in the shared memory, and thereafter sends an interrupt request to a core on the receiving side. The core on the receiving side acquires the data from the message exchange buffer and sets the data in a receiving buffer. After completion of a requested process with the received data, the core on the receiving side sets a message indicating completion of the process in the message exchange buffer. The core on the receiving side sends an interrupt request to the core on the transmitting side, and the core on the transmitting side receives the message indicating the completion of the process from the message exchange buffer.
However, in the inter-core communication of a system as described above, while inter-core communication is performed for one task, writes to the memory by other tasks are excluded. A wait therefore occurs in the process, which may reduce the processing speed.
Therefore, there is a need to improve the processing speed of a multicore processor that processes a plurality of tasks.
According to an embodiment, there is provided a multicore processor that includes a plurality of cores; a shared memory that is shared by the cores and that is divided into a plurality of storage areas whose writable data sizes are determined in advance; a receiving unit that receives a task given to the cores; and a writing unit that writes the received task in one of the storage areas that is set in advance according to a data size of the task.
The above and other objects, features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Exemplary embodiments of the present invention will be explained in detail below with reference to the accompanying drawings.
A multicore processor 1 illustrated in
The implementation I/Fs 11 and 21 are interfaces that accept a received task as a processing instruction to be executed by the processor. Because the stub I/Fs 12 and 22 are unable to directly call the implementation I/Fs 21 and 11 of the other core, they function as logically-set stubs that can virtually call the system of the other core. The stub I/F 12 of the first core 10 is set as an interface that is logically the same as the implementation I/F 21 of the second core 20. Conversely, the stub I/F 22 of the second core 20 is set as an interface that is logically the same as the implementation I/F 11 of the first core 10.
The task transmitting units 13 and 23 serve as writing units that write a task received by a core into the main memory 30. Upon writing the task, the task transmitting unit 13 sends a write notice to the task receiving unit 24 of the second core 20, and the task transmitting unit 23 sends a write notice to the task receiving unit 14 of the first core 10. During the write to the main memory 30, the task transmitting units 13 and 23 perform exclusive control to prohibit writing data due to other processes to a storage area of the main memory 30. The task receiving units 14 and 24 serve as receiving units that perform a process for reading data from a specified location in the main memory 30 upon reception of the write notice from the task transmitting units 13 and 23. In this way, data is exchanged between the first core 10 and the second core 20 via the main memory 30.
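The write-then-notify exchange described above can be sketched as follows. This is a minimal single-process simulation, not the embodiment's implementation: the function names (`write_task`, `on_write_notice`) and the dict-based section layout are illustrative assumptions, and a simple list stands in for the inter-core write notice.

```python
# Minimal sketch of the writing and receiving units described above.
# Names and data layout are illustrative, not from the source.

sections = {"section_1": {"in_use": False, "data": None}}
notices = []  # stands in for write notices sent between cores


def write_task(section_name, payload):
    """Writing unit: claim the section, write the task, send a write notice."""
    section = sections[section_name]
    if section["in_use"]:            # exclusive control: section already taken
        return False                 # the caller must wait or retry
    section["in_use"] = True         # flag stays set for the whole exchange
    section["data"] = payload
    notices.append(section_name)     # write notice to the receiving core
    return True


def on_write_notice():
    """Receiving unit: read the data from the location named in the notice."""
    section_name = notices.pop(0)
    return sections[section_name]["data"]
```

Note that the in-use flag is deliberately left set after the write: as described later, it is cleared only when the transmitting core has read back the execution result.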
The main memory 30 (shared memory) includes three sections (storage areas). A first section 31 is used to write and read data whose size is 32 bytes or smaller. A second section 32 is used to write and read data whose size is greater than 32 bytes and equal to or smaller than 1 kilobyte. A third section 33 is used to write and read data whose size is greater than 1 kilobyte and equal to or smaller than 65 kilobytes. In the embodiment, a case is illustrated in which the sections all have different sizes; however, a plurality of sections corresponding to the same size may be provided. The main memory 30 further has an area (not illustrated), outside the sections 31 to 33, that stores address information indicating the position range of each section and flag information indicating whether or not each of the sections 31 to 33 is in use.
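The size-based routing to a section can be sketched as follows. The size bounds (32 bytes, 1 kilobyte, 65 kilobytes) follow the embodiment; the function and section names are illustrative assumptions.

```python
# Illustrative sketch of choosing a shared-memory section by task data size.
# The three size classes follow the embodiment; names are hypothetical.

SECTION_LIMITS = [
    ("section_1", 32),         # up to 32 bytes
    ("section_2", 1 * 1024),   # >32 bytes, up to 1 kilobyte
    ("section_3", 65 * 1024),  # >1 kilobyte, up to 65 kilobytes
]


def select_section(data_size):
    """Return the smallest section class that can hold the data."""
    for name, limit in SECTION_LIMITS:
        if data_size <= limit:
            return name
    raise ValueError("task of %d bytes exceeds the largest section" % data_size)
```

Because each size class maps to a different section, two concurrent tasks of different sizes land in different storage areas and do not contend for the same exclusive lock.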
A flow of the process for exchanging data between cores will be explained below with reference to
Then, as shown by a line (4), the task transmitting unit 13 specifies the section 31, 32, or 33 of the main memory 30 to which the data has been written, and sends a notice of the specified section to the task receiving unit 24 of the second core 20. As shown by a line (5), the task receiving unit 24 that has received the notice reads the written task data from the specified section 31, 32, or 33 of the main memory 30. As shown by a line (6), the task receiving unit 24 calls the implementation I/F 21 and sends the read task data to the implementation I/F 21. As shown by a line (7), the implementation I/F 21 that has received the task data executes processing based on the task data, and sends an execution result as a reply to the task receiving unit 24. As shown by a line (8), the task receiving unit 24 writes the received execution result in the corresponding section 31, 32, or 33 of the main memory 30. As shown by a line (9), the task receiving unit 24 notifies the task receiving unit 14 of the first core 10 that the execution result is written. As shown by a line (10), the task receiving unit 14 of the first core 10 reads the execution result of the task performed by the second core 20 from the specified section 31, 32, or 33 of the main memory 30. At this time, the exclusive control on the main memory 30 for the task 1 is terminated; specifically, the flag information indicating whether or not the corresponding section is in use is updated. As shown by a line (11), the task receiving unit 14 sends the read execution result of the task to the stub I/F 12 that has been called. Finally, as shown by a line (12), the stub I/F 12 sends the execution result as a reply to the task 1 and completes the task processing. When a task is given to the second core 20, the same process is performed with the roles reversed.
The flow of the task processing as described above will be explained below with reference to sequence diagrams in
Subsequently, the task transmitting unit 13 writes the function ID and an argument to the corresponding section 31, 32, or 33 of the main memory 30 (Step S105). After the above-described processes, the stub I/F 12 enters a wait state until receiving a reply from the second core 20 (Step S106). Subsequently, the task transmitting unit 13 notifies the task receiving unit 24 of the second core 20 about the write location in the section 31, 32, or 33 of the main memory 30 (Step S107). The task receiving unit 24 sends a function call to the implementation I/F 21, causes the processing to be executed via the implementation I/F 21, and receives a processing result (Step S108).
The task receiving unit 24 writes a function ID and a return value, which are obtained as the processing result of the task, in the main memory 30 (Step S109). In this case, the section 31, 32, or 33 used for the write is the same as the section ensured at Step S103. Subsequently, the task receiving unit 24 notifies the task receiving unit 14 of the first core 10 about location information on the main memory 30 in which the return value is written (Step S110). The task receiving unit 14 reads the function ID and the return value from the main memory 30 based on the specified location information (Step S111). At the same time, the task receiving unit 14 updates the flag information on the corresponding section 31, 32, or 33 of the main memory 30 to "not in use" (Step S112). The task receiving unit 14 notifies the stub I/F 12 about the return value (Step S113), and the return value is returned to the task 1 (Step S114).
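The full request/reply sequence, from writing the function ID and argument through releasing the in-use flag after the return value is read, can be condensed into the following sketch. The two cores are collapsed into one function for readability; `call_remote` and the dict-based section are hypothetical names, and the step comments map back to the sequence above.

```python
# Hedged sketch of the request/reply sequence (roughly Steps S105-S114),
# collapsed into a single function. All names are illustrative.

section = {"in_use": False, "data": None}


def call_remote(func_id, arg, implementation):
    """First core: write the call, let the second core execute, read the reply."""
    assert not section["in_use"]             # section must be free (S103)
    section["in_use"] = True                 # claim the section and write
    section["data"] = (func_id, arg)         # function ID + argument (S105)

    # --- second core: read the call, execute, write the return value ---
    fid, a = section["data"]                 # read at the notified location (S107)
    section["data"] = (fid, implementation(a))  # same section as the call (S109)

    # --- first core: read the return value and release the section ---
    fid, ret = section["data"]               # read function ID and value (S111)
    section["in_use"] = False                # flag back to "not in use" (S112)
    return ret                               # return value reaches the task (S114)
```

For example, `call_remote(7, 6, lambda x: x * x)` writes the pair `(7, 6)`, has the stand-in implementation square the argument, and leaves the section free again once the result is read back.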
With reference to
In the multicore processor 1 of the embodiment as described above, when data is exchanged between a plurality of cores via the main memory 30, the section 31, 32, or 33 of the main memory 30 to be used is selected according to the data size of the task. Therefore, when a plurality of tasks are processed in parallel, it is possible to reduce the frequency with which a wait occurs because write to the main memory 30 is prohibited by exclusive control. As a result, it is possible to improve the processing speed of a multicore processor system that processes a plurality of tasks.
As an example of tasks processed in parallel, a process for acquiring a management screen of a printer or the like via the HTTP protocol may run alongside a process for controlling the state of the printer via the SNMP protocol. Even when such tasks are frequently requested at the same time, wait times for memory access can be prevented, improving the processing speed.
Furthermore, the task transmitting unit 13 can receive and process a new task as soon as the write to the main memory 30 is completed, which also improves the processing speed.
In the embodiment, the main memory 30 has three sections; however, the number of sections can be changed appropriately. Furthermore, a plurality of sections corresponding to the same data size may be provided. The data size that can be stored in each section of the main memory 30 is not limited to the example illustrated in the embodiment, and the combination of data sizes may be changed arbitrarily.
According to an embodiment of the present invention, it is possible to improve the processing speed of a multicore processor that processes a plurality of tasks.
Although the invention has been described with respect to specific embodiments for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.
Number | Date | Country | Kind
---|---|---|---
2012-029674 | Feb 2012 | JP | national
2013-023797 | Feb 2013 | JP | national