The present invention relates to a control apparatus, a control method, and a program.
In recent years, image processing that joins images captured by a plurality of cameras to compose one wide image has become widespread. A graphics processing unit (GPU) specialized in image processing is used for such image processing.
In a case where live distribution of a wide image is performed, it is necessary to generate the wide image within a predetermined processing time. In programming using a GPU, a developer can set a priority for a task but cannot externally control resource allocation of the GPU. For that reason, it has been difficult to ensure that processing is completed within the determined processing time in a case where tasks with uneven processing times are simultaneously and continuously executed.
The present invention has been made in view of the above, and an object thereof is to output a processing result within a predetermined processing time.
A control apparatus of one aspect of the present invention is a control apparatus that controls a processing device that executes different types of processing, the control apparatus including: a monitoring unit that monitors a buffer used by each of the different types of processing executed by the processing device in order to transfer a task and estimates a load on the processing device; and a change unit that changes a processing content of processing with a lower priority among the different types of processing executed by the processing device to processing with a smaller load in a case where the load on the processing device is larger than a first threshold.
According to the present invention, it is possible to output a processing result within a predetermined processing time.
A configuration of a processing device of the present embodiment will be described with reference to the drawings.
The image processing unit 30 includes correction processing units 31A and 31B, a combining processing unit 32, an integration processing unit 33, and buffers 35A to 35G; it receives a plurality of images A and B as input, joins the input images together, and synthesizes and outputs one wide image. A program corresponding to each processing content is executed on a GPU to function as the correction processing units 31A and 31B, the combining processing unit 32, and the integration processing unit 33. Each of the correction processing unit 31A, the correction processing unit 31B, the combining processing unit 32, and the integration processing unit 33 corresponds to one process. The processing units transfer tasks to one another via the buffers 35A to 35G.
The correction processing units 31A and 31B input the images A and B, and correct inclination, luminance, hue, and the like of the images A and B. The correction processing units 31A and 31B store data of overlapping areas overlapping between adjacent images in the buffers 35C and 35D, and store data of non-overlapping areas not overlapping between the adjacent images in the buffers 35F and 35G. The images A and B are 4K images captured by different cameras, for example. Data of the images A and B are temporarily held in the buffers 35A and 35B, respectively. The correction processing unit 31A reads the image A from the buffer 35A and processes the image A. The correction processing unit 31B reads the image B from the buffer 35B and processes the image B.
The combining processing unit 32 reads the overlapping areas of the adjacent images A and B from the respective buffers 35C and 35D, sets a seam for the overlapping areas, and combines the overlapping areas of the adjacent images A and B. The combining processing unit 32 stores a combined overlapping area in the buffer 35E. The seam is a joint connecting the images A and B, and it is possible to improve combining quality by setting the seam not to be noticeable according to the images of the overlapping areas.
The integration processing unit 33 reads the combined overlapping area from the buffer 35E, reads the non-overlapping areas of the images A and B from the buffers 35F and 35G, respectively, integrates the overlapping area and the two non-overlapping areas, and outputs a wide image.
Note that the image processing unit 30 may input three or more images. In that case, the image processing unit 30 includes a number of correction processing units depending on the number of images to be input and a number of combining processing units depending on the number of overlapping areas.
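The data flow through the image processing unit 30 described above can be sketched as follows. This is a minimal single-process illustration that uses Python queues as stand-ins for the buffers 35A to 35G; the `correct`, `combine`, and `integrate` functions are hypothetical placeholders (the real processing units run as separate GPU processes, and the correction, seam setting, and blending themselves are omitted).

```python
from queue import Queue

# Stand-ins for buffers 35A-35G; each buffer transfers tasks between processing units.
buf_a, buf_b = Queue(), Queue()            # input images A and B (35A, 35B)
buf_ov_a, buf_ov_b = Queue(), Queue()      # overlapping areas (35C, 35D)
buf_combined = Queue()                     # combined overlapping area (35E)
buf_non_a, buf_non_b = Queue(), Queue()    # non-overlapping areas (35F, 35G)

def correct(image):
    # Hypothetical correction: fix inclination/luminance/hue, then split the
    # image into non-overlapping and overlapping areas (split point is arbitrary here).
    corrected = image  # the correction itself is omitted in this sketch
    mid = len(corrected) // 2
    return corrected[:mid], corrected[mid:]  # (non-overlapping, overlapping)

def combine(overlap_a, overlap_b):
    # Hypothetical combining: set a seam and join the two overlapping areas.
    seam = len(overlap_a) // 2
    return overlap_a[:seam] + overlap_b[seam:]

def integrate(non_a, combined, non_b):
    # Integrate the combined overlap with the two non-overlapping areas.
    return non_a + combined + non_b

# One frame through the pipeline; images are simple lists of pixel rows.
buf_a.put(["A0", "A1", "A2", "A3"])
buf_b.put(["B0", "B1", "B2", "B3"])

non_a, ov_a = correct(buf_a.get()); buf_non_a.put(non_a); buf_ov_a.put(ov_a)
non_b, ov_b = correct(buf_b.get()); buf_non_b.put(non_b); buf_ov_b.put(ov_b)
buf_combined.put(combine(buf_ov_a.get(), buf_ov_b.get()))

wide = integrate(buf_non_a.get(), buf_combined.get(), buf_non_b.get())
print(wide)  # ['A0', 'A1', 'A2', 'B3', 'B0', 'B1']
```

The point of the sketch is the transfer structure: each processing unit only reads from and writes to buffers, which is what allows the control unit 10 to estimate the load by watching buffer occupancy.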
The control unit 10 includes a monitoring unit 11 and a change unit 12, estimates a GPU load on the image processing unit 30 by monitoring the buffers 35A to 35G of the image processing unit 30, and controls a processing content of each of the different types of processing executed by the image processing unit 30 according to the GPU load.
The monitoring unit 11 monitors the buffers 35A to 35G and estimates the GPU load on the image processing unit 30. If the amount of data held by the buffers 35A to 35G, that is, the amount of data waiting to be processed is large, the monitoring unit 11 estimates that the load on the image processing unit 30 is high. The monitoring unit 11 may estimate the load on the image processing unit 30 on the basis of the total amount of data held by all the buffers 35A to 35G, or may estimate the load on the image processing unit 30 on the basis of the amount of data held by specific buffers (for example, buffers 35A, 35B, 35C, 35D, and 35E).
Note that the monitoring unit 11 may monitor other information as long as the GPU load can be estimated from it.
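One way to realize this estimation is to take the amount of data waiting in the monitored buffers as a proxy for the GPU load. The sketch below is an illustration under the assumption that buffer occupancy can be read as a count of queued items; the function name and the `watched` subset are hypothetical.

```python
def estimate_load(buffers, watched=None):
    """Estimate the GPU load from the amount of data waiting to be processed.

    buffers: dict mapping buffer name -> amount of queued data (items or bytes).
    watched: optional subset of buffer names to base the estimate on,
             e.g. {"35A", "35B", "35C", "35D", "35E"}; None means all buffers.
    """
    names = watched if watched is not None else buffers.keys()
    return sum(buffers[name] for name in names)

# Example: many frames queued at the combining stage indicate a high load.
occupancy = {"35A": 1, "35B": 1, "35C": 4, "35D": 4, "35E": 0, "35F": 1, "35G": 1}
print(estimate_load(occupancy))                                 # 12
print(estimate_load(occupancy, watched={"35C", "35D", "35E"}))  # 8
```

Summing over all buffers gives a whole-pipeline estimate, while restricting `watched` to buffers feeding a specific processing unit localizes the bottleneck, matching the two estimation variants described above.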
The change unit 12 changes, according to the GPU load, processing with a lower priority among the types of processing executed by the image processing unit 30 to a processing content with a lighter load. For example, in a case where the priority of the processing executed by the combining processing unit 32 is lower, the change unit 12 changes the processing content of the combining processing unit 32 to a lighter one.
When the GPU load decreases, the change unit 12 may change the processing executed by each processing unit back to a processing content with a heavier load.
Next, a flow of processing executed by the image processing unit 30 will be described with reference to the drawings.
The correction processing unit 31A performs correction processing on an image 100A, transfers a non-overlapping area 110A of the image 100A to the integration processing unit 33, and transfers an overlapping area 120A to the combining processing unit 32. The non-overlapping area 110A is stored in the buffer 35F, and the overlapping area 120A is stored in the buffer 35C.
Similarly, the correction processing unit 31B performs correction processing on the image 100B, transfers a non-overlapping area 110B of the image 100B to the integration processing unit 33, and transfers an overlapping area 120B to the combining processing unit 32. The non-overlapping area 110B is stored in the buffer 35G, and the overlapping area 120B is stored in the buffer 35D.
Note that, in
The combining processing unit 32 reads the overlapping areas 120A and 120B of the images 100A and 100B at the same timing from the buffers 35C and 35D, sets a seam 200 for the overlapping areas 120A and 120B, and transfers an overlapping area 130 obtained by combining areas 130A and 130B obtained by dividing the overlapping areas 120A and 120B by the seam 200 to the integration processing unit 33. The overlapping area 130 is stored in the buffer 35E.
The integration processing unit 33 reads the overlapping area 130 and the non-overlapping areas 110A and 110B at the same timing from the buffer 35E and the buffers 35F and 35G, integrates the overlapping area 130 and the non-overlapping areas 110A and 110B, and outputs a wide image 140. An area 140A on the left side of the seam 200 of the wide image 140 is an image from the image 100A, and an area 140B on the right side of the seam 200 is an image from the image 100B.
To perform live distribution of the wide image in which the plurality of images is joined together, it is necessary to continuously output the wide image 140 at the intervals at which the images 100A and 100B are input. In a case where the load on the image processing unit 30 increases, the control unit 10 changes lower-priority processing executed by a processing unit to a processing content with a lighter load.
Next, a flow of processing executed by the control unit 10 will be described with reference to a flowchart.
In step S11, the monitoring unit 11 monitors the buffers 35A to 35G and estimates the GPU load on the image processing unit 30. For example, the monitoring unit 11 monitors the buffers 35A to 35G at intervals of a frame rate of images to be processed.
In step S12, the monitoring unit 11 determines whether or not the GPU load exceeds a preset value (first threshold). If not, the processing proceeds to step S14. The monitoring unit 11 may advance the processing to step S13 in a case where a state in which the GPU load exceeds the first threshold continues for a predetermined time or more.
In a case where the GPU load exceeds the first threshold, in step S13, the change unit 12 changes processing with a lower priority executed by a processing unit to lighter processing.
In a case where the GPU load does not exceed the first threshold, in step S14, the monitoring unit 11 determines whether or not the GPU load is below a second threshold. The second threshold may be set lower than the first threshold so that the processing content is not switched back and forth frequently when the load hovers near a single threshold.
In a case where the GPU load is below the second threshold, in step S15, the change unit 12 changes processing executed by a processing unit to heavier processing. The change unit 12 may change back to heavier processing the processing of the processing unit that was changed to lighter processing in step S13, or may change higher-priority processing executed by a processing unit to heavier processing.
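The flow of steps S11 to S15 can be sketched as a monitoring loop with two thresholds. The threshold values, the numbered processing levels, and the `control_step` function are illustrative assumptions; a smaller level stands for a lighter processing content.

```python
def control_step(load, level, first_threshold=10, second_threshold=4,
                 min_level=0, max_level=3):
    """One iteration of the control loop (steps S12-S15).

    load:  estimated GPU load (step S11 is assumed to have produced it).
    level: current processing level of the lower-priority processing;
           a smaller level means lighter processing.
    Returns the new processing level.
    """
    if load > first_threshold:            # S12: load exceeds the first threshold
        return max(level - 1, min_level)  # S13: change to lighter processing
    if load < second_threshold:           # S14: load is below the second threshold
        return min(level + 1, max_level)  # S15: change back to heavier processing
    return level                          # between the thresholds: keep as is

level = 3
level = control_step(12, level)  # above the first threshold -> lighter
level = control_step(7, level)   # between the thresholds -> unchanged
level = control_step(2, level)   # below the second threshold -> heavier
print(level)  # 3
```

Because the second threshold sits below the first, a load that settles between the two leaves the processing content untouched, which keeps the change unit from oscillating between lighter and heavier processing every monitoring interval.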
Here, an example of changing a processing content will be described in terms of combining processing. Since the processing executed by the combining processing unit 32 affects only quality of a combined portion of the wide image, it is considered that the priority is lower than those of types of processing executed by the correction processing units 31A and 31B and the integration processing unit 33 that affect quality of the entire image. In addition, in the combining processing, combining quality and a processing load can be changed by changing three parameters of a blending method, a seam search frequency, and a seam search method at the time of combining. Increasing the processing load increases the combining quality, and decreasing the processing load decreases the combining quality.
When the control unit 10 detects that the GPU load exceeds the threshold while the combining processing unit 32 is operating with a processing content of GPU load = 8 in the table, the change unit 12 changes the combining processing unit 32 to a processing content with a smaller GPU load.
Note that the control unit 10 may change the processing contents of the correction processing units 31A and 31B according to the estimated GPU load. For example, in a case where a filter is applied in the correction processing, a type of the filter may be changed, or a resolution of the image may be lowered in the correction processing.
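Since the blending method, the seam search frequency, and the seam search method each trade combining quality against processing load, the selectable processing contents can be held as a table ordered by GPU load, from which the change unit picks the next lighter or heavier entry. The parameter values and load figures below are illustrative assumptions, not the values of the table referred to in the description.

```python
# Hypothetical table of processing contents for the combining processing unit,
# ordered from lightest to heaviest GPU load. Each entry fixes the three
# parameters that trade combining quality against processing load.
PROCESSING_CONTENTS = [
    {"blend": "copy",       "seam_search_every_n_frames": 0,  "seam_search": "fixed",  "gpu_load": 2},
    {"blend": "alpha",      "seam_search_every_n_frames": 30, "seam_search": "coarse", "gpu_load": 4},
    {"blend": "alpha",      "seam_search_every_n_frames": 10, "seam_search": "fine",   "gpu_load": 8},
    {"blend": "multi-band", "seam_search_every_n_frames": 1,  "seam_search": "fine",   "gpu_load": 12},
]

def lighter(index):
    """Pick the next lighter processing content (step S13)."""
    return max(index - 1, 0)

def heavier(index):
    """Pick the next heavier processing content (step S15)."""
    return min(index + 1, len(PROCESSING_CONTENTS) - 1)

# Operating at gpu_load = 8 when an overload is detected: step down one level.
current = 2
current = lighter(current)
print(PROCESSING_CONTENTS[current]["gpu_load"])  # 4
```

Stepping one entry at a time, rather than jumping straight to the lightest content, degrades combining quality only as far as the load actually requires.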
As described above, the control unit 10 of the present embodiment controls the image processing unit 30 that executes different types of processing. The control unit 10 includes: a monitoring unit 11 that monitors buffers 35A to 35G used by each of the different types of processing executed by the image processing unit 30 in order to transfer a task and estimates a load on the image processing unit 30; and a change unit 12 that changes a processing content of processing with a lower priority among the different types of processing executed by the image processing unit 30 to processing with a smaller load in a case where the load on the image processing unit 30 is larger than a first threshold. As a result, in a case where the image processing unit 30 simultaneously and continuously executes a plurality of types of processing with uneven processing times, the control unit 10 controls the processing executed by the image processing unit 30 according to the load on the image processing unit 30, whereby the image processing unit 30 can output a processing result within a predetermined processing time.
According to the present embodiment, the monitoring unit 11 monitors the buffers 35A to 35G and estimates the load on the GPU that executes each of the different types of processing, whereby the GPU load can be estimated even in a case where it is difficult to directly acquire the load on the GPU. In addition, in a case where a plurality of GPUs uses one buffer, it is possible to estimate loads on the plurality of GPUs by monitoring the buffer.
As the control unit 10 of the present embodiment described above, for example, a general-purpose computer system including a central processing unit (CPU) 901, a memory 902, a storage 903, a communication device 904, an input device 905, and an output device 906 can be used, as illustrated in the drawings.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2020/045215 | 12/4/2020 | WO |