The present invention relates to a technique of suppressing output of unnecessary image data in an image processing system that executes a plurality of image processes.
Recently, a technique of performing control while maintaining time synchronization among a plurality of apparatuses connected via a network has come into wide use. For example, in the image processing field or the image transmission field, an image capturing timing is generated based on time synchronized between apparatuses, and image capturing (synchronous image capturing) is performed while maintaining synchronization between a plurality of terminals (cameras or the like) by using the image capturing timing. Japanese Patent Laid-Open No. 2017-211827 describes a virtual viewpoint image generation system that performs synchronous image capturing at a plurality of viewpoints using a plurality of cameras installed at different positions and generates virtual viewpoint content using the images at the plurality of viewpoints obtained by the synchronous image capturing. To generate such virtual viewpoint images with high quality, image capturing timings need to be accurately synchronized. As a protocol for accurately synchronizing time among a plurality of terminals, the Precision Time Protocol (PTP) is widely used. On the other hand, as an image data transmission technique, Japanese Patent Laid-Open No. 2004-312059 describes a technique of encoding image data using an arbitrary frame as a reference frame and accumulating the resulting encoded bitstream in a medium or transmitting it to a network.
In general, when transmitting an image captured by a camera or the like to a network, a plurality of image processes are performed for the image data. The plurality of image processes can be executed using, for example, pipeline processing.
The present invention provides a technique of suppressing output of unnecessary image data in an image processing system that executes a plurality of image processes.
According to an aspect of the present invention, there is provided a terminal apparatus comprising: a processing unit configured to perform processing of accepting input of an image data set formed by a plurality of image data, generating processed data by performing image processes for the image data, and executing output of the processed data to an output buffer, the processing requiring a predetermined time from the input to the output; a transfer unit configured to read out the processed data from the output buffer and output the processed data to a communication line; and a control unit configured to perform control such that the processed data stored in the output buffer is not output to the communication line during a predetermined time from when the image data of the image data set is input to the processing unit until first processed data of the image data set is output to the output buffer.
Further features of the present invention will become apparent from the following description of embodiments with reference to the attached drawings.
Hereinafter, embodiments will be described in detail with reference to the attached drawings. Note that the following embodiments are not intended to limit the scope of the claimed invention. Multiple features are described in the embodiments, but limitation is not made to an invention that requires all such features, and multiple such features may be combined as appropriate. Furthermore, in the attached drawings, the same reference numerals are given to the same or similar configurations, and redundant description thereof is omitted.
The sensor systems 110a to 110z each generate an image by image capturing using a camera. In this embodiment, the term “image” can include both “moving image” and “still image” unless otherwise specified. That is, the image processing system 100 according to this embodiment can process both a still image and a moving image. In this embodiment, the sensor systems 110a to 110z will sometimes be referred to as a sensor system 110 without distinction. Similarly, camera adapters 111a to 111z and cameras 112a to 112z included in the sensor systems 110a to 110z will sometimes be referred to as a camera adapter 111 and a camera 112 without distinction.
The sensor system 110 is configured to include the camera adapter 111 and the camera 112. Note that the sensor system 110 may include other constituent elements, and can include, for example, an audio device such as a microphone and a camera platform configured to control the direction of the camera. Also, the sensor system 110 can include one or more camera adapters 111 and one or more cameras 112. The camera adapter 111 and the camera 112 may be formed integrally or may be formed separately. Some of the functions of the camera adapter 111 may be imparted to the image processing apparatus 120. An image captured in the sensor system 110 on the upstream side of the daisy chain is transferred to the image processing apparatus 120 via the sensor system 110 on the downstream side of the daisy chain. The sensor system 110 on the upstream side of the daisy chain is the sensor system 110 arranged at a position relatively far when viewed from the image processing apparatus 120 and is, for example, the sensor system 110z. The sensor system 110 on the downstream side of the daisy chain is the sensor system 110 arranged at a position relatively close when viewed from the image processing apparatus 120 and is, for example, the sensor system 110a. For example, an image captured by the camera 112z is subjected to an image process in the camera adapter 111z, packetized, and transferred to the camera adapter 111y via the daisy chain. The camera adapter 111y transfers, to the sensor system 110 on the downstream side, the packet received from the sensor system 110z together with a packet obtained by performing an image process on an image captured by the camera 112y and packetizing the result. Finally, images captured by the cameras 112a to 112z are transferred to the image processing apparatus 120 via the sensor system 110a and the hub 140. In this embodiment, the daisy chain is one of the communication lines that connect the camera adapters 111.
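As a concrete illustration of this relay behavior, the following is a minimal sketch in C; the packet layout and the helper functions (recv_upstream, send_downstream, capture_and_process) are assumptions made for illustration and do not appear in the actual apparatus.

```c
#include <stdint.h>

/* Hypothetical sketch of the daisy-chain relay described above. The names
 * (packet_t, recv_upstream, send_downstream, capture_and_process) and the
 * header layout are assumptions for illustration only. */
typedef struct {
    uint8_t  adapter_id;    /* which camera adapter produced the payload */
    uint32_t frame_number;  /* image capturing frame the payload belongs to */
    uint16_t payload_len;
    uint8_t  payload[1400];
} packet_t;

extern int  recv_upstream(packet_t *pkt);         /* from the upstream adapter  */
extern void send_downstream(const packet_t *pkt); /* toward the apparatus 120   */
extern int  capture_and_process(packet_t *pkt);   /* own processed image packet */

void daisy_chain_step(void)
{
    packet_t pkt;

    /* Relay every packet received from the upstream side unchanged. */
    while (recv_upstream(&pkt))
        send_downstream(&pkt);

    /* Then send packets carrying this adapter's own processed image. */
    while (capture_and_process(&pkt))
        send_downstream(&pkt);
}
```

Relaying the received packets before injecting locally generated ones is only one possible ordering; the point is that every adapter both forwards upstream traffic and contributes its own image packets.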
The image processing apparatus 120 reconstructs an image using the packets acquired from the sensor systems 110. The image processing apparatus 120 extracts, for example, the identifier of the camera adapter 111, the type of the image, and the frame number from the header of the packet and reconstructs an image using these. Note that the image processing apparatus 120 includes a storage device configured to accumulate an acquired image and can store image data extracted from the packet and information obtained from the header in association with each other. The image processing apparatus 120 can read out a corresponding image from the stored images based on information for specifying a viewpoint, which is designated from the user terminal 160 or the control terminal 150, perform rendering processing, and generate a virtual viewpoint image. Note that some of the functions of the image processing apparatus 120 may be imparted to the control terminal 150, the user terminal 160, or the camera adapter 111. The virtual viewpoint image generated by the rendering processing can be transmitted to the user terminal 160 and displayed on the user terminal 160. The image processing system 100 can thus provide the image of a viewpoint based on the designation of the user who operates the user terminal 160. That is, the image processing apparatus 120 can select an image associated with a viewpoint designated by the user terminal 160 or the like from the images captured by the plurality of cameras 112 and generate a virtual viewpoint image using these images. Note that the virtual viewpoint image may be generated by a function unit (for example, the control terminal 150 or the user terminal 160) other than the image processing apparatus 120. In this case, the image processing apparatus 120 can provide the image associated with the designated viewpoint to the function unit.
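The reconstruction keyed by these header fields might look like the following sketch, which builds on the hypothetical packet_t above; the slot table, the fragment offset parameter, and the buffer management are all invented for illustration.

```c
#include <stdint.h>
#include <string.h>

#define NUM_CAMERAS 26   /* sensor systems 110a to 110z */
#define FRAME_SLOTS 8    /* frames kept in flight per camera (assumed) */

typedef struct {
    uint32_t frame_number;
    size_t   bytes_received;
    uint8_t *image;          /* pre-allocated buffer of the full image size */
} frame_slot_t;

static frame_slot_t store[NUM_CAMERAS][FRAME_SLOTS];

/* Accumulate one packet into the image it belongs to, keyed by the adapter
 * identifier and frame number taken from the packet header. */
void on_packet(const packet_t *pkt, uint32_t frag_offset)
{
    frame_slot_t *slot = &store[pkt->adapter_id][pkt->frame_number % FRAME_SLOTS];

    if (slot->frame_number != pkt->frame_number) { /* a newer frame reuses the slot */
        slot->frame_number   = pkt->frame_number;
        slot->bytes_received = 0;
    }
    memcpy(slot->image + frag_offset, pkt->payload, pkt->payload_len);
    slot->bytes_received += pkt->payload_len;
    /* Once bytes_received reaches the image size (carried elsewhere in the
     * header), the image can be stored together with its header information. */
}
```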
The time server 130 provides time information to each sensor system 110. For example, the time server 130 can distribute a communication packet (synchronization packet) including time information for performing time synchronization to each sensor system 110. The sensor system 110 can perform synchronization between the plurality of cameras 112 based on the time information included in the synchronization packet. Synchronizing the plurality of cameras 112 will sometimes be referred to as Genlock. When the cameras 112 synchronize, image data output from the cameras 112 can cooperatively be used. For example, images captured at the same time can be selected from images captured by the different cameras 112 and accumulated in the image processing apparatus 120. Synchronization between the cameras 112 can be executed by the Transparent Clock function and the Ordinary Clock function of the Precision Time Protocol (PTP) provided in the camera adapter 111. The Transparent Clock function is a function of measuring the time taken to transfer a PTP packet (synchronization packet) passing through the apparatus, writing the time in the header of the PTP packet, and transferring it. The Ordinary Clock function is a function for an apparatus having a single network connection. That is, to synchronize the plurality of cameras 112 in the plurality of sensor systems 110 connected by daisy chain, the camera adapter 111 calculates the retention time of the synchronization packet in its own apparatus and writes the time in the header of the packet. The camera adapter 111 then transfers the synchronization packet to the next camera adapter 111. Thus, each camera adapter 111 can synchronize with the time server 130 with high accuracy based on the received synchronization packet. When the camera adapter 111 synchronizes with the time server 130, the image capturing timing of each camera 112 can be synchronized. For example, the camera adapter 111 can control the image capturing timing of the camera 112 based on the time of synchronization with the time server 130. Also, the camera adapter 111 can generate, based on the time of synchronization with the time server 130, time information to be added to the image captured by the camera 112. Note that, to increase the reliability of the time server 130, a redundant configuration using a plurality of time servers 130 may be formed. If a failure occurs in the time server 130, the image capturing timings of the cameras 112 can no longer be synchronized, and a virtual viewpoint image cannot be generated. In this case, the times of the plurality of time servers 130 may be synchronized using a Global Positioning System (GPS) or the like.
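The Transparent Clock behavior described above can be sketched as follows. The position of the correction field (bytes 8 to 15 of the common header) and its 2^16 nanosecond scaling follow IEEE 1588, while the timestamp helpers are assumptions.

```c
#include <stdint.h>

/* Sketch of the Transparent Clock behavior: measure how long a PTP event
 * packet stayed in this apparatus and add the residence time to the packet's
 * correction field before forwarding. */
extern uint64_t ingress_timestamp_ns(void); /* latched when the packet arrived */
extern uint64_t egress_timestamp_ns(void);  /* latched just before it leaves   */

void add_residence_time(uint8_t ptp_header[34])
{
    int64_t residence_ns = (int64_t)(egress_timestamp_ns() - ingress_timestamp_ns());
    int64_t correction   = residence_ns << 16;  /* nanoseconds scaled by 2^16 */
    int64_t field        = 0;

    /* correctionField occupies bytes 8..15 of the common header (big endian). */
    for (int i = 0; i < 8; i++)
        field = (field << 8) | ptp_header[8 + i];

    field += correction;

    for (int i = 7; i >= 0; i--) {
        ptp_header[8 + i] = (uint8_t)(field & 0xff);
        field >>= 8;
    }
}
```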
The camera adapter 111 generates image data to be used to create a virtual viewpoint content or the like using the image captured by the camera 112.
The CPU 201 controls the entire camera adapter 111. Also, the CPU 201 performs time synchronization based on exchange of a synchronization packet with the time server 130. The CPU 201 also exchanges a control packet with the control terminal 150. Furthermore, the CPU 201 can accept a designation of image data to be processed from the user terminal 160 or the image processing apparatus 120. Also, the CPU 201 can control pipeline processing formed by the image processing unit A 231 to the image processing unit C 233.
The internal storage unit 202 is a memory that holds programs executed by the CPU 201 to control the camera adapter 111 and the like, as well as the synchronization packets and control packets that the CPU 201 transmits to and receives from other apparatuses. Also, the internal storage unit 202 stores captured image data that the camera control unit 206 acquires from the camera 112b. Image data to be input to the image processing units 230 and image data output from the image processing units 230 are stored in the frame buffer A 221 to the frame buffer D 224 provided in the internal storage unit 202.
The DMA unit 203 and the DMA unit 204 execute Direct Memory Access (DMA) for transmitting/receiving a packet to/from another camera adapter. For example, the DMA unit 203 and the DMA unit 204 transmit/receive a packet based on an instruction from the CPU 201 or the transmission unit 212. A transfer instruction for a packet of image data can be issued by the transmission unit 212. A transfer instruction for a synchronization packet can be issued by the CPU 201. As an example, the DMA unit 203 transmits/receives a packet to/from the camera adapter 111c via the communication IF unit 207. The DMA unit 203 reads out the packet from a designated area (for example, the internal storage unit 202) and transfers it to the communication IF unit 207. In addition, the DMA unit 204 transfers, to a designated area (for example, the internal storage unit 202), a packet received from the camera adapter 111a.
The communication IF unit 207 and the communication IF unit 208 transmit/receive communication packets (a synchronization packet, a TCP/IP packet, a packet of image data, and the like) that are exchanged between the camera adapter 111 and other function units. For example, the communication IF unit 207 and the communication IF unit 208 can execute the processes of the first and second layers of the Open Systems Interconnection (OSI) reference model. The communication IF unit 207 inputs, to the DMA unit 204, a PTP packet (synchronization packet) received from the time server 130 via the camera adapter 111a. The PTP packet input to the DMA unit 204 is transferred to the internal storage unit 202 based on a transfer instruction from the CPU 201, subjected to protocol processing by the CPU 201, and input to the DMA unit 203. The communication IF unit 207 transfers the PTP packet input to the DMA unit 203 to the camera adapter 111c. On the other hand, the communication IF unit 208 inputs, to the DMA unit 203, a PTP packet received from each of the camera adapters 111c to 111z. The PTP packet input to the DMA unit 203 is transferred to the internal storage unit 202 based on a transfer instruction from the CPU 201, subjected to protocol processing by the CPU 201, and input to the DMA unit 204. The communication IF unit 208 transfers the PTP packet input to the DMA unit 204 to the time server 130 via the camera adapter 111a. Note that a PTP packet generated by the CPU 201 is also input to the DMA unit 204. The communication IF unit 208 transfers that PTP packet to the time server 130 via the camera adapter 111a. The communication IF unit 208 similarly acquires a packet of image data generated by the transmission unit 212 from the DMA unit 204 and transfers it to the camera adapter 111a. Note that the communication IF unit 207 and the communication IF unit 208 can each record the time of an internal timepiece when transmitting/receiving a PTP packet (time stamp function). The time stamp function can be used by the CPU 201 in time synchronization calculations. Using the time stamp function, the CPU 201 can correctly calculate the transmission/reception times of PTP packets and the retention time in the camera adapter 111. The time difference between the time server 130 and the camera adapter 111, calculated by the protocol processing of PTP by the CPU 201, can be used to correct the timepieces of both the communication IF unit 207 and the communication IF unit 208. Note that before the start of time synchronization, time synchronization may be performed between the timepieces of the communication IF unit 207 and the communication IF unit 208.
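As an illustration of how such time stamps feed the protocol processing, the following sketch computes the clock offset from one Sync/Delay_Req exchange under the usual symmetric-path assumption; the variable names follow conventional PTP notation rather than identifiers from this disclosure.

```c
#include <stdint.h>

/* Sketch of the offset calculation enabled by the time stamp function,
 * using the conventional PTP notation: t1 = master transmit time of Sync,
 * t2 = local receive time, t3 = local transmit time of Delay_Req, and
 * t4 = master receive time reported back in Delay_Resp. */
typedef struct { int64_t t1, t2, t3, t4; } ptp_exchange_t;

int64_t ptp_offset_ns(const ptp_exchange_t *x)
{
    /* offset = ((t2 - t1) - (t4 - t3)) / 2, assuming a symmetric path.
     * The result can be used to correct the timepieces of the IF units. */
    return ((x->t2 - x->t1) - (x->t4 - x->t3)) / 2;
}
```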
The signal generation unit 205 generates a control signal for causing the function units to operate in synchronism. For example, the signal generation unit 205 can generate a control signal based on the timepiece maintained by the communication IF unit 208. Also, the signal generation unit 205 exposes the number of generated control signals (pulse generation count) in a register, allowing the CPU 201 to acquire the pulse generation count. In addition, the signal generation unit 205 allows the pulse generation count to be initialized via the register. The operation procedure of the signal generation unit 205 will be described later.
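One possible register-level view of this interface is sketched below; the base address, register layout, and bit assignments are invented for illustration.

```c
#include <stdint.h>

/* Hypothetical register map for the signal generation unit 205; the base
 * address, register layout, and bit assignment are assumptions. */
typedef struct {
    volatile uint32_t pulse_count_a; /* control signal A 401 pulses generated */
    volatile uint32_t pulse_count_b; /* control signal B 402 pulses generated */
    volatile uint32_t control;       /* bit 0: reset both pulse counts to 0   */
} siggen_regs_t;

#define SIGGEN ((siggen_regs_t *)0x40001000u) /* assumed base address */

static inline void     siggen_reset_counts(void) { SIGGEN->control |= 1u; }
static inline uint32_t siggen_count_a(void)      { return SIGGEN->pulse_count_a; }
```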
The camera control unit 206 controls the camera 112b. For example, the camera control unit 206 outputs, to the camera 112b, a Genlock signal used by the camera 112b to perform Genlock and Timecode used to add time information to the image captured by the camera 112b. The Genlock signal and the Timecode can be generated based on a reference signal generated by the signal generation unit 205. Note that the camera control unit 206 can be controlled by the CPU 201 that has received an instruction from the control terminal 150. The camera control unit 206 can receive captured image data (RAW format image data) that is captured by the camera 112b in synchronism with the Genlock signal and to which the Timecode is added, and transfer it to the internal storage unit 202 or the external storage unit 209. For example, the captured image data may be transferred to the frame buffer A 221 of the internal storage unit 202 and subjected to pipeline processing. In this embodiment, a description will be made using captured image data accumulated in the external storage unit 209, and a detailed description of a method of performing pipeline processing on captured image data received from the camera 112b and transmitting it will be omitted.
The image processing unit A 231, for example, reads out RAW format image data stored in the frame buffer A 221, converts it into a BAYER format image, and writes it in the frame buffer B 222. Note that when the reset of the image processing unit A 231 is released, the pointer of the address for reading out image data from the frame buffer A 221 and the pointer of the address for writing image data in the frame buffer B 222 are initialized. These address pointers can be recognized by other function blocks via the register.
The image processing unit B 232 reads out the BAYER format image stored in the frame buffer B 222, performs vibration control processing, and writes the image in the frame buffer C 223. Note that the vibration control processing is processing of correcting an image blur caused by a camera shake in image capturing. The pointer of an address for reading out image data from the frame buffer B 222 and the pointer of an address for writing image data in the frame buffer C 223 can automatically be calculated based on the pointers of the addresses in the processing of the image processing unit A 231. For example, the pointer of the address for reading out image data from the frame buffer B 222 can be equivalent to the address used by the image processing unit A 231 to write. These pointers of the addresses can be recognized by other function blocks via the register.
The image processing unit C 233 reads out the image data after the vibration control processing from the frame buffer C 223, and separates it into a foreground region and a background region. The foreground region can be a moving body such as a person, and the background region can be a region other than the foreground region. The image processing unit C 233 can write only the image (foreground image) of the foreground region in the frame buffer D 224. Note that the image processing unit C 233 may write the image (background image) of the background region in the frame buffer D 224. The pointer of an address for reading out image data from the frame buffer C 223 and the pointer of an address for writing image data in the frame buffer D 224 can automatically be calculated based on the pointers of the addresses in the processing of the image processing unit B 232. For example, the pointer of the address for reading out image data from the frame buffer C 223 can be equivalent to the address used by the image processing unit B 232 to write. These pointers of the addresses can be recognized by other function blocks via the register.
The image processes executed by the above-described image processing units 230 are merely an example, and the image processing units 230 can execute processes different from those described above. Image data that has undergone an image process can be called processed data. An example in which the processing result of each image processing unit 230 is transferred to the next image processing unit 230 using the frame buffer 220 has been described above. This technique can be applied to arbitrary processing in the plurality of image processes executed for the image data.
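Putting the three stages together, the following sketch shows one way the pipeline could advance by one frame per control signal pulse; the buffer structure, plane size, and stage functions are placeholders rather than the actual implementation.

```c
#include <stddef.h>
#include <stdint.h>

/* Minimal sketch of the pipeline above: on every control signal A pulse,
 * each stage consumes one frame from its input buffer and writes one frame
 * to its output buffer. All names and sizes are assumptions. */
#define PLANE_SIZE (1920u * 1080u * 2u)  /* assumed bytes per buffer plane */

typedef struct { uint8_t *planes; int num_planes; int rd, wr; } frame_buf_t;

extern frame_buf_t buf_a, buf_b, buf_c, buf_d;  /* frame buffers A 221..D 224 */
extern void raw_to_bayer(const uint8_t *in, uint8_t *out);       /* unit A 231 */
extern void stabilize(const uint8_t *in, uint8_t *out);          /* unit B 232 */
extern void extract_foreground(const uint8_t *in, uint8_t *out); /* unit C 233 */

static uint8_t *plane(frame_buf_t *b, int idx)
{
    return b->planes + (size_t)(idx % b->num_planes) * PLANE_SIZE;
}

void on_control_signal_a(void)
{
    /* Run the stages back to front so one pulse advances the whole pipeline
     * by exactly one frame, mirroring the address pointer chaining above. */
    extract_foreground(plane(&buf_c, buf_c.rd++), plane(&buf_d, buf_d.wr++));
    stabilize(plane(&buf_b, buf_b.rd++), plane(&buf_c, buf_c.wr++));
    raw_to_bayer(plane(&buf_a, buf_a.rd++), plane(&buf_b, buf_b.wr++));
}
```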
The image notification unit 210 notifies the transmission instruction unit 211 of information that specifies image data to be transmitted. For example, upon detecting that the image processing unit C 233 stores the foreground image (or the background image) in the frame buffer D 224, the image notification unit 210 can notify the transmission instruction unit 211 of the size (data length) of the image and the address at which the image is stored. The pointer of the address at which the image data detected by the image notification unit 210 is stored can automatically be calculated based on the pointer of the address in the processing of the image processing unit C 233. For example, the pointer of the address for reading out image data from the frame buffer D 224 can be equivalent to the address used by the image processing unit C 233 to write.
The transmission instruction unit 211 instructs the transmission unit 212 to transmit the image data based on the information notified from the image notification unit 210. For example, the transmission instruction unit 211 can specify the image data based on the pointer of the address at which the image data notified from the image notification unit 210 is stored and the image size. In addition, the transmission instruction unit 211 can perform the notification to the transmission unit 212 based on the control signal input from the signal generation unit 205.
The transmission unit 212 packetizes the image data based on the instruction from the transmission instruction unit 211, and outputs it to the communication line. For example, the transmission unit 212 acquires the image data instructed by the transmission instruction unit 211, performs packetization, and stores it in an area that the DMA unit 203 or the DMA unit 204 can access. The packetized image data can be called an image packet. The image packet can be stored in the internal storage unit 202 or the external storage unit 209. Note that the image packet may be stored in the transmission unit 212. For example, the transmission unit 212 can receive the image packet, via the DMA unit 203, from another camera adapter 111 on the upstream side, and transfer the received image packet to the camera adapter 111 on the downstream side based on an instruction to the DMA unit 204.
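The chain from image notification to transmission can be sketched as follows, with all names invented: the notification carries an address and a data length, and each control signal B pulse triggers at most one output.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the notification chain: when processed data lands in the frame
 * buffer D 224, the image notification unit passes its address and data
 * length forward, and the next control signal B pulse makes the transmission
 * unit packetize and output it. All names are placeholders. */
typedef struct { const uint8_t *addr; size_t len; } image_notice_t;

static image_notice_t pending;                           /* latest notice */

void image_notification(const uint8_t *addr, size_t len) /* unit 210 */
{
    pending.addr = addr;
    pending.len  = len;
}

extern void packetize_and_send(const uint8_t *data, size_t len); /* unit 212 */

void on_control_signal_b(void)                           /* unit 211 */
{
    if (pending.len > 0) {
        packetize_and_send(pending.addr, pending.len);
        pending.len = 0;   /* each notice is transmitted at most once */
    }
}
```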
The external storage unit 209 is a storage device in which an enormous amount of image data captured by the camera 112b is accumulated. For example, the external storage unit 209 can be a Solid State Drive (SSD) or a Hard Disk Drive (HDD). The external storage unit 209 can manage the image data as captured image data organized for each image capturing of the camera.
Pipeline processing including a plurality of image processes according to this embodiment will be described.
The relationship between the control signal A 401 and the control signal B 402 will be described with reference to
As described above, the CPU 201 sequentially stores the image data 310 designated by the user in the frame buffer A 221, and controls the pipeline processing based on the control signal A 401 generated by the signal generation unit 205 and the output of the image packet based on the control signal B 402. On the other hand, for example, if the pipeline processing is interrupted halfway by a stop instruction from the user, the processed image data 310 remain in the frame buffer A 221 to the frame buffer D 224. After that, even if another image data set is designated with a start instruction from the user, the remaining image data 310 are sequentially sent to the communication line when the next pipeline processing is started. For this reason, the image data 310 that need not be transmitted consume the resources of the communication line, and images that the user does not want may be displayed. Note that if the CPU 201 flushes the image data stored in the frame buffer A 221 to the frame buffer D 224 at the timing of stopping the control signal, the influence of the remaining image data 310 can be avoided. However, if it is impossible to discriminate whether the interruption of processing is a temporary stop or a complete stop, even the image data 310 that should originally be output upon resumption after a temporary stop is deleted. In addition, if the whole system needs to be reset to erase the image data 310, recovery from the system reset takes time each time.
Considering such a situation, in this embodiment, transmission prevention processing for unnecessary images is executed, thereby preventing the past processed image data 310 stored in the frame buffer from being output to the communication line. For example, assume that in a case where a stop instruction is received from the user during execution of pipeline processing of a first image data set, and a start instruction is then received from the user, the target of pipeline processing is a new second image data set. That is, assume that a switching instruction is issued to switch the target of pipeline processing from the first image data set to the second image data set. In this case, the CPU 201 issues an instruction to mask (that is, not output) the control signal B 402 a predetermined number of times such that the processed first image data set is not output from the transmission unit 212. Since the control signal B 402 is not output, the transmission instruction unit 211 is not notified of an output instruction. Hence, the transmission unit 212 does not read out the image data. This can avoid output of unnecessary image data. Here, if the image processes are executed by pipeline processing based on a common timing signal, the CPU 201 can specify the predetermined period (latency) needed for the pipeline processing and, based on this, specify the number of times (mask count) the control signal B 402 is to be masked. When the CPU 201 stores the first image data 310 of the image data set in the frame buffer A 221 and the time corresponding to the latency needed for the pipeline processing elapses, the image data that has undergone the image processes is stored in the frame buffer D 224. For example, the CPU 201 specifies the mask count based on the number of function units included in the pipeline processing and notifies the control signal generation unit 503 of it. In this case, the number of image processes included in the pipeline processing can be used as the mask count. Note that specifying the predetermined period needed for the pipeline processing by the CPU 201 can include specifying the number of events that occur in a predetermined period, such as specifying the number of control signals output to execute the pipeline processing. An example of a hardware configuration that can implement this embodiment and the operation procedure of the function units of the camera adapter 111 shown in
The input unit 2004 accepts various kinds of operations from the user. The output unit 2005 performs various kinds of outputs to the user via a monitor screen or a speaker. Here, the output by the output unit 2005 may be display on a monitor screen, sound output by a speaker, or a vibration output. Note that both the input unit 2004 and the output unit 2005 may be implemented as one module, like a touch panel display. Also, the input unit 2004 and the output unit 2005 may each be a device integrated with the camera adapter 111 or may be a separate device. The communication unit 2006 is a Gigabit Ethernet (GbE) interface complying with the IEEE 802.3 standard formulated for Ethernet (registered trademark, omitted below), or a 10 GbE or 100 GbE interface. The present invention is not limited to these; InfiniBand, an industrial Ethernet, or a network of another type may be used, or these may be used in combination. For example, the communication IF unit 207 or the communication IF unit 208 can be implemented by the communication unit 2006.
The operation procedure of each function unit will be described.
First, the CPU 201 determines whether a designation of an image to be processed is received from the user (step S701). For example, the user can designate a predetermined image from the image data 310 stored in the external storage unit 209. The predetermined image can be designated by the Timecodes 320 associated with the image data 310 at which pipeline processing starts and the image data 310 at which pipeline processing ends, together with the sequence number of the captured image data 300 including these. Upon receiving a designation of an image from the user (YES in step S701), the CPU 201 stores the designated image data 310 in the frame buffer A 221, initializes the image processing unit A 231, and waits for an instruction to start pipeline processing from the user (step S702). Note that the CPU 201 can store a plurality of image data 310 in the frame buffer A 221. For example, if the number of buffer planes of the frame buffer A 221 is N, the CPU 201 can store up to N image data 310 in the frame buffer A 221. When the image processing unit A 231 is initialized, the image processing unit A 231 can read out the image data 310 sequentially from the initial address. Hence, the CPU 201 can store the plurality of image data 310 sequentially from the first buffer plane of the frame buffer A 221 in accordance with the readout order. For example, assume that the image data 310 designated by the user has a sequence number “5” of captured image data and Timecodes from 9:50:30.0 to 9:51:30.0, and the number of buffer planes of the frame buffer A 221 is 16. In this case, the CPU 201 first stores, in the first buffer plane of the frame buffer A 221, the image data 310 for which the sequence number of the captured image data is 5 and the Timecode is 9:50:30.0. The CPU 201 then stores the image data 310 of 9:50:30.1 in the next buffer plane, and repeats this, thereby storing up to 16 image data in the frame buffer A 221. If no image designation is received from the user, and neither the start instruction nor the end instruction of pipeline processing is received from the user (NO in all of steps S701, S703, and S704), the CPU 201 waits for an instruction from the user.
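The buffer-plane filling in steps S702 and S711 might look like the following sketch, which models the Timecode as a frame index and assumes a hypothetical load_image() accessor for the external storage unit 209.

```c
#include <stdint.h>

/* Sketch of the buffer-plane filling; load_image() is an assumed accessor
 * for the external storage unit 209, and NUM_PLANES matches the 16-plane
 * example above. */
#define NUM_PLANES 16

extern void load_image(unsigned seq, unsigned timecode_frame, uint8_t *plane);

void fill_frame_buffer_a(unsigned seq, unsigned first_frame, unsigned last_frame,
                         uint8_t *planes[NUM_PLANES])
{
    unsigned count = last_frame - first_frame + 1;
    if (count > NUM_PLANES)
        count = NUM_PLANES;  /* at most N image data fit at once */

    for (unsigned i = 0; i < count; i++)
        load_image(seq, first_frame + i, planes[i]);
    /* As planes drain, the CPU stores the image data whose Timecode is one
     * larger than the last one stored (e.g. 9:50:30.16 after 9:50:30.15). */
}
```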
If a start instruction from the user is received (YES in step S703), the CPU 201 performs transmission prevention processing for unnecessary images (step S706). The start instruction can include information of the time to start pipeline processing and information that specifies the period of pipeline processing. In this embodiment, as the transmission prevention processing for unnecessary images, the CPU 201 instructs the signal generation unit 205 to mask the control signal B 402 a predetermined number of times. Masking the control signal means, for example, not outputting a pulse signal that would otherwise be output, by inhibiting output of the control signal or by outputting a signal whose value is zero. By the transmission prevention processing for unnecessary images, it is possible to avoid outputting, to a communication line, image data remaining in each frame buffer due to interruption of pipeline processing. On the other hand, if not a start instruction but an end instruction is received from the user (NO in step S703 and YES in step S704), the CPU 201 notifies the function units of the end instruction and ends the processing (step S705).
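A minimal sketch of this transmission prevention follows, under the assumption that the latency equals the number of pipeline stages times the processing period; the function names and the stage count of 3 are illustrative only.

```c
/* Sketch of the transmission prevention in step S706. The stage count and
 * all names are assumptions for illustration. */
#define PIPELINE_STAGES 3  /* image processing units A 231, B 232, C 233 */

static unsigned mask_count;

void start_transmission_prevention(void)   /* called at step S706 */
{
    /* Processed data of the old set still sits in the frame buffers; the
     * first valid data of the new set arrives only after the full latency. */
    mask_count = PIPELINE_STAGES;
}

int control_signal_b_enabled(void)         /* consulted every signal B period */
{
    if (mask_count > 0) {
        mask_count--;    /* swallow this pulse: no output notification */
        return 0;
    }
    return 1;            /* masking done; pulses reach the unit 211 again */
}
```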
The CPU 201 instructs the signal generation unit 205 to start generating the control signal A 401 and the control signal B 402 (step S707). For example, the generation start instruction can include information of the time to start pipeline processing input by the user and information that specifies the period of pipeline processing. The register of the signal generation unit 205 resets the pulse generation counts of the control signal A 401 and the control signal B 402 and counts from 0. The reset of the register can be executed by the CPU 201. The CPU 201 stores the image data 310 to be input to the pipeline processing in the frame buffer A 221 until the final image data 310 designated by the user is input to the frame buffer A 221. For this reason, the CPU 201 monitors whether the pulse signal of the control signal A 401 is output. If a pulse signal is detected (YES in step S708), the CPU 201 determines whether the final image data 310 is stored in the frame buffer A 221. If the final image data 310 is not stored in the frame buffer A 221 (NO in step S709), the CPU 201 stores the next image data 310 in the frame buffer A 221 (step S711). For example, the CPU 201 can make this determination based on whether the image data 310 for which the sequence number of captured image data is 5 and the Timecode is 9:51:30.0 is stored in the frame buffer A 221. Note that if the CPU 201 first arranges 16 image data (9:50:30.0 to 9:50:30.15) in the frame buffer A 221, the Timecode of the next image data to be stored in the frame buffer A 221 is 9:50:30.16. That is, the CPU 201 stores, in the frame buffer A 221, the image data 310 corresponding to a value obtained by adding 1 to the value of the Timecode 320 of the image data 310 last stored in the frame buffer A 221. On the other hand, if the final image data 310 is stored in the frame buffer A 221 (YES in step S709), the processing waits until the final image data 310 is output by the transmission unit 212. When the final image data 310 is notified to the transmission unit 212 and output to the communication line, the CPU 201 instructs the signal generation unit 205 to stop generation of the control signal A 401 and the control signal B 402 (step S713). The CPU 201 returns to step S701 and waits for input of an instruction from the user. Note that the CPU 201 can recognize that the final image data 310 has been notified to the transmission unit 212 by waiting for a time corresponding to the latency needed for the pipeline processing after the final image data 310 is stored in the frame buffer A 221. That is, since the pipeline processing of image data is always performed by a predetermined number of function units, the time (latency) needed for the processing is always constant. For example, the CPU 201 can calculate the latency as the product of the number of function units included in the pipeline processing and the processing period of the pipeline processing (the output period of the control signal A 401). On the other hand, upon receiving a stop instruction from the user before the final image data is stored in the frame buffer A 221 (NO in step S708 and YES in step S710), the CPU 201 instructs the signal generation unit 205 to stop generation of the control signals (step S713). The CPU 201 then returns to step S701 and waits for input of an instruction from the user.
In this way, for example, if the stop instruction of the user is received during execution of pipeline processing of the first image data set (YES in step S710), the CPU 201 waits for input of an instruction from the user. At this time, in some cases, a designation of the second image data set is received from the user in step S701, and the start instruction is received from the user in step S703. In this case, in step S706, the CPU 201 executes the mask processing of the control signal B 402 a predetermined number of times such that processed data of the first image data set is not output to the communication line. In this way, control can be performed such that no processed data is output to the communication line from the input of the second image data set to the frame buffer A 221 until processed data of the second image data set is output to the frame buffer D 224.
An example of an operation procedure when the signal generation unit 205 generates the control signal A 401 and the control signal B 402 will be described with reference to
Next, the operation procedure of each function unit constituting pipeline processing will be described.
As described above, in this embodiment, the CPU 201 controls the signal generation unit 205 such that the control signal B 402 is not output to the transmission instruction unit 211 until the first image data 310 of the image data set designated by the user or the like is stored in the frame buffer D 224. When performing image processes on the image data 310 by pipeline processing, the CPU 201 specifies the mask count based on the latency of the pipeline processing. The signal generation unit 205 does not output the control signal B 402 as many times as the mask count designated by the CPU 201. It is therefore possible to avoid outputting, to the communication line, image data 310 that was stored in the frame buffer 220 before the image data set designated by the user or the like is processed. Image data to be output can be controlled merely by adding processing that masks the control signal B 402 a predetermined number of times.
In the first embodiment, an example has been described in which the signal generation unit 205 masks the output of the control signal B 402, thereby controlling such that unnecessary image data stored in the frame buffer A 221 to the frame buffer D 224 is not output. In this embodiment, a description will be made using an example in which the data length of image data 310 notified from an image notification unit 210 to a transmission instruction unit 211 is set to 0 during a predetermined period or for a predetermined number of times, thereby controlling such that the image data 310 is not output to a communication line. Note that in this embodiment, differences from the first embodiment will be described; except for these, the operations of the function units are the same as in the first embodiment, and a description thereof will be omitted. For example, the operation of a CPU 201 is the same as the operation procedure shown in
As described above, in this embodiment, the CPU 201 controls the image notification unit 210 to notify the transmission instruction unit 211 that the data length of the image data is 0 until the first image data 310 of the designated image data set is stored in the frame buffer D 224. Since the transmission instruction unit 211 is notified that the data length is 0 regardless of the presence/absence of the data in the frame buffer D 224, the transmission instruction unit 211 does not make an output notification to the transmission unit 212. Alternatively, the transmission instruction unit 211 notifies the transmission unit 212 that the size of the image data 311 is 0. This makes it possible to avoid outputting, to the communication line, the image data 310 stored in a frame buffer 220 before the image data set designated by the user or the like is processed. Additionally, with the configuration according to this embodiment, image data to be output can be controlled merely by adding processing that corrects the output of the image notification unit 210, without correcting the conventional operation of the control signal generation unit 503.
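A sketch of this zero-length variant follows, reusing the hypothetical image_notification() helper from the earlier notification-chain sketch; the counter set by the CPU is an assumed way of expressing the latency.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the second embodiment: instead of masking the control signal
 * B 402, the notification reports a data length of 0 for a number of
 * notifications set by the CPU. All names are assumptions. */
extern void image_notification(const uint8_t *addr, size_t len);

static unsigned zero_length_count;  /* set by the CPU to cover the latency */

void notify_transmission_instruction(const uint8_t *addr, size_t real_len)
{
    size_t len = real_len;

    if (zero_length_count > 0) {    /* data in frame buffer D 224 is stale */
        zero_length_count--;
        len = 0;                    /* length 0: nothing to transmit */
    }
    image_notification(addr, len);
}
```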
In the first embodiment, an example has been described in which the signal generation unit 205 masks the output of the control signal B 402, thereby controlling such that unnecessary image data stored in the frame buffer A 221 to the frame buffer D 224 is not output. In this embodiment, a control signal C 1601 different from a control signal A 401 that is input to an image processing unit A 231 to an image processing unit C 233 is input to an image notification unit 210. By masking the output of the control signal C 1601, control is performed such that the image data 310 is not output. Note that in this embodiment as well, differences from the first embodiment will be described; except for these, the operations of the function units are the same as in the first embodiment, and a description thereof will be omitted. For example, the operation of a CPU 201 is the same as the operation procedure shown in
As described above, in this embodiment, the CPU 201 controls the control signal generation unit 503 such that the control signal C 1601 is not output to the image notification unit 210 until the first image data 310 of the designated image data set is stored in a frame buffer D 224. While the control signal C 1601 is not input, the image notification unit 210 does not make a notification to the transmission instruction unit 211. This makes it possible to avoid outputting, to the communication line, the image data 310 stored in a frame buffer 220 before the image data set designated by the user or the like is processed.
In the first to third embodiments, a description has been made using examples premised on the function units constituting pipeline processing operating in accordance with the period of the control signal A 401 output from the control signal generation unit 503, with the latency of the pipeline processing being fixed. That is, in these examples, the CPU 201 specifies the latency of the pipeline processing and instructs the signal generation unit 205 or the image notification unit 210 not to output a notification to the transmission unit 212 during a period corresponding to the latency. In this embodiment, identification information capable of identifying the image data set to which image data 310 belongs is used to control such that unnecessary image data stored in the frame buffer A 221 to the frame buffer D 224 is not output. For example, the identification information can be the sequence number or the Timecode 320 of the captured image data 300. For this reason, this embodiment is premised on the sequence number and the Timecode 320 of the captured image data associated with the image data 310 stored in an external storage unit 209 and each frame buffer 220 being maintained (not changed). A CPU 201 notifies an image notification unit 210 of the sequence number and the Timecode 320 of the captured image data associated with the first image data 310 of the image data set designated by the user or the like. For example, based on the input of a control signal A 401, the image notification unit 210 compares the values of the sequence number and the Timecode 320 of the captured image data associated with the image data 310 stored in the frame buffer D 224 with the values notified from the CPU 201. Until these match, the image notification unit 210 notifies the transmission instruction unit 211 that the data length of the image data 311 is zero. After these match, the image notification unit 210 notifies the transmission instruction unit 211 of the data length of the image data acquired from the header information of the image data 310 in the frame buffer D 224. Hence, the CPU 201 can control such that unnecessary image data stored in the frame buffer A 221 to the frame buffer D 224 is not output, without specifying the latency of the pipeline processing. For example, this technique can be applied even if the latency of the pipeline processing varies. The operations of the function units according to this embodiment will be described below. Note that in this embodiment as well, differences from the first embodiment will be described; except for these, the operations of the function units are the same as in the first embodiment, and a description thereof will be omitted. For example, the operation of the CPU 201 is the same as the operation procedure shown in
As described above, in this embodiment, the CPU 201 performs control such that the image notification unit 210 does not make a notification to the transmission instruction unit 211 until the first image data 310 of the designated image data set is stored in the frame buffer D 224. The image notification unit 210 detects, based on the sequence number and the Timecode 320 of the captured image data notified from the CPU 201, that the image data 310 to be output has reached the frame buffer D 224, and does not make a notification to the transmission instruction unit 211 until then. This makes it possible to avoid outputting, to the communication line, the image data 310 stored in the frame buffer 220 before the image data set designated by the user or the like is processed. Additionally, since the configuration of this embodiment does not depend on the configuration of image processing in the camera adapter 111, the technique can be applied to pipeline processing with variable latency or to configurations other than pipeline processing.
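The match-based suppression could look like the following sketch, again reusing the hypothetical image_notification() helper; the identification structure and the way the contents of the frame buffer D 224 are inspected are assumptions.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the match-based suppression: the notification stays at length 0
 * until the identification information of the data in the frame buffer D 224
 * matches the values announced by the CPU 201 for the new image data set. */
extern void image_notification(const uint8_t *addr, size_t len);

typedef struct { unsigned seq; unsigned timecode; } image_id_t;

static image_id_t expected;  /* first image of the new set, set by the CPU 201 */
static int        matched;   /* latches once the first match is observed */

void on_control_signal_a_check(const image_id_t *in_buffer_d,
                               const uint8_t *addr, size_t len)
{
    if (!matched &&
        in_buffer_d->seq == expected.seq &&
        in_buffer_d->timecode == expected.timecode)
        matched = 1;

    /* Until the match, report length 0; afterwards, report the real length.
     * No pipeline latency needs to be specified anywhere. */
    image_notification(addr, matched ? len : 0);
}
```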
As a modification of this embodiment, if the image notification unit 210 detects that the image data 310 to be output has reached the frame buffer D 224, it may notify the CPU 201 or the signal generation unit 205 of this. For example, the CPU 201 notifies the image notification unit 210 of the sequence number and the Timecode 320 of the captured image data and causes it to determine whether the first image data 310 of the image data set designated by the user or the like is stored in the frame buffer D 224. In addition, the CPU 201 instructs the signal generation unit 205 to mask the control signal B 402. Upon detecting that the first image data 310 of the image data set is stored in the frame buffer D 224, the image notification unit 210 makes a notification to the CPU 201. Based on this notification, the CPU 201 instructs the signal generation unit 205 to cancel the mask. Note that the image notification unit 210 may instead notify the signal generation unit 205 to cancel the mask. This makes it possible to avoid outputting, to the communication line, the image data 310 stored in the frame buffer 220 before the image data set designated by the user or the like is processed.
Note that a description has been made above using an example in which the CPU 201 instructs the number of times of masking the control signal B 402 or a control signal C 1601, or the number of times the image notification unit 210 notifies that the data length of the image data 310 is 0. However, the CPU 201 may instead designate a predetermined period. For example, when executing image processes by pipeline processing, the time until the image data 310 reaches the frame buffer D 224 can be calculated based on the number of image processes included in the pipeline processing and the period of the pulse signal of the control signal A 401. The CPU 201 notifies each function unit of the thus calculated predetermined period, thereby controlling such that the transmission unit 212 does not output the image data 310 in the frame buffer D 224 during that period. Also, in the above-described embodiments, a description has been made using an example in which image processes are executed by pipeline processing. However, the present invention is not limited to application of the technique to pipeline processing. For example, the technique can be applied to an arbitrary image processing system that includes a plurality of image processes and needs a predetermined period from input of the image data 310 to output of an image processing result. Note that the target of image processing may be the captured image data 300 or the image data 310. That is, the image data 310 and the captured image data 300 can be replaced with each other. Furthermore, the functional diagrams described in the embodiments are merely examples and, for example, some function units may not be included. For example, the transmission instruction unit 211 and the transmission unit 212 may be integrated. In this case, for example, the control signal B 402 can directly be output to the transmission unit 212. Also, the image notification unit 210 can directly notify the transmission unit 212 of the address and the data length of the image data 310. Also, in the above-described embodiments, a description has been made using an example in which the transmission instruction unit 211 is notified that the data length of the image data 310 is 0. However, the image notification unit 210 may simply make no notification to the transmission instruction unit 211. Alternatively, the image notification unit 210 may indicate that the image data 310 does not exist in the frame buffer D 224 using a method other than setting the data length of the image data 310 to 0. For example, the image notification unit 210 may make a notification directly indicating that the data does not exist in the frame buffer D 224 to the transmission instruction unit 211.
The present invention can be implemented by supplying a program configured to implement one or more functions of the above-described embodiments to a system or an apparatus via a network or a storage medium and causing one or more processors in the computer of the system or the apparatus to read out the program and execute it. The present invention can also be implemented by a circuit (for example, an ASIC) that implements one or more functions.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2023-209549, filed Dec. 12, 2023, which is hereby incorporated by reference herein in its entirety.