The present disclosure relates to devices and methods for compressing (encoding) images, and to imaging systems.
As digital still cameras and digital camcorders have become widespread, the joint photographic experts group (JPEG) standard and the moving picture experts group (MPEG) standard, which are techniques for compressing (encoding) image data, have come into common use. Networks such as the Internet have also rapidly become widely available, and network cameras, including surveillance cameras, and television telephones are becoming more and more popular. Although the number of users that can be simultaneously connected to a network increases with the network's bandwidth, the amount of data that can be transmitted or received is limited. Service providers have therefore studied control methods for reducing the amount of data.
There is a conventional technique for reducing the deviation in data amounts that occurs when a plurality of portions of encoded data having different bit rates are generated from the same image data; the technique offsets the timing at which a plurality of encoding processors start their frame-by-frame compression (encoding) processes. A multiplexing processor then transmits the data at equal intervals within a unit time, depending on the amount of encoded data generated within the unit time by each of the encoding processors (see Japanese Patent Publication No. 2004-140651).
In conventional image encoding devices, the amount of encoded data generated by an encoding processor is detected by an amount-of-encoded-data detector and compared with a preset target encoded data amount. When the detected amount exceeds the target, a quantization table is updated so that the quantized coefficients become smaller or more of them are evaluated as zero. The updated quantization table is then used to quantize the data, the quantized data is encoded, and the amount of the encoded data is compared with the target again. These processes are repeated until the amount of encoded data falls below the target encoded data amount, thereby reducing the amount of data. Because updating of the quantization table, quantization, and encoding are performed repeatedly, data transfer is delayed and the frame rate of moving images decreases.
Specifically, in the case of network cameras, for example, when a sudden change occurs in an image (e.g., somebody enters the field of view monitored by the camera), the amount of data to be encoded increases. In this case, the amount of encoded data may suddenly exceed the target encoded data amount, resulting in frame dropping, etc.
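For concreteness only, the conventional control loop described above might be sketched as follows. The quantize() and entropy_encode() stand-ins, the single quantization step, and the iteration limit are illustrative placeholders introduced for this sketch; they are not part of any particular encoder.

```python
import numpy as np

def quantize(coeffs, step):
    # Placeholder quantizer: divide transform coefficients by one step size.
    return np.round(coeffs / step).astype(int)

def entropy_encode(quantized):
    # Placeholder for the variable-length encoder: emit one byte per nonzero coefficient.
    return bytes(abs(int(v)) & 0xFF for v in quantized.ravel() if v != 0)

def encode_with_target(coeffs, step, target_bytes, max_iters=8):
    """Conventional loop: quantize, encode, compare with the target amount, and
    repeat with coarser quantization until the output is small enough."""
    bitstream = entropy_encode(quantize(coeffs, step))
    for _ in range(max_iters):
        if len(bitstream) <= target_bytes:
            break
        step *= 2                                   # coarser steps -> more zero coefficients
        bitstream = entropy_encode(quantize(coeffs, step))
    return bitstream
```

Each pass through this loop repeats quantization and encoding, which is the source of the delay described above.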
The present disclosure describes implementations of an image encoding device and an image encoding method capable of reducing the number of quantization passes, thereby increasing the speed of the compression (encoding) process.
An example image encoding device for generating a plurality of portions of encoded data from the same input image data includes an image encoding processor configured to compress/encode image data, an amount-of-encoded-data detector configured to detect the amount of the first encoded data that is generated, and an amount-of-encoded-data controller configured to determine a quantization parameter for obtaining target amounts of the second and subsequent encoded data, based on the amount of encoded data detected by the amount-of-encoded-data detector.
The example image encoding device may further include a conversion table configured to determine, based on the detected amount of the first encoded data, a multiplier to be applied to the quantization parameter used when the image encoding processor generates the second and subsequent encoded data. In this case, the amount-of-encoded-data controller can determine a quantization parameter for obtaining the target amounts of the second and subsequent encoded data based on the determined multiplier.
Thus, the conversion table is referenced using the amount of the first encoded data to determine a multiplier for the quantization parameter used to generate the second and subsequent encoded data, and a quantization parameter for obtaining the target amounts of the second and subsequent encoded data is determined from that multiplier. As a result, the amounts of the second and subsequent encoded data can be reduced before the quantization and encoding that generate them.
Moreover, in the example image encoding device, the amount-of-encoded-data detector has a function of detecting the amounts of the second and subsequent encoded data. As a result, when the third and subsequent encoded data are generated, an appropriate quantization parameter can be determined based on the amount of the first or second encoded data.
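A minimal sketch of this control is given below, assuming a simple threshold-based conversion table. The table values, the base quantization parameter, and the helper names multiplier_for() and qp_for_stream() are assumptions introduced for illustration; they are not taken from the disclosure.

```python
# Hypothetical conversion table: upper bound on the amount of the first
# encoded data (in bytes) -> multiplier applied to the quantization parameter.
CONVERSION_TABLE = [
    (50_000, 1.0),
    (100_000, 1.5),
    (200_000, 2.0),
    (float("inf"), 3.0),
]

def multiplier_for(first_amount):
    """Look up the multiplier for the detected amount of the first encoded data."""
    for bound, multiplier in CONVERSION_TABLE:
        if first_amount <= bound:
            return multiplier

def qp_for_stream(base_qp, first_amount):
    """Determine the quantization parameter for the second and subsequent
    encoded data before they are quantized and encoded."""
    return round(base_qp * multiplier_for(first_amount))
```

Because the quantization parameter is fixed before the second and subsequent encodings run, no re-quantization loop of the kind described in the background is needed for them.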
According to the present disclosure, the quantization parameter is controlled before quantization and encoding, whereby the number of processing passes can be reduced, resulting in faster compression (encoding) of image data.
Embodiments of the present disclosure will be described hereinafter with reference to the accompanying drawings.
In the imaging system 20 of
According to the configuration of
In addition, the encoded data obtained by the variable-length encoder 43 is input to the amount-of-encoded-data detector 51, which obtains the amount of the encoded data. The amount-of-encoded-data controller 53 calculates a multiplier for the quantization parameter from the conversion table 52, based on the amount of encoded data obtained by the amount-of-encoded-data detector 51, and determines the quantization parameter from that multiplier.
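Continuing the hypothetical sketch shown earlier, the flow through the amount-of-encoded-data detector 51, the conversion table 52, and the amount-of-encoded-data controller 53 might look like the following; the detected amount and the base quantization parameter are example values only.

```python
first_amount = 120_000                         # bytes reported by detector 51 (example value)
qp_second = qp_for_stream(base_qp=26, first_amount=first_amount)
# With the illustrative table above, 120,000 bytes maps to a multiplier of 2.0,
# so the second encoding would use a quantization parameter of 52.
```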
On the other hand, in the configuration of
In the case of I-frames, the output of the quantizer 63 is also input to the inverse quantizer 65 and then transferred through the inverse DCT unit 66 to the reconstructed image generator 67. The result of the motion compensator 70 is simultaneously input to the reconstructed image generator 67. If a block uses inter-frame correlation, the two portions of input data are added and the result is written to the frame memory 68. For I-frames, however, blocks use only intra-frame correlation, so the result of the motion compensator 70 is not input to the reconstructed image generator 67, and the data transferred from the inverse DCT unit 66 is written directly to the frame memory 68. The image data transferred to the frame memory 68 is referred to as a reconstructed image, which is used as a reference image for P-frames and B-frames.
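A rough sketch of this reconstruction path is shown below. The inverse_quantize() and inverse_dct() stand-ins and the frame_memory dictionary are simplified placeholders for the inverse quantizer 65, the inverse DCT unit 66, and the frame memory 68, introduced only for illustration.

```python
import numpy as np

def inverse_quantize(quantized, step):
    # Placeholder for inverse quantizer 65: multiply back by the quantization step.
    return quantized * step

def inverse_dct(coeffs):
    # Placeholder for inverse DCT unit 66; a real encoder applies a 2-D IDCT here.
    return coeffs

def reconstruct_block(quantized, step, prediction=None):
    """Reconstructed image generator 67: intra blocks keep the decoded residual,
    inter blocks add the output of the motion compensator 70."""
    residual = inverse_dct(inverse_quantize(quantized, step))
    if prediction is None:          # intra block (always the case for I-frames)
        return residual
    return residual + prediction    # inter block

frame_memory = {}                   # stand-in for frame memory 68
frame_memory["reference"] = reconstruct_block(np.zeros((8, 8), dtype=int), step=16)
```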
In the case of P-frames and B-frames, image data is input on a block-by-block basis and transferred to the predictive error generator 61 and the motion detector 69. The motion detector 69 receives the input image data, reads out pixel data in the vicinity of the same spatial position as the input image data from the frame memory 68, and performs a motion search to obtain the pixel position having the highest correlation with the input image data. The motion detector 69 then transfers the image data having the highest correlation to the motion compensator 70 as retrieved reference image data and, at the same time, transfers a motion vector indicating that position to the motion vector encoder 71. When intra-frame correlation encoding is selected, the subsequent encoding processes are similar to those for I-frames. When inter-frame correlation encoding is selected, the reference image data is transferred via the motion compensator 70 to the predictive error generator 61, which calculates the difference between the input image data and the reference image data and outputs the difference to the DCT unit 62. The variable-length encoder 64 encodes the quantized image data and outputs the resultant data, along with the motion vector data encoded by the motion vector encoder 71, through the multiplexer 72.
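For illustration, a block-matching motion search of the kind performed by the motion detector 69 might be sketched as follows, using a sum-of-absolute-differences (SAD) criterion; the search radius, the NumPy arrays, and the function name are assumptions of the sketch.

```python
import numpy as np

def motion_search(block, ref_frame, top, left, radius=8):
    """Return the (dy, dx) offset within the search window whose reference block
    has the highest correlation (lowest SAD) with the input block."""
    h, w = block.shape
    best_sad, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] or x + w > ref_frame.shape[1]:
                continue                     # candidate falls outside the reference frame
            candidate = ref_frame[y:y + h, x:x + w]
            sad = np.abs(block.astype(int) - candidate.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv
```

The best-matching block would then be passed to the motion compensator 70 and the offset to the motion vector encoder 71, as described above.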
In
According to the embodiment of the present disclosure thus configured, it is possible, for example, to determine which of the second and third encoded data will be the larger before the moving image encoding processor 60 generates them. In other words, the amount of encoded data can be reduced before quantization and encoding.
Although “H.264/60 fps” is used as the reference first encoding in the above example, other combinations of “encoding technique/frame rate” may be used. Moreover, the multiplier may be calculated from bit rates or frame types instead of frame rates. A case where bit rates are used is shown in
In
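As one hypothetical variant, the conversion table could be keyed by the target bit rate of the second encoding rather than by its frame rate; the bit rates and multiplier values below are invented for illustration and are not taken from the disclosure.

```python
# Target bit rate of the second encoding (Mbps) -> multiplier for the quantization parameter.
BITRATE_MULTIPLIERS = {2: 2.0, 4: 1.5, 8: 1.0, 16: 0.8}

def multiplier_for_bitrate(mbps):
    # Lower target bit rates call for coarser quantization, hence larger multipliers.
    return BITRATE_MULTIPLIERS.get(mbps, 1.0)
```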
The imaging processing in the image encoding device 25 of the embodiment of the present disclosure is not necessarily applied only to a signal based on a subject image formed on the imaging sensor 22 via the optical system 21; it may, of course, also be applied to, for example, the processing of an image signal input as an electrical signal from an external device.
As described above, according to the present disclosure, the compression (encoding) of an image can be sped up. Therefore, the present disclosure is useful for image encoding devices which require a control for obtaining a predetermined amount of encoded data, such as network cameras including surveillance cameras, television telephones, etc.
This is a continuation of PCT International Application PCT/JP2009/003308 filed on Jul. 14, 2009, which claims priority to Japanese Patent Application No. 2008-251111 filed on Sep. 29, 2008. The disclosures of these applications including the specifications, the drawings, and the claims are hereby incorporated by reference in their entirety.