The present application claims priority to and incorporates by reference the entire contents of Japanese priority document No. 2004-266653, filed on Sep. 14, 2004.
1. Field of the Invention
The present invention relates to an image processing device, an image processing program, and a computer-readable recording medium which perform block-basis editing of specific blocks in a codestream of compression coded image data according to the JPEG2000 algorithm.
2. Description of the Related Art
The JPEG2000 algorithm is known as a compression coding algorithm that can encode a multi-level image (especially a color image) reversibly, and it has been standardized by the ITU-T and the ISO. Refer to ISO 15444-1, JPEG2000 Image Coding System (JPEG2000).
Digital image processing systems in recent years tend to use higher resolutions and larger numbers of gradations for higher image quality. While this tendency improves the image quality because the amount of information contained in the image data increases, a problem exists in that the amount of information to be handled also increases.
If an image which conventionally has a value of two gradations (white or black) is made into an image having a value of 256 gradations, the amount of information is increased by 8 times, and the storage capacity needed to store the image data is also increased by 8 times. A problem exists in that the manufacturing cost of the system increases. In order to reduce the required storage capacity, compression coding of the image is carried out.
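For illustration only (not part of the claimed subject matter), the 8-times increase follows directly from the number of bits needed per pixel, as the following sketch computes:

```python
import math

def bits_per_pixel(gradations):
    # Minimum number of bits needed to represent the given number of gradations.
    return math.ceil(math.log2(gradations))

bilevel = bits_per_pixel(2)       # 1 bit for a white-or-black image
multilevel = bits_per_pixel(256)  # 8 bits for 256 gradations
ratio = multilevel // bilevel     # the amount of information grows 8 times
```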
One approach is a compression coding algorithm that encodes a multi-level image efficiently. A representative example of a compression coding algorithm for multi-level images (including color images) is the JPEG algorithm, which is recommended as a standard by the ISO and the ITU-T.
The JPEG algorithm includes a DCT system, which is the baseline system, and a DPCM system, which is an optional system. The former is an irreversible compression coding algorithm (called a lossy coding algorithm) which carries out encoding by reducing part of the amount of information of the original image in a manner that is not noticeable to the human visual system. The latter is a reversible compression coding algorithm (called a lossless coding algorithm) which carries out encoding without losing any of the information of the original image.
The DCT system encodes the image information after it is converted into frequency information using a discrete cosine transform. The DPCM system, on the other hand, predicts a target pixel level from the neighboring pixels and carries out the encoding based on the prediction error.
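The DPCM prediction can be sketched as follows. This is a minimal illustration assuming the simplest previous-pixel predictor; the actual JPEG lossless mode offers several predictors and then entropy-codes the prediction errors:

```python
def dpcm_encode(pixels):
    # Transmit only the error between each pixel and its predecessor.
    prev = 0
    errors = []
    for p in pixels:
        errors.append(p - prev)
        prev = p
    return errors

def dpcm_decode(errors):
    # Accumulate the errors to reconstruct the pixels losslessly.
    prev = 0
    pixels = []
    for e in errors:
        prev += e
        pixels.append(prev)
    return pixels

row = [100, 101, 103, 103, 90]
assert dpcm_decode(dpcm_encode(row)) == row  # reversible round trip
```

Because the errors are small for smooth image regions, they compress better than the raw pixel values.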
If the image quality is considered more important, using the efficient DCT system is appropriate. If the preservation of image information is considered more important, using the reversible DPCM system is appropriate, since the DCT system is irreversible.
Although the ideal system would be both reversible and highly efficient, there is a problem in that high efficiency cannot be achieved with a reversible system such as the existing DPCM system. The current trend is to use the DCT system for compression of multi-level images with a comparatively large number of gradations, which are commonly handled on a personal computer (PC) etc.
However, in the case of the DCT system, if the compression ratio is made high, characteristic block distortion and mosquito noise occur at contour parts, and the image quality deteriorates markedly. Especially in the case of a character image, this tendency is pronounced and the image-quality problem becomes serious.
Although the JPEG algorithm is optimal for uses in which the storage capacity of an image is to be reduced, it is not the best for uses such as the image edit processing performed by a digital copier. This is because the JPEG algorithm cannot specify the position of the image in the state of the codestream. In other words, it is impossible for the JPEG algorithm to decode only an arbitrary portion of a specified image.
Therefore, in order to perform edit processing of compression coded image data, the entire image data must be decoded, the edit processing must be performed on the reconstructed image after the decoding, and, if required, the reconstructed image after the editing must again be subjected to compression coding. There is a problem in that a memory with a large storage capacity is needed for storing the reconstructed image after the decoding. For example, in the case of an A4-size, 600-dpi, RGB color image, a storage capacity of about 100 Mbytes is required.
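The 100-Mbyte figure can be checked with rough arithmetic (approximate A4 dimensions in inches are assumed; this is an illustrative estimate only):

```python
# Rough storage estimate for an uncompressed A4, 600-dpi, RGB image.
A4_WIDTH_IN, A4_HEIGHT_IN = 8.27, 11.69      # A4 paper, approximately, in inches
DPI = 600

width_px = int(A4_WIDTH_IN * DPI)            # about 4962 pixels
height_px = int(A4_HEIGHT_IN * DPI)          # about 7014 pixels
bytes_uncompressed = width_px * height_px * 3  # 3 bytes per RGB pixel
# bytes_uncompressed is roughly 100 Mbytes, matching the figure above
```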
One countermeasure for solving the above problem of the storage capacity of the memory when performing the edit processing is to use a fixed-length compression coding algorithm. Compression coding algorithms may be divided into variable-length image coding algorithms and fixed-length image coding algorithms depending on the length of the code word after the coding.
The advantages of the former are that its encoding efficiency is higher than that of the latter and that reversible coding is possible. On the other hand, the advantages of the latter are that the position of the image before the coding can be specified in the state of the codestream and that decoding of only an arbitrary portion of the image is possible. This means that edit processing of the image can be carried out without changing the state of the codestream.
However, there is a problem in that the encoding efficiency of fixed-length coding is generally low and reversible coding is difficult compared with variable-length coding.
In order to eliminate the above problems of the JPEG system, the compression coding algorithm called JPEG2000 has attracted attention in recent years. The JPEG2000 algorithm is a transform compression coding algorithm using the wavelet transform, and it is predicted that the JPEG2000 algorithm will replace the JPEG algorithm in the field of still images, including color images, from now on.
In addition to eliminating the degradation of image quality at low bit rates (which is a problem of the JPEG algorithm), the JPEG2000 algorithm provides many new functions for practical uses.
One such new function included in the JPEG2000 algorithm is tile processing. In tile processing, the image is divided into a number of small rectangular areas (tiles), and each of these small areas is encoded independently. By using tile processing, it is possible to specify an area of the image in the state of the codestream, and edit processing of the image is attained without changing the state of the codestream.
However, a problem still remains in JPEG2000. When edit processing of a particular code block in the codestream is carried out, there are cases where the data amount of the code block after the editing is larger than the data amount of the code block before the editing.
In this case, in order to insert the new code block after the editing into the codestream, it is necessary to rearrange the codestream so that the space needed for insertion of the new code block is created at the location of the code block data following the edited portion of the codestream.
Taking processing on a memory as an example, in order to secure a memory area required to insert the new code block data, movement of code data on the memory is needed. The JPEG2000 algorithm is highly efficient and includes various functions, and its processing is complicated. Compared with the JPEG algorithm, the JPEG2000 algorithm needs 4 to 5 times longer processing time when processed in software.
When the above-mentioned edit processing is performed, the processing time becomes very long and a serious problem arises with respect to the operability of the user.
Moreover, as described above, it is possible to specify an area of the image in the state of the codestream by using tile processing, and edit processing of the image is attained without changing the state of the codestream. However, the processing time is proportional to the complexity of processing, and the processing of JPEG2000 is complicated. The problem remains unresolved in that, in certain cases, the codestream must be rearranged in order to create the space needed for insertion of the new code block at the location of the code block data following the edited portion of the codestream.
An image processing device, an image processing program, and a recording medium are described. In one embodiment, the image processing device comprises a block-basis edit processing unit to perform a block-basis edit processing of specific blocks in an initial codestream of compression coded image data in a predetermined format to create blocks of edited code data; and a codestream control unit to combine the edited code data of the blocks created by the block-basis edit processing unit and non-edited code data of remaining blocks in the initial codestream which are not subjected to the edit processing, to create a new codestream of edited compression coded image data.
Other embodiments, features and advantages of the present invention will be apparent from the following detailed description when read in conjunction with the accompanying drawings.
An embodiment of the present invention comprises an improved image processing device in which the above-described problems are eliminated.
Another embodiment of the present invention includes an image processing device that efficiently performs, with high speed, a block-basis editing of specific blocks in a codestream of compression coded image data to create edited code blocks of a new codestream by combining the edited code blocks after the editing and the non-edited code blocks in the initial codestream.
Embodiments of the present invention include an image processing device which comprises: a block-basis edit processing unit to perform a block-basis edit processing of specific blocks in an initial codestream of compression coded image data in a predetermined format to create blocks of edited code data; and a codestream control unit to combine the edited code data of the blocks created by the block-basis edit processing unit and non-edited code data of remaining blocks in the initial codestream which are not subjected to the edit processing, to create a new codestream of edited compression coded image data.
According to the above-described image processing device of the invention, the edited code data of the blocks created by the block-basis edit processing unit are combined with the non-edited code data of the remaining blocks in the initial codestream which are not subjected to the edit processing, to create a new codestream of edited compression coded image data. One of the plurality of combining units which combines the edited code data of the created blocks with the non-edited code data of the remaining blocks is selected while monitoring the non-edited code data of the remaining blocks in the initial codestream before the editing, so that the time needed for edit processing of the whole codestream is kept near the minimum. Accordingly, it is possible to perform high-speed edit processing efficiently.
To facilitate understanding of the subject matter of the invention, a description will be given of the outline of the hierarchical coding algorithm and JPEG2000 algorithm prior to giving a description of the preferred embodiments of the invention.
The image processing system of
In the case of a conventional JPEG algorithm, the discrete cosine transform (DCT) is used. In the case of the system of
Compared with the DCT, the DWT has the advantage that the image quality in high-compression ranges is high. This is one of the reasons why the JPEG2000 algorithm, the successor of JPEG, has adopted the DWT.
Moreover, with the hierarchical coding algorithm, another difference is that the system of
In the tag processing unit 105, compressed image data are generated as a codestream at the time of the image compression operation, and the interpretation of the codestream required for image expansion is performed at the time of the image expansion operation.
JPEG2000 algorithm includes various convenient functions with the codestream. For example, as shown in
The color-space transform (or inverse-transform) unit 101 is connected to the I/O unit of the original image in many cases.
The color-space transform unit 101 is equivalent to, for example, a unit which performs the color-space transform to the YUV or YCbCr colorimetric system from the RGB colorimetric system, which includes the R(red)/G(green)/B(blue) components of the primary-colors system, or from the YMC colorimetric system, which includes the Y(yellow)/M(magenta)/C(cyan) components of the complementary-colors system.
Moreover, the color-space inverse-transform unit 101 is equivalent to the inverse color-space transform that is the reverse processing to the above color-space transform.
Next, a description will be given of the color space transform (or inverse-transform) unit 101 and the wavelet transform (or inverse transform) unit 102 with reference to
Generally, the color image is divided into rectangular portions where each color component (RGB primary-colors system) of the original image is arranged as shown in
This rectangular portion is generally called a block or a tile, and it is common in JPEG2000 to call such a divided rectangular portion a tile. Hereinafter, such a divided rectangular portion is referred to as a tile. In the example of
Each tile 112 of the component 111 (which is, in the example of
After the data of each tile 112 of each component 111 are input into the color-space transform (or inverse-transform) unit 101 of
The tile of the original image is initially obtained. The 2-dimensional wavelet transform is performed on the original image tile (0LL) (decomposition level 0) by the wavelet transform unit 102, and the sub-bands (1LL, 1HL, 1LH, 1HH) shown at decomposition level 1 are separated.
Subsequently, the 2-dimensional wavelet transform is performed on the low-frequency component 1LL in this layer by the wavelet transform unit 102, and the sub-bands (2LL, 2HL, 2LH, 2HH) shown at decomposition level 2 are separated.
Similarly, the 2-dimensional wavelet transform is also performed on the low-frequency component 2LL by the wavelet transform unit 102, and the sub-bands (3LL, 3HL, 3LH, 3HH) shown at decomposition level 3 are separated in turn.
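The repeated separation of sub-bands described above can be sketched as the following recursion. This is an illustration only, using a simple Haar averaging filter rather than the 5/3 or 9/7 filters actually specified by JPEG2000:

```python
def haar_1d(seq):
    # One level of low-pass (average) and high-pass (difference) filtering.
    low = [(seq[i] + seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    high = [(seq[i] - seq[i + 1]) / 2 for i in range(0, len(seq), 2)]
    return low, high

def transpose(mat):
    return [list(col) for col in zip(*mat)]

def dwt_2d(tile):
    # Filter every row, then every column, yielding the LL/HL/LH/HH sub-bands.
    rows = [haar_1d(r) for r in tile]
    low_rows = [r[0] for r in rows]
    high_rows = [r[1] for r in rows]

    def filter_cols(mat):
        cols = [haar_1d(c) for c in transpose(mat)]
        return transpose([c[0] for c in cols]), transpose([c[1] for c in cols])

    LL, LH = filter_cols(low_rows)
    HL, HH = filter_cols(high_rows)
    return LL, HL, LH, HH

def decompose(tile, levels):
    # Re-apply the 2-D transform to the remaining LL band, producing
    # decomposition levels 1, 2, 3, ... as described above.
    detail = []
    ll = tile
    for _ in range(levels):
        ll, hl, lh, hh = dwt_2d(ll)
        detail.append((hl, lh, hh))
    return ll, detail  # ll is the final nLL band (e.g. 3LL when levels == 3)
```

For a flat tile, all the detail sub-bands come out zero and only the low-frequency band carries information, which is why the higher-numbered detail bands compress so well.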
As shown in
For example, when the number of decomposition levels is set to 3, the sub-band components (3HL, 3LH, 3HH, 2HL, 2LH, 2HH, 1HL, 1LH, 1HH) indicated by the dotted area serve as candidates for the coding, and the sub-band component 3LL is not coded.
Subsequently, the bits to be coded are designated in the specified coding order, and the context is generated from the bits surrounding each target bit by the quantization (inverse quantization) unit 103 shown in
The wavelet coefficients after the quantization are divided, for each sub-band, into non-overlapping rectangles called precincts. The precinct is introduced in order to use the memory efficiently in implementation.
As shown in
Furthermore, each precinct is divided into non-overlapping rectangular code blocks. The code block serves as the basic unit when performing the entropy coding.
The wavelet coefficients may be subjected to the quantization and entropy encoding after the discrete wavelet transform (DWT) is performed. However, according to the JPEG2000 algorithm, it is also possible to divide the wavelet coefficients into bit-plane components and to order the bit-planes for each pixel or code block, in order to raise the encoding efficiency.
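The division into bit-plane components can be sketched as follows, assuming coefficient magnitudes are simply sliced from the most significant bit downward (the actual JPEG2000 coder additionally codes each plane in three passes with sign handling):

```python
def bit_planes(coeffs, num_planes):
    # Split coefficient magnitudes into bit-planes, most significant first.
    planes = []
    for b in range(num_planes - 1, -1, -1):
        planes.append([(abs(c) >> b) & 1 for c in coeffs])
    return planes

planes = bit_planes([5, 2, 7, 0], 3)
# The first plane holds each coefficient's most significant bit; decoding
# more planes refines the magnitudes, which is the basis of scalability.
```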
The outline of the typical layers with respect to the tile number 0/the precinct number 3/the code block number 3 is also shown in
In the entropy coding/decoding unit 104 shown in
The tag information, called the main header, is disposed at the beginning of this codestream. After the main header, the tile-part header of each tile and the coded data of each tile are disposed continuously. The tag (end of codestream) is disposed at the end of the codestream.
On the other hand, at the time of decoding of the codestream, the image data is generated from the codestream of each tile of each component which is the reverse processing to the coding of the image data.
In this case, the tag processing unit 105 interprets the tag information added to the codestream that is inputted from the exterior, decomposes the codestream into the codestream of each tile of each component, and performs decoding processing for every codestream of each tile of each of that component.
At this time, the position of the bit to be decoded is determined in order based on the tag information in the codestream, and the context is generated in the quantization/inverse-quantization unit 103 from the row of surrounding bits (whose decoding is already completed) of the target bit position.
In the entropy coding/decoding unit 104, the codestream is decoded by probability estimation from this context, the target bit is generated, and it is written at the position of the target bit.
The decoded data is spatially divided for every frequency band, and each tile of each component of the image data is restored by performing the 2-dimensional inverse wavelet transform in the 2-dimensional wavelet inverse-transform unit 102.
The restored data are changed into the image data of the original colorimetric system by the color-space inverse-transform unit 101.
In the entropy coding/decoding unit 104 of
Finally, the tag processing unit 105 performs processing which attaches the tag to the coded data from the entropy coding unit to form the codestream.
The structure of the codestream is briefly shown in
On the other hand, at the time of the decoding, the image data is generated from the codestream of each tile of each component contrary to the time of coding. As shown in
The position of the bit to be decoded is determined in the sequence based on the tag information in the codestream. In the quantization/inverse-quantization unit 103, the context is generated from the list of surrounding bits (the decoding of which is already completed) of the target bit position.
In the entropy coding/decoding unit 104, the codestream is decoded by probability estimation from this context and the codestream, so that the target bit is generated and written at the estimated position of the target bit.
The decoded data is spatially divided for every frequency band, and each tile of each component of the image data is restored by performing the 2-dimensional inverse wavelet transform in the 2-dimensional wavelet transform/inverse-transform unit 102. The restored data is transformed into the image data of the original color system at the color-space transform/inverse-transform unit 101.
The above description relates to the outline of the JPEG2000 algorithm, which deals with the image processing method for a still image, or a single frame. The JPEG2000 algorithm is extended to the Motion-JPEG2000 algorithm, which deals with the image processing method for a moving picture including a plurality of frames.
Next, a description will be given of an example of the codestream format according to JPEG2000 algorithm.
The actual code data starts with the SOT (start of tile-part) marker, and further includes the tile header, the SOD (start of data) marker, and the tile data (code data).
After the code data that is equivalent to the whole image data, the EOC (end of codestream) marker which indicates the end of the codestream is added.
As shown in
On the other hand,
The marker and marker segments that are used according to JPEG2000 will now be explained.
Every marker comprises 2 bytes (the first byte is “0xff” and the second byte has any value in the range “0x01” to “0xfe”). The markers and marker segments can be classified into the following six types:
Among these, the markers related to embodiments of this invention are (1) and (2). A detailed description thereof will be given below.
First, the delimiting marker and marker segment will be explained. The delimiting marker and the marker segment are indispensable, and there are SOC, SOT, SOD, and EOC. A codestream start marker (SOC) is added to the head of a codestream. A tile start marker (SOT) is added to the head of a tile-part codestream.
Next, the fixed information marker segment will be explained. This is a marker segment that describes information about the image, and the SIZ marker segment corresponds to this. The SIZ marker segment is required in the main header immediately after the SOC marker. The length of the SIZ marker segment varies depending on the number of components.
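The two-byte marker structure described above can be sketched as follows; the marker values (SOC = 0xff4f, SIZ = 0xff51, SOT = 0xff90, SOD = 0xff93, EOC = 0xffd9) are those defined by the JPEG2000 standard:

```python
# Marker codes defined by the JPEG2000 standard (Part 1).
SOC, SIZ, SOT, SOD, EOC = 0xff4f, 0xff51, 0xff90, 0xff93, 0xffd9

def read_marker(data, pos):
    # A marker is 0xff followed by a byte in the range 0x01..0xfe.
    if data[pos] != 0xff or not (0x01 <= data[pos + 1] <= 0xfe):
        raise ValueError("not a marker at offset %d" % pos)
    return (data[pos] << 8) | data[pos + 1]

stream = bytes([0xff, 0x4f, 0xff, 0x51])  # SOC immediately followed by SIZ
assert read_marker(stream, 0) == SOC
assert read_marker(stream, 2) == SIZ
```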
The positional relationship of the image area and a tile in JPEG2000 will be explained.
As is apparent from the composition of the marker segment shown in
As shown in
As described above, the image arranged on the reference grid is divided into a number of rectangular small areas called “tiles” for the processing at the time of the compression coding.
Since each tile must be able to be encoded and decoded solely and independently, referring to pixels beyond the boundary of that tile is impossible. Every tile is XTsiz reference grid points wide and YTsiz reference grid points high. The upper left corner of the first tile is offset from the upper left corner of the reference grid by (XTOsiz, YTOsiz).
The tile grid offsets (XTOsiz, YTOsiz) are constrained to be no greater than the image area offsets. This is expressed by the following formulas:
0<=XTOsiz<=XOsiz;
0<=YTOsiz<=YOsiz.
Also the tile size plus the tile offset shall be greater than the image area offset. This ensures that the first tile will contain at least one reference grid point from the image area. This is expressed by the following formulas:
XTsiz+XTOsiz>XOsiz;
YTsiz+YTOsiz>YOsiz.
The number of tiles in the X direction (horizontal number of tiles) is expressed by the formula: “horizontal number of tiles”=ceil((Xsiz−XTOsiz)/XTsiz), and the number of tiles in the Y direction (vertical number of tiles) is expressed by the formula:
“vertical number of tiles”=ceil((Ysiz−YTOsiz)/YTsiz),
where ceil denotes rounding up to the nearest integer, so that a partially covered tile at the right or bottom edge is still counted.
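The tile-count formulas can be sketched as follows, assuming the divisions round up as in the JPEG2000 specification:

```python
import math

def num_tiles(Xsiz, Ysiz, XTsiz, YTsiz, XTOsiz=0, YTOsiz=0):
    # Number of tiles covering the image area, rounding up so that a
    # partially covered tile at the right/bottom edge is still counted.
    horizontal = math.ceil((Xsiz - XTOsiz) / XTsiz)
    vertical = math.ceil((Ysiz - YTOsiz) / YTsiz)
    return horizontal, vertical

# A 1000 x 700 image with 256 x 256 tiles needs 4 x 3 = 12 tiles.
assert num_tiles(1000, 700, 256, 256) == (4, 3)
```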
A “tile” is a rectangular array of points on the reference grid into which an “image” is divided, and, in the case of the number of partitions=1, the condition: “image”=“tile” is met. A tile-component means all the samples of a given component in a tile. There is a tile-component for every component and for every tile. A “precinct” is a sub-division of a tile-component within each resolution, used for limiting the size of packets. A “sub-band” is a group of transform coefficients resulting from the same sequence of low-pass and high-pass filtering operations, both vertically and horizontally. A precinct consists of either a group of HL, LH and HH sub-bands or a single LL sub-band. A “code block” is a rectangular grouping of coefficients from the same sub-band of a tile-component.
A packet is a part of the bit stream comprising a packet header and the coded data from one layer of one decomposition level of one component of a tile. A layer is a collection of coding-pass compressed data from one or more code blocks of a tile-component. Layers have an order for encoding and decoding that must be preserved. Since a layer is, roughly speaking, a part of the codestream corresponding to a bit plane of the whole image, the image quality becomes higher as the number of decoded layers increases. Therefore, if all the layers are collected, the codestream of all the bit planes of the whole image region is obtained.
Next, a description will be given of the first preferred embodiment of the invention.
A personal computer, a workstation, etc. may be used as the image processing system 91. As shown in
Other components of the image processing system 91 connected to the bus 94 via predetermined interfaces include the magnetic storage 95 (hard disk drive), the network interface 101, the input unit 96 (such as a mouse, a keyboard, etc.), the display device 97 (such as a LCD and a CRT), and the disk drive 99 which reads a recording medium 98 (such as an optical disk). The network interface 101 is provided to connect the image processing system 91 with the external network 100 for communications with the network 100.
The network interface 101 is connectable with a WAN, such as the Internet, through the network 100. As the recording medium 98, media of various types including optical disks, such as CD and DVD, magneto-optical disks, and flexible disks, may be used. As for the disk drive 99, any of an optical disk drive, a magneto-optical disk drive, a flexible disk drive, etc. may be used according to the kind of the recording medium 98.
The image processing program according to the invention is stored in the magnetic storage 95. This image processing program may be read out from the recording medium 98 by the disk drive 99, or may be downloaded from WAN, such as the Internet, and such image processing program is installed in the magnetic storage 95. The image processing system 91 is set in an operable state by this installation. The image processing program may operate on a predetermined OS (operating system). The image processing program may constitute a part of specific application software.
In the image processing system 91 of the above-described composition, the processing which will be later mentioned with reference to
The image processing system 91 performs the block-basis edit processing on the specified positions of the original image. As shown in
The block-basis detecting unit 31 receives a codestream before the edit processing (an initial codestream of compression coded image data created according to, for example, JPEG2000 algorithm) inputted by the image processing program, and extracts, from the codestream, specific blocks that are designated for the edit processing. The remaining blocks in the codestream that are not designated for the edit processing are sent to the codestream control unit 33.
The block-basis edit processing unit 32 performs the block-basis edit processing of the blocks extracted from the codestream by the block-basis detecting unit 31, and creates blocks of the edited code data as a result of the edit processing.
The codestream control unit 33 combines the edited code data of the blocks that are created by the block-basis edit processing unit 32 and the non-edited code data of the remaining blocks which are not designated for the edit processing, to create a new codestream of the edited compression coded image data.
Briefly speaking, the block-basis detecting unit 31 detects whether each block of the codestream inputted to the block-basis detecting unit 31 is designated for the edit processing.
Suppose that the JPEG2000 codestream using the tiles is taken for an example. As shown in
The code data of the blocks which are detected by the block-basis detecting unit 31 as being designated for the edit processing are sent to the block-basis edit processing unit 32. In the block-basis edit processing unit 32, the predetermined block-basis edit processing of the specific blocks in the initial codestream is carried out.
As shown in
Finally, the code blocks after the editing are encoded by the same method as that used for the initial codestream of the image, so that a codestream of the same form as the original codestream is created (step S3).
Some examples of the edit processing performed in the present embodiment include color modification, image composition, etc. After the edit processing, the newly created codestream is sent to the codestream control unit 33.
In the codestream control unit 33, the block-basis codestream of the original which is not an editing object and the block-basis codestream newly created by the block-basis edit processing unit 32 are combined, and a codestream whose format is adjusted as a whole is created.
Although various methods of combining the codestreams can be considered in this case, it is desirable to select a suitable combining unit according to the code amount.
In this embodiment, the combining unit 52 performs combining scheme A when the code amount after the editing is larger than the code amount before the editing, and the combining unit 53 performs combining scheme B when the code amount after the editing is smaller than the code amount before the editing. One of the combining units 52 and 53 is chosen by the selector 54, and the codestream after the editing is created. The code amount comparison unit 51 detects which of the code amount before the editing and the code amount after the editing is larger, and one of the combining units 52 and 53 is chosen by the selector 54 based on the result of the code amount detection. The combining processing is not applied to the non-edited portion of the codestream (i.e., the blocks of the codestream which are not designated for the edit processing).
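The code-amount comparison and scheme selection can be sketched as follows; byte strings stand in for block codestreams, and the scheme labels are placeholders for the processing performed by the combining units 52 and 53:

```python
def select_combining_scheme(original_block, edited_block):
    # Mirror of the code amount comparison unit 51 and selector 54:
    # compare code amounts and pick the appropriate combining unit.
    if len(edited_block) > len(original_block):
        return "scheme A"  # e.g. append the edited block to the codestream tail
    return "scheme B"      # e.g. overwrite in place, padding the remainder

# Edited block grew: scheme A; edited block shrank: scheme B.
assert select_combining_scheme(b"\x01" * 10, b"\x02" * 12) == "scheme A"
assert select_combining_scheme(b"\x01" * 10, b"\x02" * 8) == "scheme B"
```

Because the choice is made per block while monitoring the untouched blocks, only the edited portion ever needs reworking, which is the source of the speed-up described above.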
If the above processing is performed, only the particular code data of the codestream is processed. Since the whole codestream is controllable, it is possible to select the insertion method of the new codestream after the editing that gives the shortest processing time by referring to the codestream before the editing, and high-speed edit processing can be realized.
Various methods can be considered as the concrete unit of code data combination for combining, in the codestream control unit 33, the code data of the blocks of the initial codestream which are not editing objects and the newly created block-basis codestream after the editing.
The codestream control unit 33 (or the combining unit 52) in this embodiment is provided to perform, when the amount of the edited code data of the newly created block is larger than the amount of the initial code data of that block before the editing, the combining such that the edited code data of the newly created block is added to the tail end of the new codestream as shown in
As shown in
If this is done, high-speed access to the added codestream becomes possible (in consideration of the re-arrangement of the whole codestream, it is treated as the regular codestream corresponding to the image of the block unit of
As for a codestream format, it is desirable that the original header information is not rewritten even if the above-mentioned processing is performed.
According to this method, the problem of having to relocate the subsequent data in order to insert a codestream at the original position is eliminated, and high-speed edit processing can be realized.
Although the new code data is arranged immediately after the tail end of the codestream in the example of
However, in that case, it is necessary to manipulate the code length information in the codestream of the last block (so that the code length of the new codestream is added to the code length of the last codestream), and to perform format adjustment of the whole codestream.
The codestream control unit 33 (or the code amount comparison unit 51) in this embodiment includes the unit which is provided to perform a compression coding of the edited code data of the created block when the amount of the edited code data after the editing is larger than the amount of the initial code data before the editing, so that the amount of the compressed edited code data does not exceed the amount of the initial code data.
Especially, the JPEG2000 system is a scalable compression coding algorithm in which lossless and lossy coding are possible with the same configuration, and regulation of the code amount is possible without decoding and re-encoding the codestream.
If this feature is used, the code amount of the once-created codestream can be changed without decoding and re-encoding it. Thus, the codestream whose amount is nearest to the code amount before the editing and which provides the highest image quality can be created at high speed.
In this case, even when the code amount after the editing does not agree with the code amount before the editing, the header information 42 after the editing is made the same as that before the editing. If it is carried out like this, consistency as the whole codestream is maintained.
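The code-amount regulation described above exploits the layered structure of JPEG2000 code data: later quality layers can simply be dropped until the edited block fits the original code amount. The following is a hedged sketch of that idea; the list of layer byte strings is an illustrative stand-in for real JPEG2000 packet data, not an actual codec interface.

```python
# Sketch: truncate an edited block's layered code data so that its amount
# does not exceed the code amount of the block before the editing.
from typing import List

def fit_to_budget(layers: List[bytes], budget: int) -> bytes:
    """Keep the largest prefix of quality layers whose total size does not
    exceed 'budget'; the kept prefix yields the highest image quality that
    still fits in the original slot, without re-encoding the image."""
    out = bytearray()
    for layer in layers:
        if len(out) + len(layer) > budget:
            break                 # dropping later layers only lowers quality
        out += layer
    return bytes(out)
```

Because truncation operates directly on the code data, the quality/size trade-off is decided at high speed, which matches the behavior attributed above to the code amount comparison unit 51.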
It is possible to perform the edit processing of a codestream at high speed, and to create the codestream after the editing at high speed. However, from the viewpoint of the codestream format, a redundant portion will exist relative to the codestream which should essentially exist.
For example, the codestream created by the processing performed when the code amount of the block after the editing is large, which has been explained with reference to
When the amount of the code block after the editing is smaller than the amount of the code block before the editing, the codestream control unit 33 (or the combining unit 53) performs the combining such that dummy code data (meaningless code data) is added to the part of the created block which is not filled with the edited code data as shown in
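The dummy-data padding just described can be sketched as below. This is a simplified illustration under the assumption that each block occupies a fixed-size slot in the codestream; the filler byte value is arbitrary and hypothetical.

```python
# Sketch: when the edited code data is smaller than the original slot, pad
# the unused remainder with dummy (meaningless) code data so that the
# positions of the following blocks in the codestream are unchanged.
def pad_to_slot(edited: bytes, slot_size: int, filler: bytes = b"\x00") -> bytes:
    if len(edited) > slot_size:
        raise ValueError("edited code data does not fit the original slot")
    return edited + filler * (slot_size - len(edited))
```

Padding keeps the offsets of all subsequent blocks intact, at the cost of the redundant portion discussed above.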
When this point is taken into consideration, it is desirable to maintain the codestream which should essentially exist by suitably rearranging only a required portion of the codestream, if required, in consideration of the whole processing time.
The insertion method of the code data after the editing is determined by the codestream control unit 33 in consideration of the whole processing time, and, if required, only a necessary part of the codestream is rearranged suitably. According to this method, the useless code information can be deleted without reducing the processing speed, and efficient creation of a new codestream is attained.
Next, a description will be given of the second preferred embodiment of the invention.
The reading unit 11 is a scanner which optically reads the image of an original document. The reading unit 11 focuses the reflected light of the lamp irradiation light from the original document onto a photoelectric transducer, such as a CCD (charge-coupled device), by means of an optical system including a mirror, a lens, etc.
This photoelectric transducer is mounted on the SBU (sensor board unit) 12. In the SBU 12, the image signal from the reading unit 11 is converted into an electrical signal by the photo detector, and this electrical signal is converted into digital image data. Then, the SBU 12 outputs the digital image data to the CDIC (compression/decompression and data interface control unit) 13.
The digital image data outputted from the SBU 12 is inputted into the CDIC 13. The CDIC 13 controls the transmission of all the image data between the functional devices and the data bus. With respect to the image data, the CDIC 13 carries out the data transfer between the SBU 12, the parallel bus 14, and the IPP (image-processing processor) 15, and carries out the communication of image data between the CDIC 13 and the system controller (CPU) 16, which controls the whole copier system, and between the CDIC 13 and the process controller 27. Reference numerals 16a and 16b denote the ROM and the RAM which are used by the system controller 16.
The image signal from the SBU 12 is also transmitted to the IPP 15 through the CDIC 13, and the IPP 15 corrects the signal degradation of the scanner system which accompanies the quantization of the digital image signal and the optical system. The resulting image signal is again outputted to the CDIC 13.
In the copier system 1, there are a job which stores the image read by the reading unit 11 in the memory and reuses the stored image, and a job which does not store the image in the memory. Each of the jobs will be explained.
As an example of the job storing the image in the memory, when copying two or more sheets of the same document, the reading operation of the original document is performed only once by the reading unit 11, the read image is stored in the memory, and the accumulated data is read out two or more times. As an example in which the memory is not used, when copying only one sheet of the document, the read image is simply printed as it is, so that it is not necessary to perform the memory access.
First, when the memory is not used, the image data transmitted to the CDIC 13 from the IPP 15 are again returned to the IPP 15 from the CDIC 13. The quality-of-image processing for converting the luminance data based on the photo detector into the area gradation data is performed by the IPP 15. The image data after the quality-of-image processing are transmitted to the VDC (video data controller) 17 from the IPP 15.
The VDC 17 performs post-processing concerning the dot arrangement and pulse control for dot reproduction on the image data which has been converted into the area gradation data, and a reproduced image is formed on a copy sheet by the imaging unit 18, which is the printer engine performing image formation by the electrophotographic printing method.
Besides the electrophotographic printing method, any of various printing methods, such as the inkjet printing method, the sublimation type thermal printing method, the film photo method, the direct thermal printing method, and the melting type thermal printing method, can be used as the printing method of the imaging unit 18.
Next, the flow of image data when the image data is stored in the memory and additional processing (for example, image rotation, image composition, etc.) is performed at the time of retrieving the stored image data will be explained.
The image data transmitted to the CDIC 13 from the IPP 15 is sent to the IMAC (image memory access control) 19 via the parallel bus 14 from the CDIC 13. In the IMAC 19, the access control of the MEM (memory module) 20 which is the storage of the image data, the expansion of the image data for printing out to an external PC (personal computer) 21, and the compression/decompression of the image data for making effective use of the MEM 20 are performed based on the control of the system controller 16. The image data sent to the IMAC 19 is stored in the MEM 20 after the data compression, and the accumulated image data is read out, if needed. The read image data is expanded and reconstructed into the original image data, and the reconstructed image data is returned to the CDIC 13 via the parallel bus 14 from the IMAC 19.
After the image data is transmitted to the IPP 15 from the CDIC 13, the image data is subjected to the quality-of-image processing and the pulse control of the VDC 17. Finally, an image is formed on a copy sheet in the imaging unit 18 in accordance with the processed image data.
The copier system 1 is a multi-function peripheral device and is provided with the FAX transmission function. When the FAX transmission function is used, the image processing of the read image data is performed by the IPP 15, and the processed image data is transmitted to the FCU (FAX control unit) 22 through the CDIC 13 and the parallel bus 14. Data conversion to the communication network is performed by the FCU 22, and the converted image data is transmitted to the PN (public network) 23 as the FAX data.
When the FAX reception function is used, the signal received from the PN 23 is converted into the image data by the FCU 22, and the resulting image data is transmitted to the IPP 15 through the parallel bus 14 and the CDIC 13. In this case, no special image processing is performed, but the dot relocation and pulse control are performed by the VDC 17, and a reproduced image is formed on a copy sheet by the imaging unit 18.
Under the situation in which a plurality of jobs, such as a copy function, a FAX transmission function and a print output function, are operated in parallel, the allocation of the access control rights of the reading unit, the imaging unit and the parallel bus 14 to the plurality of jobs is controlled by the system controller 16 and the process controller 27.
The process controller (CPU) 27 controls the flow of image data, and the system controller 16 controls the whole copier system and manages starting of the respective resources. Reference numerals 27a and 27b denote the RAM and the ROM used by the process controller 27 respectively.
The user chooses any of various kinds of functions through the selection input on the control panel 24, and sets up the contents of processing, such as the copy function and the facsimile function.
The system controller 16 and the process controller 27 communicate with each other through the parallel bus 14, the CDIC 13, and the serial bus 25. In this case, the data format conversion for the data interface between the parallel bus 14 and the serial bus 25 is performed by the CDIC 13.
The MLC (Media Link Controller) 26 realizes the function of the code translation of image data. Specifically, the conversion between the compression coding algorithm specifically used by the CDIC 13 and another compression coding algorithm (for example, the JPEG system) which is used by the IMAC 19 is performed.
In the copier system 1 of the present embodiment, the system controller 16 controls the image-processing processor 15, and the edit processing of the codestream mentioned above with reference to
The present invention is not limited to the above-described embodiments, and variations and modifications may be made without departing from the scope of the present invention.
Further, the present application is based on and claims the benefit of priority of Japanese patent application No. 2004-266653, filed on Sep. 14, 2004, the entire contents of which are hereby incorporated by reference.