1. Field of the Invention
The present invention relates to a technology for embedding data into an image, and to a material that includes embedded data.
2. Description of the Related Art
Recently, technologies for embedding data into an image, such as digital data or a printed material, have been developed to prevent falsification and unauthorized use, or to provide additional services. An image that includes embedded data is used as a user interface and for asserting rights. Further, various methods have been proposed for embedding data into an image.
However, no method exists that can embed data into every image. If data cannot be embedded into an image, one proposed method discards the image as an image that cannot be subjected to embedding of data. Alternatively, a method disclosed in Japanese Patent Application Laid-open No. 2005-117154 embeds data to the effect that data cannot be embedded.
However, in the conventional technology disclosed in Japanese Patent Application Laid-open No. 2005-117154, when data cannot be embedded into an image due to the image characteristics, embedding cannot be carried out even if a user desires it.
The user embeds data into the image for various reasons. Accordingly, discarding the image on the grounds that data cannot be embedded, or simply embedding data to the effect that data cannot be embedded, unnecessarily neglects the social, political, and legal reasons that lead the user to embed data. Thus, it is desirable to embed as much data as possible into the image selected by the user.
Thus, there is a need for a technology that reliably embeds data into an image selected by the user, regardless of the characteristics of the image.
It is an object of the present invention to at least partially solve the problems in the conventional technology.
According to an aspect of the present invention, a data embedding apparatus that embeds data into an image includes a code calculating unit that extracts a feature quantity from the image and calculates a code by using the extracted feature quantity; a code embedding unit that embeds the calculated code into the image; an embedding determining unit that determines whether the code has been embedded normally into the image; and an image-feature-quantity modifying unit that modifies the extracted feature quantity when the embedding determining unit determines that the code has not been embedded normally into the image, wherein the code calculating unit extracts a new feature quantity from the image based on the modified feature quantity, and recalculates the code by using the new feature quantity.
According to another aspect of the present invention, a method of embedding data into an image includes extracting a feature quantity from the image; calculating a code by using the extracted feature quantity; embedding the calculated code into the image; determining whether the code has been embedded normally into the image; modifying the extracted feature quantity when it is determined at the determining that the code has not been embedded normally into the image; extracting a new feature quantity from the image based on the modified feature quantity; and recalculating the code by using the new feature quantity.
According to still another aspect of the present invention, a computer product stores therein a computer program that causes a computer to execute the above method.
The above and other objects, features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Exemplary embodiments of the present invention are explained in detail below with reference to the accompanying drawings.
In the first to third embodiments explained below, it is assumed that image data is split into a plurality of blocks, and an average grayscale value of the grayscale values of all the pixels in each block is calculated as a feature quantity. Next, two adjoining blocks are treated as a block pair, a bit corresponding to the magnitude relation between the average grayscale values of the two blocks is assigned to the block pair to generate a code, and the code is embedded into the image that is split into the blocks.
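As a minimal sketch of this block-pair scheme, the feature quantity extraction and bit assignment can be written as follows. This assumes an image given as a two-dimensional list of grayscale values and square, non-overlapping blocks; all function and parameter names here are illustrative, not taken from the embodiments.

```python
def block_average(image, top, left, size):
    """Average grayscale value over the size x size block at (top, left)."""
    total = 0
    for r in range(top, top + size):
        for c in range(left, left + size):
            total += image[r][c]
    return total / (size * size)

def pair_bit(avg_left, avg_right):
    """Bit for a block pair: 0 if DL < DR, otherwise 1 (DL >= DR)."""
    return 0 if avg_left < avg_right else 1
```

A block pair whose left block averages darker than its right block thus yields "0", and the opposite (or equal) relation yields "1".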
A salient feature of the present invention is explained first with reference to
Upon decoding the block split image data, codes C1 to C8 are generated from the code C that is embedded into the embedding areas A1 to A8, respectively. A value of "2" appearing in the block split image data after decoding indicates a bit that could not be determined as either "0" or "1".
By taking the majority bit at each position across the embedding areas of the block split image data after decoding, a single code C′ is determined from the codes C1 to C8, which are 16-bit bit strings. If the code C′, which is a 16-bit bit string, matches with the code C that is the embedded code, embedding of the code C into the image is successful. However, if the code C′ does not match with the code C, embedding of the code C into the image has failed, and the code C has not been embedded normally into the image.
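The majority determination of C′ from the candidate codes can be sketched as follows. This is a hedged illustration: candidate codes are modeled as lists of bits, the value 2 marks an undetermined bit as described above, and the tie-breaking choice is an assumption of this sketch.

```python
def majority_code(candidates):
    """Determine each bit of C' by majority vote over the candidate codes
    C1..C8; bits marked 2 (undetermined) are excluded from the vote."""
    length = len(candidates[0])
    result = []
    for i in range(length):
        ones = sum(1 for c in candidates if c[i] == 1)
        zeros = sum(1 for c in candidates if c[i] == 0)
        result.append(1 if ones > zeros else 0)  # tie resolved to 0 (illustrative)
    return result
```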
To overcome this drawback, in the present invention, the grayscales of the entire image or a part of the image are modified according to a grayscale conversion correspondence defined by a grayscale conversion table. Next, the grayscales are extracted again from the image that includes the modified grayscales. Based on the extracted grayscales, a code for embedding into the image is calculated and the code is embedded into the image. As a result, even a code that is calculated from the grayscales and cannot be embedded due to the grayscale characteristics can be embedded into the image.
In a block pair, DL indicates the grayscale value of the left block and DR indicates the grayscale value of the right block. If DL<DR, the bit "0" is assigned to the block pair, and if DL≧DR, the bit "1" is assigned to the block pair. Based on these rules, bits are assigned to the block pairs of the image areas of the top two rows in the image. The assigned bit string is the code C that is the embedded code in the image shown in
As shown in
A method to calculate the code C based on the grayscale values of the image is not to be thus limited, and any method can be used that calculates a predetermined code based on the bits assigned to all the block pairs or a part of the block pairs. Further, a determining method of the code C′ is not to be thus limited.
The first embodiment according to the present invention is explained below with reference to
The image input unit 101 is an interface that receives an input of image data from an input device 200, which is an external imaging device such as a Charge Coupled Device (CCD) camera that converts an input image into image data. The image input unit 101 transfers the image data received from the input device 200 to the storage unit 102. Further, the image data is not limited to an image taken by a CCD camera, and can also include a natural image, an image that is taken and trimmed by a cameraman, and an artificial image such as a logo. In other words, the input image can be any image that is provided as a digital image.
The storage unit 102 further includes an image data-storage unit 102a and a grayscale conversion table 102b. The image data-storage unit 102a stores therein the image data that is transferred from the image input unit 101. Further, the grayscale conversion table 102b stores therein a one-to-one correspondence established between the grayscales before conversion and the grayscales after conversion.
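The correspondence held by the grayscale conversion table 102b can be sketched as a simple 256-entry lookup. The clipped linear shift below is purely illustrative; the actual correspondence is defined per device, and clipping at 255 makes the top entries collapse, so a real one-to-one table would use a different curve near the maximum grayscale.

```python
def build_conversion_table(shift=16):
    """Illustrative conversion table: grayscale g maps to g + shift,
    clipped to the 0..255 range (clipping breaks one-to-one-ness near
    255; a real table would avoid this)."""
    return [min(255, g + shift) for g in range(256)]

def convert_grayscales(image, table):
    """Apply the table to every pixel of a 2D grayscale image."""
    return [[table[g] for g in row] for row in image]
```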
As shown in
The encoding processor 104 embeds the code C calculated by the embedded code-calculating processor 103 into the image data that is stored in the image data-storage unit 102a. In the embedding process, the encoding processor 104 sequentially embeds one bit of the code C, from the first bit to the eighth bit, into each block pair, starting from the block pair at the extreme left of an odd-numbered row of the image. Further, the encoding processor 104 sequentially embeds one bit of the code C, from the ninth bit to the sixteenth bit, into each block pair, starting from the block pair at the extreme left of an even-numbered row that continues after the next odd-numbered row.
The decode checking processor 105 decodes the code C′ from the image that includes the code C embedded by the encoding processor 104 and determines whether the decoded code C′ matches with the code C. If the code C′ matches with the code C, the decode checking processor 105 instructs the image output processor 106 to carry out an image output process. If the code C′ does not match with the code C, the decode checking processor 105 instructs the grayscale modifying processor 107 to carry out a grayscale modifying process.
The image output processor 106 instructs an output device 300, which is an external device in the form of a display device such as a display or a printing device such as a printer that outputs the image based on the image data, to output the image data.
Based on the instruction from the decode checking processor 105, the grayscale modifying processor 107, which is a grayscale-converting filter, refers to the grayscale conversion table 102b, operates on the image data that is stored in the image data-storage unit 102a, and modifies all or a part of the grayscales of the image. The image data of the image that includes the grayscales modified by the grayscale modifying processor 107 is transferred to the embedded code-calculating processor 103, and the series of processes, namely the embedded code calculating process, the encoding process, and the decode checking process, is executed again.
A structure of a decoder, which decodes data from the image that includes the embedded data, is explained next.
If image data (for example, a blank portion) is included around the embedded code of the image data that is read by an input device 500, an image cutting unit 401 of the decoder 400 includes a function to cut the valid embedded code from the entire image data. However, cutting is not carried out if only the embedded code is input into the image cutting unit 401.
Similarly to the block split image data shown in
According to a bit shift of the decoded code (16-bit code), a block extracting unit 403 sequentially extracts the block pairs (two blocks) from the block split image data and sequentially outputs a density distribution of the block pair (two blocks) as block density data (not shown in
Based on the block density data, an averaging unit 404 calculates left average density data (not shown in
A comparing unit 406 compares the magnitude relation between the left average density data and the right average density data that are stored in the register 405l and the register 405r to carry out bit determination and outputs to a decoding unit 407, a cluster of the codes C1 to C8 corresponding to a bit determination result (based on a relational expression mentioned earlier, bits are determined as “0” or “1”).
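The bit determination of the comparing unit 406 can be sketched as follows. This is a hedged illustration: the undetermined value 2 follows the description of the decoded block split image data, while the threshold parameter (defaulting to an exact tie) is a design assumption of this sketch.

```python
def decode_pair_bit(left_avg, right_avg, threshold=0):
    """Determine a bit from the magnitude relation of the averaged
    densities of a block pair; return 2 when the relation is too
    close to decide (an undetermined bit)."""
    if abs(left_avg - right_avg) <= threshold:
        return 2  # undetermined, as marked "2" in the decoded data
    return 0 if left_avg < right_avg else 1
```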
Each of the candidate codes C1 to C8 shown in
As shown in
All the components of the decoder 400 are interconnected via a controller (not shown).
Acquired grayscale characteristics of the decoder shown in
However, due to the device characteristics of the input device 500, the embedded code, which is calculated based on the acquired grayscale values and embedded into the image, sometimes cannot be decoded normally from the image. The present invention is carried out to overcome such a drawback. In other words, as shown in
The grayscale conversion characteristics shown in
A data embedding process in the data embedding apparatus according to the first embodiment is explained next.
Next, the embedded code-calculating processor 103 executes the embedded code calculating process (step S102). To be specific, the embedded code-calculating processor 103 splits the image data into the image areas of M rows and N columns and assigns the bits based on the grayscale values of the right blocks and the left blocks in the block pairs of the image areas of the top two rows in the image. The embedded code thus calculated is the code C.
Next, the encoding processor 104 embeds in block pair units, into the image data stored in the image data-storage unit 102a, the code C that is calculated by the embedded code-calculating processor 103 (hereinafter, “encoding process” (step S103)).
Next, the decode checking processor 105 decodes the code C′ from the image that includes the code C that is embedded by the encoding processor 104 and determines whether the decoded code C′ matches with the code C (hereinafter, “decode checking process” (step S104)). If the code C′ matches with the code C (Yes at step S105), the image output processor 106 instructs the output device 300 to output the image data (step S106).
However, if the code C′ does not match with the code C (No at step S105), the grayscale modifying processor 107 refers to the grayscale conversion table 102b, operates the image data stored in the image data-storage unit 102a, and modifies the grayscales of the entire image or a part of the image (step S107). After step S107 has ended, the data embedding process moves to step S102.
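The control flow of steps S102 to S107 amounts to a recalculate-and-retry loop, sketched below with hypothetical callables standing in for the processors 103 to 107. The retry cap is an assumption of this sketch; the embodiment itself loops until the decode check succeeds.

```python
def embed_with_retry(image, calc_code, embed, decode, modify, max_tries=10):
    """Loop of steps S102-S107: calculate the code C, embed it, decode C',
    and modify the grayscales and retry whenever C' != C."""
    for _ in range(max_tries):
        code = calc_code(image)          # step S102: embedded code calculation
        embed(image, code)               # step S103: encoding process
        if decode(image) == code:        # steps S104-S105: decode check
            return True                  # step S106: output the image data
        modify(image)                    # step S107: grayscale modification
    return False
```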
The second embodiment according to the present invention is explained below with reference to
If the output device 300 is a printing device such as a printer, grayscale conversion is optimally carried out by converting the grayscales according to a four-dimensional to four-dimensional correspondence between the grayscales that are represented by the four dimensions of Cyan, Magenta, Yellow, and Black (CMYK). Thus, based on whether the output device 300 is a display device or a printing device, switching between RGB grayscale conversion and CMYK grayscale conversion enables optimum grayscale conversion to be carried out. Further, because CMYK grayscale conversion is the same as RGB grayscale conversion in principle, an explanation thereof is omitted. Even a method in which the grayscales are represented by the three dimensions of YUV (Y indicates a luminance signal, U indicates a difference between the luminance signal and a blue color component, and V indicates a difference between the luminance signal and a red color component) is basically similar to RGB grayscale conversion.
As shown in
In grayscale conversion based on the grayscale conversion table shown in
For example, it is assumed that the original grayscale values of the "B" components of a block pair in the image are (20,20). Further, it is assumed that grayscale values of (35,5) are desirable after the embedded code is embedded by encoding, but that the grayscale values after decoding become (19,19). If the grayscale values become (19,19), a grayscale difference such as that obtained when the grayscale values are (35,5) cannot be obtained, indicating that the embedded code is not embedded normally.
To ensure that the magnitude relation between the grayscale values in the block pairs is similar to the grayscale values (35,5), for example, the grayscale modifying processor 107 refers to a grayscale component of “B” of the grayscale levels before conversion in the grayscale conversion table shown in
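A hedged way to express why (19,19) fails where (35,5) succeeds is a minimum-difference check on the block pair. The margin value below is illustrative; the embodiment does not specify a number, only that the (19,19) pair leaves no usable grayscale difference.

```python
def embedded_robustly(pair, margin=10):
    """A block pair decodes reliably only if its grayscale difference
    meets a margin; (35,5) passes, while (19,19) leaves no usable
    difference (margin value is an assumption of this sketch)."""
    return abs(pair[0] - pair[1]) >= margin
```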
The third embodiment according to the present invention is explained below with reference to
The structure of the data embedding apparatus according to the third embodiment is explained first.
The image data-storage unit 102a of the storage unit 102 stores therein the image data for each image area unit. Further, the decode checking processor 105 outputs a decode checking result for each image area unit and verifies the result for each image area unit (that is, determines whether the code C is embedded normally into each image area unit).
Based on an instruction from the decode checking processor 105, the image area unit-grayscale modifying processor 108 refers to the grayscale conversion table 102b and, based on the decode checking result, operates on the image data (stored in the image data-storage unit 102a) of the image areas in which the embedded code is not embedded normally, thereby modifying the grayscales of those image areas.
The image data, which includes the grayscales that are modified by the image area unit-grayscale modifying processor 108, is transferred to the embedded code-calculating processor 103. After the decode checking processor 105 has completed verifying the decode checking results of all the image areas, the embedded code-calculating processor 103, the encoding processor 104, and the decode checking processor 105 once again carry out the series of processes that includes the embedded code calculating process, the encoding process, and the decode checking process.
The data embedding process performed by the data embedding apparatus according to the third embodiment is explained next.
Next, the embedded code-calculating processor 103 executes the embedded code calculating process (step S112). To be specific, the embedded code-calculating processor 103 splits the image data into the image areas of M rows and N columns and assigns the bits based on the grayscale values of the right blocks and the left blocks in the block pairs of the image areas of the top two rows in the image. The embedded code thus calculated is the code C.
Next, the encoding processor 104 embeds in block pair units, into the image data stored in the image data-storage unit 102a, the code C that is calculated by the embedded code-calculating processor 103 (hereinafter, “encoding process” (step S113)).
Next, the decode checking processor 105 decodes the code C′ from the image that includes the code C embedded by the encoding processor 104 and determines whether the decoded code C′ matches with the code C (step S114). If the code C′ matches with the code C (Yes at step S115), the image output processor 106 instructs the output device 300 to output the image data (step S116).
However, if the code C′ does not match with the code C (No at step S115), the decode checking processor 105 verifies the decode checking result of each image area unit and determines whether the code C is embedded normally into the image area (step S117).
If the code C is embedded normally into the image area (Yes at step S117), the data embedding process moves to step S119. If the code C is not embedded normally into the image area (No at step S117), the image area unit-grayscale modifying processor 108 refers to the grayscale conversion table 102b, operates the image data stored in the image data-storage unit 102a, and modifies the grayscales of the image area (step S118).
At step S119, the decode checking processor 105 determines whether the verification of the decode checking results of all the image areas is completed. If the verification of the decode checking results of all the image areas is completed (Yes at step S119), the data embedding process moves to step S112. If the verification of the decode checking results of all the image areas is not completed (No at step S119), the data embedding process moves to step S117.
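The per-area flow of steps S117 to S119 can be sketched as follows, with hypothetical interfaces: each image area carries its own decode checking result, and only the failing areas are modified before the whole process returns to step S112.

```python
def modify_failed_areas(area_results, modify_area):
    """Steps S117-S119 sketch: walk the decode checking result of every
    image area and modify the grayscales of areas where the code C was
    not embedded normally; returns the number of areas modified."""
    modified = 0
    for area_id, embedded_ok in area_results.items():
        if not embedded_ok:              # No at step S117
            modify_area(area_id)         # step S118: per-area modification
            modified += 1
    return modified                      # afterwards the process returns to S112
```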
The present invention is explained with reference to the first to the third embodiments. However, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents. Further, the effects described in the embodiments are not to be thus limited.
The sequence of processes, the sequence of controls, specific names, and data including various parameters can be changed as required unless otherwise specified.
The constituent elements of the device illustrated are merely conceptual and may not necessarily physically resemble the structures shown in the drawings. For instance, the device need not necessarily have the structure that is illustrated. The device as a whole or in parts can be broken down or integrated either functionally or physically in accordance with the load or how the device is to be used.
The process functions performed by the apparatus are entirely or partially realized by a Central Processing Unit (CPU) (or a Micro Processing Unit (MPU), a Micro Controller Unit (MCU), etc.), by a computer program executed by the CPU (or the MPU, the MCU, etc.), or by hardware using wired logic.
All the automatic processes explained in the first to the third embodiments can be, entirely or in part, carried out manually. Similarly, all the manual processes explained in the first to the third embodiments can be, entirely or in part, carried out automatically by a known method.
According to the present invention, if a code is not embedded normally, a feature quantity of an image is modified and the code based on the feature quantity of the image is embedded again. Due to this, the code can be reliably embedded into the image and a scope of types of the images that enable embedding of the code can be widened.
Although the invention has been described with respect to a specific embodiment for a complete and clear disclosure, the appended claims are not to be thus limited but are to be construed as embodying all modifications and alternative constructions that may occur to one skilled in the art that fairly fall within the basic teaching herein set forth.
Number | Date | Country | Kind |
---|---|---|---
2006-320666 | Nov 2006 | JP | national |