Methods for estimating watermark signal strength, an embedding process using the same, and related arrangements

Information

  • Patent Grant
  • Patent Number
    10,972,628
  • Date Filed
    Friday, February 7, 2020
  • Date Issued
    Tuesday, April 6, 2021
Abstract
The disclosure relates to image processing and embedding machine-readable codes into image data. One combination estimates embedded signal strength from embedded image data transformed according to anticipated color space, printer data and/or substrate data. Other combinations are also provided.
Description
TECHNICAL FIELD

The disclosure relates generally to product packaging, image capture, signal processing, steganographic data hiding and digital watermarking.


BACKGROUND AND SUMMARY

The term “steganography” generally means data hiding. One form of data hiding is digital watermarking. Digital watermarking is a process for modifying media content to embed a machine-readable (or machine-detectable) signal or code into the media content. For the purposes of this application, the data may be modified such that the embedded code or signal is imperceptible or nearly imperceptible to a human, yet may be detected through an automated, machine-based detection process. Most commonly, digital watermarking is applied to media content such as images, audio signals, and video signals. Digital watermarks can be incorporated into images or graphics that are then printed, e.g., on product packaging.


Digital watermarking systems may include two primary components: an embedding component (an “embedder” or “encoder”) that embeds a watermark in media content, and a reading component (a “watermark reader,” “watermark decoder,” or simply a “reader” or “decoder”) that detects and reads an embedded watermark. The embedder may embed a watermark by altering data samples representing the media content in the spatial, temporal or some other domain (e.g., Fourier, Discrete Cosine or Wavelet transform domains). The reader may analyze target content to detect whether a watermark is present. In applications where the watermark encodes information (e.g., a message or auxiliary information), the reader may extract this information from a detected watermark.


A watermark embedding process may convert a message, signal, etc., into a payload conveyed by a watermark signal. The embedding process may then combine the watermark signal with media content and possibly other signals (e.g., a transform domain-based orientation pattern or synchronization signal) to create watermarked media content. The process of combining the watermark signal with the media content may be a linear or non-linear function. The watermark signal may be applied by modulating or altering signal samples in a spatial, temporal or transform domain.
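

By way of illustration, the following is a minimal sketch (in Python, and not a technique prescribed by this specification) of linear, additive embedding in the spatial domain: a tiled pattern of +1/-1 chips is scaled by a strength parameter and added to host pixel values. The function and parameter names are illustrative only.

    import numpy as np

    def embed_additive(host, mark, alpha=2.0):
        """Illustrative linear combination of a host image with a
        watermark pattern.

        host:  2-D array of pixel intensities (e.g., a luminance channel).
        mark:  2-D array of +1/-1 chips, tiled to cover the host.
        alpha: embedding strength; larger values are more robust to
               noise but more visible.
        """
        reps = (host.shape[0] // mark.shape[0] + 1,
                host.shape[1] // mark.shape[1] + 1)
        tiled = np.tile(mark, reps)[:host.shape[0], :host.shape[1]]
        return np.clip(host.astype(float) + alpha * tiled, 0, 255)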


A watermark encoder may analyze and selectively adjust media content to give it attributes that correspond to the desired message symbol or symbols to be encoded. There are many signal attributes that may encode a message symbol, such as a positive or negative polarity of signal samples or a set of samples, a given parity (odd or even), a given difference value or polarity of the difference between signal samples (e.g., a difference between selected spatial intensity values or transform coefficients), a given distance value between watermarks, a given phase or phase offset between different watermark components, a modulation of the phase of a host signal associated with the media content, a modulation of frequency coefficients of the host signal, a given frequency pattern, a given quantizer (e.g., in Quantization Index Modulation) etc.
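

As one concrete illustration of the last-listed attribute, the following minimal sketch shows Quantization Index Modulation, in which the message bit selects which of two interleaved quantizers is applied to a sample. This is the textbook formulation; the step size and function names are assumptions, not those of any cited patent.

    def qim_embed(sample, bit, step=8.0):
        """Quantize a sample with one of two interleaved quantizers:
        bit 0 uses the lattice {0, step, 2*step, ...}; bit 1 uses the
        same lattice shifted by step/2."""
        offset = 0.0 if bit == 0 else step / 2.0
        return round((sample - offset) / step) * step + offset

    def qim_decode(sample, step=8.0):
        """Return the bit whose quantizer lattice lies closest to the
        (possibly noise-perturbed) sample."""
        d0 = abs(sample - qim_embed(sample, 0, step))
        d1 = abs(sample - qim_embed(sample, 1, step))
        return 0 if d0 <= d1 else 1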


The present assignee's work in steganography, data hiding and digital watermarking is reflected, e.g., in U.S. Pat. Nos. 7,013,021, 6,993,154, 6,947,571, 6,912,295, 6,891,959, 6,763,123, 6,718,046, 6,614,914, 6,590,996, 6,449,377, 6,408,082, 6,345,104, 6,122,403 and 5,862,260. Some 3rd-party work is reflected in, e.g., U.S. Pat. Nos. 7,130,442; 6,208,735; 6,175,627; 5,949,885; 5,859,920. Each of the patent documents identified in this paragraph is hereby incorporated by reference herein in its entirety. Of course, a great many other approaches are familiar to those skilled in the art, e.g., Avcibas, et al., “Steganalysis of Watermarking Techniques Using Images Quality Metrics”, Proceedings of SPIE, January 2001, vol. 4314, pp. 523-531; Dautzenberg, “Watermarking Images,” Department of Microelectronics and Electrical Engineering, Trinity College Dublin, 47 pages, October 1994; Hernandez et al., “Statistical Analysis of Watermarking Schemes for Copyright Protection of Images,” Proceedings of the IEEE, vol. 87, No. 7, July 1999; J. Fridrich and J. Kodovský, “Rich models for steganalysis of digital images,” IEEE Transactions on Information Forensics and Security, 7(3):868-882, June 2012; J. Kodovský, J. Fridrich, and V. Holub, “Ensemble classifiers for steganalysis of digital media,” IEEE Transactions on Information Forensics and Security, 7(2):432-444, 2012; T. Pevný, P. Bas, and J. Fridrich, “Steganalysis by subtractive pixel adjacency matrix,” IEEE Transactions on Information Forensics and Security, 5(2):215-224, June 2010; I. J. Cox, M. L. Miller, J. A. Bloom, J. Fridrich, and T. Kalker, Digital Watermarking and Steganography, Morgan Kaufmann Publishers Inc., San Francisco, Calif., 2007; and R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley Interscience, New York, 2nd edition, 2000; each of which is hereby incorporated herein by reference in its entirety. The artisan is presumed to be familiar with a full range of literature concerning steganography, data hiding and digital watermarking.


Digital watermarking may be used to embed auxiliary information into cover media (e.g., images, packaging, graphics, etc.) such that the changes made to convey the digital watermark remain invisible to humans, yet allow machines to reliably extract the auxiliary information even after common signal-processing operations (e.g., noise, filtering, blurring, optical capture). This allows machines to uniquely identify objects depicted in captured imagery. Digital watermarking has been used for applications including media content protection, track and trace, etc.





BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.



FIG. 1 is a flow chart illustrating various embodiments of a watermark embedding process.



FIG. 2A illustrates an example of imagery (i.e., a front image of a box of oatmeal) conveying multiple instances of a watermark signal and represented by image data that has been transformed as discussed with respect to FIG. 1. In FIG. 2A, illustrated textual elements (e.g., “Heart Healthy,” “May reduce the risk of heart disease,” “Concordia Superstores,” “Apple Cinnamon Oatmeal,” etc.) and certain graphical elements (e.g., the red heart, the green and white logo above “Concordia”) are vector images, whereas the remainder of the elements (e.g., the apples, the bowl of oatmeal, the syrup, cinnamon sticks, the oats in the background, etc.) are raster images.



FIG. 2B illustrates a representation (e.g., a rainbow-colored “heat map”) of estimated watermark signal strength at different areas of the imagery illustrated in FIG. 2A. In FIG. 2B, the heat map is overlaid by outlines of some of the graphical elements discussed above with respect to FIG. 2A.





DETAILED DESCRIPTION

Embodiments of the present invention relate to estimating the signal strength of a watermark signal embedded within media content. In these embodiments, the signal strength represents a measure indicating how easily or reliably a watermark signal, embedded within media content, can be detected or decoded by a reader. Generally, the techniques disclosed herein are applicable to watermark signals embedded within media content such as consumer packaging (e.g., beverages, food, toiletries, cosmetics, small appliances, etc.), documents, labels, tags, stickers, books, posters, etc., as well as to embedded watermark signals that are visually conveyed by electronic displays, or are otherwise embedded or present within surface textures of physical objects, or the like or any combination thereof.


When included in consumer packaging, auxiliary information conveyed by a watermark signal can include information such as a universal product code (UPC) number, a global trade item number (GTIN), application identifier (AI) number (e.g., as used within UCC/EAN-128 Symbols), an electronic product code (EPC), a globally unique identifier (GUID), recycling information, product information, distribution information, retail channel information, labelling information, an index to such information, or the like or any combination thereof. Assignee's U.S. patent application Ser. No. 14/611,515, filed Feb. 2, 2015 (published as US 2015-0302543 A1), which is hereby incorporated herein by reference in its entirety, describes additional examples of auxiliary information that may be conveyed by a digital watermark. Because a large surface area of a package can be watermarked, consumers, retail check-out personnel, etc., do not need to search for a barcode at checkout, thus speeding up the overall checkout process. U.S. Patent App. Pub. Nos. 2013/0223673 and 2014/0112524, each of which is hereby incorporated herein by reference in its entirety, discuss related use scenarios. Such retail checkout scenarios are improved when digital watermarking can be located and decoded in a timely manner as watermarked packaging is swiped or moved in front of an optical scanner (or camera).


Once embedded into cover media, the signal strength of the embedded watermark signal may vary depending upon factors such as the presence, type, color, etc., of any pattern, texture or gradient depicted by the cover media where the watermark was embedded, or in the vicinity thereof. Further, watermark signals are typically embedded into imagery by changing data representing pixel values of images (e.g., raster images). In practice, however, it is common to stack a vector image over a raster image (e.g., by assigning the vector and raster images to different layers supported by digital image editing software such as ADOBE PHOTOSHOP, PHOTO-PAINT, PAINT SHOP PRO, GIMP, PAINT.NET, STYLEPIX, etc.). Stacking a vector image over a watermarked raster image can degrade the signal strength of the embedded watermark signal. To address these and other problems, numerous embodiments for estimating the signal strength of a watermark signal embedded within media content are discussed in greater detail below. To provide a comprehensive disclosure without unduly lengthening the specification, applicant hereby incorporates by reference certain referenced patent documents, each in its entirety. These documents also disclose other technologies and teachings that can be incorporated into the arrangements detailed herein, and into which the technologies and teachings detailed herein can be incorporated.


The methods, processes, components, apparatus and systems described herein may be implemented in hardware, software or a combination of hardware and software. For example, the watermark encoding processes and embedders may be implemented in software, firmware, hardware, combinations of software, firmware and hardware, a programmable computer, electronic processing circuitry, processors, parallel processors, or by executing software or instructions with processor(s) or circuitry. Example software includes code written in, e.g., C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme or Ruby, as well as compiled executable binary files, etc. Similarly, watermark data decoding or decoders may be implemented in software, firmware, hardware, combinations of software, firmware and hardware, a programmable computer, electronic processing circuitry, or by executing software or instructions with a multi-purpose electronic processor, parallel processors or multi-core processors, or other multi-processor configurations.


Applicant's work also includes taking the scientific principles and natural laws on which the present technology rests, and tying them down in particularly defined implementations. One such implementation uses the hardware/software apparatus mentioned above. Another such implementation is electronic circuitry that has been custom-designed and manufactured to perform some or all of the component acts, as an application specific integrated circuit (ASIC).


To realize such an ASIC implementation, some or all of the technology is first implemented using a general purpose computer, using software such as MATLAB (from MathWorks, Inc.). A tool such as HDL Coder (also available from MathWorks) is next employed to convert the MATLAB model to VHDL (an IEEE standard, and doubtless the most common hardware design language). The VHDL output is then applied to a hardware synthesis program, such as Design Compiler by Synopsys, HDL Designer by Mentor Graphics, or Encounter RTL Compiler by Cadence Design Systems. The hardware synthesis program provides output data specifying a particular array of electronic logic gates that will realize the technology in hardware form, as a special-purpose machine dedicated to such purpose. This output data is then provided to a semiconductor fabrication contractor, which uses it to produce the customized silicon part. (Suitable contractors include TSMC, GlobalFoundries, and ON Semiconductor.)


Several detailed embodiments are now described.


A. Embodiment 1

Referring to FIG. 1, at 102, image data is obtained. Generally, the image data is obtained in one or more raster file formats (e.g., TIFF, PSD, EPS, JPG, PNG, BMP, etc.), one or more vector file formats (e.g., EMF, EPS, PDF, PS, etc.), or the like or any combination thereof. The image data represents imagery as a raster image, a vector image or any combination thereof (e.g., represented by one or more different “layers” of imagery, as is understood in the field of digital image editing).


Generally, the image data represents imagery in a color mode selected from the group including, e.g., grayscale, indexed, bitmap, RGB, CMYK, Lab, HSB, HSL, duotone, multichannel, PMS, or the like or any combination thereof. In one embodiment, the image data may be accompanied by color mode data describing the color mode with which the imagery is represented. In another embodiment, the obtained image data may be processed according to any known technique to derive the color mode data.
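

For instance, color mode data may be derived with an off-the-shelf imaging library; the following sketch uses the Pillow library, whose mode strings (e.g., “L” for grayscale, “P” for indexed, “RGB”, “CMYK”, “LAB”) cover several of the color modes listed above. The function name is illustrative.

    from PIL import Image  # Pillow imaging library

    def color_mode_of(path):
        """Report the color mode of an image file, e.g., 'L' (grayscale),
        'P' (indexed), 'RGB', 'CMYK' or 'LAB'."""
        with Image.open(path) as im:
            return im.mode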


At 104, the image data is processed to embed one or more watermark signals (e.g., each conveying an orientation or synchronization pattern, auxiliary information, or the like or any combination thereof) therein. Exemplary techniques that may be used for embedding a watermark signal are described in aforementioned U.S. Pat. Nos. 7,130,442, 7,013,021, 6,993,154, 6,947,571, 6,912,295, 6,891,959, 6,763,123, 6,718,046, 6,614,914, 6,590,996, 6,449,377, 6,408,082, 6,345,104, 6,208,735, 6,175,627, 6,122,403, 5,949,885, 5,862,260 and 5,859,920, or the like or any combination thereof. Additional techniques are disclosed in US Published Patent Appln. No. 2015-0156369, and U.S. patent application Ser. No. 14/725,399, filed May 29, 2015 (now U.S. Pat. No. 9,635,378), and Ser. No. 14/842,575, filed Sep. 1, 2015 (published as US 2017-0004597 A1), which are hereby incorporated herein by reference in their entirety.


In one embodiment, the image data represents imagery as a raster image and a vector image, and one or more blocks or patches of the raster image are watermarked at a location that is overlapped by the vector image. In another embodiment, the image data simply represents imagery as one or more raster images, and does not represent a vector image.


At 106, reader data is obtained. In one embodiment, the reader data may include the reader color profile data describing an input color profile (e.g., in an ICC color profile format) of one or more readers configured to detect or decode one or more watermark signals embedded within the image data. In other embodiments, reader data may include reader identifier data (e.g., specifically identifying one or more readers configured to detect or decode the embedded watermark signal(s)), reader type data (e.g., identifying one or more types of readers configured to detect or decode the embedded watermark signal(s)), reader manufacturer data (e.g., identifying one or more manufacturers of readers configured to detect or decode the embedded watermark signal(s)), or the like or any combination thereof. In such other embodiments, the reader identifier data, reader type data, reader manufacturer data, etc., may be used as an index in a database to look up associated reader color profile data representing the input color profile of the reader. Reader type data can identify types of readers as hand-held readers (e.g., a specially-designed reader such as the JOYA or the POWERSCAN series, both offered by DATALOGIC, or a reader implemented as software—such as DIGIMARC DISCOVER, offered by DIGIMARC—operating on a smartphone—such as an IPHONE, offered by APPLE, etc.), fixed-position readers (e.g., so-called “in-counter scanners”), laser scanners, CCD readers, camera-based readers, video camera readers, or the like or any combination thereof.
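

A minimal sketch of the database lookup described above follows; the identifiers, table contents and profile file paths are hypothetical.

    # Hypothetical mapping from reader identifier data to ICC input
    # color profiles; identifiers and file paths are illustrative only.
    READER_PROFILES = {
        "datalogic-joya": "profiles/joya_input.icc",
        "datalogic-powerscan": "profiles/powerscan_input.icc",
        "smartphone-camera": "profiles/srgb.icc",
    }

    def reader_color_profile(reader_id):
        """Use reader identifier data as an index to look up the
        associated reader color profile data."""
        try:
            return READER_PROFILES[reader_id]
        except KeyError:
            raise ValueError("no color profile registered for %r" % reader_id)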


At 108, upon obtaining the color mode data and the reader color profile data, the watermarked image data is processed to transform the imagery having the embedded watermark signal(s) from an initial color mode (e.g., the color mode with which the imagery was initially represented by the image data obtained at 102) to a transformed color mode (e.g., a color mode that corresponds to or is otherwise compatible with the input color profile of the reader) according to one or more known techniques. The transformed image data thus represents the imagery in a color space for which the reader is designed.
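

As one example of such a known technique, the following sketch performs the transform with Pillow's ImageCms bindings to LittleCMS; the ICC profile file paths and the RGB output mode are assumptions.

    from PIL import Image, ImageCms

    def to_reader_color_space(watermarked_path, source_icc, reader_icc):
        """Transform watermarked imagery from its initial color mode into
        a color mode compatible with the reader's input color profile."""
        im = Image.open(watermarked_path)
        transform = ImageCms.buildTransform(
            source_icc, reader_icc, im.mode, "RGB",
            renderingIntent=ImageCms.INTENT_PERCEPTUAL)
        return ImageCms.applyTransform(im, transform)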


At 110, the transformed image data is analyzed to estimate the signal strength of the embedded watermark signal. Examples of techniques for estimating the signal strength of the embedded watermark signal in the transformed image data are described in U.S. Pat. Nos. 8,051,295, 7,796,826, 7,607,016, 7,602,977, 7,352,878, 7,263,203, 7,085,396, 7,058,200, 7,006,662, 6,993,154 and 6,738,495, each of which is incorporated herein by reference in its entirety. Generally, the analysis is performed on individual blocks or patches of the transformed image data which may overlap one another, adjoin (i.e., so as to not overlap) one another, be adjacent to (i.e., so as to not adjoin) one another, or the like or any combination thereof.
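

The cited patents describe production-grade estimation techniques; purely as a simplified stand-in, the following sketch scores each non-overlapping, adjoining block by its normalized correlation against a known reference pattern (which must span at least one block in each dimension).

    import numpy as np

    def block_strength(image, pattern, block=128):
        """Estimate per-block watermark strength as the normalized
        correlation between each block and a known reference pattern.
        A simplified proxy, not a metric from the cited patents."""
        rows, cols = image.shape[0] // block, image.shape[1] // block
        pat = pattern[:block, :block].astype(float)
        pat -= pat.mean()
        out = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                blk = image[r*block:(r+1)*block,
                            c*block:(c+1)*block].astype(float)
                blk -= blk.mean()
                denom = np.linalg.norm(blk) * np.linalg.norm(pat)
                out[r, c] = (blk.ravel() @ pat.ravel()) / denom if denom else 0.0
        return out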


At 112, results of the estimated signal strength may be rendered (e.g., via a display such as a computer monitor communicatively coupled to a computer that performed an act at 104, 106, 108, or the like or any combination thereof) so that a user can identify areas in imagery represented by the transformed image data (e.g., the imagery illustrated in FIG. 2A) where the watermark signal strength is unacceptably low (e.g., in the vicinity of the depicted green and white logo above “Concordia,” in the vicinity of the oatmeal depicted in the bowl, etc., as illustrated by the heat map shown in FIG. 2B).
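

A minimal sketch of such a rendering follows, overlaying per-block estimates (e.g., from a routine such as block_strength above) on the imagery with matplotlib to produce a FIG. 2B-style heat map.

    import matplotlib.pyplot as plt

    def render_heat_map(image, strength):
        """Overlay per-block strength estimates on the imagery so a user
        can spot areas of unacceptably low watermark signal strength."""
        fig, ax = plt.subplots()
        ax.imshow(image, cmap="gray")
        overlay = ax.imshow(strength, cmap="jet", alpha=0.5,
                            extent=(0, image.shape[1], image.shape[0], 0))
        fig.colorbar(overlay, ax=ax, label="estimated signal strength")
        plt.show()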


Upon viewing the rendering of the estimated signal strength, the user may, at 114, provide one or more commands, instructions, or other input (e.g., by interacting with some user interface, keyboard, mouse, touch-screen, stylus, or the like or any combination thereof) to identify one or more blocks or patches of the transformed image data where the watermark signal strength is to be adjusted (e.g., increased). Optionally, the user input indicates an amount or percentage by which the user desires the watermark signal strength be increased. In one embodiment, the user input may also allow the user to decrease watermark signal strength in one or more blocks or patches of the transformed image data (e.g., in cases where the user determines that the watermark signal is undesirably visible).


Upon receiving the user input, the signal strength of the identified blocks/patches can be adjusted (e.g., as described in any of aforementioned U.S. Pat. Nos. 8,051,295, 7,796,826, 7,607,016, 7,602,977, 7,352,878, 7,263,203, 7,085,396, 7,058,200, 7,006,662, 6,993,154 or 6,738,495, or any combination thereof) and, in one embodiment, one or more of the watermark signals can be re-embedded (e.g., at 104) according to the adjusted signal strength. The above-described acts (e.g., 108, 110, 112 and 114) may thereafter be performed again, as desired, until the watermark signal strength is acceptably strong. In another embodiment, however, the user input received at 114 can indicate that the watermark signal strength is acceptably high, and the watermarked image data obtained as a result of performing (or re-performing) the watermark embedding at 104 is rendered at 116. As used herein, watermarked image data is rendered by generating one or more computer-readable files (e.g., each having a raster file format, a vector file format, or the like or any combination thereof) representing the watermarked image data, storing the computer-readable file (e.g., in a computer-readable memory such as Flash memory, hard drive, magnetic tape, optical disk, or the like or any combination thereof), transmitting the computer-readable file (e.g., via FTP, Ethernet, WiFi, or the like or any combination thereof), printing the imagery represented by the watermarked image data, or the like or any combination thereof.
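

A sketch of this embed/transform/estimate/adjust loop follows, written as a fully automated variant (consistent with the automatic adjustment noted below). The embed, transform and estimate arguments are caller-supplied callables standing in for acts 104, 108 and 110 of FIG. 1; all names are illustrative.

    def embed_until_strong(image, payload, embed, transform, estimate,
                           target, alpha=2.0, step=0.5, max_iters=10):
        """Iterate acts 104, 108, 110 and 114: embed at strength alpha,
        transform, estimate per-block strength (as an array), and raise
        alpha until the weakest block meets the target or the iteration
        budget runs out."""
        marked = embed(image, payload, alpha)          # act 104
        for _ in range(max_iters):
            strength = estimate(transform(marked))     # acts 108, 110
            if strength.min() >= target:               # acceptably strong
                return marked
            alpha += step                              # act 114, automated
            marked = embed(image, payload, alpha)      # re-embed at 104
        return marked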


Notwithstanding the foregoing, it will be appreciated that identification of one or more blocks or patches or adjustment of the watermark signal strength can be performed automatically (e.g., without receiving user input) or semi-automatically (e.g., upon receipt of user input identifying one or more blocks or patches of the transformed image data where the watermark signal strength is to be adjusted).


B. Embodiment 2

In this embodiment, the embedding workflow may be performed as described above with respect to “Embodiment 1,” but may be performed by reference to the color gamut of the printer(s) that will be used to print imagery represented by the watermarked image data. In this Embodiment 2, therefore, printer data is obtained at 118, which includes the printer color profile data describing an output color profile (e.g., in an ICC color profile format) of one or more printers configured to print the watermarked image data. In other embodiments, printer data may include printer identifier data (e.g., specifically identifying one or more printers configured to print the watermarked image data), printer type data (e.g., identifying one or more types of printers configured to print the watermarked image data), printer manufacturer data (e.g., identifying one or more manufacturers of printers configured to print the watermarked image data), or the like or any combination thereof. In such other embodiments, the printer identifier data, printer type data, printer manufacturer data, etc., may be used as an index in a database to look up associated printer color profile data representing the output color profile of the printer. Printer type data can identify types of printers according to the printing processes they are designed to carry out, such as offset lithography, flexography, digital printing (e.g., inkjet, xerography, etc.), gravure, screen printing, or the like or any combination thereof.


At 108, upon obtaining the color mode data and the printer color profile data, the watermarked image data is processed to transform the imagery having the embedded watermark signal(s) from an initial color mode (e.g., the color mode with which the imagery was initially represented by the image data obtained at 102) to a transformed color mode (i.e., a color mode that corresponds to or is otherwise compatible with the output color profile of the printer) according to one or more known techniques. The transformed image data thus represents the imagery in a color space in which the printer will print imagery (e.g., at 116).


In one embodiment, the watermarked image data is not transformed according to the reader color profile data (and, thus, act 106 can be omitted). In another embodiment, however, the transformation at 108 can be based upon the reader color profile data (and, thus, act 106 may be performed). For example, after initially transforming the watermarked image data by reference to the printer color profile data, the initially-transformed watermarked image data can be subsequently transformed by reference to the reader color profile data (e.g., as discussed above with respect to Embodiment 1).
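

A sketch of this chained transform, again using Pillow's ImageCms, is shown below; the CMYK printing space and the ICC profile file paths are assumptions.

    from PIL import ImageCms

    def to_print_then_reader(im, source_icc, printer_icc, reader_icc):
        """Embodiment 2 chaining: transform watermarked imagery first into
        the printer's output color space, then into the reader's input
        color space."""
        to_print = ImageCms.buildTransform(
            source_icc, printer_icc, im.mode, "CMYK")
        printed = ImageCms.applyTransform(im, to_print)
        to_reader = ImageCms.buildTransform(
            printer_icc, reader_icc, "CMYK", "RGB")
        return ImageCms.applyTransform(printed, to_reader)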


C. Embodiment 3

In this embodiment, the embedding workflow may be performed as described above with respect to “Embodiment 1” or “Embodiment 2,” but may be performed by reference to the substrate onto which imagery represented by the watermarked image data will be printed. In this Embodiment 3, therefore, substrate data is obtained at 120, which includes information describing the type of substrate (e.g., paper, cardboard, metal, plastic, foil, etc.) on which the watermarked image data will be printed, the surface finish of the substrate (e.g., matte, glossy, etc.), or the like or any combination thereof.


At 108, upon obtaining the color mode data and the substrate data, the watermarked image data is processed to estimate or otherwise emulate the appearance of the watermarked imagery when rendered on a substrate associated with the substrate data. In one embodiment, the watermarked image data is not transformed according to the reader color profile data or the printer color profile data (and, thus, acts 106 and 118 can be omitted). In another embodiment, however, the transformation at 108 can be based upon the printer color profile data (and, thus, act 118 may be performed, and the watermarked image data may be transformed by reference to the printer color profile data before transforming the watermarked image data by reference to the substrate data). In another embodiment, the transformation at 108 can be further based upon the reader color profile data (and, thus, act 106 may be performed, so that the watermarked image data is transformed by reference to the reader color profile data after being transformed by reference to the substrate data).
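

The specification does not prescribe a particular emulation model; purely as a hypothetical sketch, an uncoated (matte) substrate might be approximated by a dot-gain-like tone curve and a reduced white point, as follows.

    import numpy as np

    def emulate_substrate(image, dot_gain=0.15, paper_white=235.0):
        """Hypothetical substrate emulation: a gamma-like tone curve
        approximating dot gain (ink spread darkens midtones) plus a white
        point below 255 approximating a dimmer, uncoated paper white.
        Not a model taken from this specification."""
        x = np.asarray(image, dtype=float) / 255.0
        x = x ** (1.0 + dot_gain)   # darken midtones
        return x * paper_white      # compress toward paper white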


D. Embodiment 4

In this embodiment, the embedding workflow may be performed as described above with respect to “Embodiment 1,” but may be performed by reference to the color gamut of the electronic display(s) that will be used to visually convey imagery represented by the watermarked image data. In this Embodiment 4, therefore, display data is obtained at 122, which includes the display color profile data describing an output color profile (e.g., in an ICC color profile format) of one or more displays configured to display the watermarked image data (e.g., at 116). In other embodiments, display data may include display identifier data (e.g., specifically identifying one or more displays configured to display the watermarked image data), display type data (e.g., identifying one or more types of displays configured to display the watermarked image data), display manufacturer data (e.g., identifying one or more manufacturers of displays configured to display the watermarked image data), or the like or any combination thereof. In such other embodiments, the display identifier data, display type data, display manufacturer data, etc., may be used as an index in a database to look up associated display color profile data representing the output color profile of the display.


At 108, upon obtaining the color mode data and the display color profile data, the watermarked image data is processed to transform the imagery having the embedded watermark signal(s) from an initial color mode (e.g., the color mode with which the imagery was initially represented by the image data obtained at 102) to a transformed color mode (e.g., a color mode that corresponds to or is otherwise compatible with the output color profile of the display) according to one or more known techniques. The transformed image data thus represents the imagery in a color space in which the display will present imagery.


Possible Combinations:


A few combinations of features include the following. Of course, this is not an exhaustive list since many other combinations are evident from the above detailed description.


A. A method comprising:


obtaining image data representing imagery in a first color mode;


steganographically embedding a watermark signal into the image data, thereby generating watermarked image data;


obtaining reader data representing an input color mode of a reader configured to detect or decode the watermarked image data, wherein the input color mode is different from the first color mode;


transforming the watermarked image data from the first color mode to the input color mode;


estimating a signal strength of the watermark signal within the transformed watermarked image data; and


re-embedding the watermark signal in the image data based upon the estimated signal strength within the transformed image data.


B. A method comprising:


obtaining image data representing imagery;


steganographically embedding a watermark signal into the image data, thereby generating watermarked image data;


obtaining printer data representing an output color mode of a printer configured to print the watermarked image data;


transforming the watermarked image data to the output color mode;


obtaining reader data representing an input color mode of a reader configured to detect or decode printed watermarked image data, wherein the input color mode is different from the output color mode;


transforming the watermarked image data from the output color mode to the input color mode;


estimating a signal strength of the watermark signal within the transformed watermarked image data; and


re-embedding the watermark signal in the image data based upon the estimated signal strength within the transformed image data.


C. A system comprising:


a processor;


at least one non-transitory, computer-readable medium communicatively coupled to the processor,


wherein at least one non-transitory, computer-readable medium stores image data, reader data and, optionally, printer data, and


wherein at least one non-transitory, computer-readable medium stores instructions to cause the processor to perform the method of any of combination A or B; and


optionally, a display communicatively coupled to the processor.


D. A non-transitory, computer-readable medium comprising instructions to cause a computer processor to perform the method of any one of combination A or B.


E. An apparatus comprising:


means for obtaining image data representing imagery in a first color mode;


means for steganographically embedding a watermark signal into the image data, thereby generating watermarked image data;


means for obtaining reader data representing an input color mode of a reader configured to detect or decode the watermarked image data, wherein the input color mode is different from the first color mode;


means for transforming the watermarked image data from the first color mode to the input color mode;


means for estimating a signal strength of the watermark signal within the transformed watermarked image data; and


means for re-embedding the watermark signal in the image data based upon the estimated signal strength within the transformed image data.


F. An apparatus comprising:


means for obtaining image data representing imagery;


means for steganographically embedding a watermark signal into the image data, thereby generating watermarked image data;


means for obtaining printer data representing an output color mode of a printer configured to print the watermarked image data;


means for transforming the watermarked image data to the output color mode;


means for obtaining reader data representing an input color mode of a reader configured to detect or decode printed watermarked image data, wherein the input color mode is different from the output color mode;


means for transforming the watermarked image data from the output color mode to the input color mode;


means for estimating a signal strength of the watermark signal within the transformed watermarked image data; and


means for re-embedding the watermark signal in the image data based upon the estimated signal strength within the transformed image data.


CONCLUSION

For the avoidance of doubt we expressly contemplate the combination of subject matter under any above embodiment with the subject matter from the other such detailed embodiments.


Having described and illustrated the principles of the technology with reference to specific embodiments, it will be recognized that the technology can be implemented in many other, different, forms. To provide a comprehensive disclosure without unduly lengthening the specification, applicant hereby incorporates by reference each of the above referenced patent documents in its entirety. Such documents are incorporated in their entireties, even if cited above in connection with specific of their teachings. These documents disclose technologies and teachings that can be incorporated into the arrangements detailed herein, and into which the technologies and teachings detailed herein can be incorporated.


The particular combinations of elements and features in the above-detailed embodiments are exemplary only; the interchanging and substitution of these teachings with other teachings in this and the incorporated-by-reference patents are also contemplated.

Claims
  • 1. An apparatus comprising: memory for storing image data; means for embedding a machine-readable signal into stored image data, the machine-readable signal comprising a synchronization signal and a plural-bit identifier, thereby generating embedded image data; memory for storing substrate data, the substrate data associated with a substrate upon which the embedded image data is to be printed; memory for storing printer data, the printer data associated with a printer that is anticipated to print the embedded image data upon the substrate; means for processing the embedded image data with reference to the substrate data and the printer data, thereby generating processed, embedded image data; means for estimating a signal strength of the machine-readable signal embedded within the processed, embedded image data, in which said means for estimating utilizes the synchronization signal, and in which said means for estimating yields an estimated signal strength; and means for controlling visual display of the estimated signal strength.
  • 2. The apparatus of claim 1 in which the estimated signal strength is controlled to be visually displayed as a multi-color heat map spatially relative to the embedded image data.
  • 3. The apparatus of claim 2 in which the estimated signal strength is controlled to be visually displayed spatially relative to the embedded image data with a first color representing first signal strength and a second color representing second signal strength, in which the second signal strength comprises a signal strength that is relatively higher than the first signal strength.
  • 4. The apparatus of claim 3, in which the estimated signal strength is controlled to be visually displayed spatially relative to the image data, with a third color representing a third signal strength comprising a signal strength between the first signal strength and the second signal strength.
  • 5. The apparatus of claim 1 in which the printer data comprises color profile information of the printer that is anticipated to print the embedded image data.
  • 6. The apparatus of claim 1 in which the substrate data comprises data associated with substrate surface finish.
  • 7. The apparatus of claim 1 in which the substrate data comprises data associated with a type of substrate, the type of substrate indicating paper, cardboard, metal, plastic, or foil.
  • 8. The apparatus of claim 1 in which said means for processing processes the embedded image data with reference to the substrate data, the printer data and with reference to reader data associated with a reader that is anticipated to read the machine-readable signal from printed, embedded image data.
  • 9. The apparatus of claim 1 in which said means for embedding utilizes digital watermarking to embed a machine-readable signal into the stored image data.
  • 10. The apparatus of claim 1 further comprising memory for storing color data associated with the stored image data, in which said means for processing utilizes the color data to transform the embedded image data from an initial color mode to a transformed color mode.
  • 11. The apparatus of claim 10 in which the transformed color mode is associated with a color space for a signal decoder.
  • 12. An image processing method comprising: embedding a machine-readable signal into image data, the machine-readable signal comprising a synchronization signal and a plural-bit identifier, thereby generating embedded image data; accessing substrate data, the substrate data associated with a substrate upon which the embedded image data is to be printed; accessing printer data, the printer data associated with a printer that is anticipated to print the embedded image data upon the substrate; accessing color data, the color data associated with the image data; processing the embedded image data with reference to the substrate data, the printer data and the color data, thereby generating processed, embedded image data; determining a signal strength of the machine-readable signal embedded within the processed, embedded image data, in which said determining utilizes the synchronization signal, and in which said determining yields a determined signal strength.
  • 13. The method of claim 12 further comprising controlling visual display of the determined signal strength spatially relative to the image data.
  • 14. The method of claim 13 in which said controlling visual display of the determined signal strength utilizes a first color representing first signal strength and a second color representing second signal strength, in which the second signal strength comprises a signal strength that is relatively higher than the first signal strength.
  • 15. The method of claim 14 in which said controlling visual display utilizes a third color representing a third signal strength comprising a signal strength between the first signal strength and the second signal strength.
  • 16. The method of claim 12 in which the printer data comprises color profile information of the printer that is anticipated to print the embedded image data.
  • 17. The method of claim 12 in which the substrate data comprises data associated with substrate surface finish.
  • 18. The method of claim 12 in which the substrate data comprises data associated with a type of substrate comprising a type of paper, cardboard, metal, plastic, or foil.
  • 19. The method of claim 12 in which said processing further processes the embedded image data with reference to reader data associated with a reader that is anticipated to read the machine-readable signal from printed, embedded image data.
  • 20. The method of claim 12 in which said embedding utilizes digital watermarking to embed a machine-readable signal into the image data.
  • 21. The method of claim 12 in which said processing utilizes the color data to transform the embedded image data from an initial color mode to a transformed color mode.
  • 22. The method of claim 21 in which the transformed color mode is associated with a color space of a reader that is anticipated to read the machine-readable signal from printed, embedded image data.
  • 23. A non-transitory computer readable medium comprising instructions stored therein that, when executed by one or more processors, cause the one or more processors to perform the following acts: embedding a machine-readable signal into image data, the machine-readable signal comprising a synchronization signal and a plural-bit identifier, thereby generating embedded image data; obtaining substrate data, the substrate data associated with a substrate upon which the embedded image data is to be printed; obtaining printer data, the printer data associated with a printer that is anticipated to print the embedded image data upon the substrate; obtaining color data, the color data associated with the image data; processing the embedded image data with reference to the substrate data, the printer data and the color data, thereby generating processed, embedded image data; determining a signal strength of the machine-readable signal embedded within the processed, embedded image data, in which said determining utilizes the synchronization signal, and in which said determining yields a determined signal strength.
  • 24. The non-transitory computer readable medium of claim 23 further comprising instructions for controlling visual display of the determined signal strength spatially relative to the image data.
  • 25. The non-transitory computer readable medium of claim 24 in which said controlling visual display of the determined signal strength utilizes a first color representing first signal strength and a second color representing second signal strength, in which the second signal strength comprises a signal strength that is relatively higher than the first signal strength.
  • 26. The non-transitory computer readable medium of claim 23 in which said processing utilizes the color data to transform the embedded image data from an initial color mode to a transformed color mode.
  • 27. The non-transitory computer readable medium of claim 26 in which the transformed color mode is associated with a color space of a reader that is anticipated to read the machine-readable signal from printed, embedded image data.
RELATED APPLICATION DATA

This application is a continuation of U.S. application Ser. No. 15/655,376, filed Jul. 20, 2017 (U.S. Pat. No. 10,560,599), which is a continuation of U.S. application Ser. No. 14/881,448, filed Oct. 13, 2015 (U.S. Pat. No. 9,716,807), which claims benefit of U.S. Provisional Application No. 62/063,248, filed Oct. 13, 2014, each of which is hereby incorporated herein by reference in its entirety.

US Referenced Citations (160)
Number Name Date Kind
5335267 Evers Aug 1994 A
5365048 Komiya Nov 1994 A
5745604 Rhoads Apr 1998 A
5822436 Rhoads Oct 1998 A
5832119 Rhoads Nov 1998 A
5859920 Daly Jan 1999 A
5862260 Rhoads Jan 1999 A
5949885 Leighton Sep 1999 A
5974150 Kaish Oct 1999 A
6122403 Rhoads Sep 2000 A
6175627 Petrovic Jan 2001 B1
6208735 Cox Mar 2001 B1
6209094 Levine Mar 2001 B1
6246775 Nakamura Jun 2001 B1
6345104 Rhoads Feb 2002 B1
6385329 Sharma May 2002 B1
6408082 Rhoads Jun 2002 B1
6449377 Rhoads Sep 2002 B1
6516079 Rhoads Feb 2003 B1
6535617 Hannigan Mar 2003 B1
6535618 Rhoads Mar 2003 B1
6567533 Rhoads May 2003 B1
6590996 Reed Jul 2003 B1
6614914 Rhoads Sep 2003 B1
6625297 Bradley Sep 2003 B1
6631198 Hannigan Oct 2003 B1
6683966 Tian Jan 2004 B1
6698658 McQueen Mar 2004 B2
6704869 Rhoads Mar 2004 B2
6718046 Reed Apr 2004 B2
6738495 Rhoads May 2004 B2
6757406 Rhoads Jun 2004 B2
6763123 Reed Jul 2004 B2
6771797 Ahmed Aug 2004 B2
6785398 Shimizu Aug 2004 B1
6850734 Bruno Feb 2005 B1
6879701 Rhoads Apr 2005 B1
6891959 Reed May 2005 B2
6912295 Reed Jun 2005 B2
6947571 Rhoads Sep 2005 B1
6971012 Shimizu Nov 2005 B1
6988202 Rhoads Jan 2006 B1
6993154 Brunk Jan 2006 B2
7006662 Alattar Feb 2006 B2
7013021 Sharma Mar 2006 B2
7046819 Sharma May 2006 B2
7054461 Zeller May 2006 B2
7058200 Donescu Jun 2006 B2
7085396 Pelly Aug 2006 B2
7116781 Rhoads Oct 2006 B2
7130442 Braudaway Oct 2006 B2
7197164 Levy Mar 2007 B2
7224819 Levy May 2007 B2
7231061 Bradley Jun 2007 B2
7263203 Rhoads Aug 2007 B2
7277468 Tian Oct 2007 B2
7286685 Brunk Oct 2007 B2
7305104 Carr Dec 2007 B2
7319775 Sharma Jan 2008 B2
7352878 Reed Apr 2008 B2
7502759 Hannigan Mar 2009 B2
7515733 Rhoads Apr 2009 B2
7570781 Rhoads Aug 2009 B2
7607016 Brunk Oct 2009 B2
7656930 Tian Feb 2010 B2
7657058 Sharma Feb 2010 B2
7738673 Reed Jun 2010 B2
7796826 Rhoads Sep 2010 B2
7916354 Rhoads Mar 2011 B2
7986807 Stach Jul 2011 B2
8005254 Rhoads Aug 2011 B2
8051295 Brunk Nov 2011 B2
8127137 Levy Feb 2012 B2
8243980 Rhoads Aug 2012 B2
8245926 Guess Aug 2012 B2
8301893 Brundage Oct 2012 B2
8321350 Durst Nov 2012 B2
8339668 Kato Dec 2012 B2
8488837 Bae Jul 2013 B2
8610958 Rossier Dec 2013 B2
8923546 Reed Dec 2014 B2
9224184 Bai Dec 2015 B2
9305559 Sharma Apr 2016 B2
9449357 Lyons Sep 2016 B1
9690967 Brundage Jun 2017 B1
9716807 Holub Jul 2017 B2
9892301 Holub Feb 2018 B1
9892478 Calhoon Feb 2018 B2
1019878 Bradley Feb 2019 A1
1021718 Holub Feb 2019 A1
1027584 Brundage Apr 2019 A1
10304149 Falkenstern May 2019 B2
10453163 Reed Oct 2019 B2
10560599 Holub Feb 2020 B2
1074823 Brundage Aug 2020 A1
10748232 Kamath Aug 2020 B2
10783601 Rodriguez Sep 2020 B1
20010019618 Rhoads Sep 2001 A1
20020003903 Engeldrum Jan 2002 A1
20020009208 Alattar Jan 2002 A1
20020015508 Hannigan Feb 2002 A1
20020054355 Brunk May 2002 A1
20020146120 Anglin Oct 2002 A1
20020157005 Brunk Oct 2002 A1
20020191812 Kim Dec 2002 A1
20020194064 Parry Dec 2002 A1
20030002710 Rhoads Jan 2003 A1
20030009670 Rhoads Jan 2003 A1
20030025423 Miller Feb 2003 A1
20030053653 Rhoads Mar 2003 A1
20030095685 Tewfik May 2003 A1
20030156733 Zeller Aug 2003 A1
20040190751 Rhoads Sep 2004 A1
20040263911 Rodriguez Dec 2004 A1
20050110892 Yun May 2005 A1
20050157907 Reed Jul 2005 A1
20060008112 Reed Jan 2006 A1
20060083403 Zhang Apr 2006 A1
20060153422 Tapson Jul 2006 A1
20060171559 Rhoads Aug 2006 A1
20060230273 Crichton Oct 2006 A1
20070091376 Calhoon Apr 2007 A1
20070130197 Richardson Jun 2007 A1
20080025554 Landwehr Jan 2008 A1
20080089550 Brundage Apr 2008 A1
20080137749 Tian Jun 2008 A1
20080192275 Reed Aug 2008 A1
20080309612 Gormish Dec 2008 A1
20090046931 Xiao Feb 2009 A1
20090060331 Liu Mar 2009 A1
20090067671 Alattar Mar 2009 A1
20090074242 Yamamoto Mar 2009 A1
20090097702 Rhoads Apr 2009 A1
20100008534 Rhoads Jan 2010 A1
20100027851 Walther Feb 2010 A1
20100142003 Braun Jun 2010 A1
20120133993 Ohira May 2012 A1
20120277893 Davis Nov 2012 A1
20130010150 Silverbrook Jan 2013 A1
20130157729 Tabe Jun 2013 A1
20130223673 Davis Aug 2013 A1
20130336525 Kurtz Dec 2013 A1
20140027516 Fushiki Jan 2014 A1
20140112524 Bai Apr 2014 A1
20150030201 Holub Jan 2015 A1
20150146262 Fan May 2015 A1
20150156369 Reed Jun 2015 A1
20150187039 Reed Jul 2015 A1
20160034231 Miyake Feb 2016 A1
20160105585 Holub Apr 2016 A1
20160198064 Bai Jul 2016 A1
20160316098 Reed Oct 2016 A1
20160364826 Wang Dec 2016 A1
20170024845 Filler Jan 2017 A1
20170061563 Falkenstern Mar 2017 A1
20170223218 Su Aug 2017 A1
20180130169 Falkenstern May 2018 A1
20180352111 Bai Dec 2018 A1
20190070880 Falkenstern Mar 2019 A1
20190332840 Sharma Oct 2019 A1
Foreign Referenced Citations (1)
Number Date Country
2013033442 Mar 2013 WO
Non-Patent Literature Citations (29)
Entry
Avcibas, et al., ‘Steganalysis of Watermarking Techniques Using Images Quality Metrics’, Proceedings of SPIE, Jan. 2001, vol. 4314, pp. 523-531.
Chih-Chung Chang and Chih-Jen Lin, ‘LIBSVM: A Library for Support Vector Machines,’ ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 3, Article 27, Publication date: Apr. 2011. 27 pgs.
Dautzenberg, ‘Watermarking Images,’ Department of Microelectronics and Electrical Engineering, Trinity College Dublin, 47 pages, Oct. 1994.
Fan, et al., ‘LIBLINEAR: A Library for Large Linear Classification,’ Journal of Machine Learning Research 9 (2008) 1871-1874.
Fridrich, Kodovský & Holub, ‘Steganalysis of Content-Adaptive Steganography in Spatial Domain,’ Information Hiding, vol. 6958 of the series Lecture Notes in Computer Science pp. 102-117, 2011.
Hernandez et al., ‘Statistical Analysis of Watermarking Schemes for Copyright Protection of Images,’ Proceedings of the IEEE, vol. 87, No. 7, Jul. 1999. 22 pgs.
Holub & Fridrich, ‘Digital image steganography using universal distortion,’ IH&MMSec '13 Proceedings of the first ACM workshop on Information hiding and multimedia security, pp. 59-68 (2013).
J. Fridrich and J. Kodovský, ‘Rich models for steganalysis of digital images,’ IEEE Trans. on Information Forensics and Security, 7(3):868-882, Jun. 2012.
J. Kodovský, J. Fridrich, and V. Holub, ‘Ensemble classifiers for steganalysis of digital media,’ IEEE Trans. on Inform. Forensics and Security, 7(2):432-444, Apr. 2012.
Kodovský, Fridrich & Holub, ‘On dangers of overtraining steganography to incomplete cover model,’ MM&Sec '11 Proceedings of the thirteenth ACM multimedia workshop on Multimedia and security, pp. 69-76 (2011).
Kutter, ‘Watermarking Resisting to Translation, Rotation and Scaling,’ Proc. of SPIE: Multimedia Systems and Applications, vol. 3528, pp. 423-431, Boston, Nov. 1998.
Lin et al., ‘Rotation, Scale, and Translation Resilient Watermarking for Images,’ IEEE Transactions on Image Processing, vol. 10, No. 5, May 2001, pp. 767-782.
May 19, 2017 Response to Amendment under Rule 312; May 12, 2017 Amendment after Notice of Allowance; Mar. 15, 2017 Notice of Allowance; Jan. 12, 2017 non-final Office Action; and Jan. 13, 2017 Amendment, all from assignee's U.S. Appl. No. 15/154,572. 48 pgs.
Nikolaidis and Pitas., ‘Region-based Image Watermarking,’ IEEE Transactions on Image Processing, vol. 10, No. 11, Nov. 2001, pp. 1726-1740.
O'Ruanaidh et al., ‘Rotation, Scale and Translation Invariant Digital Image Watermarking,’ Int. Conf. on Image Proc., Oct. 1997, IEEE, pp. 536-539.
O'Ruanaidh et al., ‘Rotation, Scale and Translation Invariant Spread Spectrum Digital Image Watermarking,’ Signal Processing 66, May 1, 1998, pp. 303-317.
O'Ruanaidh, ‘Rotation, Scale and Translation Invariant Digital Image Watermarking,’ Signal Processing, pp. 2-15, May 1, 1998.
Pereira et al., ‘Template Based Recovery of Fourier-Based Watermarks Using Log-Polar and Log-log Maps,’ Proc. IEEE Int. Conf. on Multimedia Computing and Systems, vol. 1, Jun. 1999, pp. 870-874.
T. Pevný, P. Bas, and J. Fridrich, ‘Steganalysis by subtractive pixel adjacency matrix,’ IEEE Transactions on Information Forensics and Security, 5(2):215-224, Jun. 2010.
R. O. Duda, P. E. Hart, and D. G. Stork, ‘Pattern Classification.’ Wiley Interscience, 2nd edition, 2000. 737 pgs.
Sheng et al., ‘Experiments on Pattern Recognition Using Invariant Fourier-Mellin Descriptors,’ Journal of Optical Society of America, vol. 3. No. 6, Jun. 1986, pp. 771-776.
Su et al., ‘An Image Watermarking Scheme to Resist Generalized Geometrical Transforms,’ Proc. SPIE vol. 4209: Multimedia Systems and Applications III, Nov. 2000, pp. 354-365.
Su et al., ‘Synchronized Detection of the Block-based Watermark with Invisible Grid Embedding,’ Proc. SPIE vol. 4314: Security and Watermarking of Multimedia Contents III, Jan. 2001, pp. 406-417.
U.S. Appl. No. 16/988,366, filed Aug. 7, 2020. 90 pgs.
U.S. Appl. No. 15/154,529, filed May 13, 2016.
U.S. Appl. No. 15/154,572, filed May 13, 2016. 91 pgs.
U.S. Appl. No. 61/856,476, filed Jul. 19, 2013. 20 pgs.
U.S. Appl. No. 61/918,214, filed Dec. 19, 2013. 29 pgs.
U.S. Appl. No. 62/322,193, filed Apr. 13, 2016. 73 pgs.
Related Publications (1)
Number Date Country
20200288036 A1 Sep 2020 US
Provisional Applications (1)
Number Date Country
62063248 Oct 2014 US
Continuations (2)
Number Date Country
Parent 15655376 Jul 2017 US
Child 16785286 US
Parent 14881448 Oct 2015 US
Child 15655376 US