Method for constructing a composite image incorporating a hidden authentication image

Information

  • Patent Grant
  • Patent Number
    9,275,303
  • Date Filed
    Wednesday, February 12, 2014
  • Date Issued
    Tuesday, March 1, 2016
Abstract
A method is provided for constructing a composite image having an authentication image formed therein. The authentication image is viewable using a decoder lens having one or more decoder lens frequencies. The method includes generating two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value. At least one of the two gray-scale component images includes a representation of the authentication image. The method further includes determining a first pattern of the component image elements for the two gray-scale component images, the first pattern including a first element configuration and at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies. The method includes extracting at least a portion of the content from the component image elements of the two gray-scale component images and constructing a composite image having a second pattern of composite image elements.
Description
FIELD OF THE INVENTION

The invention relates generally to the field of counterfeit protection, and more particularly to the field of electronic and printed document protection using encoded images.


BACKGROUND OF THE INVENTION

Document falsification and product counterfeiting are significant problems that have been addressed in a variety of ways. One of the more successful approaches has been the use of latent or hidden images applied to or printed on objects to be protected. These images are generally not viewable without the assistance of specialized devices that render them visible.


One approach to the formation of a latent image is to optically encode the image so that, when applied to an object, the image can be viewed through the use of a corresponding decoding device. Such images may be used on virtually any form of printed document including legal documents, identification cards and papers, labels, currency, stamps, etc. They may also be applied to goods or packaging for goods subject to counterfeiting.


Objects to which an encoded image is applied may be authenticated by decoding the encoded image and comparing the decoded image to an expected authentication image. The authentication image may include information specific to the object being authenticated or information relating to a group of similar objects (e.g., products produced by a particular manufacturer or facility). Production and application of encoded images may be controlled so that they cannot easily be duplicated. Further, the encoded image may be configured so that tampering with the information on the document or label is readily apparent.


Authentication of documents and other objects “in the field” has typically required the use of separate decoders such as lenticular or micro-array lenses that optically decode the encoded images. These lenses may have optical characteristics that correspond to the parameters used to encode and apply the authentication image, and they must be properly oriented in order for the user to decode and view the image. The decoding lenses may also be able to separate secondary images from the encoded images. For example, the decoding lens can be a lenticular lens having lenticules that follow a straight line pattern, wavy line pattern, zigzag pattern, concentric rings pattern, cross-line pattern, aligned dot pattern, offset dot pattern, graduated frequency pattern, target pattern, herringbone pattern or any other pattern. Other decoding lenses include fly's eye lenses and any other lens having a multidimensional pattern of lens elements. The elements of such lenses can be arranged using a straight line pattern, square pattern, shifted square pattern, honeycomb pattern, wavy line pattern, zigzag pattern, concentric rings pattern, cross-line pattern, aligned dot pattern, offset dot pattern, graduated frequency pattern, target pattern, herringbone pattern or any other pattern. Examples of some of these decoding lenses are illustrated in FIG. 1.


In some cases, lens element patterns and shapes may be so complex that they are impossible or impractical to manufacture. While such patterns may be highly desirable from the standpoint of their anti-counterfeiting effectiveness, the cost and technological difficulty of their manufacture may limit their use.


SUMMARY OF THE INVENTION

The present invention provides methods for constructing a digital encoded image in the form of a composite image constructed from a series of component images. An aspect of the invention provides a method for constructing a composite image having an authentication image formed therein. The authentication image is viewable by placing a decoder lens having a plurality of lens elements defining one or more decoder lens frequencies over an object to which the composite image has been applied. The method includes generating two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value. At least one of the two gray-scale component images is configured to include a representation of the authentication image. The method further includes determining a first pattern of component image elements for the two gray-scale component images. The first pattern includes a first element configuration and at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies. The component image elements for a corresponding gray-scale component image collectively carry the content of that gray-scale component image. The method still further includes extracting at least a portion of the content from the component image elements of the two gray-scale component images and constructing a composite image having a second pattern of composite image elements. The second pattern has a second element configuration that corresponds to the first element configuration and has the at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies. The composite image elements include the content extracted from the component image elements obtained from the two gray-scale component images.


Another aspect of the invention provides an authenticatable object having a surface configured for receiving a composite security image and a composite security image applied to the surface. The composite security image includes a plurality of composite image elements having subelements defining content extracted from component image elements of two gray-scale component images, the two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value. At least one of the two gray-scale component images is configured to include a representation of an authentication image. The component image elements include a first pattern for the two gray-scale component images, the first pattern including a first element configuration and at least one element frequency that is equal to or a multiple of one or more decoder lens frequencies. According to one example, component image elements for a corresponding gray-scale component image collectively carry content of the corresponding gray-scale component image. A composite image is provided having a second pattern of the composite image elements, the second pattern having a second element configuration that corresponds to the first element configuration and having the at least one element frequency that is equal to or a multiple of the one or more decoder lens frequencies. The composite image elements include content extracted from the component image elements obtained from the two gray-scale component images. The authentication image is viewable through a decoder lens placed over the composite security image.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention can be more fully understood by reading the following detailed description together with the accompanying drawings, in which like reference indicators are used to designate like elements, and in which:



FIGS. 1A-1G illustrate lens element patterns that may be used to view images produced using a method of the invention;



FIGS. 2A-2D illustrate component images used to produce a composite image according to an embodiment of the invention;



FIG. 2E illustrates a composite image corresponding to the component images illustrated in FIGS. 2A-2D according to an embodiment of the invention;



FIGS. 3A-3C illustrate a schematic representation of component image elements used to produce a composite image according to one embodiment of the invention;



FIGS. 3D-3F illustrate a schematic representation of component image elements used to produce a composite image according to a second embodiment of the invention;



FIGS. 4A-4C illustrate a schematic representation of component image elements used to produce a composite image according to a third embodiment of the invention;



FIG. 5 illustrates a composite image produced in a method according to an embodiment of the invention;



FIGS. 6A-6C illustrate a schematic representation of component images used to produce a composite image according to an embodiment of the invention;



FIG. 7 is a flow diagram of a method of producing a composite image incorporating an authentication image according to an embodiment of the invention;



FIGS. 8A and 8B illustrate component images used to produce a composite image according to an embodiment of the invention;



FIG. 8C illustrates a composite image corresponding to the component images illustrated in FIGS. 8A and 8B according to an embodiment of the invention;



FIG. 8D illustrates an enlargement of a portion of the composite image illustrated in FIG. 8C according to an embodiment of the invention;



FIG. 9A illustrates a primary image according to an embodiment of the invention;



FIG. 9B illustrates a composite image formed from a primary image illustrated in FIG. 9A screened using the composite image of FIG. 8C in accordance with a method according to an embodiment of the invention;



FIG. 9C illustrates an enlargement of a portion of the composite image illustrated in FIG. 9B according to an embodiment of the invention;



FIG. 10A illustrates a primary image according to an embodiment of the invention;



FIGS. 10B and 10C illustrate component images formed from a primary image illustrated in FIG. 10A that produce a composite image using a method according to an embodiment of the invention;



FIG. 10D illustrates a composite image corresponding to the component images illustrated in FIGS. 10B and 10C according to an embodiment of the invention;



FIG. 10E illustrates an enlargement of a portion of the composite image illustrated in FIG. 10D according to an embodiment of the invention;



FIGS. 11A-11E illustrate primary component images that are used to produce a composite image using a method according to an embodiment of the invention;



FIGS. 11F-11H illustrate secondary component images that are used to produce a composite image using a method according to an embodiment of the invention;



FIGS. 12A-12F illustrate a schematic representation of the elements of a series of component images used to produce a composite image using a method according to an embodiment of the invention;



FIG. 12G illustrates a composite image corresponding to the component images illustrated in FIGS. 12A-12F according to an embodiment of the invention;



FIG. 13 is a photograph of an example document to which composite images produced using methods of the invention were applied;



FIG. 14 is a photograph illustrating the use of a decoder lens to view an authentication image incorporated into a composite image applied to the document shown in FIG. 13; and



FIG. 15 is a photograph illustrating the use of a decoder lens to view an authentication image incorporated into a composite image applied to the document shown in FIG. 13.





DETAILED DESCRIPTION OF THE INVENTION

The present invention provides for the encoding and decoding of encoded images. In some embodiments, an authentication or other image is broken into component images that are preferably tonal complements of one another; i.e., they are balanced around a particular color shade. The component images are then systematically sampled and the sampled portions assembled to provide a composite image that appears to the eye to be a single tone image (the single tone being the particular color shade). As will be discussed, the samples are taken according to a pattern of the decoder lens that will be used to view the authentication image.


In some embodiments, multiple authentication images may be used, each such image being used to establish multiple component images. Samples from each component of each authentication image can then be used to form a single composite image that can be decoded to reveal the authentication images.


In some embodiments, an authentication image can be “hidden” within a visible source image by constructing a composite image as described above and applying the composite image to the source image as a halftone screen. In other embodiments, an authentication image may be hidden within a source image by creating a composite from samples of component images derived from the source image. In these component images, certain areas are masked according to the content of the image to be hidden. The tonal value of the masked area of each component image is taken from the masked area of one of the other component images.


The principles of the invention will now be discussed in more detail. As discussed above, the invention provides an encoded image in the form of a composite image constructed from multiple component images. The invention also provides methods of using a multiple component approach for hiding information in a composite image.


The use of component images takes advantage of the fact that the human eye is unable to discern tiny details in an encoded image. The encoded image is usually a printed or otherwise displayed image, and the human eye tends to merge its fine details together. This principle is commonly exploited in printing photographs and other images: the printer produces many tiny dots or other structures on the paper, with individual dots as small as thousandths of an inch. These individual dots are not perceptible to human vision; taken together, however, they are averaged by the eye to create a shade of color. The size of the dots or the density of the dots determines the perceived color shade. If the dots are bigger, or if they are closer together, the eye will perceive a darker shade. If the dots are smaller, or if they are placed further apart, the eye will perceive a lighter shade.


In the methods of the invention, an authentication or other image can be broken into tonally complementary component images. The term “tonally complementary” means that the component images are balanced around a particular color shade. This means that if corresponding elements (i.e., elements from corresponding locations) of the component images are viewed together, the eye will perceive the color shade around which the component tones are balanced.



FIG. 2A shows a first component image and FIG. 2B shows a second component image defining an authentication image. In component image 1 of FIG. 2A, a solid background 10 with a first tonal shade surrounds an area 20 with a second tonal shade that defines the authentication image (the block letters “USA”). In component image 2 of FIG. 2B, the tonal values are reversed; that is, the background area 10′ has the second tonal shade and the area 20′ forming the letters USA has the first tonal shade. The first and second tonal shades are balanced around a single shade so that if the component images are combined, the naked eye will perceive only that single shade. Each component image may be referred to as a “phase” of the original image.
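
As an illustration of this tonal balancing, the sketch below builds two complementary phases from a binary mask of the authentication mark. It is a minimal sketch, not the patented implementation: the function name, the use of numpy, and the 60%/40% ink densities are assumptions chosen so that the two phases average to a 50% tint.

```python
import numpy as np

def make_balanced_components(auth_mask: np.ndarray,
                             dark: float = 0.6,
                             light: float = 0.4) -> tuple[np.ndarray, np.ndarray]:
    """Build two gray-scale phases from a binary mask of the hidden mark.

    Phase 1 uses the dark tone for the background and the light tone for the
    mark; phase 2 reverses them.  Because (dark + light) / 2 is the target
    tint, corresponding locations of the two phases are tonally balanced.
    """
    comp1 = np.where(auth_mask, light, dark)
    comp2 = np.where(auth_mask, dark, light)
    return comp1, comp2

# A tiny mask standing in for the block letters of FIGS. 2A and 2B.
mask = np.zeros((4, 8), dtype=bool)
mask[1:3, 2:6] = True
c1, c2 = make_balanced_components(mask)
assert np.allclose((c1 + c2) / 2, 0.5)   # balanced around a 50% tint everywhere
```

Any pair of densities that averages to the desired tint would serve equally well.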


In an exemplary method of the invention, each of the phases can be divided into small elements according to a pattern corresponding to a lens element pattern of a decoder lens. These elements may, for example, be linear (straight or curved) elements or segments as illustrated in FIGS. 1A-1C that correspond to the lens elements of a lenticular lens or may be in the form of a matrix of two dimensional elements corresponding to a multiple-element lens such as a fly's eye lens as illustrated in FIGS. 1D-1G. In the example shown in FIGS. 2C and 2D, the component images are divided into an array of square elements 30, 30′, which could, for example, correspond in size and position to the elements of a fly's eye lens. The component element pattern has a frequency that corresponds to the element frequency (or one of the element frequencies) of the lens. It may have the same frequency (or frequencies, for a multi-dimensional pattern) as the element frequency (or frequencies) of the lens, or it may have a frequency that is a multiple of the lens frequency.
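
The relationship between the decoder lens frequency and the element pitch can be expressed in a few lines; the 300 dpi rendering resolution and 75 lens-per-inch figures below are illustrative assumptions, not values taken from the patent.

```python
def element_pitch_px(render_dpi: int, lens_lpi: float, multiple: int = 1) -> int:
    """Pixels per component-image element so that the element frequency equals
    the decoder lens frequency (multiple=1) or an integer multiple of it."""
    pitch = render_dpi / (lens_lpi * multiple)
    if abs(pitch - round(pitch)) > 1e-6:
        raise ValueError("render_dpi should be divisible by lens_lpi * multiple")
    return int(round(pitch))

print(element_pitch_px(300, 75))      # 4-pixel elements match the lens frequency
print(element_pitch_px(300, 75, 2))   # 2-pixel elements sample at twice the lens frequency
```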


As shown in FIG. 2E, the elements 30, 30′ of the two component images may be systematically divided into subelements 32, 32′ from which samples can be taken and combined to form a composite image 40 having an average tone that matches that of the shade around which the two component images were balanced. In the example shown in FIG. 2E, the elements and subelements are so large that the authentication image is readily apparent. It will be understood, however, that if the elements of the composite images are small enough, the human eye will merge them together so that only a single uniform color shade is perceived.


Although the composite image would appear to the naked eye to be a single uniform tone, when a decoder lens having a frequency, shape and geometrical structure corresponding to the component image elements is placed in the correct orientation over the image, the decoder separates the portions of the composite image contributed by each of the component images. This allows the authentication image to be viewed by a human observer looking through the decoder. The decoder elements may have magnifying properties and the particular component that is viewed by the observer may change depending on the angle of view through the decoder. Thus, from one angle, the viewer may see a light background with a dark inset and from another angle, he may see the reverse.


The example component images of FIGS. 2A-2E use two color shades. It will be understood, however, that the number of color shades is unlimited. The only requirement is that if the composite image is to produce a single apparent tonal value, then the tonal values for corresponding elements of the two component images must all be balanced around that single tonal value. Component images may also be balanced around multiple tonal values, in which case, the resulting composite image will have multiple apparent tonal values.


In some embodiments like those exemplified in FIGS. 2A-2E, the composite image may be designed to work with fly's eye lenses arranged in an array with a square or rectangular grid. It will be understood, however, that the lens elements may be formed in virtually any pattern (symmetric or asymmetric, regularly or irregularly spaced) and have any shape. The size of the elements of the composite image may be determined by the size of the elements in the decoding lens. As noted above, the component images may be sampled at a frequency that is a multiple of the frequency of the lens elements in the decoding lens. For example, the sampling of the component image may have the same frequency as the decoding elements, or twice or three times that frequency.


In the example shown in FIGS. 2A-2E, alternating portions of the component image were used to form the composite image. The matrix pattern thus appeared like this:

Component 1   Component 2   Component 1   Component 2   Component 1   Component 2
Component 2   Component 1   Component 2   Component 1   Component 2   Component 1
Component 1   Component 2   Component 1   Component 2   Component 1   Component 2
Component 2   Component 1   Component 2   Component 1   Component 2   Component 1
Component 1   Component 2   Component 1   Component 2   Component 1   Component 2
Component 2   Component 1   Component 2   Component 1   Component 2   Component 1


It will be understood that other systematic approaches of collecting and ordering portions of the component images to form the composite image and/or the elements inside the composite image may be utilized. FIGS. 3A and 3B, for example, illustrate an approach to collecting and ordering portions of the component images 100, 100′ to form elements of the composite image 100″ illustrated in FIG. 3C. The component images 100, 100′ may be constructed using tonal values balanced around one or more tonal values, with the balanced values used to define an authentication image.


In the examples of FIGS. 3A-3F, the component images 100, 100′ are divided into elements 130, 130′ each having a 2×2 pattern of subelements 132, 132′, similar to the pattern used in the examples of FIGS. 2A-2E. It will be understood that while only a single exemplary element 130, 130′ is shown for each component 100, 100′, the method involves dividing each entire image into a grid of such elements. Diagonally opposed subelements A1 and A2 are then taken from each element (or cell) 130 of the first component image 100 and diagonally opposed subelements B1 and B2 are taken from the corresponding element 130′ of the second component image 100′. The B1 and B2 portions may be selected so that they differ in exact location from the A1 and A2 portions, as shown in FIGS. 3A-3C. Alternatively, the B portions may be taken from the same locations as the A portions as shown in FIGS. 3D-3F. In either case, the selected portions are then used to construct a composite image 100″ illustrated in FIGS. 3C and 3F. In the example of FIGS. 3A-3C, the subelements may all be placed in the corresponding element 130″ of the composite image 100″ in the exact locations from which they were taken. In the example of FIGS. 3D-3F, the B subelements may be positioned in a slightly different location in the composite image from where they were taken in order to fill out the element 130″. In both examples, however, the four subelements are all taken from the same cell location. This assures that the corresponding cell 130″ in the composite image 100″ will have the same apparent tonal value in either case.
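
A minimal sketch of this kind of cell-based sampling is shown below; choosing component 1 wherever the subelement's row and column indices have an even sum reproduces both the alternation in the matrix above and the diagonal A1/A2 versus B1/B2 selection of FIGS. 3A-3F. The numpy representation and the parameter names are assumptions for illustration only.

```python
import numpy as np

def checkerboard_composite(comp1: np.ndarray, comp2: np.ndarray,
                           cell: int) -> np.ndarray:
    """Tile the image plane with cell x cell subelements and take them
    alternately from the two phases: component 1 where the subelement's
    (row + column) index is even, component 2 where it is odd."""
    if comp1.shape != comp2.shape:
        raise ValueError("component images must have the same shape")
    h, w = comp1.shape
    rows = (np.arange(h) // cell)[:, None]   # subelement row index per pixel
    cols = (np.arange(w) // cell)[None, :]   # subelement column index per pixel
    use_first = (rows + cols) % 2 == 0
    return np.where(use_first, comp1, comp2)
```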


It will be understood by those of skill in the art that the subelements 132, 132′ do not have to be square; they may have any shape, including but not limited to any polygon, circle, semicircle, ellipse, and combinations or portions thereof. The component elements 130, 130′ could, for example, be divided into two or four triangles. They could also be formed as two rectangles that make up a square element. For images to be viewed using a fly's eye lens, the component elements (or portions thereof) can be sized and shaped to correspond to the shape of the lens elements, and any combination of subelement shapes can be used that combine to form the corresponding element shape. It would even be possible to mix different shapes, as long as the tonal balance is maintained. Different sized subelements may also be used. Even if the total areas belonging to each of the components are not equal, the disparity can be compensated for by using a darker tone for one of the components. For example, 50% area at 60% density for the first component and 50% area at 40% density for the second component will give a 50% overall tint. However, using 75% area at 60% density for the first component and 25% area at 20% density for the second component will also be perceived as a 50% overall tint density. Another approach would be to use a different number of subelements from different components. For example, two subelements can be taken from the first component and four from the second component, as long as the tonal balance is maintained.
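
The area-weighted tint arithmetic in the preceding paragraph can be checked with a short helper (a worked example only; the function name is illustrative):

```python
def perceived_tint(parts):
    """Area-weighted average density seen by the eye; 'parts' is a list of
    (area fraction, ink density) pairs for the pieces of one element."""
    return sum(area * density for area, density in parts)

print(perceived_tint([(0.50, 0.60), (0.50, 0.40)]))   # 0.5 -> 50% tint
print(perceived_tint([(0.75, 0.60), (0.25, 0.20)]))   # 0.5 -> also a 50% tint
```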


It will also be understood that in these embodiments, there are two component images. Thus, half of each component image is used to form the composite image.



FIGS. 4A-4C illustrate embodiments that produce a scrambling effect in the composite image. In this approach, larger, overlapping sample portions are taken from each component image and are reduced in size so as to form non-overlapping pieces of a composite image.


The difference in size between the portions of the component image and the elements of the composite image may be referred to as a zoom factor or element reduction factor. For example, for a zoom factor of three, while the size of the elements of the composite image may be similar to that illustrated in FIGS. 3A-3F, the size of the portions of the component images would be three times larger. In this example, the portions of the component images are shrunk three times in each dimension before being inserted into the composite image.



FIGS. 4A and 4B illustrate first and second component images 200, 200′, which are used to construct a composite image 200″ illustrated in FIG. 4C. In a method according to this embodiment of the invention, overlapping elements 250, 250′ are taken from the component images 200, 200′, reduced in size as a function of the zoom factor, and placed as subelements 232″ within elements 230″ to form the composite image 200″. It will be understood that although only two such elements are shown for each component image (i.e., A1, A2, B1 and B2), the overlapping elements 250, 250′ cover the entirety of the two component images 200, 200′. Each such element is located based on the configuration and frequency of the lens elements of the decoder and on the configuration of the subelements 232″. In the embodiments shown in FIGS. 4A-4C, the overlapping elements are centered on the locations of the subelements 232″.


In FIG. 4A, the shaded area identified as Element A1 in the first component image is shrunk down three times in each dimension to create subelement A1 of the composite image (i.e., a zoom factor of 3 is applied). Subelement A1 is centered on the location corresponding to the center of Element A1 in the component image. The large square identified as Element A2 is shrunk down three times in each dimension to obtain subelement A2 of the composite image 200″, which is similarly centered on the location corresponding to the center of Element A2. Similar operations are performed on Elements B1 and B2 of the second component image 200′ illustrated in FIG. 4B to obtain subelements B1 and B2 of the composite image 200″.
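
A sketch of this sample-and-shrink step is given below, assuming the component image is a numpy array of densities, using block averaging as the shrinking operation (nearest-neighbor sampling would work as well) and edge padding near the image borders; these are implementation assumptions rather than details specified by the patent.

```python
import numpy as np

def zoomed_subelement(component: np.ndarray, center_rc: tuple[int, int],
                      sub: int, zoom: int) -> np.ndarray:
    """Cut a (sub*zoom) x (sub*zoom) patch of the component image centered on
    the given subelement location, then shrink it 'zoom' times in each
    dimension (block averaging) to a sub x sub piece of the composite."""
    size = sub * zoom
    half = size // 2
    r, c = center_rc
    padded = np.pad(component, half, mode="edge")   # keep border samples full sized
    patch = padded[r:r + size, c:c + size]          # centered on (r, c) in the original
    return patch.reshape(sub, zoom, sub, zoom).mean(axis=(1, 3))
```

Placing each shrunken patch at the subelement location centered on the same point is what produces the overlap-driven “float” effect described below.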


The effect of using a zoom factor to create the composite image is illustrated in FIG. 5, which shows a composite image 300 formed from the component images illustrated in FIGS. 2A-2E. The composite image of FIG. 5 was formed using a zoom factor of 4, but it will be understood that the composite image may be formed using any zoom factor. Despite the scrambled appearance of the image portions that make up the composite image, placement of a decoder lens having a corresponding lens element array over the composite image results in the “reassembly” of the component images 10, 10′ for viewing by an observer, allowing the observer to see the authentication image 20, 20′. The authentication images viewed by placing the decoder lens over a composite image formed after applying a zoom factor appear to move or “float” as the observer changes his angle of view through the decoder. This results from the use of overlapping component portions that have been zoomed. The elements of the component images thus spread into multiple parts of the composite image. By adjusting the angle of view, the decoder makes viewable information from the multiple parts of the component image, thereby creating an illusion of floating. Generally, the bigger the zoom factor, the more pronounced the floating effect. On the other hand, by shrinking the portions of the component images by a zoom factor, the resolution of the component images may be decreased when seen through the decoding lenses.


In some embodiments of the invention, the subelements of the component images may be flipped before forming the composite image. Flipping portions of the component images changes the direction in which these portions appear to float when seen through the decoder. By alternating between flipping and not flipping the elements of the composite image, different parts of the component images may appear to float in opposite directions when seen through the decoder.
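
As a small, self-contained illustration of flipping (assuming the numpy representation used in the sketches above), mirroring a sampled portion before placement is a one-line operation:

```python
import numpy as np

# Mirroring a sampled portion before placing it in the composite reverses the
# apparent float direction of that portion when viewed through the decoder.
portion = np.arange(9.0).reshape(3, 3)   # stand-in for one zoomed subelement
flipped_lr = np.fliplr(portion)          # flip about the vertical axis
flipped_ud = np.flipud(portion)          # flip about the horizontal axis
```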


In certain instances, the above effects may be applied to a single component image (or two identical component images) that is used to produce a non-tonally balanced encoded image. Such images could be used, for example, in applications where a decoder lens is permanently affixed to the image. In such applications, tonal balancing is unnecessary because the authentication image is always viewable through the permanently affixed decoder lens.


In some embodiments of the invention, a composite image may be formed from more than one authentication (or other) image. For each such image, a plurality of component images may be created using the methods previously discussed. Portions from each component image may then be used to form a single composite image. For example, if it is desired to use two authentication images (Image 1 and Image 2), each image could be used to form two component images, each divided into elements and subelements as shown in FIGS. 2A-E, 3A-3F, and 4A-4C. This would produce four component images, each having corresponding elements and subelements. A composite image similar to those of FIGS. 3C and 3F could be formed by using a subelement A1 taken from a first component of Image 1 and a subelement A2 taken from a second component of Image 1. Similarly, a subelement B1 could be taken from a first component of Image 2 and a subelement B2 from a second component of Image 2. In another example, subelements A1 and B2 could be taken from components of Image 1 and subelements B1 and A2 could be taken from components of Image 2. The subelements could be ordered in multiple ways. The subelements could be ordered one below another, side by side, across the diagonal from each other, or in any other way. The composite image may produce the effect that the human observer may see the different authentication images depending on the angle of view through a decoder lens. The component images may alternate and switch when the angle of view is changed. Additionally, the zoom factor and flipping techniques may be used with this technique. This may create a multitude of effects available to the designer of the composite image. Any number of images may be hidden together in this manner and any number of component images may be used for each.
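
A compact sketch of interleaving the components of two hidden images into one composite is shown below. The 2×2 slot assignment, the cell size parameter, and the function name are illustrative assumptions; as the text notes, many other orderings are possible.

```python
import numpy as np

def two_image_composite(a1, a2, b1, b2, cell):
    """Interleave subelements from the two components of Image 1 (a1, a2) and
    the two components of Image 2 (b1, b2).  The four sources rotate across
    the 2 x 2 subelement positions of each group of cells."""
    h, w = a1.shape
    rows = (np.arange(h) // cell)[:, None] % 2
    cols = (np.arange(w) // cell)[None, :] % 2
    slot = rows * 2 + cols                        # 0..3 within each 2 x 2 group
    return np.choose(slot, np.stack([a1, a2, b1, b2]))
```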


In some embodiments of the invention, different zoom factors can be used for the subelements coming from the different images. For example, a zoom factor of two may be used for the subelements coming from Image 1 and a zoom factor of eight may be used for the phases coming from Image 2. The subelements coming from the different images may appear to be at different depths when seen through the decoder. In this way, various 3D effects may be achieved.



FIGS. 6A-6C illustrate an approach to collecting and ordering portions of the component images to form elements of a composite image that is decodable using a lenticular lens. In FIGS. 6A and 6B, two component images 400, 400′ are divided into elements 430, 430′ corresponding in shape and frequency to the lens elements of a lenticular decoder having “wavy” lenticules. As before, the component images are created so as to be balanced around a particular shade (or shades). The composite image 400″ illustrated in FIG. 6C is again formed by assembling subelements 432, 432′ from the component images 400, 400′. A zoom factor can be used if desired. In this example, the zoom factor is one, which indicates that the composite image elements are the same size as the component image elements (i.e., the component image elements are not shrunk). The approaches of collecting and ordering discussed above may also be applied for a wavy lenticular decoding lens or any other type of decoding lens. In this example, the portion of the first component image, which is the light gray portion, may be taken from the same geometrical position as the portion of the second component image, which is the dark gray portion. The portions of the component images may have equal size. The combined portions of the component images or the element of the composite image may cover the area of a single decoding element in the composite image.
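
For a straight-line lenticular decoder, strip interleaving of two balanced phases can be sketched as below; the wavy lenticules of FIGS. 6A-6C would follow the lenticule curve rather than fixed pixel columns. The strip width parameter and the numpy representation are assumptions for illustration.

```python
import numpy as np

def lenticular_composite(comp1: np.ndarray, comp2: np.ndarray,
                         strip_px: int) -> np.ndarray:
    """Interleave two balanced phases in vertical strips so that each
    lenticule of a straight-line lenticular decoder covers one strip from
    each phase (strip_px is half the lenticule pitch in pixels)."""
    cols = np.arange(comp1.shape[1]) // strip_px
    use_first = (cols % 2 == 0)[None, :]
    return np.where(use_first, comp1, comp2)
```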


If the portions of the component images used to create a composite image are small enough and if the phases are balanced around the same color shade, all of the techniques described above may produce an image that looks like a tint, i.e., a uniform color shade, when printed.



FIG. 7 illustrates a generalized method M100 of producing a composite authentication image according to an embodiment of the invention. The method M100 begins at S5 and at S10 an authentication image is provided. Using the authentication image, two or more component images are created at S20. As previously discussed, these component images are formed so that at each location, their tonal values are balanced around a predetermined tonal value or tint density. At S30, the image components are used to produce a plurality of image elements to be used to form a composite image. These composite image elements are formed and positioned according to a pattern and frequency of the elements of a decoder lens. As previously discussed, the component elements may be positioned and sized so as to provide a frequency that is the same as or a multiple of the frequency of the decoder. In some embodiments, the component image elements are constructed by dividing the composite image into non-overlapping elements or cells. In other embodiments, the component image elements may be formed as overlapping elements or cells.


At S40, content from each element of each of the component images is extracted. In embodiments where the component images are divided into non-overlapping elements, the action of extracting content may include subdividing each element of each component image into a predetermined number of subelements. The image content of a fraction of these subelements is then extracted. The fraction of subelements from which content is extracted may be the inverse of the number of component images or a multiple thereof. Thus, if two component images are used, then half of the subelements are extracted from each element.


In embodiments where the component images are used to produce overlapping elements, the content of each entire element may be extracted. As previously described, a zoom factor may be applied to the extracted elements to produce subelements that can be used to form the composite image.


At S50, the extracted content from the component images is used to form a composite image. This may be accomplished by placing subelements from each of the components into locations corresponding to the locations in the component images from which the content of the subelements was extracted. The method ends at S60.


Any or all of the actions of the method M100 and any variations according to various embodiments of the invention may be carried out using any suitable data processor or combination of data processors and may be embodied in software stored on any data processor or in any form of non-transitory computer-readable medium. Once produced in digital form, the encoded composite images of the invention may be applied to a substrate by any suitable printing, embossing, debossing, molding, laser etching or surface removal or deposit technique. The images may be printed using ink, toner, dye, pigment, a transmittent print medium (as described in U.S. Pat. No. 6,980,654, which issued Dec. 27, 2005 and is incorporated herein by reference in its entirety), or a non-visible spectrum (e.g., ultraviolet or infrared) print medium (as described in U.S. Pat. No. 6,985,607, which issued Jan. 10, 2006 and is incorporated herein by reference in its entirety).


It will be understood that there are a variety of ways in which balanced image components may be constructed. In various embodiments, balanced component image portions may be created by inverting the portions of one component image to form the portions of the second component. If this approach is used, the component images will be balanced around 50% density, and the composite image will appear to the naked eye as a 50% tint. When printed or otherwise displayed, the elements of the composite image are placed next to each other and the eye averages them out, e.g., (60% + 40%)/2 = 50%. To obtain a lighter composite tint instead of 50%, both component images can be brightened by the same amount. For a darker composite tint, both component images can be darkened by the same amount.


In some embodiments of the invention, a tint based composite image may be integrated or embedded into a primary image, such as any visible art. The composite image(s) may be hidden to the naked eye within the art work, but rendered visible when a decoder is placed on the printed visible artwork with composite image(s) integrated inside. All of the effects associated with the composite image (i.e. the appearance of floating, alternation of component image viewability, etc.) are retained.


One approach to this is to apply a halftone screening technique that uses the composite images as a screen file to halftone the visible artwork. This technique may modify the elements of the composite image by growing or shrinking them to mimic the densities of the pieces of the visible artwork image at the same positions.
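
One plausible way to realize “using the composite image as a screen file” is ordered-dither thresholding, sketched below. This is an assumption about the implementation rather than the patent's specified algorithm; a production screen would normally carry a graded threshold ramp within each subelement so that elements grow and shrink smoothly with the artwork density.

```python
import numpy as np

def screen_with_composite(artwork: np.ndarray, composite: np.ndarray) -> np.ndarray:
    """Halftone a gray-scale artwork (0 = white, 1 = full ink) using the
    composite image as a threshold array: dark artwork areas grow the printed
    structure, light areas shrink it, while the encoded layout is preserved."""
    th, tw = composite.shape
    h, w = artwork.shape
    reps = (-(-h // th), -(-w // tw))             # ceiling division
    screen = np.tile(composite, reps)[:h, :w]     # tile the screen over the artwork
    return (artwork > screen).astype(np.uint8)    # 1 = print a dot here
```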



FIGS. 8A-8D and 9A-9C illustrate an example of this approach. FIGS. 8A and 8B illustrate two component images 500, 500′ constructed based on a block letter “USA” authentication image, which are used to construct the composite image 500″ illustrated in FIG. 8C, formed from square elements of the two component images 500, 500′. As has been discussed, the basic composite images produced according to the methods of the invention appear as single tone images to the naked eye. As illustrated in FIG. 8D, however, magnification shows that the composite image 500″ is formed from a plurality of subelements. Each of these subelements is a square portion taken from a corresponding element of one of the component images 500, 500′. It will be understood that all of these subelements are the same size and shape. The appearance of varying sized rectangles in the enlarged area occurs as the result of the variation in content within the subelements. Placement of a corresponding decoder over the composite image 500″ “reassembles” this content so that the component images 500, 500′ with the authentication image can be viewed.



FIGS. 9A and 9B illustrate a visible artwork image 510 along with a halftone 510′ of the same image screened using the composite image 500″ of FIG. 8C. The unmagnified half-tone image 510′ illustrated in FIG. 9B appears unchanged to the naked eye. As illustrated in FIG. 9C, however, magnification shows that the image 510′ is made up of the square elements of the composite image, which have been modified according to the tone density of the original image 510. In effect, the composite image 500″ of FIG. 8C is embedded within the primary image 510. When a decoder is placed over the encoded image (i.e., the halftone artwork 510′), the component images 500, 500′ will be visible.



FIGS. 10A-10E illustrate another approach to hiding a secondary image within visible artwork. As was previously discussed, component images may be formed by tonally balancing corresponding locations around different tone densities in different areas. This approach can be used to create component images 610, 610′ from a primary visible image 600 as shown in FIGS. 10B and 10C. One approach to this is to darken the primary image 600 illustrated in FIG. 10A to create a first replica image as illustrated in FIG. 10B and correspondingly lighten the primary image 600 to create a second replica image as illustrated in FIG. 10C. An area matching an authentication image may be masked from each of the replica images and replaced in each case by the content from the masked area of the other replica. In the example illustrated in FIGS. 10A-10E, the areas matching the letters “USA” (i.e. the authentication image) are essentially swapped between the replica images to produce the component images 610, 610′. The component images may then be sampled and combined to create the composite image 610″ illustrated in FIG. 10D using any of the techniques previously discussed. The composite encoded image 610″ closely resembles the original primary image 600, but with the hidden message “USA” being viewable using a decoder lens corresponding to the size and configuration of the elements used to form the subelements of the composite image 610″.
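
A minimal sketch of this darken/lighten-and-swap construction is shown below, assuming densities in the range 0 to 1 and an illustrative offset of 0.15; the patent does not prescribe these values, and clipping near pure black or white breaks exact balance at the extremes.

```python
import numpy as np

def mask_and_swap_components(primary: np.ndarray, auth_mask: np.ndarray,
                             delta: float = 0.15) -> tuple[np.ndarray, np.ndarray]:
    """Darken and lighten replicas of the primary image by the same amount,
    then swap the areas covered by the authentication mask between them, as
    in FIGS. 10B and 10C."""
    darker = np.clip(primary + delta, 0.0, 1.0)    # darker replica (1.0 = full ink)
    lighter = np.clip(primary - delta, 0.0, 1.0)   # lighter replica
    comp1 = np.where(auth_mask, lighter, darker)   # masked area taken from the other replica
    comp2 = np.where(auth_mask, darker, lighter)
    return comp1, comp2
```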


Another approach to hiding a secondary image within a primary image is to use both the primary and secondary images to create component images. This approach is illustrated in FIGS. 11A-11H and 12A-12G. FIGS. 11A-11H illustrate (in gray scale) a color primary image 700 of a tiger and a color secondary image 710 of a girl. In this example, the primary image 700 is used to form four identical component images 700A, 700B, 700C, 700D, which are divided into elements 730A, 730B, 730C, 730D as shown in FIGS. 12A-12D. As in previous examples, only a single element is shown for each component image, but it will be understood that the elements are formed from the entire component image. It will also be understood that, for demonstration purposes, the elements in FIGS. 12A-12G are depicted much larger than actual elements used in the methods of the invention. In the illustrated embodiment, each of the elements of the four components is divided into subelements 732A, 732B, 732C, 732D. Because, in this example, a total of six components are used to produce the composite image, the component image elements are divided into six subelements.


It will be understood that, in practice, it is not actually necessary to create separate component images of the primary image. The primary image itself can be used to produce the elements and subelements used to construct the composite image.


The secondary image 710 is used to produce two component images 710A, 710B. The second component image 710B illustrated in FIG. 11H is produced as an inverse of the first component image 710A illustrated in FIG. 11G. The first and second component images 710A, 710B are divided into elements 730E, 730F, which may be non-overlapping elements (as shown in FIGS. 12E and 12F) or overlapping elements like those shown in FIG. 4C. As with the primary component images, each of the elements of the secondary components 710A, 710B is divided into subelements 732E, 732F. Again, six subelements are formed from each element.


In this example, the goal is for the primary image to be visible to the naked eye and the secondary image to be visible with the assistance of a decoder lens corresponding to the frequency of the elements of the component images. Thus, in constructing the composite image illustrated in FIG. 12G, the majority of the subelements used are taken from the primary component images. In the illustrated example, four (A1, B2, C4 and D5) of the six subelements used in each element 722 of the composite image 720 are taken from the four primary component images that are identical to the primary image. The other two subelements (E3 and F6) used in the element 722 are taken from the secondary component images 710A, 710B and are interlaced with the four subelements from the primary image. Because the subelements taken from the secondary image are compensated (original image tint for one subelement and its inverse for the other subelement), they will not be visible to the naked eye; the eye will mix them into a 50% tint. As in previous embodiments, the subelements used and their placement within the element 722 of the composite image 720 can vary.
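
One way to picture the assembly of a single six-slot element is sketched below; the particular slot positions chosen for the secondary subelements are illustrative assumptions, since the text notes that the placement can vary.

```python
import numpy as np

def six_slot_element(primary_cell: np.ndarray, secondary_cell: np.ndarray,
                     secondary_inv_cell: np.ndarray, sub: int) -> np.ndarray:
    """Fill one 2 x 3 composite element (arrays of shape (2*sub, 3*sub)) with
    six sub x sub slots: four slots keep the primary image, and two interlaced
    slots carry the secondary image and its inverse, which average to a 50%
    tint and so stay invisible to the naked eye."""
    out = primary_cell.copy()                                      # all slots default to the primary
    out[0:sub, 2*sub:3*sub] = secondary_cell[0:sub, 2*sub:3*sub]   # one secondary slot
    out[sub:2*sub, 0:sub] = secondary_inv_cell[sub:2*sub, 0:sub]   # its inverse
    return out
```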


Because the subelements coming from the primary image 700 are not changed in any way, an observer will still see the image of the tiger in the composite image 720 with a naked eye. Under a properly oriented decoder lens, however, the components will be separated so that, for some angles of view the observer will see the primary image (e.g., the tiger of FIGS. 11A-11E), for other angles of view, the observer will see the secondary image (e.g., the girl of FIGS. 11F-11G), and for yet other angles of view the observer will see the inverse of the secondary image as illustrated in FIG. 11H. In this way, a color secondary image and its inverse are hidden inside the color primary image. Element flipping and/or a zoom factor larger than one can be applied to the component images created from the secondary image, thus adding additional effects to the decoded image.


In a variation to the above embodiment, instead of using a majority of subelements from the primary image for each composite element, the primary image can be preprocessed to increase its contrast. This allows the reduction of the number of subelements that must be taken from the primary in order to hide the authentication image.


In any of the embodiments described herein, the images used to create a composite image may be binary, grayscale, or color images, or a combination of image types. In this way, the components revealed with the decoding lens may be binary, grayscale or color images.



FIGS. 13-15 illustrate the application to an object of authentication images produced according to embodiments of the present invention, each having a primary image formed as a composite image according to methods of the present invention. FIG. 13 is a photographic image of a simulated document 800 to which composite images 810, 820 of the present invention have been applied. The first composite image 810 appears as a silhouette of a featureless, single tone cube on the right side of the document. The second composite image 820 has a primary image of an oval with the letters “SI” at its center. FIG. 14 shows the same document 800 with a decoder lens 850 placed over the area of the first composite image 810. Because the decoder 850 has a frequency corresponding to the frequency used to produce the first composite image 810 and is properly aligned with the elements of the image, the authentication image (the characters “SI” and “USA”) is visible when the cube is viewed through the decoder lens 850. FIG. 15 illustrates a similar placement of a decoder lens 860 over the second composite image. Depending on the parameters used to create the composite images 810, 820, the decoder lens 860 may or may not be the same as, or have the same optical characteristics as, the first decoder lens 850. When the decoder 860 is placed in the proper orientation as shown, the image of a head appears when viewed through the decoder 860. Both composite images 810, 820 were produced using overlapping elements to which a zoom factor was applied and, as a result, the two authentication images appear to “float” when the viewer changes the angle at which he views the image through the decoders 850, 860 (an effect not viewable in the static illustrations of FIGS. 14 and 15).


When the composite images produced according to the various embodiments of the invention are printed or otherwise applied to an object, the component images used to produce the composite images may be viewed by application of a corresponding decoder lens. The decoder lens may be virtually any form of lens having multiple lens elements, and the lens elements may be formed in virtually any pattern (symmetric or asymmetric, regularly or irregularly spaced) and have any shape. Authentication may be accomplished by comparing the content of the image viewed through the decoder to the expected content for an authentic object to which the composite image has been applied. The component images may also be viewable through the use of a software-based decoder such as those described in U.S. Pat. Nos. 7,512,249 and 7,630,513, the complete disclosures of which are incorporated herein by reference in their entirety. As described in the '249 and '513 Patents, an image of an area where an encoded image is expected to appear can be captured using an image capturing device such as a scanner, digital camera, or telecommunications device and decoded using a software-based decoder. In some embodiments, such a software-based decoder may decode a composite image by emulating the optical properties of the corresponding decoder lens. Software-based decoders may also be used to decode a digital version of a composite image of the invention that has not been applied to an object.


The use of software-based decoders also provides the opportunity to create encoded composite images using more complicated element patterns. As was previously noted, some lens element patterns and shapes may be so complex that it is impossible or impractical to manufacture optical lenses that make use of them. These difficulties, however, do not apply to the techniques used to create the images of the present invention and, moreover, do not apply to software-based decoders. The methods of the present invention can make use of a “software lens” having lens elements that have a variable frequency, complex and/or irregular shapes (including but not limited to ellipses, crosses, triangles, randomly shaped closed curves or polygons), variable dimensions, or a combination of any of the preceding characteristics. The methods of the invention can be applied based on the specified lens configuration, even if this configuration cannot be practically manufactured. The methods of creating composite images from component images as described herein are based on the innovative use of simple geometric transformations, such as mapping, scaling, flipping, etc., and do not require a physical lens to be created for this purpose. Having a lens configuration, or specification, is enough to apply the method. Some or all of the characteristics of the software lens could then be used by a software decoder to decode the encoded composite image to produce decoded versions of the component images used to create the composite image.
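
A crude software decoder can be emulated by sampling the composite at the element frequency and magnifying the samples, which approximates one viewing angle through a square fly's-eye array. This is only an illustration under assumed parameters; the decoders described in the '249 and '513 Patents are more capable.

```python
import numpy as np

def software_decode(composite: np.ndarray, cell: int, phase: int) -> np.ndarray:
    """Keep, from every cell x cell element of the composite, the single pixel
    at offset 'phase' and expand it to fill the element.  Different phase
    offsets reveal the different component images, mimicking a change in the
    viewing angle through the decoder."""
    picked = composite[phase::cell, phase::cell]     # one sample per element
    return np.kron(picked, np.ones((cell, cell)))    # magnify back to full size
```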


It will be readily understood by those persons skilled in the art that the present invention is susceptible to broad utility and application. Many embodiments and adaptations of the present invention other than those herein described, as well as many variations, modifications and equivalent arrangements, will be apparent from or reasonably suggested by the present invention and foregoing description thereof, without departing from the substance or scope of the invention.


While the foregoing illustrates and describes exemplary embodiments of this invention, it is to be understood that the invention is not limited to the construction disclosed herein. The invention can be embodied in other specific forms without departing from its spirit or essential attributes.

Claims
  • 1. An automated method for constructing a composite image having an authentication image formed therein, the authentication image being viewable through a decoder lens having a plurality of lens elements defining one or more decoder lens frequencies, the method comprising: generating two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value, at least one of the two gray-scale component images including a representation of the authentication image;determining a first pattern of the component image elements for the two gray-scale component images, the first pattern including a first element configuration and at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies, the component image elements for a corresponding gray-scale component image collectively carrying content of the corresponding gray-scale component image;extracting at least a portion of the content from the component image elements of the two gray-scale component images; andconstructing a composite image having a second pattern of composite image elements, the second pattern having a second element configuration that corresponds to the first element configuration, the second pattern having the at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies, the composite image elements including the content extracted from the component image elements obtained from the two gray-scale component images.
  • 2. The automated method according to claim 1, wherein the component image elements overlap and the method further comprises: applying a predetermined element reduction factor to reduce a size of the component image elements to form subelements, the subelements defining the content extracted from the component image elements in the extracting action.
  • 3. The automated method according to claim 2, wherein the action of constructing the composite image includes placing the subelements in the composite image so that the subelements are centered on a point corresponding to a center point of the composite image elements from which the subelements were formed.
  • 4. The automated method according to claim 1, further comprising: subdividing the component image elements into a number of subelements to form a subelement pattern.
  • 5. The automated method according to claim 4, wherein the number of subelements is equal to or a multiple of the number of two gray-scale component images.
  • 6. The automated method according to claim 4, wherein at least a portion of the subelements have a geometric shape selected from a group consisting of a polygon, an ellipse, a partial ellipse, a circle, and a partial circle.
  • 7. The automated method according to claim 4, wherein the portion of the content extracted from the component image elements of the two gray-scale component images includes one or more of the subelements of the corresponding component image elements.
  • 8. The automated method according to claim 4, wherein the action of constructing the composite image includes placing at least one subelement from each of the component image elements of the two gray-scale component images into the composite image elements subdivided into a subelement pattern, the composite image elements being positioned at locations corresponding to locations of the component image elements from which the subelements were extracted.
  • 9. The automated method according to claim 1 wherein a selected portion of the extracted content of the component image elements corresponding to the two gray-scale component images is flipped about an axis of the component image elements.
  • 10. The automated method according to claim 1, wherein the decoder lens is a lenticular lens having a single lenticular lens frequency and the component image elements are configured as elongate segments having a predetermined width and spaced apart so as to define an element frequency that is equal to or a multiple of the lenticular lens frequency.
  • 11. The automated method according to claim 10, wherein the lens elements and the elongate segments are curvilinear.
  • 12. The automated method according to claim 1, wherein the decoder lens is a fly's eye lens having a two dimensional array of lens elements having a predetermined lens element shape and a plurality of lens frequencies and wherein the first pattern of component image elements corresponds to a pattern of the fly's eye lens.
  • 13. The automated method according to claim 12, wherein the component image elements have a shape corresponding to the predetermined lens element shape.
  • 14. The automated method according to claim 1, further comprising: screening a visible artwork image using the composite image to produce a half-tone image of the visible artwork image having the authentication image incorporated therein, the authentication image being non-viewable to a naked eye, but viewable through the decoder lens provided over the half-tone image.
  • 15. The automated method according to claim 1, wherein the two gray-scale component images are formed from a visible artwork image, the visible artwork image having varying tonal densities, the two gray-scale component images being formed to be balanced around the varying tonal densities of the visible artwork image at corresponding locations within the two gray-scale component images.
  • 16. The automated method according to claim 15, wherein the action of generating the two gray-scale component images includes: forming a first replication of the visible artwork image in which the tone density of content is darker than in the visible artwork image; forming a second replication of the visible artwork image in which the tone density of content is lighter than in the visible artwork image; masking a portion of the first replication using the authentication image; masking a portion of the second replication using the authentication image; replacing the masked portion of the first replication with the masked portion of the second replication to form a first component image; and replacing the masked portion of the second replication with the masked portion of the first replication to form a second component image.
  • 17. The automated method according to claim 16, wherein the first replication is darker than the visible artwork image by a first percentage that is equal to a second percentage by which the second replication is lighter than the visible artwork image.
  • 18. The automated method according to claim 1, further comprising: generating a plurality of second component images, the plurality of second component images being a replication of a visible artwork image, the plurality of second component images being greater in number than the two gray-scale component images.
  • 19. The automated method according to claim 1, further comprising: applying the composite image to a surface to be authenticated.
  • 20. The automated method according to claim 19, wherein the action of applying the composite image includes printing the composite image on the surface and embossing, debossing, molding or altering a surface geometry of the surface according to the composite image.
  • 21. A non-transitory computer-readable medium having software code stored thereon, the software code being configured to cause a computer to execute a method for constructing a composite image having an authentication image formed therein, the authentication image being viewable on an object to which the composite image has been applied through a decoder lens having a plurality of lens elements defining one or more decoder lens frequencies, the method comprising: generating two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value, at least one of the two gray-scale component images including a representation of the authentication image; determining a first pattern of the component image elements for the two gray-scale component images, the first pattern including a first element configuration and at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies, the component image elements for a corresponding gray-scale component image collectively carrying content of the corresponding gray-scale component image; extracting at least a portion of the content from the component image elements of the two gray-scale component images; and constructing a composite image having a second pattern of composite image elements, the second pattern having a second element configuration that corresponds to the first element configuration, the second pattern having the at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies, the composite image elements including the content extracted from the component image elements obtained from the two gray-scale component images.
  • 22. The non-transitory computer-readable medium according to claim 21, wherein the component image elements overlap and the method further comprises: applying a predetermined element reduction factor to reduce a size of the component image elements to form subelements, the subelements defining the content extracted from the component image elements in the extracting action.
  • 23. The non-transitory computer-readable medium according to claim 21, wherein the method further comprises: subdividing the component image elements into a number of subelements to form a subelement pattern, the number of subelements being equal to or a multiple of the number of two gray-scale component images, the portion of the content extracted from the component image elements of the two gray-scale component images including one or more of the subelements of the corresponding component image elements.
  • 24. The non-transitory computer-readable medium according to claim 21, wherein the method further comprises: screening a visible artwork image using the composite image to produce a half-tone image of the visible artwork image having the authentication image incorporated therein, the authentication image being non-viewable to a naked eye, but viewable through the decoder lens provided over the half-tone image.
  • 25. The non-transitory computer-readable medium according to claim 21, wherein the two gray-scale component images are formed from a visible artwork image, the visible artwork image having varying tonal densities, the two gray-scale component images being formed to be balanced around the varying tonal densities of the visible artwork image at corresponding locations within the two gray-scale component images.
  • 26. An authenticatable device comprising: a surface; and a composite security image applied to the surface, the composite security image comprising: a plurality of composite image elements comprising subelements defining content extracted from component image elements of two gray-scale component images, the two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value, at least one of the two gray-scale component images including a representation of an authentication image; a first pattern of the component image elements for the two gray-scale component images, the first pattern including a first element configuration and at least one element frequency that is equal to or a multiple of one or more decoder lens frequencies, the component image elements for a corresponding gray-scale component image collectively carrying content of the corresponding gray-scale component image; and a composite image having a second pattern of the composite image elements, the second pattern having a second element configuration that corresponds to the first element configuration, the second pattern having the at least one element frequency that is equal to or a multiple of the one or more decoder lens frequencies, the composite image elements including content extracted from the component image elements obtained from the two gray-scale component images, the authentication image being viewable through a decoder lens provided over the composite security image.
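The component-image generation recited in claims 16 and 17 lends itself to a short illustration. The following Python is a minimal sketch under stated assumptions rather than the patented implementation: `artwork` is a gray-scale NumPy array of ink densities in [0, 1] (larger values darker), `auth_mask` is a Boolean array of the same shape marking the authentication image, `delta` is the equal-and-opposite tonal offset of claim 17, and all names are illustrative.

```python
import numpy as np

# Minimal sketch of claims 16-17 (illustrative only). Assumes ink density
# in [0, 1], where larger values are darker.
def component_images(artwork: np.ndarray, auth_mask: np.ndarray,
                     delta: float = 0.1) -> tuple[np.ndarray, np.ndarray]:
    darker = np.clip(artwork + delta, 0.0, 1.0)    # first replication: darker by delta
    lighter = np.clip(artwork - delta, 0.0, 1.0)   # second replication: lighter by delta

    first = darker.copy()
    second = lighter.copy()
    # Swap the regions masked by the authentication image, so the two
    # component images stay tonally balanced around the artwork everywhere
    # but differ from each other inside the authentication image.
    first[auth_mask] = lighter[auth_mask]
    second[auth_mask] = darker[auth_mask]
    return first, second
```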
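Claim 10 ties the element frequency of the elongate segments to the frequency of the lenticular decoder lens. A minimal sketch of that arithmetic, assuming the lens frequency is given in lenticules per inch and the output resolution in dots per inch (function and parameter names are illustrative):

```python
# Minimal sketch of the claim 10 frequency relationship (illustrative only).
def element_width_px(print_dpi: int, lens_lpi: float, multiple: int = 1) -> float:
    """Width of one elongate element, in output pixels, so that the element
    frequency equals `multiple` times the lenticular lens frequency."""
    element_frequency = lens_lpi * multiple   # elements per inch
    return print_dpi / element_frequency      # output pixels per element

# Example: at 600 dpi over a 75 lpi lens, each element spans 8 pixels
# (4 pixels if the element frequency is twice the lens frequency).
print(element_width_px(600, 75))      # 8.0
print(element_width_px(600, 75, 2))   # 4.0
```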
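Claims 4 through 8, together with the constructing step of claim 21, describe subdividing each component image element into subelements (one per component image) and placing the subelements into composite image elements at the locations of the elements from which they were extracted. A minimal sketch, assuming straight vertical elements, exactly two component images, and an even element width in pixels; the function and array names are illustrative:

```python
import numpy as np

# Minimal sketch of the subelement placement of claims 4-8 (illustrative
# only). Each element of width `element_px` is split into two half-width
# subelements: the left half is taken from the first component image and
# the right half from the second, at the same element location.
def build_composite(first: np.ndarray, second: np.ndarray,
                    element_px: int) -> np.ndarray:
    assert first.shape == second.shape and element_px % 2 == 0
    composite = np.zeros_like(first)
    _, width = first.shape
    half = element_px // 2
    for x0 in range(0, width, element_px):
        composite[:, x0:x0 + half] = first[:, x0:x0 + half]
        composite[:, x0 + half:x0 + element_px] = second[:, x0 + half:x0 + element_px]
    return composite
```

Because the subelement pitch matches the decoder lens frequency, a properly oriented lens can optically separate the content drawn from the two component images, making the tonal difference inside the authentication region, and hence the hidden image, visible.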
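Claim 14 screens a visible artwork image using the composite image to produce a half-tone image that carries the hidden authentication image. One common way to realize a screening step is threshold comparison; the sketch below assumes, purely for illustration, that the composite serves as a per-pixel threshold array.

```python
import numpy as np

# Minimal sketch of a threshold-screening step (an assumption, not the
# patented screening method). Both inputs are gray-scale arrays in [0, 1]
# with larger values darker; True in the result marks an inked pixel.
def screen_with_composite(artwork: np.ndarray, composite: np.ndarray) -> np.ndarray:
    assert artwork.shape == composite.shape
    return artwork >= composite
```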
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Non-Provisional Application Ser. No. 13/270,738, filed on Oct. 11, 2011, which claims priority to U.S. Provisional Application 61/391,843, filed Oct. 11, 2010, and U.S. Provisional Application 61/461,224, filed Jan. 14, 2011, the complete disclosures of which are incorporated herein by reference in their entirety. This application is directed to subject matter related to technology disclosed in the following U.S. patents, the complete disclosures of which are incorporated herein by reference in their entirety: U.S. Pat. No. 5,708,717, issued Jan. 13, 1998; U.S. Pat. No. 7,466,876, issued Dec. 16, 2008; and U.S. Pat. No. 7,512,249, issued Mar. 31, 2009.

US Referenced Citations (136)
Number Name Date Kind
2952080 Avakian et al. Sep 1960 A
3524395 Alasia Aug 1970 A
3538632 Kay Nov 1970 A
3628271 Carrell et al. Dec 1971 A
3635778 Rice et al. Jan 1972 A
3642346 Dittmar Feb 1972 A
3784289 Wicker Jan 1974 A
3875026 Widmer Apr 1975 A
3875375 Scuitto et al. Apr 1975 A
3922074 Ikegami et al. Nov 1975 A
3937565 Alasia Feb 1976 A
4092654 Alasia May 1978 A
4147295 Nojiri et al. Apr 1979 A
4198147 Alasia Apr 1980 A
4303307 Tureck et al. Dec 1981 A
4417784 Knop et al. Nov 1983 A
4689477 Goldman Aug 1987 A
4715623 Roule et al. Dec 1987 A
4914700 Alasia Apr 1990 A
4972476 Nathans Nov 1990 A
4999234 Cowan Mar 1991 A
5027401 Soltesz Jun 1991 A
5034982 Heninger et al. Jul 1991 A
5113213 Sandor et al. May 1992 A
5128779 Mallik Jul 1992 A
5177796 Feig et al. Jan 1993 A
5178418 Merry et al. Jan 1993 A
5195122 Fabian Mar 1993 A
5195435 Morrone et al. Mar 1993 A
5249546 Pennelle Oct 1993 A
5303370 Brosh et al. Apr 1994 A
5373375 Weldy Dec 1994 A
5396559 McGrew Mar 1995 A
5416604 Park May 1995 A
5438429 Haeberli et al. Aug 1995 A
5576527 Sawanobori Nov 1996 A
5599578 Butland Feb 1997 A
5606609 Houser et al. Feb 1997 A
5608203 Finkelstein et al. Mar 1997 A
5708717 Alasia Jan 1998 A
5712731 Drinkwater et al. Jan 1998 A
5722693 Wicker Mar 1998 A
5735547 Morelle et al. Apr 1998 A
5828848 MacCormack et al. Oct 1998 A
5830609 Warner et al. Nov 1998 A
5867586 Liang Feb 1999 A
5900946 Kunitake et al. May 1999 A
5904375 Bruguda May 1999 A
5960081 Vynne et al. Sep 1999 A
5974150 Kaish et al. Oct 1999 A
6062604 Taylor et al. May 2000 A
6073854 Bravenec et al. Jun 2000 A
6084713 Rosenthal Jul 2000 A
6104812 Koltai et al. Aug 2000 A
6131161 Linnartz Oct 2000 A
6139066 Mowry et al. Oct 2000 A
6171734 Warner Jan 2001 B1
6176430 Finkelstein et al. Jan 2001 B1
6216228 Chapman et al. Apr 2001 B1
6222650 Long Apr 2001 B1
6222887 Nishikawa et al. Apr 2001 B1
6252963 Rhoads Jun 2001 B1
6256150 Rosenthal Jul 2001 B1
6260763 Svetal Jul 2001 B1
6280891 Daniel et al. Aug 2001 B2
6329987 Gottfried et al. Dec 2001 B1
6343138 Rhoads Jan 2002 B1
6362869 Silverbrook Mar 2002 B1
6373965 Liang Apr 2002 B1
6381071 Dona et al. Apr 2002 B1
6389151 Carr et al. May 2002 B1
6390372 Waters May 2002 B1
6414794 Rosenthal Jul 2002 B1
6435502 Matos Aug 2002 B2
6470093 Liang Oct 2002 B2
6496591 Rhoads Dec 2002 B1
6523826 Matos Feb 2003 B1
6536665 Ray et al. Mar 2003 B1
6542618 Rhoads Apr 2003 B1
6565089 Matos May 2003 B1
6636332 Soscia Oct 2003 B1
6757406 Rhoads Jun 2004 B2
6760464 Brunk Jul 2004 B2
6769618 Finkelstein Aug 2004 B1
6810131 Nakagawa et al. Oct 2004 B2
6817525 Piva et al. Nov 2004 B2
6827282 Silverbrook Dec 2004 B2
6859534 Alasia Feb 2005 B1
6980654 Alasia et al. Dec 2005 B2
6983048 Alasia et al. Jan 2006 B2
6985607 Alasia et al. Jan 2006 B2
7114750 Alasia et al. Oct 2006 B1
7226087 Alasia et al. Jun 2007 B2
7262885 Yao Aug 2007 B2
7315407 Menz et al. Jan 2008 B2
7321968 Capellaro et al. Jan 2008 B1
7386177 Alasia et al. Jun 2008 B2
7421581 Alasia et al. Sep 2008 B2
7466876 Alasia Dec 2008 B2
7512249 Alasia et al. Mar 2009 B2
7512280 Alasia et al. Mar 2009 B2
7551752 Alasia et al. Jun 2009 B2
7630513 Alasia et al. Dec 2009 B2
7654580 Alasia et al. Feb 2010 B2
7796753 Alasia et al. Sep 2010 B2
8682025 Cvetkovic et al. Mar 2014 B2
20010005570 Daniel et al. Jun 2001 A1
20020008380 Taylor et al. Jan 2002 A1
20020042884 Wu et al. Apr 2002 A1
20020054355 Brunk May 2002 A1
20020054680 Huang et al. May 2002 A1
20020117845 Ahlers et al. Aug 2002 A1
20020163678 Haines et al. Nov 2002 A1
20020185857 Taylor et al. Dec 2002 A1
20020196469 Yao Dec 2002 A1
20030012562 Lawandry et al. Jan 2003 A1
20030015866 Cioffi et al. Jan 2003 A1
20030039195 Long et al. Feb 2003 A1
20030115866 Price Jun 2003 A1
20030136837 Amon et al. Jul 2003 A1
20030137145 Fell et al. Jul 2003 A1
20030169468 Menz et al. Sep 2003 A1
20030183695 Labrec et al. Oct 2003 A1
20030201331 Finkelstein Oct 2003 A1
20030228014 Alasia Dec 2003 A1
20050018845 Suzaki Jan 2005 A1
20050057036 Ahlers et al. Mar 2005 A1
20050100204 Afzal et al. May 2005 A1
20050109850 Jones May 2005 A1
20050184504 Alasia et al. Aug 2005 A1
20050237577 Alasia et al. Oct 2005 A1
20070003294 Yaguchi et al. Jan 2007 A1
20070057061 Alasia et al. Mar 2007 A1
20070248364 Wicker et al. Oct 2007 A1
20080044015 Alasia Feb 2008 A1
20080267514 Alasia et al. Oct 2008 A1
Foreign Referenced Citations (23)
Number Date Country
10117038 Jun 2006 DE
0256176 Feb 1988 EP
0520363 Dec 1992 EP
0388090 Mar 1995 EP
0598357 Feb 1999 EP
1147912 Oct 2001 EP
1136947 Feb 2007 EP
1407065 Sep 1975 GB
1534403 Dec 1978 GB
2172850 Oct 1986 GB
155659 Apr 2008 IL
9204692 Mar 1992 WO
9315491 Aug 1993 WO
9407326 Mar 1994 WO
9427254 Nov 1994 WO
9720298 Jun 1997 WO
9815418 Apr 1998 WO
9901291 Mar 1999 WO
0180512 Oct 2001 WO
0187632 Nov 2001 WO
2004096570 Nov 2004 WO
2005006025 Jan 2005 WO
2005109325 Nov 2005 WO
Non-Patent Literature Citations (12)
Entry
International Search Report and the Written Opinion of the ISA mailed on Feb. 27, 2012 in PCT Application No. PCT/US11/55787, international filing date Oct. 11, 2011. (11 pages).
“IR inks”, Retrieved from http://www.maxmax.com/aIRInks.htm (Jun. 2004) (2 pages).
“16. Remote sensing”, Retrieved on Mar. 18, 2003, from http://www.gis.unbc.ca.webpages/start/geog205/lectures/rs-data/rsdata.html (4 pages).
“Security supplies tags”, Retrieved from http://www.zebra.com/cgi-bin/print.cgi?pname=http://zebra.com&ppath (Jun. 2004) (2 pages).
“UV Inks”, http://www.maxmax.com/aUVInks.htm (Jun. 2004) (2 pages).
De Capitani Di Vimercati et al., “Access control: Principles and solutions”, Software: Practice and Experience (2003) 33(5): 397-421.
Fulkerson, “Ink and paper take center ring in security market”, Retrieved on Mar. 18, 2003, from http://www.printsolutionsmag.com/articles/sec-doc.html (7 pages).
Lin et al., “Image authentication based on distributed source coding”, IEEE International Conference on Image Processing (2007) 3: III-5-III-8.
Pamboukian et al., “Watermarking JBIG2 text region for image authentication”, IEEE International Conference on Image Processing (2005) 2: II-1078-II-1081.
Skraparlis, “Design of an efficient authentication method for modern image and video”, IEEE Transactions on Consumer Electronics (May 2003) 49(2): 417-426.
Wong, “A public key watermark for image verification and authentication”, IEEE International Conference on Image Processing and its Applications (1998) 1(1): 455-459.
Wu et al., “Watermarking for image authentication”, IEEE International Conference on Image Processing (Oct. 1998) 2: 437-441.
Related Publications (1)
Number Date Country
20140233856 A1 Aug 2014 US
Provisional Applications (2)
Number Date Country
61391843 Oct 2010 US
61461224 Jan 2011 US
Continuations (1)
Number Date Country
Parent 13270738 Oct 2011 US
Child 14178964 US