The invention relates generally to the field of counterfeit protection, and more particularly to the field of electronic and printed document protection using encoded images.
Document falsification and product counterfeiting are significant problems that have been addressed in a variety of ways. One of the more successful approaches has been the use of latent or hidden images applied to or printed on objects to be protected. These images are generally not viewable without the assistance of specialized devices that render them visible.
One approach to the formation of a latent image is to optically encode the image so that, when applied to an object, the image can be viewed through the use of a corresponding decoding device. Such images may be used on virtually any form of printed document including legal documents, identification cards and papers, labels, currency, stamps, etc. They may also be applied to goods or packaging for goods subject to counterfeiting.
Objects to which an encoded image is applied may be authenticated by decoding the encoded image and comparing the decoded image to an expected authentication image. The authentication image may include information specific to the object being authenticated or information relating to a group of similar objects (e.g., products produced by a particular manufacturer or facility). Production and application of encoded images may be controlled so that they cannot easily be duplicated. Further, the encoded image may be configured so that tampering with the information on the document or label is readily apparent.
Authentication of documents and other objects “in the field” has typically required the use of separate decoders, such as lenticular or micro-array lenses, that optically decode the encoded images. These lenses may have optical characteristics that correspond to the parameters used to encode and apply the authentication image and must be properly oriented for the user to decode and view the image. The decoding lenses may also be able to separate secondary images from the encoded images. For example, the decoding lens can be a lenticular lens having lenticules that follow a straight-line pattern, wavy-line pattern, zigzag pattern, concentric-rings pattern, cross-line pattern, aligned-dot pattern, offset-dot pattern, graduated-frequency pattern, target pattern, herringbone pattern, or any other pattern. Other decoding lenses include fly's-eye lenses and any other lens having a multidimensional pattern of lens elements. The elements of such lenses can be arranged using a straight-line pattern, square pattern, shifted-square pattern, honeycomb pattern, wavy-line pattern, zigzag pattern, concentric-rings pattern, cross-line pattern, aligned-dot pattern, offset-dot pattern, graduated-frequency pattern, target pattern, herringbone pattern, or any other pattern. Examples of some of these decoding lenses are illustrated in
In some cases, lens element patterns and shapes may be so complex that they are impossible or impractical to manufacture. While such patterns may be highly desirable for their anti-counterfeiting effectiveness, the cost and technical difficulty of manufacturing them may limit their use.
The present invention provides methods for constructing a digital encoded image in the form of a composite image constructed from a series of component images. An aspect of the invention provides a method for constructing a composite image having an authentication image formed therein. The authentication image is viewable by placement of a decoder lens, having a plurality of lens elements defining one or more decoder lens frequencies, over an object to which the composite image has been applied. The method includes generating two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value. At least one of the two gray-scale component images is configured to include a representation of the authentication image. The method further includes determining a first pattern of component image elements for the two gray-scale component images, the first pattern including a first element configuration and at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies. The component image elements for a corresponding gray-scale component image collectively carry the content of that gray-scale component image. The method still further includes extracting at least a portion of the content from the component image elements of the two gray-scale component images and constructing a composite image having a second pattern of composite image elements. The second pattern has a second element configuration that corresponds to the first element configuration and has the at least one element frequency that is equal to or a multiple of one of the decoder lens frequencies. The composite image elements include the content extracted from the component image elements obtained from the two gray-scale component images.
Another aspect of the invention provides an authenticatable object having a surface configured for receiving a composite security image and a composite security image applied to the surface. The composite security image includes a plurality of composite image elements having subelements defining content extracted from component image elements of two gray-scale component images, the two gray-scale component images having tonal areas that are tonally balanced around at least one tonal value. At least one of the two gray-scale component images is configured to include a representation of an authentication image. The component image elements include a first pattern for the two gray-scale component images, the first pattern including a first element configuration and at least one element frequency that is equal to or a multiple of one or more decoder lens frequencies. According to one example, component image elements for a corresponding gray-scale component image collectively carry content of the corresponding gray-scale component image. A composite image is provided having a second pattern of the composite image elements, the second pattern having a second element configuration that corresponds to the first element configuration, the second pattern having the at least one element frequency that is equal to or a multiple of the one or more decoder lens frequencies. The composite image elements include content extracted from the component image elements obtained from the two gray-scale component images. The authentication image is viewable through a decoder lens placed over the composite security image.
The invention can be more fully understood by reading the following detailed description together with the accompanying drawings, in which like reference indicators are used to designate like elements, and in which:
The present invention provides for the encoding and decoding of encoded images. In some embodiments, an authentication or other image is broken into component images that are preferably tonal complements of one another; i.e., they are balanced around a particular color shade. The component images are then systematically sampled and the sampled portions assembled to provide a composite image that appears to the eye to be a single tone image (the single tone being the particular color shade). As will be discussed, the samples are taken according to a pattern of the decoder lens that will be used to view the authentication image.
In some embodiments, multiple authentication images may be used, each such image being used to establish multiple component images. Samples from each component of each authentication image can then be used to form a single composite image that can be decoded to reveal the authentication images.
In some embodiments, an authentication image can be “hidden” within a visible source image by constructing a composite image as described above and applying the composite image to the source image as a halftone screen. In other embodiments, an authentication image may be hidden within a source image by creating a composite from samples of component images derived from the source image. In these component images, certain areas are masked according to the content of the image to be hidden. The tonal value of the masked area of each component image is taken from the masked area of one of the other component images.
The principles of the invention will now be discussed in more detail. As discussed above, the invention provides an encoded image in the form of a composite image constructed from multiple component images. The invention also provides methods that use a multiple-component approach to hide information in a composite image.
The use of component images takes advantage of the fact that the human eye is unable to discern tiny details in an encoded image, which is usually printed or otherwise displayed; the eye tends to merge the fine details of the printed or displayed image together. This effect is routinely exploited in printing photographs and other images: the printer produces many tiny dots or other structures on the paper, with individual dots as small as thousandths of an inch. These individual dots are imperceptible to human vision; taken together, however, they are averaged by the eye into a shade of color. The size and density of the dots determine the perceived color shade: if the dots are bigger, or closer together, the eye will perceive a darker shade; if the dots are smaller, or placed further apart, the eye will perceive a lighter shade.
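The averaging effect described above can be modeled directly: the perceived shade of a uniform dot pattern is simply the fraction of each cell covered by ink. The following is a minimal sketch assuming a hypothetical round-dot model on a square cell grid; the function name and geometry are illustrative, not taken from this disclosure:

```python
import math

def perceived_darkness(dot_radius: float, cell_pitch: float) -> float:
    """Fraction of each square halftone cell covered by a round dot of
    the given radius; the eye averages this to a single gray shade."""
    ink_area = math.pi * dot_radius ** 2
    cell_area = cell_pitch ** 2
    return min(ink_area / cell_area, 1.0)

# Bigger dots (or a finer pitch) yield a darker perceived shade.
assert perceived_darkness(2.0, 10.0) > perceived_darkness(1.0, 10.0)
assert perceived_darkness(1.0, 5.0) > perceived_darkness(1.0, 10.0)
```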
In the methods of the invention, an authentication or other image can be broken into tonally complementary component images. The term “tonally complementary” means that the component images are balanced around a particular color shade: if corresponding elements (i.e., elements from corresponding locations) of the component images are viewed together, the eye will perceive the color shade around which the component tones are balanced.
In an exemplary method of the invention, each of the component images (or “phases”) can be divided into small elements according to a pattern corresponding to a lens element pattern of a decoder lens. These elements may, for example, be linear (straight or curved) elements or segments as illustrated in
As shown in
Although the composite image would appear to the naked eye to be a single uniform tone, when a decoder lens having a frequency, shape and geometrical structure corresponding to the component image elements is placed in the correct orientation over the image, the decoder separates the portions of the composite image contributed by each of the component images. This allows the authentication image to be viewed by a human observer looking through the decoder. The decoder elements may have magnifying properties and the particular component that is viewed by the observer may change depending on the angle of view through the decoder. Thus, from one angle, the viewer may see a light background with a dark inset and from another angle, he may see the reverse.
The example component images of
In some embodiments like those exemplified in
In the example shown in
It will be understood that other systematic approaches of collecting and ordering portions of the component images to form the composite image and/or the elements inside the composite image may be utilized.
In the examples of
It will be understood by those of skill in the art that the subelements 132, 132′ need not be square; they may take any shape, including but not limited to any polygon, circle, semicircle, ellipse, and combinations or portions thereof. The component elements 130, 130′ could, for example, be divided into two or four triangles, or formed as two rectangles that make up a square element. For images to be viewed using a fly's-eye lens, the component elements (or portions thereof) can be sized and shaped to correspond to the shape of the lens elements, and any combination of subelement shapes can be used that combine to form the corresponding element shape. It is even possible to mix different shapes, as long as the tonal balance is maintained. Different-sized subelements may also be used. Even if the total areas belonging to each of the components are not equal, the disparity can be compensated for by using a darker tone for one of the components. For example, 50% area at 60% density for the first component and 50% area at 40% density for the second component will give a 50% overall tint; likewise, 75% area at 60% density for the first component and 25% area at 20% density for the second component will also be perceived as a 50% overall tint density. Another approach is to use a different number of subelements from different components; for example, two subelements can be taken from the first component and four from the second, as long as the tonal balance is maintained.
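The area/density trade-offs in the preceding paragraph all reduce to keeping the area-weighted average density equal to the target tint. A small bookkeeping sketch (the helper name is illustrative, not from this disclosure):

```python
def overall_tint(parts):
    """Area-weighted average density of the component contributions.
    `parts` is a list of (area_fraction, density) pairs; the area
    fractions are expected to sum to 1."""
    return sum(area * density for area, density in parts)

# Equal areas with complementary densities -> 50% tint.
assert abs(overall_tint([(0.50, 0.60), (0.50, 0.40)]) - 0.5) < 1e-9
# Unequal areas compensated by density -> still a 50% tint.
assert abs(overall_tint([(0.75, 0.60), (0.25, 0.20)]) - 0.5) < 1e-9
```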
It will also be understood that in these embodiments, there are two component images. Thus, half of each component image is used to form the composite image.
The difference in sizes between the portions of the component image and the elements of the composite image may be referred to as a zoom factor or element reduction factor. For example, for a zoom factor of three, while the size of the elements of the composite image may be similar to that illustrated in
In
The effect of using a zoom factor to create the composite image is illustrated in
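The zoom-factor sampling described above can be sketched as follows: each composite element is filled by stepping through a source region of the component image that is `zoom` times larger, so a larger portion of the component is shrunk into an element of the same size. This is a simplified pure-Python illustration; the nearest-neighbor sampling and edge clamping are assumptions made for brevity:

```python
def zoom_sample(component, cx, cy, out_size, zoom):
    """Fill an out_size x out_size composite element by sampling a
    region of `component` (a 2-D list of pixel values) that is `zoom`
    times larger, centered near (cx, cy)."""
    h, w = len(component), len(component[0])
    element = []
    for j in range(out_size):
        row = []
        for i in range(out_size):
            # Step through the source region `zoom` pixels at a time,
            # shrinking the larger region into the fixed-size element.
            sx = min(w - 1, max(0, cx + (i - out_size // 2) * zoom))
            sy = min(h - 1, max(0, cy + (j - out_size // 2) * zoom))
            row.append(component[sy][sx])
        element.append(row)
    return element
```

A higher zoom factor pulls pixels from a wider span of the component into the same element, which is what makes the decoded content appear at a different depth.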
In some embodiments of the invention, the subelements of the component images may be flipped before forming the composite image. Flipping portions of the component images changes the direction in which these portions appear to float when seen through the decoder. By alternating between flipping and not flipping the elements of the composite image, different parts of the component images may appear to float in opposite directions when seen through the decoder.
In certain instances, the above effects may be applied to a single component image (or two identical component images) that is used to produce a non-tonally balanced encoded image. Such images could be used, for example, in applications where a decoder lens is permanently affixed to the image. In such applications, tonal balancing is unnecessary because the authentication image is always viewable through the permanently affixed decoder lens.
In some embodiments of the invention, a composite image may be formed from more than one authentication (or other) image. For each such image, a plurality of component images may be created using the methods previously discussed. Portions from each component image may then be used to form a single composite image. For example, if it is desired to use two authentication images (Image 1 and Image 2), each image could be used to form two component images, each divided into elements and subelements as shown in
In some embodiments of the invention, different zoom factors can be used for the subelements coming from the different images. For example, a zoom factor of two may be used for the subelements coming from Image 1 and a zoom factor of eight for the subelements coming from Image 2. The subelements coming from the different images may then appear to be at different depths when seen through the decoder. In this way, various 3D effects may be achieved.
If the portions of the component images used to create a composite image are small enough, and if the components are balanced around the same color shade, all of the techniques described above may produce an image that looks like a tint, i.e., a uniform color shade, when printed.
At S40, content from each element of each of the component images is extracted. In embodiments where the component images are divided into non-overlapping elements, the action of extracting content may include subdividing each element of each component image into a predetermined number of subelements. The image content of a fraction of these subelements is then extracted. The fraction of subelements from which content is extracted may be the inverse of the number of component images or a multiple thereof. Thus, if two component images are used, then half of the subelements are extracted from each element.
In embodiments where the component images are used to produce overlapping elements, the content of each entire element may be extracted. As previously described, a zoom factor may be applied to the extracted elements to produce subelements that can be used to form the composite image.
At S50, the extracted content from the component images is used to form a composite image. This may be accomplished by placing subelements from each of the components into locations corresponding to the locations in the component images from which the content of the subelements was extracted. The method ends at S60.
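For the two-component, non-overlapping case, the extraction at S40 and the assembly at S50 can together be sketched as an alternating selection, in which half of the subelements of each component survive into the composite at their original locations. The checkerboard selection rule below is one illustrative pattern, not the only one contemplated:

```python
def interleave(comp_a, comp_b):
    """Build a composite the same size as the two component images by
    taking pixels alternately from each component in a checkerboard
    pattern, so half of each component survives at its original
    location."""
    assert len(comp_a) == len(comp_b)
    composite = []
    for y, (row_a, row_b) in enumerate(zip(comp_a, comp_b)):
        row = [a if (x + y) % 2 == 0 else b
               for x, (a, b) in enumerate(zip(row_a, row_b))]
        composite.append(row)
    return composite
```

Because the two components are tonally balanced, the interleaved result averages out to the balance tone when viewed at a distance.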
Any or all of the actions of the method M100, and any variations according to various embodiments of the invention, may be carried out using any suitable data processor or combination of data processors and may be embodied in software stored on any data processor or in any form of non-transitory computer-readable medium. Once produced in digital form, the encoded composite images of the invention may be applied to a substrate by any suitable printing, embossing, debossing, molding, laser etching, or surface removal or deposit technique. The images may be printed using ink, toner, dye, pigment, a transmittent print medium (as described in U.S. Pat. No. 6,980,654, which issued Dec. 27, 2005 and is incorporated herein by reference in its entirety), or a non-visible spectrum (e.g., ultraviolet or infrared) print medium (as described in U.S. Pat. No. 6,985,607, which issued Jan. 10, 2006 and is incorporated herein by reference in its entirety).
It will be understood that there are a variety of ways in which balanced image components may be constructed. In various embodiments, balanced component image portions may be created by inverting the portions of one component image to form the portions of the second component. If this approach is used, the component images will be balanced around 50% density, and the composite image will appear to the naked eye as a 50% tint. When printed or otherwise displayed, the elements of the composite image sit next to each other and the eye averages them out; for example, adjacent 60% and 40% elements are perceived as (60% + 40%)/2 = 50%. To obtain a composite tint lighter than 50%, both component images can be brightened by the same amount; for a darker composite tint, both component images can be darkened by the same amount.
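The inversion approach, and the brighten/darken adjustment for lighter or darker tints, can be sketched as follows, treating each component as a grid of densities in [0, 1]. This is a simplified model of the tonal balancing described above, with illustrative function names:

```python
def invert(component):
    """Create the second component as the tonal inverse of the first;
    each corresponding pixel pair then averages to 50% density."""
    return [[1.0 - p for p in row] for row in component]

def shift_tint(component, delta):
    """Darken (delta > 0) or brighten (delta < 0) a component,
    clamping densities to [0, 1]."""
    return [[min(1.0, max(0.0, p + delta)) for p in row]
            for row in component]

a = [[0.6, 0.4]]
b = invert(a)                        # [[0.4, 0.6]]
pair_avg = (a[0][0] + b[0][0]) / 2   # (0.6 + 0.4) / 2 = 0.5
```

Shifting both components by the same delta preserves their balance while moving the composite tint away from 50%.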
In some embodiments of the invention, a tint based composite image may be integrated or embedded into a primary image, such as any visible art. The composite image(s) may be hidden to the naked eye within the art work, but rendered visible when a decoder is placed on the printed visible artwork with composite image(s) integrated inside. All of the effects associated with the composite image (i.e. the appearance of floating, alternation of component image viewability, etc.) are retained.
One approach to this is to apply a halftone screening technique that uses the composite images as a screen file to halftone the visible artwork. This technique may modify the elements of the composite image by growing or shrinking them to mimic the densities of the pieces of the visible artwork image at the same positions.
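As a simplified illustration of that screening step, the composite can be treated as a threshold array compared against the artwork's local density, so that the composite's element structure is modulated by the artwork's tones. The single-pixel thresholding rule below is an assumption made for brevity; the technique described above grows or shrinks whole elements:

```python
def screen_halftone(artwork, composite_screen):
    """Use the composite image as a halftone screen: a pixel prints
    (1) where the artwork's density exceeds the screen value, so the
    screen's element structure is shaped by the artwork's tones."""
    return [[1 if art > scr else 0
             for art, scr in zip(art_row, scr_row)]
            for art_row, scr_row in zip(artwork, composite_screen)]
```

Dark artwork regions turn on more of the screen's structure and light regions less, which is what hides the composite inside the visible art while preserving its decodable pattern.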
Another approach to hiding a secondary image within a primary image is to use both the primary and secondary images to create component images. This approach is illustrated in
It will be understood that, in practice, it is not actually necessary to create separate component images of the primary image. The primary image itself can be used to produce the elements and subelements used to construct the composite image.
The secondary image 710 is used to produce two component images 710A, 710B. The second component image 710B illustrated in
In this example, the goal is for the primary image to be visible to the naked eye and the secondary image to be visible with the assistance of a decoder lens corresponding to the frequency of the elements of the component images. Thus, in constructing the composite image illustrated in
Because the subelements coming from the primary image 700 are not changed in any way, an observer will still see the image of the tiger in the composite image 720 with the naked eye. Under a properly oriented decoder lens, however, the components will be separated so that, for some angles of view, the observer will see the primary image (e.g., the tiger of
In a variation of the above embodiment, instead of using a majority of subelements from the primary image for each composite element, the primary image can be preprocessed to increase its contrast. This reduces the number of subelements that must be taken from the primary image in order to hide the authentication image.
In any of the embodiments described herein, the images used to create a composite image may be binary, grayscale, or color images, or a combination of image types. Accordingly, the components revealed with the decoding lens may be binary, grayscale, or color images.
When the composite images produced according to the various embodiments of the invention are printed or otherwise applied to an object, the component images used to produce the composite images may be viewed by application of a corresponding decoder lens. The decoder lens may be virtually any form of lens having multiple lens elements, and the lens elements may be formed in virtually any pattern (symmetric or asymmetric, regularly or irregularly spaced) and have any shape. Authentication may be accomplished by comparing the content of the image viewed through the decoder to the expected content for an authentic object to which the composite image has been applied. The component images may also be viewable through the use of a software-based decoder such as those described in U.S. Pat. Nos. 7,512,249 and 7,630,513, the complete disclosures of which are incorporated herein by reference in their entirety. As described in the '249 and '513 Patents, an image of an area where an encoded image is expected to appear can be captured using an image capturing device such as a scanner, digital camera, or telecommunications device and decoded using a software-based decoder. In some embodiments, such a software-based decoder may decode a composite image by emulating the optical properties of the corresponding decoder lens. Software-based decoders may also be used to decode a digital version of a composite image of the invention that has not been applied to an object.
The use of software-based decoders also provides the opportunity to create encoded composite images using more complicated element patterns. As previously noted, some lens element patterns and shapes may be so complex that it is impossible or impractical to manufacture optical lenses that make use of them. These difficulties, however, do not apply to the techniques used to create the images of the present invention, nor do they apply to software-based decoders. The methods of the present invention can make use of a “software lens” having lens elements with a variable frequency, complex and/or irregular shapes (including but not limited to ellipses, crosses, triangles, and randomly shaped closed curves or polygons), variable dimensions, or a combination of any of the preceding characteristics. The methods of the invention can be applied based on the specified lens configuration even if that configuration cannot practically be manufactured: the methods of creating composite images from component images described herein rely on simple geometric transformations, such as mapping, scaling, and flipping, and do not require a physical lens to be created. A lens configuration, or specification, is enough to apply the method. Some or all of the characteristics of the software lens can then be used by a software decoder to decode the encoded composite image, producing decoded versions of the component images used to create it.
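For the simplest case, a straight-line (lenticular-style) lens specification, such a software decoder can be emulated by sampling the composite at one phase of the element period, with each phase recovering one component. The column-interleaved layout assumed below is illustrative; decoders for the more complex lens configurations described above would generalize the sampling geometry:

```python
def decode_phase(composite, period, phase):
    """Recover one component from a column-interleaved composite by
    keeping only the columns at the given phase of the lens period."""
    return [[row[x] for x in range(phase, len(row), period)]
            for row in composite]

# Two components interleaved column-by-column (period 2):
composite = [[10, 99, 11, 98], [12, 97, 13, 96]]
assert decode_phase(composite, 2, 0) == [[10, 11], [12, 13]]
assert decode_phase(composite, 2, 1) == [[99, 98], [97, 96]]
```

Changing the phase corresponds to the change in viewing angle through a physical lens, which is why different components appear at different angles.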
It will be readily understood by those persons skilled in the art that the present invention is susceptible to broad utility and application. Many embodiments and adaptations of the present invention other than those herein described, as well as many variations, modifications and equivalent arrangements, will be apparent from or reasonably suggested by the present invention and foregoing description thereof, without departing from the substance or scope of the invention.
While the foregoing illustrates and describes exemplary embodiments of this invention, it is to be understood that the invention is not limited to the construction disclosed herein. The invention can be embodied in other specific forms without departing from its spirit or essential attributes.
This application claims priority to U.S. Non-Provisional application Ser. No. 13/270,738, filed on Oct. 11, 2011 which claims priority to U.S. Provisional Application 61/391,843, filed Oct. 11, 2010 and U.S. Provisional Application 61/461,224, filed Jan. 14, 2011, the complete disclosures of which are incorporated herein by reference in their entirety. This application is directed to subject matter related to technology disclosed in the following U.S. patents, the complete disclosures of which are incorporated herein by reference in their entirety: U.S. Pat. No. 5,708,717, issued Jan. 13, 1998, U.S. Pat. No. 7,466,876, issued Dec. 16, 2008, and U.S. Pat. No. 7,512,249, issued Mar. 31, 2009.
Number | Name | Date | Kind |
---|---|---|---|
2952080 | Avakian et al. | Sep 1960 | A |
3524395 | Alasia | Aug 1970 | A |
3538632 | Kay | Nov 1970 | A |
3628271 | Carrell et al. | Dec 1971 | A |
3635778 | Rice et al. | Jan 1972 | A |
3642346 | Dittmar | Feb 1972 | A |
3784289 | Wicker | Jan 1974 | A |
3875026 | Widmer | Apr 1975 | A |
3875375 | Scuitto et al. | Apr 1975 | A |
3922074 | Ikegami et al. | Nov 1975 | A |
3937565 | Alasia | Feb 1976 | A |
4092654 | Alasia | May 1978 | A |
4147295 | Nojiri et al. | Apr 1979 | A |
4198147 | Alasia | Apr 1980 | A |
4303307 | Tureck et al. | Dec 1981 | A |
4417784 | Knop et al. | Nov 1983 | A |
4689477 | Goldman | Aug 1987 | A |
4715623 | Roule et al. | Dec 1987 | A |
4914700 | Alasia | Apr 1990 | A |
4972476 | Nathans | Nov 1990 | A |
4999234 | Cowan | Mar 1991 | A |
5027401 | Soltesz | Jun 1991 | A |
5034982 | Heninger et al. | Jul 1991 | A |
5113213 | Sandor et al. | May 1992 | A |
5128779 | Mallik | Jul 1992 | A |
5177796 | Feig et al. | Jan 1993 | A |
5178418 | Merry et al. | Jan 1993 | A |
5195122 | Fabian | Mar 1993 | A |
5195435 | Morrone et al. | Mar 1993 | A |
5249546 | Pennelle | Oct 1993 | A |
5303370 | Brosh et al. | Apr 1994 | A |
5373375 | Weldy | Dec 1994 | A |
5396559 | McGrew | Mar 1995 | A |
5416604 | Park | May 1995 | A |
5438429 | Haeberli et al. | Aug 1995 | A |
5576527 | Sawanobori | Nov 1996 | A |
5599578 | Butland | Feb 1997 | A |
5606609 | Houser et al. | Feb 1997 | A |
5608203 | Finkelstein et al. | Mar 1997 | A |
5708717 | Alasia | Jan 1998 | A |
5712731 | Drinkwater et al. | Jan 1998 | A |
5722693 | Wicker | Mar 1998 | A |
5735547 | Morelle et al. | Apr 1998 | A |
5828848 | MacCormack et al. | Oct 1998 | A |
5830609 | Warner et al. | Nov 1998 | A |
5867586 | Liang | Feb 1999 | A |
5900946 | Kunitake et al. | May 1999 | A |
5904375 | Bruguda | May 1999 | A |
5960081 | Vynne et al. | Sep 1999 | A |
5974150 | Kaish et al. | Oct 1999 | A |
6062604 | Taylor et al. | May 2000 | A |
6073854 | Bravenec et al. | Jun 2000 | A |
6084713 | Rosenthal | Jul 2000 | A |
6104812 | Koltai et al. | Aug 2000 | A |
6131161 | Linnartz | Oct 2000 | A |
6139066 | Mowry et al. | Oct 2000 | A |
6171734 | Warner | Jan 2001 | B1 |
6176430 | Finkelstein et al. | Jan 2001 | B1 |
6216228 | Chapman et al. | Apr 2001 | B1 |
6222650 | Long | Apr 2001 | B1 |
6222887 | Nishikawa et al. | Apr 2001 | B1 |
6252963 | Rhoads | Jun 2001 | B1 |
6256150 | Rosenthal | Jul 2001 | B1 |
6260763 | Svetal | Jul 2001 | B1 |
6280891 | Daniel et al. | Aug 2001 | B2 |
6329987 | Gottfried et al. | Dec 2001 | B1 |
6343138 | Rhoads | Jan 2002 | B1 |
6362869 | Silverbrook | Mar 2002 | B1 |
6373965 | Liang | Apr 2002 | B1 |
6381071 | Dona et al. | Apr 2002 | B1 |
6389151 | Carr et al. | May 2002 | B1 |
6390372 | Waters | May 2002 | B1 |
6414794 | Rosenthal | Jul 2002 | B1 |
6435502 | Matos | Aug 2002 | B2 |
6470093 | Liang | Oct 2002 | B2 |
6496591 | Rhoads | Dec 2002 | B1 |
6523826 | Matos | Feb 2003 | B1 |
6536665 | Ray et al. | Mar 2003 | B1 |
6542618 | Rhoads | Apr 2003 | B1 |
6565089 | Matos | May 2003 | B1 |
6636332 | Soscia | Oct 2003 | B1 |
6757406 | Rhoads | Jun 2004 | B2 |
6760464 | Brunk | Jul 2004 | B2 |
6769618 | Finkelstein | Aug 2004 | B1 |
6810131 | Nakagawa et al. | Oct 2004 | B2 |
6817525 | Piva et al. | Nov 2004 | B2 |
6827282 | Silverbrook | Dec 2004 | B2 |
6859534 | Alasia | Feb 2005 | B1 |
6980654 | Alasia et al. | Dec 2005 | B2 |
6983048 | Alasia et al. | Jan 2006 | B2 |
6985607 | Alasia et al. | Jan 2006 | B2 |
7114750 | Alasia et al. | Oct 2006 | B1 |
7226087 | Alasia et al. | Jun 2007 | B2 |
7262885 | Yao | Aug 2007 | B2 |
7315407 | Menz et al. | Jan 2008 | B2 |
7321968 | Capellaro et al. | Jan 2008 | B1 |
7386177 | Alasia et al. | Jun 2008 | B2 |
7421581 | Alasia et al. | Sep 2008 | B2 |
7466876 | Alasia | Dec 2008 | B2 |
7512249 | Alasia et al. | Mar 2009 | B2 |
7512280 | Alasia et al. | Mar 2009 | B2 |
7551752 | Alasia et al. | Jun 2009 | B2 |
7630513 | Alasia et al. | Dec 2009 | B2 |
7654580 | Alasia et al. | Feb 2010 | B2 |
7796753 | Alasia et al. | Sep 2010 | B2 |
8682025 | Cvetkovic et al. | Mar 2014 | B2 |
20010005570 | Daniel et al. | Jun 2001 | A1 |
20020008380 | Taylor et al. | Jan 2002 | A1 |
20020042884 | Wu et al. | Apr 2002 | A1 |
20020054355 | Brunk | May 2002 | A1 |
20020054680 | Huang et al. | May 2002 | A1 |
20020117845 | Ahlers et al. | Aug 2002 | A1 |
20020163678 | Haines et al. | Nov 2002 | A1 |
20020185857 | Taylor et al. | Dec 2002 | A1 |
20020196469 | Yao | Dec 2002 | A1 |
20030012562 | Lawandry et al. | Jan 2003 | A1 |
20030015866 | Cioffi et al. | Jan 2003 | A1 |
20030039195 | Long et al. | Feb 2003 | A1 |
20030115866 | Price | Jun 2003 | A1 |
20030136837 | Amon et al. | Jul 2003 | A1 |
20030137145 | Fell et al. | Jul 2003 | A1 |
20030169468 | Menz et al. | Sep 2003 | A1 |
20030183695 | Labrec et al. | Oct 2003 | A1 |
20030201331 | Finkelstein | Oct 2003 | A1 |
20030228014 | Alasia | Dec 2003 | A1 |
20050018845 | Suzaki | Jan 2005 | A1 |
20050057036 | Ahlers et al. | Mar 2005 | A1 |
20050100204 | Afzal et al. | May 2005 | A1 |
20050109850 | Jones | May 2005 | A1 |
20050184504 | Alasia et al. | Aug 2005 | A1 |
20050237577 | Alasia et al. | Oct 2005 | A1 |
20070003294 | Yaguchi et al. | Jan 2007 | A1 |
20070057061 | Alasia et al. | Mar 2007 | A1 |
20070248364 | Wicker et al. | Oct 2007 | A1 |
20080044015 | Alasia | Feb 2008 | A1 |
20080267514 | Alasia et al. | Oct 2008 | A1 |
Number | Date | Country |
---|---|---|
10117038 | Jun 2006 | DE |
0256176 | Feb 1988 | EP |
0520363 | Dec 1992 | EP |
0388090 | Mar 1995 | EP |
0598357 | Feb 1999 | EP |
1147912 | Oct 2001 | EP |
1136947 | Feb 2007 | EP |
1407065 | Sep 1975 | GB |
1534403 | Dec 1978 | GB |
2172850 | Oct 1986 | GB |
155659 | Apr 2008 | IL |
9204692 | Mar 1992 | WO |
9315491 | Aug 1993 | WO |
9407326 | Mar 1994 | WO |
9427254 | Nov 1994 | WO |
9720298 | Jun 1997 | WO |
9815418 | Apr 1998 | WO |
9901291 | Mar 1999 | WO |
0180512 | Oct 2001 | WO |
0187632 | Nov 2001 | WO |
2004096570 | Nov 2004 | WO |
2005006025 | Jan 2005 | WO |
2005109325 | Nov 2005 | WO |
Entry |
---|
International Search Report and the Written Opinion of the ISA mailed on Feb. 27, 2012 in PCT Application No. PCT/US11/55787, international filing date Oct. 11, 2011. (11 pages). |
“IR inks”, Retrieved from http://www.maxmax.com/aIRInks.htm (Jun. 2004) (2 pages). |
“16. Remote sensing”, Retrieved on Mar. 18, 2003, from http://www.gis.unbc.ca.webpages/start/geog205/lectures/rs-data/rsdata.html (4 pages). |
“Security supplies tags”, Retrieved from http://www.zebra.com/cgi-bin/print.cgi?pname=http://zebra.com&ppath (Jun. 2004) (2 pages). |
“UV Inks”, http://www.maxmax.com/aUVInks.htm (Jun. 2004) (2 pages). |
De Capitani Di Vimercati et al., “Access control: Principles and solutions”, Software: Practice and Experience (2003) 33(5): 397-421. |
Fulkerson, “Ink and paper take center ring in security market”, Retrieved on Mar. 18, 2003, from http://www.printsolutionsmag.com/articles/sec-doc.html (7 pages). |
Lin et al., “Image authentication based on distributed source coding”, IEEE International Conference on Image Processing (2007) 3: III-5-III-8. |
Pamboukian et al., “Watermarking JBIG2 text region for image authentication”, IEEE International Conference on Image Processing (2005) 2: II-1078-II-1081. |
Skraparlis, “Design of an efficient authentication method for modern image and video”, IEEE Transactions on Consumer Electronics (May 2003) 49(2): 417-426. |
Wong, “A public key watermark for image verification and authentication”, IEEE International Conference on Image Processing and its Applications (1998) 1(1): 455-459. |
Wu et al., “Watermarking for image authentication”, IEEE International Conference on Image Processing (Oct. 1998) 2: 437-441. |
Number | Date | Country | |
---|---|---|---|
20140233856 A1 | Aug 2014 | US |
Number | Date | Country | |
---|---|---|---|
61391843 | Oct 2010 | US | |
61461224 | Jan 2011 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13270738 | Oct 2011 | US |
Child | 14178964 | US |