Claims
- 1. A method of producing a composite human-readable and machine-readable image, comprising:
generating human-readable content for placement on a substrate; and generating machine readable code marks for placement on the substrate, wherein the machine readable code marks encode spatial reference pointers.
- 2. The method of claim 1, wherein the spatial reference pointers include image properties of the composite image.
- 3. The method of claim 1, further comprising generating a printed image, wherein the printed image provides accurate spatial references and accurate image properties consistent with the spatial reference pointer information.
- 4. A method for evaluating a human-readable document from a composite image data file comprising human-readable content and machine readable code marks, wherein the machine readable code marks comprise spatial reference pointers and image property information, the method comprising:
obtaining a scanned image of the composite printed document to extract spatial relationships in the document and other image property information; decoding the machine-readable content in the composite document to obtain reference information describing nominal spatial relationships of the human-readable and machine-readable content; and comparing observed spatial relationships with reference spatial relationships to obtain one or more spatial deviation values.
- 5. The method of claim 4, wherein decoding further comprises decoding the machine-readable content on the composite document to obtain nominal image property information of the human-readable content.
- 6. The method of claim 5, wherein comparing further comprises comparing nominal image property information with observed image property information to obtain an image deviation value.
- 7. A method for printing a human-readable document from a composite image data file comprising human-readable content and machine readable code marks, wherein the machine readable code marks comprise spatial reference pointers and image property information, the method comprising:
obtaining a scanned image of the composite printed document to extract a spatial pattern of the document and other image property information; decoding the machine-readable content in the composite document to obtain reference information describing nominal spatial relationships of the human-readable and machine-readable content; comparing observed spatial relationships with reference spatial relationships to obtain a spatial deviation value; applying the spatial deviation value to the scanned image; and outputting a corrected image.
- 8. The method of claim 7, wherein decoding further comprises decoding the machine-readable content on the composite document to obtain nominal image property information of the human-readable content.
- 9. The method of claim 8, wherein comparing further comprises comparing nominal image property information with observed image property information to obtain an image deviation value.
- 10. A method for evaluating a human-readable document from a composite document comprising human-readable content and machine readable code marks, wherein the machine readable code marks comprise spatial reference pointers and image property information, the method comprising:
obtaining a scanned image of an inputted hardcopy composite document to extract a spatial relationship in the document; decoding the machine-readable content in the inputted composite document to obtain reference pointer information describing nominal spatial relationships in the human-readable and machine-readable content; extracting at least one spatial relationship from the inputted hardcopy composite document; outputting a printed document; obtaining a scanned image of the outputted printed document; extracting at least one spatial relationship from the outputted printed document; comparing the reference spatial relationships with the extracted spatial relationships from the inputted hardcopy composite document to obtain one or more input hardcopy spatial deviation values; and comparing the reference spatial relationships with the extracted spatial relationships from the outputted printed document to obtain one or more output printed document spatial deviation values.
- 11. The method of claim 10, wherein comparing the reference spatial relationships with the extracted spatial relationships from the outputted printed document further comprises comparing the extracted spatial relationships from the outputted printed document to the extracted spatial relationships of the inputted document to obtain one or more relative spatial deviation values.
- 12. A method for copying a human-readable document from a composite document comprising human-readable content and machine readable code marks, wherein the machine readable code marks comprise spatial reference pointers and image property information, the method comprising:
obtaining a scanned image of an inputted hardcopy composite document to extract a spatial relationship in the document; decoding the machine-readable content in the inputted composite document to obtain reference pointer information describing nominal spatial relationships in the human-readable and machine-readable content; outputting a printed document; obtaining a scanned image of the outputted printed document; extracting at least one spatial relationship from the outputted printed document; comparing the reference spatial relationships with the extracted spatial relationships from the outputted printed document to obtain one or more spatial deviation values; and calibrating the copier to produce a corrected copy that closely approximates the inputted hardcopy composite document.
- 13. A method for copying a human-readable document from a composite document comprising human-readable content and machine readable code marks, wherein the machine readable code marks comprise spatial reference pointers and image property information, the method comprising:
obtaining a scanned image of an inputted hardcopy composite document to extract a spatial relationship in the document; decoding the machine-readable content in the inputted composite document to obtain reference pointer information describing nominal spatial relationships in the human-readable and machine-readable content; outputting a printed document; obtaining a scanned image of the outputted printed document; extracting at least one spatial relationship from the outputted printed document; comparing the reference spatial relationships with the extracted spatial relationships from the outputted printed document to obtain one or more spatial deviation values; and calibrating the copier to bring at least one aspect of the copy closer to the nominal reference information for the inputted hardcopy image.
- 14. A method for calibrating a scanner using a composite printed image wherein the physical image provides accurate spatial references and accurate image properties consistent with the spatial reference pointer information, the method comprising:
obtaining a scanned image of the composite printed document to extract a spatial pattern of the document and other image property information; decoding the machine-readable content in the composite document to obtain reference information describing reference spatial relationships of the human-readable and machine-readable content; comparing observed spatial relationships with reference spatial relationships to obtain one or more spatial deviation values; and applying the spatial deviation values to correct subsequent operation of the scanner.
- 15. The method of claim 14, further comprising:
comparing observed other image property information with reference image property information to obtain one or more image property deviation values; and applying the image property deviation values to correct subsequent operation of the scanner.
- 16. A method for calibrating a printer from a composite document data file comprising human-readable content and machine readable code marks, wherein the machine readable code marks comprise spatial reference pointers and image property information, the method comprising:
obtaining a hardcopy composite document from the printer; obtaining a scanned image of the hardcopy composite document to extract a spatial relationship in the document; decoding the machine-readable content in the scanned composite document to obtain reference information describing nominal spatial relationships in the human-readable and machine-readable content; extracting at least one spatial relationship in the scanned composite document; comparing the reference spatial relationships with the extracted spatial relationships in the scanned composite document to obtain one or more spatial deviation values; and applying the spatial deviation values to correct subsequent operation of the printer.
- 17. The method of claim 16, further comprising:
comparing observed other image property information with reference image property information to obtain one or more image property deviation values; and applying the image property deviation values to correct subsequent operation of the printer.
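Claims 1-3 describe generating a composite image in which machine-readable code marks, placed alongside the human-readable content, encode spatial reference pointers and image properties. The sketch below is a minimal Python illustration of one way such a composite could be assembled, assuming a 1-bit raster page, JSON-serialized pointers, a placeholder block standing in for the human-readable text, and a trivial margin dot grid standing in for the self-clocking glyph codes of the cited Hecht patents; the function name and layout constants are illustrative, not taken from the claims.

```python
"""Composite image sketch: human-readable placeholder + code marks encoding
spatial reference pointers (claims 1-3). All specifics are assumptions."""
import json
import numpy as np

DPI = 300  # assumed render resolution


def render_composite(width_px=1200, height_px=1600):
    page = np.full((height_px, width_px), 255, dtype=np.uint8)  # white substrate

    # "Human-readable content": a placeholder block where text/graphics would go.
    page[200:400, 100:1100] = 0

    # Spatial reference pointers: nominal fiducial positions (px) plus image
    # properties of the composite image, as claim 2 allows.
    pointers = {
        "fiducials_px": [[100, 100], [1100, 100], [100, 1500], [1100, 1500]],
        "dpi": DPI,
        "nominal_reflectance": 0.95,
    }
    bits = np.unpackbits(np.frombuffer(json.dumps(pointers).encode(), dtype=np.uint8))

    # Machine-readable code marks: one 2x2-px dot per '1' bit on a fixed grid in
    # the top margin (a stand-in for a real glyph or barcode encoding).
    pitch, x0, y0, cols = 6, 100, 40, 160
    for i, bit in enumerate(bits):
        if bit:
            r, c = y0 + (i // cols) * pitch, x0 + (i % cols) * pitch
            page[r:r + 2, c:c + 2] = 0

    # Draw the fiducial marks themselves so the printed image provides the
    # physical spatial references described by the pointers (claim 3).
    for x, y in pointers["fiducials_px"]:
        page[y - 3:y + 3, x - 3:x + 3] = 0
    return page, pointers
```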
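Claims 4-9 decode the nominal spatial relationships and image properties from the code marks, compare them with what is observed in the scan to obtain deviation values, and (claim 7) apply the spatial deviation to output a corrected image. A minimal sketch, assuming the fiducial coordinates have already been located in the scanned image and decoded from the marks, an affine model for the spatial deviation, and nearest-neighbour resampling for the correction; `estimate_affine`, `correct_scan`, and `reflectance_deviation` are hypothetical helper names.

```python
"""Deviation estimation and correction sketch (claims 4-9). Assumes an affine
spatial model and pre-extracted fiducial coordinates."""
import numpy as np


def estimate_affine(nominal_xy, observed_xy):
    """Least-squares A, t such that observed ~= nominal @ A.T + t."""
    nom = np.asarray(nominal_xy, dtype=float)
    obs = np.asarray(observed_xy, dtype=float)
    X = np.hstack([nom, np.ones((len(nom), 1))])       # (N, 3)
    coeffs, *_ = np.linalg.lstsq(X, obs, rcond=None)   # (3, 2)
    A, t = coeffs[:2].T, coeffs[2]
    residual = obs - (nom @ A.T + t)                   # per-point spatial deviation values
    return A, t, residual


def reflectance_deviation(nominal_reflectance, observed_patch):
    """One image-property deviation value (claims 5-6, 8-9): mean reflectance."""
    return float(observed_patch.mean()) / 255.0 - nominal_reflectance


def correct_scan(scan, A, t):
    """Resample the scan so its geometry matches the nominal geometry (claim 7)."""
    h, w = scan.shape
    ys, xs = np.mgrid[0:h, 0:w]
    nominal = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = nominal @ A.T + t                            # where each nominal pixel landed
    sx = np.clip(np.rint(src[:, 0]).astype(int), 0, w - 1)
    sy = np.clip(np.rint(src[:, 1]).astype(int), 0, h - 1)
    return scan[sy, sx].reshape(h, w)
```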
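Claims 10-13 evaluate both the inputted hardcopy and the outputted printed copy against the decoded reference, and claim 11 additionally relates the output to the input. A sketch of those three comparisons, reusing the hypothetical `estimate_affine` helper above:

```python
def copy_path_deviations(reference_xy, input_scan_xy, output_scan_xy):
    """Per-point deviation values for: the inputted hardcopy vs. the decoded
    reference, the outputted print vs. the reference, and (claim 11) the
    outputted print relative to the inputted hardcopy."""
    _, _, input_dev = estimate_affine(reference_xy, input_scan_xy)
    _, _, output_dev = estimate_affine(reference_xy, output_scan_xy)
    _, _, relative_dev = estimate_affine(input_scan_xy, output_scan_xy)
    return input_dev, output_dev, relative_dev
```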
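Claims 14-17 fold the deviation values back into the device so that subsequent scans or prints are corrected. A minimal calibration-loop sketch, reusing `estimate_affine` and `correct_scan` from the earlier sketch and assuming a stable device distortion and a single multiplicative gain as the image-property correction; the class and its members are hypothetical.

```python
"""Calibration-loop sketch (claims 14-17): deviations measured from one
composite calibration page correct subsequent operation."""
import numpy as np


class CalibratedScanner:
    def __init__(self):
        self.A = np.eye(2)      # identity spatial correction until calibrated
        self.t = np.zeros(2)
        self.gain = 1.0         # image-property (tone) correction

    def calibrate(self, nominal_xy, observed_xy, nominal_reflectance, observed_reflectance):
        # Spatial deviation values from the composite calibration page.
        self.A, self.t, _ = estimate_affine(nominal_xy, observed_xy)
        # Image-property deviation folded into a single gain (claims 15/17).
        self.gain = nominal_reflectance / max(observed_reflectance, 1e-6)

    def scan(self, raw_page):
        # Apply the stored deviations to correct subsequent operation.
        corrected = correct_scan(raw_page, self.A, self.t)
        return np.clip(corrected.astype(float) * self.gain, 0, 255).astype(np.uint8)
```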
RELATED APPLICATIONS
[0001] The following patents and applications are relied upon and hereby incorporated by reference in this application:
[0002] U.S. Pat. No. 5,486,686 to Zdybel et al., entitled, “Hardcopy Lossless Data Storage and Communications for Electronic Document Processing Systems”;
[0003] U.S. Pat. No. 5,453,605 to Hecht et al., entitled, “Global Addressability for Self-Clocking Glyph Codes”;
[0004] U.S. Pat. No. 5,825,933 to Hecht, entitled, “Parallel Propagating Embedded Binary Sequences for Parameterizing Two Dimensional Image Domain Code Patterns in Two Dimensional Address Space”;
[0005] U.S. Pat. No. 6,327,395 to Hecht et al., entitled “Glyph Address Carpet Methods and Apparatus for Providing Location Information in Multidimensional Address Space”;
[0006] U.S. patent application Ser. No. 09/467,509 to Hecht, David L., Lennon, John, Merkle, Ralph, entitled “A Record and Related Methodologies For Storing Encoded Information Using Overt Code Characteristics to Identify Covert Code Characteristics,” filed Dec. 20, 1999; and
[0007] U.S. Pat. No. 5,949,055 to Fleet, David et al., entitled “Automatic Geometric Transformation of a Color Image Using Embedded Signals.”