System for reading data glyphs

Information

  • Patent Grant
  • Patent Number
    6,298,171
  • Date Filed
    Monday, May 15, 2000
  • Date Issued
    Tuesday, October 2, 2001
Abstract
A high speed system for locating and decoding glyphs on documents is disclosed. The system includes acquiring one or more images of a document containing a glyph. One-dimensional projections of the images are correlated against a reference function to locate the glyph in the images. The position of the glyph is refined by correlating against a kernel designed to have a maximum response when aligned over a corner of the glyph. Symbols in the glyph are decoded utilizing a kernel which generates a positive response for one symbol type and a negative response for the other.
Description




FIELD OF THE INVENTION




The present invention relates generally to machine vision systems, and more specifically to a system for rapidly reading data glyphs.




BACKGROUND AND SUMMARY OF THE INVENTION




Automated document factories are mechanized assembly lines that may print, collate, label, sort, or otherwise process documents, such as bills, statements and advertisements, to be assembled for mass mailing. Examples of automated document factories are disclosed in U.S. Pat. Nos. 5,510,997 and 5,608,639, which are incorporated herein by reference. In an automated document factory, the documents to be assembled or otherwise processed often are identified by various symbologies printed on the documents, such as barcodes or dataglyphs, several of which are discussed in U.S. Pat. Nos. 5,276,315, 5,329,104 and 5,801,371, which are also incorporated herein by reference.




Data glyphs can be preprinted on stock to permit automated identification of print stock to insure that the correct materials are being used. Data glyphs can also be printed on the stock as it goes through the printer to identify the intended recipient of the document or other materials that should be associated with the printed item. By reading the printed glyphs during subsequent processing, collation and handling of documents can be verified and automated.




Documents in automated document factories are moved on conveyor belts, usually at high speed. As a result, it is necessary to scan the documents quickly and process the acquired data in a minimum of time. Preferably, a proximity sensor is used to monitor when a document or other object to be read has moved within range of a camera of the machine vision system. When the proximity sensor detects a document, a pulsed illuminator is triggered so that the camera may obtain a clear picture of the document to be read even though the document is moving continuously. The rapid flash of the illuminator “freezes” the document for the camera. Examples of pulsed LED and other light sources are disclosed in U.S. Pat. Nos. 4,542,528, 5,135,160, 5,349,172 and 5,600,121, which are incorporated herein by reference.




Unfortunately, although existing systems permit scanning of documents at sufficient rates, the process of analyzing the scanned data to extract the data encoded in data glyphs has been an impediment to rapid document processing. In particular, with existing systems, there is no feasible process for extracting the data glyph data from scanned images quickly enough to allow for real time processing of each document as it moves through a document factory.











BRIEF DESCRIPTION OF THE DRAWINGS





FIG. 1 is a block diagram showing the machine vision system of the present invention, including a camera, illuminator, proximity sensor and central processing unit.

FIG. 2 is a side elevation showing the camera, illuminator and proximity sensor of FIG. 1, partially assembled and cut away, in proper angular relation to a conveyor belt.

FIG. 3 is a plan view of the components of the machine vision system shown in FIG. 2, as assembled.

FIG. 4 shows a typical glyph with non-glyph clutter on a document.

FIGS. 5-7 illustrate the acquisition of images of bands of a document according to the present invention.

FIG. 8 illustrates the partitioning of an image into blocks for subsequent processing.

FIG. 9 illustrates the process of computing one-dimensional projections according to the present invention.

FIG. 10 schematically illustrates an outward search for the boundaries of a glyph.

FIG. 11 illustrates the pixel structure of a symbol forming part of a glyph.

FIG. 12 shows a schematic representation of a sparse kernel for use in the disclosed system.

FIG. 13 illustrates application of the sparse kernel to a glyph image.

FIG. 14 illustrates a search area for finding a maximum correlation response.

FIG. 15 illustrates a search area for refinement of symbol location.

FIG. 16 illustrates a correlation function for discriminating symbols.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT




A machine vision system 10 is shown in FIG. 1 and includes a camera 12, preferably a high-resolution digital camera with a CCD sensor and a 16-millimeter fixed focal length lens. An illuminator or radiation source 14, preferably in the form of a pulsed array of LEDs, is used to illuminate a document D or other item to be sensed. The pulses of illuminator 14 and the image produced by camera 12 are controlled and monitored by a CPU 16.




A proximity sensor 18 is used to signal CPU 16 that the leading edge of a document D has passed beneath proximity sensor 18. CPU 16 then calculates an appropriate delay before triggering pulses of illumination from illuminator 14. Images from camera 12 are monitored by CPU 16 during appropriate time periods, based on the pulses provided by illuminator 14.





FIG. 2 shows a more detailed representation of the arrangement of the camera and related components of machine vision system 10. Camera 12 preferably is mounted approximately perpendicular to a conveyor C that carries documents D to be scanned. Illuminator 14 includes opposed, angled LED canisters 20 mounted on either side of camera 12, preferably at an angle A of approximately 21 degrees to either side of the axis of camera 12, as shown in FIG. 3. For glyphs printed on paper, it has been found that an array of twenty LEDs emitting approximately 690-nanometer-wavelength light works well, with the LEDs grouped tightly in a cylindrical canister and with a diffusing lens mounted between (or as part of) illuminator 14 and document D. One of the canisters is shown cut away to illustrate its internal structure. Other wavelengths of electromagnetic radiation may be used, with appropriate changes of the radiation source and sensor, but the visible light source and sensor of the described illuminator 14 and camera 12 are believed to work well for most document reading.




The disclosed proximity sensor is light-based, and includes a fiber optic cable 22 that is mounted adjacent conveyor C, aimed approximately perpendicularly to conveyor C. Conveyor C moves at speeds up to 400 inches per second. In the disclosed embodiment, illuminator 14 is pulsed to produce a flash of light for approximately 10 microseconds, and approximately 10 separate images are taken by producing 10 separate pulses.
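
To see why such a short pulse freezes the moving document, note that at the maximum conveyor speed of 400 inches per second, a document travels only 400 in/s × 10 µs = 0.004 inch during a single flash, a displacement small enough that the image shows essentially no motion blur.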




Various brackets, including a camera bracket


24


, illuminator brackets


26


, and proximity sensor bracket


28


are attached as shown in

FIGS. 2 and 3

to mount the associated components. Brackets


26


preferably mount directly to bracket


24


, through spot welding or screws, and brackets


24


and


28


mount to a supporting block


30


.




A typical glyph 50 is shown in FIG. 4 with associated marks that may be present on the document to be scanned. The glyph includes rows and columns of symbols 60, including upstroke symbols 62 and downstroke symbols 64. Of course, other types of symbols could be used as well. The first step in reading the glyph is to acquire or capture an image of the glyph. Acquiring an image of glyph 50 involves capturing a series of images of overlapping bands 52 along a document 54, as illustrated in FIGS. 5-7. FIGS. 6 and 7 depict schematically the relative positions of the two sets of bands. Each band has a width approximately twice the size of the glyph to insure that the entire glyph is contained in one of the bands. Typically, the bands extend across the width of the document perpendicular to the direction of travel.
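
For illustration only, the following Python sketch (not part of the patent; the function name and interface are assumptions) shows how half-overlapping bands, each about twice the glyph size, could be generated so that at least one band always contains the whole glyph:

def band_rows(doc_height, glyph_size):
    """Yield (top, bottom) pixel rows of half-overlapping bands.

    Each band is roughly twice the glyph size, so wherever the glyph
    sits along the document, some band contains it entirely."""
    band = 2 * glyph_size
    step = band // 2  # half-overlap between successive bands
    top = 0
    while top < doc_height:
        yield top, min(top + band, doc_height)
        top += step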




If there are known constraints on the position of the glyph on the document, it may not be necessary to acquire bands covering the entire document. For instance, if it is known that the glyph will be printed at the trailing edge of a document, only that portion of the document need be acquired. Moreover, if the exact position of the glyph is known, it may only be necessary to acquire one small image containing the glyph, rather than a series of overlapping bands. Of course, it is also possible to acquire an image of the entire document at one time for subsequent processing. Use of bands, however, reduces the demand for memory, since the glyph is typically very small relative to the entire document.




After acquiring images of the series of bands, the band images are processed to make a coarse determination of the location of the glyph. In particular, each image is divided into a series of square blocks 56 of pixels. The size of the block, for instance 32×32, is chosen to be slightly less than one-half of the size of the glyph. This insures that at least one of the blocks will fall entirely within the glyph. See FIG. 8.




Each block is processed to determine how much it resembles a glyph. This is accomplished in the disclosed embodiment by calculating vertical and horizontal projections for each block as follows:







v[y] = Σ_{x=1}^{BlockSize} Block[x, y]

h[x] = Σ_{y=1}^{BlockSize} Block[x, y]












See FIG. 9. These projections are then compared to a generally saw-tooth or step-function shaped reference projection or kernel based on the appearance of an average glyph. More specifically, a series of correlations are computed between the reference projection and the vertical projections to locate a maximum correlation for each block. It is necessary to compute a series of correlations because the location of the strokes within the part of a glyph contained in the block is unknown. Therefore, the correlations must be computed over a series of shifted positions or phases spanning a range equivalent to the spacing between strokes to insure that the maximum correlation value is located. A suitable reference projection is:






ref=[2, 1, −2, −2, −2, −1, 2, 2, 2, 0, −2, −2, −2, 0, 2, 2, 1, −1, −2, −2, −1, 1, 1, 2, 2, 0]






It should be noted that the reference function only includes 25 elements to allow a complete correlation to be conducted at each of 8 shifted positions in the depicted example. Also, the values used in the reference function are chosen to reflect real world printing and image capture variations. Thus, the boundaries between rows or columns of strokes are not perfectly defined.
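
For illustration only, the following Python/NumPy sketch (not part of the patent; the function names and the use of NumPy are assumptions) computes the two projections of a block and the maximum correlation over the shifted positions described above:

import numpy as np

# Reference projection, transcribed from the specification
REF = np.array([2, 1, -2, -2, -2, -1, 2, 2, 2, 0, -2, -2, -2, 0,
                2, 2, 1, -1, -2, -2, -1, 1, 1, 2, 2, 0], dtype=float)

def projections(block):
    """Compute the one-dimensional projections of a square block.

    With the image indexed as block[y, x], v[y] sums each row over x
    and h[x] sums each column over y, matching the formulas above."""
    v = block.sum(axis=1)
    h = block.sum(axis=0)
    return v, h

def max_shifted_correlation(proj, ref=REF):
    """Correlate ref against proj at every phase and keep the maximum.

    The number of phases is len(proj) - len(ref) + 1, e.g. 8 shifted
    positions for a 25-element reference in a 32-pixel projection."""
    n_shifts = len(proj) - len(ref) + 1
    return max(float(np.dot(proj[s:s + len(ref)], ref))
               for s in range(n_shifts))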




After computing the horizontal and vertical correlations for each of the blocks, an overall score is computed for each block as follows:











block_score = 0.5 × (max_hor_cor_block / max_hor_cor_overall) + 0.5 × (max_vert_cor_block / max_vert_cor_overall)

where max_hor_cor_block is the maximum horizontal correlation located in the particular block, max_hor_cor_overall is the maximum horizontal correlation found in any block, and the vertical terms are defined likewise. The block with the maximum block_score in any of the blocks in any of the bands is taken as the coarse location of the glyph.
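
Continuing the same illustrative sketch, the per-block maxima can be normalized into block scores and the best block selected; coarse_locate and its list-of-blocks interface are assumptions for illustration:

def coarse_locate(blocks):
    """blocks: list of (block_image, position) pairs drawn from all bands.

    Returns the position of the block that looks most glyph-like."""
    hor, vert = [], []
    for block, _pos in blocks:
        v, h = projections(block)
        vert.append(max_shifted_correlation(v))
        hor.append(max_shifted_correlation(h))

    max_hor_overall = max(hor)
    max_vert_overall = max(vert)

    # block_score = 0.5*(hor/overall) + 0.5*(vert/overall)
    scores = [0.5 * h_c / max_hor_overall + 0.5 * v_c / max_vert_overall
              for h_c, v_c in zip(hor, vert)]

    best = scores.index(max(scores))
    return blocks[best][1]  # coarse glyph location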




Once the coarse position of the glyph is determined, the image is searched outwardly from the selected block to locate the boundaries of the glyph, as indicated schematically in FIG. 10. The outward search of the disclosed embodiment relies on the relative contrast between regions of the image containing strokes and regions between strokes. Typically it will be assumed that the brightness W of the paper will be at least twice the brightness B of printed pixels on the page.




As shown in FIG. 11, symbols or strokes in the disclosed embodiment are printed as an upwardly or downwardly oriented pattern of three black pixels on a five-by-five cell having a white background. It should be noted that the patterns for the two types of symbols have a region of overlap at the center, where there is a black pixel for either stroke type. The vertical or horizontal projections through regions containing strokes will have an average brightness of 0.8W + 0.2B, i.e., one in five pixels will be black along any line. Regions between strokes will have an average brightness of W because they contain no black pixels. By applying the constraint that W ≥ 2B, the ratio of the average brightness in stroke regions to the average brightness between columns or rows of strokes is (0.8W + 0.2B)/W = 0.8 + 0.2(B/W) ≤ 0.8 + 0.2(0.5) = 0.9.




To find the extent of the glyph, a search is conducted in each direction from the center of the block outward until the edge of the glyph is located. The pseudo-code algorithm set forth in Table 1 illustrates how the search is conducted to find the right edge of the glyph. The searches in the other three directions are essentially the same.












TABLE 1

// initialize valleys and peaks by finding the valley and
// peak in the distance from one stroke to the next
previous_peak_pixel = 0
previous_peak_value = 0
previous_valley_pixel = 0
previous_valley_value = 0
previous_was_peak = false
pixel = "center pixel of block"
stroke_distance = "distance from one stroke to another stroke"

while (pixel <= "center pixel of block" + stroke_distance)  // traverse one stroke distance
{
    pixel_value = h[pixel]
    if ("pixel_value is less than all its neighbors")  // at the bottom of a valley
    {
        previous_valley_pixel = pixel
        previous_valley_value = pixel_value
        previous_was_peak = false
    }
    else if ("pixel_value is greater than all its neighbors")  // at the top of a peak
    {
        previous_peak_pixel = pixel
        previous_peak_value = pixel_value
        previous_was_peak = true
    }
    pixel++  // move to next pixel
}  // end while traversing one stroke distance

contrast_threshold = 0.9

while (pixel < image_width)  // while extending rightward
{
    pixel_value = h[pixel]
    if (previous_was_peak)  // heading downward
    {
        if ("pixel_value is less than all its neighbors" and
            (pixel_value / previous_peak_value) <= contrast_threshold)
        {
            // at the bottom of a valley
            previous_valley_pixel = pixel
            previous_valley_value = pixel_value
            previous_was_peak = false
        }
    }
    else  // heading upward
    {
        if ("pixel_value is greater than all its neighbors" and
            (previous_valley_value / pixel_value) <= contrast_threshold)
        {
            // at the top of a peak
            previous_peak_pixel = pixel
            previous_peak_value = pixel_value
            previous_was_peak = true
        }
    }
    if (previous_was_peak)
    {
        if ((pixel - previous_peak_pixel) >= stroke_distance)
            break  // should have encountered a valley by now
    }
    else
    {
        if ((pixel - previous_valley_pixel) >= stroke_distance)
            break  // should have encountered a peak by now
    }
    pixel++  // move to next pixel
}  // end while extending rightward

glyph_extent_right = pixel  // record how far the search went on the right














Once the extent of the glyph is determined, it is desirable to more precisely locate the top-left corner of the glyph. In the disclosed embodiment, this is accomplished by running a correlation with a sparse kernel 70 as illustrated in FIG. 12. The kernel is divided into cells 72 corresponding in size to the symbol cell size. The spacing between non-zero entries in the kernel matches the cell size and therefore the interstroke spacing within the glyph. See FIG. 13. The non-zero values are positioned at the region of intersection of the two symbol types so that, when the correlation kernel is properly aligned, a strong response is generated for both upstrokes and downstrokes.




By placing positive values in the top and left rows and negative values in the lower right section, the correlation response is maximized when the kernel is aligned with its upper left negative one centered over the top left stroke. When the kernel is centered at this location, the negative values are multiplied by the low-brightness pixel values at the centers of the strokes, and the positive values in the kernel are multiplied by the high-brightness pixel values in the white space around the glyph. In searching for the maximum, correlations are typically taken over a range of 10-25 pixels horizontally and vertically, as indicated by the box 74 in FIG. 14. It should be understood that it is possible to ignore any entries in the kernel that are equal to zero when computing the correlation value. Use of a plus or minus one in the non-zero entries makes the correlation computation into a simple series of additions. Thus, although the effect of the disclosed system is to utilize a sparse kernel, the actual implementation would simply sum periodically spaced pixels in the image.
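
The following sketch, continuing the same illustrative Python/NumPy setting, shows how such a sparse correlation reduces to signed sums of periodically spaced pixels; the 3×3-cell kernel size and the 20-pixel search window are assumptions (the text specifies only a 10-25 pixel range):

import numpy as np

CELL = 5    # symbol cell size in pixels (five-by-five cells per the text)
KCELLS = 3  # kernel size in cells, assumed for illustration

def corner_response(image, top, left):
    """Sparse-kernel correlation at (top, left).

    One non-zero entry per cell, spaced CELL pixels apart: +1 along the
    top row and left column of cells, -1 elsewhere, so the response
    peaks when the negative entries sit over the dark stroke centers
    and the positive entries over the white margin around the glyph."""
    resp = 0.0
    for cy in range(KCELLS):
        for cx in range(KCELLS):
            sign = 1.0 if (cy == 0 or cx == 0) else -1.0
            resp += sign * image[top + cy * CELL, left + cx * CELL]
    return resp

def refine_corner(image, y0, x0, search=20):
    """Search a box around (y0, x0) for the maximum corner response."""
    best = max((corner_response(image, y, x), (y, x))
               for y in range(y0 - search, y0 + search + 1)
               for x in range(x0 - search, x0 + search + 1))
    return best[1]  # (y, x) of the top-left stroke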




Having relatively precisely located the glyph and a starting stroke in the glyph, each stroke is processed to determine whether it is an upstroke or a downstroke. The processing of strokes is summarized in Table 2.














TABLE 2

for each row in the glyph
{
    for each column in the glyph
    {
        <refine the location of the stroke>
        <read the stroke>
        <advance to the next stroke>
    }
    <advance to the next glyph row>
}















As set forth in Table 2, the first step in processing a stroke is to further refine the location. This is accomplished in the disclosed embodiment by searching for a minimum pixel over a block 78 of pixels centered at the expected center of the stroke. See FIG. 15. A typical size for this block is 5×5 pixels. This refinement process allows the system to accommodate slight printing variations in the stroke location or size.
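
A minimal sketch of this refinement step, under the same illustrative assumptions (image indexed as image[y, x]; the 5×5 window follows the text):

import numpy as np

def refine_stroke(image, y0, x0, half=2):
    """Find the darkest pixel in a (2*half+1)-square window (5x5 for
    half=2) centered on the expected stroke center (y0, x0)."""
    window = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    dy, dx = np.unravel_index(np.argmin(window), window.shape)
    return y0 - half + dy, x0 - half + dx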




After the refinement of the location of a stroke is completed, a square block of pixels centered on the stroke is correlated with an X-shaped kernel 80 as shown in FIG. 16. The X-shaped kernel includes negative ones in one diagonal and positive ones in the other diagonal. The entry at the region of intersection of the two diagonals is zero, since the brightness of the corresponding pixel is not indicative of whether the stroke is an upstroke or a downstroke. If the stroke is a downstroke, the result of the correlation will be greater than zero because of the additive effect of the brighter white pixels in the other diagonal. If the stroke is an upstroke, on the other hand, the correlation will be negative because of the subtractive effect of the brighter white pixels on the downstroke diagonal. A threshold value can be incorporated wherein the system reports the stroke as unknown unless the magnitude of the correlation exceeds the threshold.
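
To make the sign test concrete, here is an illustrative sketch in the same assumed Python/NumPy setting; which diagonal carries the negative ones, the 5×5 kernel size, and the zero threshold are assumptions consistent with the description:

import numpy as np

def make_x_kernel(size=5):
    """X-shaped kernel: -1 on the assumed downstroke diagonal, +1 on
    the upstroke diagonal, 0 at the shared center pixel."""
    k = np.zeros((size, size))
    for i in range(size):
        k[i, i] = -1.0            # downstroke diagonal
        k[i, size - 1 - i] = 1.0  # upstroke diagonal
    k[size // 2, size // 2] = 0.0  # center pixel is common to both patterns
    return k

def read_stroke(image, y, x, kernel, threshold=0.0):
    """Correlate the cell centered at (y, x) with the X kernel.

    Positive response -> downstroke, negative -> upstroke,
    |response| <= threshold -> unknown."""
    half = kernel.shape[0] // 2
    cell = image[y - half:y + half + 1, x - half:x + half + 1]
    resp = float((cell * kernel).sum())
    if resp > threshold:
        return "downstroke"
    if resp < -threshold:
        return "upstroke"
    return "unknown"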




The next stroke is located by adding the nominal distance between strokes to the refined position of the current stroke and then refining the location of the new stroke as previously described. This process is repeated for each of the strokes and the resulting pattern of upstrokes and downstrokes is converted to a string of ones and zeros. In the disclosed embodiment, the resultant data string is used to verify that the document has been printed on the correct stock, for example, or to verify proper association of documents. It should be recognized that the disclosed system can be used to locate and decode multiple glyphs on the same document, some of which may be placed on the document at different stages in the document processing, or be pre-printed on the document in the case of forms.
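
Tying the pieces together, a sketch of the Table 2 walk using the refine_stroke and read_stroke helpers from the previous sketches; the bit assignment (downstroke = 1) and the grid parameters are assumed conventions, not taken from the patent:

def decode_glyph(image, start, rows, cols, pitch, kernel):
    """Walk the glyph grid as in Table 2.

    start: (y, x) of the top-left stroke; pitch: nominal inter-stroke
    distance in pixels. Each stroke location is refined before reading,
    and the next stroke is stepped off from the refined position."""
    bits = []
    row_y, row_x = start
    for _ in range(rows):
        y, x = row_y, row_x
        for _ in range(cols):
            y, x = refine_stroke(image, y, x)     # refine the location of the stroke
            s = read_stroke(image, y, x, kernel)  # read the stroke
            bits.append("1" if s == "downstroke" else "0")
            x += pitch                            # advance to the next stroke
        row_y += pitch                            # advance to the next glyph row
    return "".join(bits)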




It should be understood that the pixel resolution of the image acquisition system might be different than the printing resolution. In such cases, appropriate scale factors are used to adjust the expected spacing between glyphs, stroke sizes and other known properties of the printed image.




Although it has been possible to read glyphs in the past, the techniques disclosed herein and the concepts embodied therein contribute to speeding up the process of reading glyphs sufficiently to allow real-time use in high-speed printing and document handling. In particular, using the disclosed system, it is possible to read glyphs in approximately 20 ms using a 266 MHz Pentium II processor, with an error rate of approximately one in one hundred thousand.




Additional details of the user interface and operational details of the system disclosed herein are described in Xreader, High Speed Glyph Reader for Xerox, Operations Manual and Troubleshooting Guide, Copyright 1998, FSI Automation, Inc., which is incorporated herein by reference.




While the invention has been disclosed in its preferred form, the specific embodiments thereof as disclosed and illustrated herein are not to be considered in a limiting sense as numerous variations are possible. Applicants regard the subject matter of their invention to include all novel and non-obvious combinations and subcombinations of the various elements, features, functions and/or properties disclosed herein. No single feature, function, element or property of the disclosed embodiments is essential. The following claims define certain combinations and subcombinations that are regarded as novel and non-obvious. Other combinations and subcombinations of features, functions, elements and/or properties may be claimed through amendment of the present claims or presentation of new claims in this or a related application. Such claims, whether they are different from, broader, narrower or equal in scope to the original claims, are also regarded as included within the subject matter of applicants' invention.



Claims
  • 1. A machine vision system for locating a glyph on a document, the system comprising a central processing unit configured to monitor at least one image of at least part of the document that includes the glyph, the central processing unit incorporating instructions for and capable of carrying out the function of: i) preparing a one-dimensional kernel function having a strong correlation response when aligned with a one-dimensional projection of the glyph; ii) computing a one-dimensional projection over at least part of the image, where the at least part of the at least one image includes at least part of the glyph; iii) correlating the one-dimensional projection with the kernel function, where the kernel is located at a selected position in the projection; iv) repeating the step of correlating for a plurality of relative positions of the kernel in the one-dimensional projection; and v) analyzing the results of the repeated correlations to locate a position where there is an extremum in correlation response.
  • 2. The machine vision system of claim 1, wherein the kernel is a saw-tooth kernel.
  • 3. The machine vision system of claim 1, wherein the glyph includes symbols positioned in a grid of rows and columns.
  • 4. The machine vision system of claim 1, wherein the glyph includes upstroke and downstroke symbols.
  • 5. The machine vision system of claim 1, wherein the system is capable of acquiring images of plural overlapping areas of the document.
  • 6. The machine vision system of claim 5, wherein the areas have a minimum dimension at least twice as large as the maximum dimension of the glyph.
  • 7. The machine vision system of claim 5, wherein the areas are chosen as bands extending across one dimension of the document, and where the bands are half-overlapping with each other.
  • 8. The machine vision system of claim 1, wherein the glyph includes symbols of a first type and a second type, each type having a predetermined pattern with the two patterns having a region of intersection, each symbol defining a cell, and wherein the central processing unit further incorporates instructions for and is capable of carrying out the further function of: i) forming a second kernel function including a plurality of cells corresponding to the symbol cells, at least a portion of the kernel cells including non-zero entries at locations corresponding to the region of intersection of the symbol cells; and ii) conducting a correlation of the second kernel function with a selected portion of the image for a plurality of selected portions of the image to search for a maximum correlation response.
  • 9. The machine vision system of claim 8, wherein the central processing unit further incorporates instructions for and is capable of carrying out the further function of: i) forming a third kernel function, where the third kernel function includes positive entries in locations corresponding to the pattern of the first symbol type and negative entries in locations corresponding to the pattern of the second symbol type; ii) computing a correlation between the kernel function and a symbol cell in the glyph; and iii) declaring the symbol to be of the first type if the result of the correlation exceeds a first predetermined value and declaring the symbol to be of the second type if the result of the correlation is less than a second predetermined value.
  • 10. The machine vision system of claim 1 further comprising an imaging device operatively coupled with the central processing unit and adapted to acquire the at least one image monitored by the central processing unit.
  • 11. The machine vision system of claim 10 further comprising an automated registration device that automatically brings the document and the imaging device into register for analysis of the document.
  • 12. The machine vision system of claim 10 further comprising an illuminator configured to illuminate the document during image acquisition by the imaging device.
  • 13. A machine vision system for decoding symbols in an image of a glyph, where there are first and second symbol types in the glyph, each symbol type having a predetermined pattern, the system comprising a central processing unit configured to monitor at least one image of the glyph, the central processing unit incorporating instructions for and capable of decoding symbols from the image of the glyph by carrying out the function of: i) forming a kernel function, where the kernel function includes positive entries in locations corresponding to the pattern of the first symbol type and negative entries in locations corresponding to the pattern of the second symbol type; ii) computing a correlation between the kernel function and a symbol in the glyph; and iii) declaring the symbol to be of the first type if the result of the correlation exceeds a first predetermined value and declaring the symbol to be of the second type if the result of the correlation is less than a second predetermined value.
  • 14. The machine vision system of claim 13, wherein the first and second predetermined values are both equal to zero.
  • 15. The machine vision system of claim 13, wherein the entries in the kernel function which do not correspond to the pattern of either symbol type are zero.
  • 16. The machine vision system of claim 13, wherein the positive entries are all equal to one.
  • 17. The machine vision system of claim 13, wherein the first and second symbol types are upstrokes and downstrokes, respectively.
  • 18. The machine vision system of claim 17, wherein the kernel function has non-zero entries along diagonals thereof.
  • 19. The machine vision system of claim 13 further comprising an imaging device operatively coupled with the central processing unit and adapted to acquire the at least one image of the glyph monitored by the central processing unit.
  • 20. The machine vision system of claim 19, the glyph being positioned on a document, further comprising an automated registration device that automatically brings the portion of the document containing the glyph and the imaging device into register for analysis of the glyph.
  • 21. The machine vision system of claim 19 further comprising an illuminator configured to illuminate the glyph during image acquisition by the imaging device.
  • 22. A machine vision system for finding the location of symbols within an image incorporating a glyph with a plurality of symbols disposed in a regular array of cells therein, where the symbols include a first type and a second type, each type having a predetermined pattern with the two patterns having a region of intersection, the system comprising a central processing unit configured to monitor at least one image of at least part of the glyph, the central processing unit incorporating instructions for and capable of finding the location of symbols within the image of the glyph by carrying out the function of: i) forming a correlation kernel including a plurality of cells corresponding to the symbol cells, at least a portion of the kernel cells including non-zero entries at locations corresponding to the region of intersection of the symbol cells; and ii) conducting a correlation of the kernel with a selected portion of the image for a plurality of selected portions of the image to search for a maximum correlation response.
  • 23. The machine vision system of claim 22, wherein only the locations in the correlation kernel corresponding to regions of intersection are non-zero.
  • 24. The machine vision system of claim 22, wherein there are between 4 and 10 cells in the correlation kernel.
  • 25. The machine vision system of claim 22, wherein the entries in cells along a first side of the correlation kernel are opposite in sign to the entries in cells not along a side of the correlation kernel.
  • 26. The machine vision system of claim 25, wherein the entries in cells along a second side adjacent the first side of the correlation kernel are opposite in sign to the entries in cells not along a side of the correlation kernel.
  • 27. The machine vision system of claim 22, wherein no more than one entry in each cell is non-zero.
  • 28. The machine vision system of claim 22, wherein the non-zero entries have a magnitude of one.
  • 29. The machine vision system of claim 22 further comprising an imaging device operatively coupled with the central processing unit and adapted to acquire the at least one image of at least part of the glyph monitored by the central processing unit.
  • 30. The machine vision system of claim 29, the glyph being positioned on a document, further comprising an automated registration device that automatically brings the portion of the document containing the glyph and the imaging device into register for analysis of the glyph.
  • 31. The machine vision system of claim 29 further comprising an illuminator configured to illuminate the glyph during image acquisition by the imaging device.
  • 32. A machine vision system for reading data glyphs on a document, comprising: an imaging device capable of acquiring at least one image of at least part of the document that includes the glyph; and a central processing unit, coupled with the imaging device, capable of monitoring the at least one image of at least part of the document, the central processing unit incorporating instructions for and capable of carrying out the function of locating the glyph in the image by correlating a one-dimensional projection of the image against a reference function.
  • 33. The machine vision system of claim 32, wherein the central processing unit further incorporates instructions for and is capable of carrying out the further function of refining the position of the glyph by correlating against a kernel designed to have a maximum response when aligned over a corner of the glyph.
  • 34. The machine vision system of claim 33, wherein the central processing unit further incorporates instructions for and is capable of carrying out the further function of decoding the glyph utilizing a kernel which generates a positive response for one symbol type and a negative response for the other.
  • 35. The machine vision system of claim 32, wherein the central processing unit further incorporates instructions for and is capable of carrying out the further function of decoding the glyph utilizing a kernel which generates a positive response for one symbol type and a negative response for the other.
  • 36. The machine vision system of claim 32, wherein the imaging device includes a camera.
  • 37. The machine vision system of claim 36, wherein the camera includes a CCD sensor.
  • 38. The machine vision system of claim 32 further comprising an automated registration device that automatically brings the document and the imaging device into register for analysis of the document.
  • 39. The machine vision system of claim 38, wherein the automated registration device includes a document conveyor.
  • 40. The machine vision system of claim 32 further comprising an illuminator configured to illuminate the document during image acquisition by the imaging device.
  • 41. The machine vision system of claim 40, wherein the illuminator includes an array of LEDs.
  • 42. The machine vision system of claim 40, further comprising a sensor, operatively coupled with the central processing unit, that is capable of detecting the location of the document, where the central processing unit triggers the illuminator when the sensor indicates that the document is in an appropriate position for the imaging device to acquire the at least one image of at least part of the document.
  • 43. The machine vision system of claim 42, wherein the sensor includes a proximity sensor, and wherein the illuminator includes an array of LEDs.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 09/399,638, filed Sep. 20, 1999, now U.S. Pat. No. 6,078,698, which claims priority from U.S. Provisional Patent Application Ser. No. 60/125,797, filed Mar. 23, 1999, and U.S. Provisional Patent Application Ser. No. 60/129,742, filed Apr. 16, 1999.

US Referenced Citations (120)
Number Name Date Kind
3260517 Sather Jul 1966
3570840 Sather et al. Mar 1971
3588086 Bell Jun 1971
3606728 Sather et al. Sep 1971
3652828 Sather et al. Mar 1972
3949363 Holm Apr 1976
4061900 Masciarelli Dec 1977
4251798 Swartz et al. Feb 1981
4369361 Swartz et al. Jan 1983
4387297 Swartz et al. Jun 1983
4402088 McWaters et al. Aug 1983
4408344 McWaters et al. Oct 1983
4409470 Shepard et al. Oct 1983
4435732 Hyatt Mar 1984
4473746 Edmonds Sep 1984
4488679 Bockholt et al. Dec 1984
4542528 Sanner et al. Sep 1985
4639873 Baggarly et al. Jan 1987
4672457 Hyatt Jun 1987
4728195 Silver Mar 1988
4743773 Katana et al. May 1988
4760248 Swartz et al. Jul 1988
4782220 Shuren Nov 1988
4794239 Allais Dec 1988
4855581 Mertel et al. Aug 1989
4861972 Elliott et al. Aug 1989
4874933 Sanner Oct 1989
4879456 Cherry et al. Nov 1989
4896026 Krichever et al. Jan 1990
4972359 Silver et al. Nov 1990
4998010 Chandler et al. Mar 1991
5013022 Graushar May 1991
5033725 van Duursen Jul 1991
5039075 Mayer Aug 1991
5060980 Johnson et al. Oct 1991
5067088 Schneiderhan Nov 1991
5114128 Harris, Jr. et al. May 1992
5135160 Tasaki Aug 1992
5144118 Actis et al. Sep 1992
5184005 Ukai et al. Feb 1993
5191540 Ramsey Mar 1993
5192856 Schaham Mar 1993
5196684 Lum et al. Mar 1993
5220770 Szewczyk et al. Jun 1993
5237161 Grodevant Aug 1993
5239169 Thomas Aug 1993
5245168 Shigeta et al. Sep 1993
5260554 Grodevant Nov 1993
5276315 Surka Jan 1994
5291009 Roustaei Mar 1994
5304786 Pavlidis et al. Apr 1994
5308962 Havens et al. May 1994
5317654 Perry et al. May 1994
5329104 Ouchi et al. Jul 1994
5349172 Roustaei Sep 1994
5352879 Milch Oct 1994
5354977 Roustaei Oct 1994
5359185 Hanson Oct 1994
5367439 Mayer et al. Nov 1994
5377003 Lewis et al. Dec 1994
5383130 Kalisiak Jan 1995
5396260 Adel et al. Mar 1995
5408084 Brandorff et al. Apr 1995
5414270 Henderson et al. May 1995
5448049 Shafer et al. Sep 1995
5459307 Klotz, Jr. Oct 1995
5468946 Oliver Nov 1995
5481098 Davis et al. Jan 1996
5481620 Vaidyanathan Jan 1996
5486686 Zdybel, Jr. et al. Jan 1996
5495537 Bedrosian et al. Feb 1996
5510997 Hines et al. Apr 1996
5521372 Hecht et al. May 1996
5526050 King et al. Jun 1996
5528368 Lewis et al. Jun 1996
5532467 Roustaei Jul 1996
5536924 Ackley Jul 1996
5536928 Seigel Jul 1996
5548326 Michael Aug 1996
5576532 Hecht Nov 1996
5583954 Garakani Dec 1996
5593017 Powell et al. Jan 1997
5600121 Kahn et al. Feb 1997
5602937 Bedrosian et al. Feb 1997
5608639 Twardowski et al. Mar 1997
5608820 Vaidyanathan Mar 1997
5637854 Thomas Jun 1997
5640199 Garakani et al. Jun 1997
5655759 Perkins et al. Aug 1997
5657403 Wolff et al. Aug 1997
5659167 Wang et al. Aug 1997
5673334 Nichani et al. Sep 1997
5676302 Petry, III Oct 1997
5697699 Seo et al. Dec 1997
5707055 DeJoseph et al. Jan 1998
5710417 Joseph et al. Jan 1998
5717785 Silver Feb 1998
5726434 Seo Mar 1998
5729003 Brigg, III Mar 1998
5734566 Stengl Mar 1998
5734747 Vaidyanathan Mar 1998
5739518 Wang Apr 1998
5742037 Scola et al. Apr 1998
5742504 Meyer et al. Apr 1998
5744790 Li Apr 1998
5751853 Michael May 1998
5754670 Shin et al. May 1998
5754679 Koljonen et al. May 1998
5756981 Roustaei et al. May 1998
5763864 O'Hagan et al. Jun 1998
5768443 Michael et al. Jun 1998
5777314 Roustaei Jul 1998
5777743 Bacchi et al. Jul 1998
5780831 Seo et al. Jul 1998
5783811 Feng et al. Jul 1998
5786582 Roustaei et al. Jul 1998
5793031 Tani et al. Aug 1998
5801373 Kahn et al. Sep 1998
5862271 Petrie Jan 1999
6078698 Lorton et al. Jun 2000
Non-Patent Literature Citations (1)
Entry
Worthington Data Solutions Bar Code Basics Excerpts, © 1997.
Provisional Applications (2)
Number Date Country
60/125797 Mar 1999 US
60/129742 Apr 1999 US
Continuations (1)
Number Date Country
Parent 09/399638 Sep 1999 US
Child 09/571062 US