Enhanced matrix symbol error correction method

Information

  • Patent Grant
  • Patent Number
    11,449,700
  • Date Filed
    Thursday, October 22, 2020
  • Date Issued
    Tuesday, September 20, 2022
Abstract
A system and method for error correction for machine-readable symbols having data codewords, and having error correction (EC) codewords derived from the data codewords and redundantly indicating the location and data contents of the data codewords. The symbols use Reed-Solomon (RS) error correction to retrieve damaged codewords. RS error correction normally requires two EC codewords to identify both the location and data contents of a data codeword. The present system and method perform optical contrast analysis on the codewords, identifying those codewords with the lowest contrast levels. Codewords with the lowest contrast levels are flagged as optically ambiguous, thereby marking, in the EC equations, the locations of the codewords most likely to be in error. As a result, only a single EC codeword is required to retrieve the data for a flagged data codeword.
Description
FIELD OF THE INVENTION

The present invention relates to a method and apparatus for decoding machine-readable symbols, and more particularly, to a method and apparatus for decoding symbols requiring error correction.


BACKGROUND

Machine-readable symbols provide a means for encoding information in a compact printed form (or embossed form) which can be scanned and then interpreted by an optical-based symbol detector. Such machine readable symbols are often attached to (or impressed upon) product packaging, food products, general consumer items, machine parts, equipment, and other manufactured items for purposes of machine-based identification and tracking.


One exemplary type of machine-readable symbol is a bar code that employs a series of bars and white spaces vertically oriented along a single row. Groups of bars and spaces correspond to a codeword. The codeword is associated with an alpha-numeric symbol, one or more numeric digits, or other symbol functionality.


To facilitate encoding of greater amounts of information into a single machine-readable symbol, two-dimensional bar codes have been devised. These are also commonly referred to as stacked, matrix and/or area bar codes. Examples of such two-dimensional symbologies include Data Matrix, Code One, PDF-417, MaxiCode, QR Code, and Aztec Code. 2D matrix symbologies employ arrangements of regular polygon-shaped cells (also called elements or modules) where the center to center distance of adjacent elements is uniform. Typically, the polygon-shaped cells are squares. The specific arrangement of the cells in 2D matrix symbologies represents data characters and/or symbology functions.


As an example of a 2D matrix symbol technology, a Data Matrix code is a two-dimensional matrix barcode consisting of high-contrast “cells” (typically black and white cells) or modules arranged in either a square or rectangular pattern. The information to be encoded can be text or numeric data, or control symbols. The usual data size ranges from a few bytes up to 1556 bytes. Specific, designated, standardized groups of cells—typically eight cells—are each referred to as a “symbol character.” The symbol characters have values which are referred to as “codewords.” With a black cell interpreted as a 0 (zero) and a white cell interpreted as a 1 (one), an eight-cell codeword can code for numbers 0 through 255; in turn, these numeric values can be associated with alphanumeric symbols through standard codes such as ASCII, EBCDIC, or variations thereon, or other functionality.
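As a minimal illustrative sketch (not taken from the patent) of the cell-to-codeword mapping just described, the following Python converts an eight-cell pattern into a codeword value in the range 0 through 255. The convention follows the text (black cell = 0, white cell = 1); the most-significant-bit-first cell ordering is an assumption for illustration, since the actual cell placement is fixed by the symbology standard.

```python
def cells_to_codeword(cells):
    """Map 8 cells to a codeword value; cells is a sequence of 8 booleans,
    True for a white cell (bit 1), False for a black cell (bit 0)."""
    if len(cells) != 8:
        raise ValueError("a Data Matrix symbol character has 8 cells")
    value = 0
    for cell in cells:  # assumed most-significant-bit-first ordering
        value = (value << 1) | (1 if cell else 0)
    return value  # an integer in 0..255

# Example: the pattern white,black,black,white,white,black,white,black -> 154
print(cells_to_codeword([True, False, False, True, True, False, True, False]))
```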


The codewords—that is, the designated groups of cells in a symbol—have specific, standardized positions within the overall symbol. The interpretation of a symbol in a given context (for example, for a given manufacturer and/or a given product) therefore depends on the codewords within the symbol; and in particular, the interpretation depends on both: (i) the contents of each codeword (that is, the pattern of cells in each codeword), and (ii) the placement or position of each codeword in the symbol.


Typically, for sequential alphanumeric data (for example, a product identification number or a street address), each sequential data character is assigned to the symbols of a codeword in a standardized order. For example, the order may be left-to-right along the rows of the symbol, or according to a standardized diagonal pattern of placement. Because the codewords have specific, standards-specified placements within a symbol—and because no information about the placement is contained in the symbol character—the symbols may also be referred to as “matrix symbols” or “matrix symbology barcodes.”


Bar code readers are employed to read the matrix symbols using a variety of optical scanning electronics and methods. Ideally, the machine-readable symbols which are scanned by a bar code reader are in perfect condition, with all of the cells of consistent, uniform size; each cell being fully filled with either total black or total white; and the contrast between black and white cells being 100%.


In real, practical application the machine-readable symbols which are scanned by a bar code reader may be imperfect. They may be smudged by external substances (grease, dirt, or other chemicals in the environment); or the surface on which the symbols were printed may be stretched, compressed, or torn; or the printing process itself may be flawed (for example, due to low ink levels in a printer, clogged printheads, etc.). The defects in actual symbols may introduce errors in the machine reading process.


To address these practical problems, error correction techniques are often used to increase reliability: even if one or more cells are damaged so as to make a codeword unreadable, the unreadable codeword can be recovered through the error-correction process, and the overall message of the symbol can still be read.


For example, machine-readable symbols based on the Data Matrix ECC 200 standard employ Reed-Solomon codes for error and erasure recovery. ECC 200 allows the routine reconstruction of the entire encoded data string when the symbol has sustained 25% damage (assuming the matrix can still be accurately located).


Under this standard, approximately half the codewords in a symbol are used directly for the data to be represented, and approximately half the codewords are used for error correction. The error-correction (EC) codewords are calculated using a mathematical tool known as the Reed-Solomon algorithm. The data codewords for the symbol are the input to the Reed-Solomon algorithm, and the EC codewords are the output. The complete machine-readable symbol includes both the data codewords and the EC codewords.


For a given symbol format (such as Data Matrix, PDF-417, QR Code, Aztec Code, and others), and for a given size of the symbol matrix, there is a fixed, designated number of EC codewords. To recover any one particular damaged (unreadable) codeword, two things must be recovered: (i) the location of the damaged data codeword within the symbol, and (ii) the contents (the bit pattern) of the damaged data codeword. In turn, recovering both the location and the bit pattern for a single codeword requires two of the available EC codewords. It follows that if a machine-readable symbol has two damaged codewords, four EC codewords are required to recover the full symbol. Generally, if a symbol has “N” damaged codewords, then 2*N EC codewords are required to recover the full symbol.


The number of EC codewords in a symbol is limited. This places a limit on the number of damaged, unreadable data codewords which can be recovered. Generally, using present error-correction methods, the number of damaged data codewords which can be recovered is half the total number of EC codewords. For example, in a Data Matrix symbol with 16×16 cells, the total number of EC codewords is 12. This means that at most 6 damaged data codewords can be recovered. If more than 6 of the data codewords are damaged, the complete symbol may be unreadable.
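The capacity arithmetic above can be checked in one line; the following Python sketch is illustrative only, with the function name chosen here for clarity.

```python
def max_correctable_errors(ec_count):
    """Error-only Reed-Solomon capacity: each error with an unknown location
    consumes two EC codewords, so the correctable count is ec_count // 2."""
    return ec_count // 2

print(max_correctable_errors(12))  # 6, as in the 16x16 Data Matrix example
```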


However, if the location of the data codeword in error is already known, then only one EC codeword is needed to correct the error. This technique is called “erasure decoding.” Unfortunately, in Matrix Code symbols generally, the locations of the errors are not known.
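Erasure decoding is available in off-the-shelf Reed-Solomon libraries. The sketch below uses the third-party Python package reedsolo (an assumption; any RS library exposing an erasure interface would serve) to show the standard bound at work: with `nsym` EC symbols, error-only decoding repairs at most `nsym // 2` corrupted positions, while supplying the positions as erasures repairs up to `nsym`. Recent reedsolo versions return a 3-tuple from `decode()`.

```python
# pip install reedsolo   (third-party library; assumed available)
from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(10)  # append 10 EC symbols; 2*errors + erasures <= 10
encoded = bytearray(rsc.encode(b"HELLO MATRIX SYMBOL"))

# Corrupt 7 positions: beyond the error-only limit of 10 // 2 = 5 ...
bad_positions = [0, 2, 4, 6, 8, 10, 12]
for pos in bad_positions:
    encoded[pos] ^= 0xFF

try:
    rsc.decode(bytes(encoded))  # no position hints: typically fails
except ReedSolomonError:
    print("error-only decoding failed, as expected")

# ... but with the 7 positions flagged as erasures, one EC symbol each suffices.
decoded, _, _ = rsc.decode(bytes(encoded), erase_pos=bad_positions)
print(decoded)  # bytearray(b'HELLO MATRIX SYMBOL')
```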


Therefore, there exists a need for a system and method for recovering more damaged data codewords in a symbol than may be recovered based on only the error-correcting symbols by themselves. More particularly, what is needed is a system and method for determining the location of a damaged or erroneous data codeword, independent of the information stored in the EC codewords.


SUMMARY

Accordingly, in one aspect, the present invention solves the problem of not being able to use erasure decoding with matrix symbologies by evaluating the gray-level information available in the scanner and keeping track of those codewords with the least contrast difference. The decoder then utilizes erasure decoding on these least-contrast codewords. Since the locations of the erroneous data codewords have been estimated via the contrast detection, only one EC codeword is required to recover the data in a damaged codeword; both the location and the data of the damaged data codeword are thus recovered at the cost of a single EC codeword.


Because only one EC codeword is required instead of two, more EC codewords remain unused and available for decoding other possible errors. This increases the total number of data codewords that can be corrected. This is particularly useful in applications where symbols get dirty (e.g., automotive assembly), damaged (e.g., supply chain), have specular components (e.g., direct part marking (DPM)), or need to be scanned over a greater range (e.g., all applications).


The algorithm of the present invention has the effect of nearly doubling the number of codewords that can be corrected in matrix symbology decodes, thereby greatly improving the performance over what is currently available.


The foregoing illustrative summary, as well as other exemplary objectives and/or advantages of the invention, and the manner in which the same are accomplished, are further explained within the following detailed description and its accompanying drawings.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a perspective view of an exemplary hand-held symbol reader acquiring data from a machine-readable symbol.



FIG. 2 is an internal block diagram of an exemplary symbol reader for acquiring data from a machine-readable symbol.



FIG. 3 illustrates several exemplary machine-readable 2D symbols.



FIG. 4 provides two views of an exemplary 2D symbol which is damaged.



FIG. 5 is a flow-chart of an exemplary method for optically enhanced Reed-Solomon error-correction for a 2D symbol.



FIG. 6 is a flow-chart of an exemplary method for contrast analysis for a 2D symbol as part of enhanced Reed-Solomon error correction.



FIG. 7 presents an exemplary array of codeword contrast values used for enhanced error correction by applying contrast analysis to a flawed codeword.



FIG. 8 illustrates an exemplary case-study of enhanced error correction by applying contrast analysis to a flawed codeword.





DETAILED DESCRIPTION

In the following description, certain specific details are set forth in order to provide a thorough understanding of various embodiments. However, one skilled in the art will understand that the invention may be practiced without these details. In other instances, well-known structures associated with imagers, scanners, and/or other devices operable to read machine-readable symbols have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments.


Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as, “comprises” and “comprising” are to be construed in an open sense, that is as “including, but not limited to.”


Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.


The headings provided herein are for convenience only and do not interpret the scope or meaning of the claimed invention.


Symbol Reader


The present system and method embrace devices designed to read machine-readable symbols.


In an exemplary embodiment, such a device may be a hand-held scanner. FIG. 1 is a perspective view of an exemplary hand-held symbol reader 100 acquiring data from a machine-readable symbol 102.


The machine-readable symbol 102 is affixed to a package 104 or the like such that the user points the hand-held symbol reader 100 towards the machine-readable symbol 102. The symbol reader 100 may be a line scanner operable to emit and sweep a narrow beam of electromagnetic energy across a field-of-view 106 over two-dimensional (2D) machine-readable symbol 102. In other embodiments, an aperture means, mirror, lens or the like is adjusted to sweep across a symbol line to receive returning electromagnetic energy from a relatively small portion (e.g., cell) of the machine-readable symbol, which is detected by an optical detector system.


In yet other embodiments, a 2D array symbol reader acquires a captured image of the machine-readable symbol (and a suitable region of quiet area around the machine-readable symbol). For the present system and method, which rely upon a contrast analysis of the cells within the symbol 102, the acquisition of a captured image of the symbol may be a preferred method of operation for the symbol reader 100. Suitable image processing hardware 235 and software running on processors 242, 244 are used to deconstruct the captured image to determine the data bits represented by the cells, and to perform the contrast analysis of the present system and method (see FIG. 2 below).


The machine-readable symbol reader 100 is illustrated as having a housing 108, a display 110, a keypad 112, and an actuator device 114. Actuator device 114 may be a trigger, button, or other suitable actuator operable by the user to initiate the symbol reading process.


The machine-readable symbol 102 shown in the figure is intended to be generic and, thus, is illustrative of the various types and formats of machine-readable symbols. For example, some machine-readable symbols may consist of a single row of codewords (e.g., barcode). Other types of machine-readable symbols (e.g., matrix or area code) may be configured in other shapes, such as circles, hexagons, rectangles, squares and the like. It is intended that many various types and formats of machine-readable symbologies be included within the scope of the present system and method.


Symbol Reader Internal Block Diagram


An internal block diagram of an exemplary symbol reader 100 of a type which may implement the present system and method is shown in FIG. 2.


In one embodiment of the present system and method, the symbol reader 100 may be an optical reader. Optical reader 100 may include an illumination assembly 220 for illuminating a target object T, such as a 1D or 2D bar code symbol 102, and an imaging assembly 230 for receiving an image of object T and generating an electrical output signal indicative of the data which is optically encoded therein. Illumination assembly 220 may, for example, include an illumination source assembly 222, such as one or more LEDs, together with an illuminating optics assembly 224, such as one or more reflectors, for directing light from light source 222 in the direction of target object T. Illumination assembly 220 may be eliminated if ambient light levels are certain to be high enough to allow high quality images of object T to be taken.


In an embodiment, imaging assembly 230 may include an image sensor 232, such as a 2D CCD or CMOS solid state image sensor, together with an imaging optics assembly 234 for receiving and focusing an image of object T onto image sensor 232. The array-based imaging assembly shown in FIG. 2 may be replaced by a laser-scanning-based imaging assembly comprising a laser source, a scanning mechanism, emit and receive optics, a photodetector and accompanying signal processing circuitry. The field of view of the imaging assembly 230 will depend on the application. In general, the field of view should be large enough so that the imaging assembly can capture a bit map representation of a scene including an image data reading region at close reading range.


In an embodiment of the present system and method, exemplary symbol reader 100 of FIG. 2 also includes programmable controller 240 which may comprise an integrated circuit microprocessor 242 and an application specific integrated circuit (ASIC) 244. Processor 242 and ASIC 244 are both programmable control devices which are able to receive, output and process data in accordance with a program stored in either or both of a read/write random access memory (RAM) 245 and an erasable read only memory (EROM) 246. Processor 242 and ASIC 244 are also both connected to a common bus 248 through which program data and working data, including address data, may be received and transmitted in either direction to any circuitry that is also connected thereto. Processor 242 and ASIC 244 may differ from one another, however, in how they are made and how they are used.


In one embodiment, processor 242 may be a general purpose, off-the-shelf VLSI integrated circuit microprocessor which has overall control of the circuitry of FIG. 2, but which devotes most of its time to decoding image data stored in RAM 245 in accordance with program data stored in EROM 246. Processor 244, on the other hand, may be a special purpose VLSI integrated circuit, such as a programmable logic or gate array, which is programmed to devote its time to functions other than decoding image data, and thereby relieve processor 242 from the burden of performing these functions.


In an alternative embodiment, special purpose processor 244 may be eliminated entirely if general purpose processor 242 is fast enough and powerful enough to perform all of the functions contemplated by the present system and method. It will, therefore, be understood that neither the number of processors used, nor the division of labor there between, is of any fundamental significance for purposes of the present system and method.


In an embodiment, exemplary symbol reader 100 includes a signal processor 235 and an analog-to-digital (A/D) chip 236. These chips together take the raw data from image sensor 232 and convert the data to digital format, which in an exemplary embodiment may be a gray-level digital format, for further processing by programmable controller 240.


In an embodiment, the system and method of the present invention employs algorithms stored in EROM 246 which enable the programmable controller 240 to analyze the image data from signal processor 235 and A/D 236. In an embodiment, and as described further below, this image analysis may include analyzing gray-level information (contrast levels) in the image data. In an embodiment, and in part based on the contrast level analysis, programmable controller 240 may then implement an improved system and method of error correction for matrix symbols by relying on optical contrast-level analysis, as also described further below.


Exemplary symbol reader 100 may also include input/output (I/O) circuitry 237, for example to support the use of the keyboard 112 and trigger 114. Symbol reader 100 may also include output/display circuitry 238 to support display 110.


Exemplary Symbols



FIG. 3 illustrates several exemplary machine-readable 2D symbols 102 labeled 102.1 and 102.2.


Symbol 102.1 is an exemplary machine-readable symbol encoded according to the Data Matrix barcode (ECC 200) standard. The symbol 102.1, which is a 24×24 array, has two solid black borders 302 forming an “L-shape” which are the finder pattern, enabling the symbol reader to determine the location and orientation of the 2D symbol. The symbol also has two opposing borders of alternating dark and light cells which form a “timing pattern” 304 which help the symbol reader identify the size (the number of rows and columns) of the symbol.


Interior to the finder pattern 302 and timing pattern 304 are rows and columns of interior cells 306 which encode information. As may be evident from the figure, an ideal machine-readable symbol has a very high contrast level between the first color dark cells and the second color light cells, in many cases achieved by employing clearly printed, unobscured cells which are either all black or all white.


Symbol 102.2 is an exemplary 16×16 machine-readable symbol encoded according to the Data Matrix barcode (ECC 200) standard. In symbol 102.2, and for purposes of illustration only, the interior black data cells are omitted, and boundaries between the interior cells 306 are suggested by shaded, dotted lines which are not normally present in actual printed data matrix symbols.


Also, not normally present in actual printed symbols, but included here for purposes of illustration, are solid borders which indicate the boundaries of the codewords 308 formed by the interior cells 306. In an embodiment, each codeword 308 is composed of eight cells representing a single byte of data. It will be seen that there are several types of codewords, including data codewords 308.1 which encode the actual data to be represented by the symbol; error-correcting (EC) codewords 308.2 which are generated from the data codewords according to the Reed-Solomon algorithm; and padding codewords 308.3.


The figure also identifies one exemplary bar (black) cell 306.B and one exemplary space (white) cell 306.S.


The illustration here of machine-readable symbols based on the Data Matrix barcode standard, as well as the size, shape, and data contents illustrated, are exemplary only and should not be construed as limiting. The present system and method are applicable to a wide variety of 2D matrix barcodes according to a variety of known standards, as well as being applicable to other 2D machine-readable symbols which may be envisioned in the future.


Symbol Errors


As discussed above, the data content of symbols 102 is stored or presented in the form of cells 306 of contrasting colors within codewords 308. In an embodiment of the present system and method, the light cells (typically white) represent ones (1's) and the dark cells (typically black) represent zeros (0's). In an alternative embodiment, a light cell represents zero (0) and a dark cell represents one (1). In alternative embodiments, other colors or levels of shading may be employed. As a general matter, however, for the coding to be effective the symbol reader 100 must be readily able to distinguish the dark cells from the light cells. Also, the data is stored not only in terms of the cells 306 per se, but also in terms of the positions of the cells 306 within the codewords 308, and the positions of each codeword 308 within the symbol 102.


If a symbol 102 is damaged, there may be insufficient contrast between light cells and dark cells for the symbol reader 100 to reliably distinguish the cells. Similarly, damage to the symbol may render it difficult for the symbol reader to identify the location or boundaries of cells 306 and codewords 308. In other cases, damage to cells 306 can cause a change from black to white or vice-versa. This in turn calls upon the error-correction methods, such as Reed-Solomon, already discussed above. The present system and method are intended to augment Reed-Solomon and similar error-correction methods with information based on contrast analysis.



FIG. 4 provides two views 102.D1, 102.D2 of an exemplary symbol which is damaged, such that the original high contrast has been lost while the symbol 102 is in use in the field.


In the first view, the damaged symbol 102.D1 shown in the figure was photographed in a real-world automotive manufacturing plant. It is apparent that there is a dark vertical scuff mark 402 running approximately down the middle of the symbol 102.D1. The scuffing is sufficiently dark that, when the symbol is read with a standard symbol reader 100, the reader 100 mistakes many individual cells 306 for black when (as printed, and without damage or scuffing) they are white cells. This in turn causes codeword errors. This symbol 102.D1 will not read with current scanners.


The actual values of the codewords in symbol 102.D1 are listed here (codewords before the colon are data codewords; those after the colon are error-correction codewords):


237 151 230 204 27 207 144 165 112 27 13 43 222 60 125 34 137 186 71 163 223 254:96 9 171 31 209 21 131 100 111 146 225 95 109 112 182 218 118 203


The values for the codewords determined by a symbol reader 100 are shown here, with the incorrect codewords shown in brackets:


237 151 230 204 27 207 144 165 112 27 [173] [111] 222 60 125 34 137 [191] [127] [235] 223 254:96 [25] [175] [191] [208] 21 131 100 111 146 225 95 [111] [116] 182 218 118 203


As is apparent in the image of symbol 102.D1, throughout the smudged region 402 the contrast between many individual cells is small, and is close to the threshold level between black and white. Compare for example a cluster of low contrast cells 404 within the smudged region 402 with a non-damaged, machine-readable high contrast region 406.


In the second view, the damaged symbol 102.D2 is illustrated as it was interpreted by an actual scanner 100 in the field. As shown by the codewords with shaded cells 306 in the illustration, there were eleven codewords 308 which provided flawed readings from the scanner 100, and may be described as flawed codewords 308.F.


Erasure Vs. Error:


By way of terminology, it is noted here that if the position of an erroneous codeword is known, but the data is not known (or is ambiguous), the codeword is referred to as an “erasure.” If the data of an erroneous codeword is unknown and the position of the codeword is also unknown, the codeword is referred to as an “error.”


Reed-Solomon Error Correction


In an embodiment, the present system and method includes application of error-correcting codes and analyses, augmented with optical analysis of a machine-readable symbol 102, to detect and correct errors in the machine-readable symbol 102. Various mathematical methods of error correction are well-known in the art, and a detailed description is beyond the scope of this document. However, review of a few basic elements of an exemplary error-correction method may aid in the understanding of the present system and method.


All standardized 2D matrix symbologies utilize the Reed-Solomon methodology. In Reed-Solomon codes, a set of data elements, such as bytes of data, may be redundantly encoded in a second set of error-correcting elements (also typically in byte form), which for present purposes can be referred to as EC codewords 308.2. The error-correcting codewords are transmitted or presented along with the principal data elements, enabling reconstruction of damaged data elements.


Methods of constructing the Reed-Solomon EC codewords (based on a given, particular data set) are outside the scope of this document. It suffices for present purposes to understand that Reed-Solomon-derived EC codewords 308.2 can be calculated, and the resulting EC codewords are included as part of 2D matrix symbols, as already described above.


There are a variety of methods of decoding a message with Reed-Solomon error correction. In one exemplary method, the values of the data codewords 308.1 of a symbol 102 are viewed as the coefficients of a polynomial s(x) that is subject to certain constraints (not discussed here):







s(x) = \sum_{i=0}^{n-1} c_i x^i
It will be noted that not only the values of the data codewords 308.1 matter, but also their order. The ordinal placement of the codewords (1st, 2nd, 3rd, etc.) in the polynomial maps to the physical ordering of the data codewords 308.1 in the machine-readable symbol 102.


If the machine-readable symbol 102 is damaged or corrupted, this may result in data codewords 308.1 which are incorrect. The erroneous data can be understood as a received polynomial r(x):







r(x) = s(x) + e(x)

e(x) = \sum_{i=0}^{n-1} e_i x^i


where e_i is the coefficient for the ith power of x. Coefficient e_i will be zero if there is no error at that power of x (and so no error for the corresponding ith data codeword 308.1 in the symbol 102); while the coefficient e_i will be nonzero if there is an error. If there are ν errors at distinct powers i_k of x, then:







e(x) = \sum_{k=1}^{\nu} e_{i_k} x^{i_k}
The goal of the decoder is to find the number of errors (ν), the positions of the errors (i_k), and the error values at those positions (e_{i_k}). From those, e(x) can be calculated, and then e(x) can be subtracted from the received r(x) to get the original message s(x).


There are various algorithms which can be employed, as part of the Reed-Solomon scheme, to identify the error positions (i_k) and the error values at those positions (e_{i_k}), based solely on the received data codewords 308.1 and the received EC codewords 308.2. The process involved, however, is generally a two-stage process, where:


Stage (I) Error Locations:


The first calculation stage entails identifying the locations of the errors. This requires first calculating an error-locator polynomial Λ and, based on Λ, calculating the non-zero error positions i_k. This stage also determines the number of errors (ν). This first-stage calculation inevitably requires the use of some of the EC codewords 308.2 in the symbol 102.


Stage (II) Corrected Values:


Employing the error locations i_k as calculated in stage (I), the second calculation stage identifies the correct values (e_{i_k}) associated with each error location.


It will be seen then that in the prior art, correcting errors is a two-stage process, where identifying error locations generally precedes, and is an input to, identifying the corrected data at each location. It is a goal of the present system and method to either reduce or possibly eliminate the calculations of stage (I), by using analyses apart from Reed-Solomon error correction to identify or mark the erroneous data codewords 308.1.


Persons skilled in the art will recognize that the non-zero error positions i_k calculated via the alternative methods (discussed further below) can be input directly into stage (II), thereby enabling the calculation of the correct data values in stage (II).


Importantly, in the mathematics of standard Reed-Solomon error correction, errors (both location and data unknown) require the use of two error-correction codewords to repair a damaged codeword. If, on the other hand, knowledge of the location of the error exists, then the error is considered an erasure, and only one error-correction codeword is required to repair the erased codeword.


Stated another way: Normally, error-correction in 2D matrix symbologies is used to correct codewords which are errors, meaning that both the location and contents of the codeword are unknown. The goal of the present system and method is to independently flag errors so that they are instead treated as erasures, for which the location is known, thereby requiring only one EC codeword for correction.


Optical Clarity and Optical Ambiguity, Decoding Disadvantage, and Reed-Solomon Error Correction


As discussed above, Reed-Solomon error correction requires the use of two EC codewords 308.2 to correctly recover both the location and the data contents of a single data codeword 308.1 which is flawed. However, the present system and method aim to enable the identification (at least provisionally) of the locations of the flawed or damaged codewords 308.F, and to make such identification independently of the EC codewords 308.2 in the symbol 102. Such alternative means of locating the flawed data codewords 308.1 supplement the data in the EC codewords 308.2; as a result, only a single EC codeword 308.2 is required to recover the data in a data codeword 308.1. Flawed codewords 308.F may also be referred to as codewords which have a “decoding disadvantage.”


To identify the locations of the codewords with a decoding disadvantage, independent of the error-correcting information within the symbol 102 itself, the present system and method identifies those codewords 308 in the symbol 102 which have a low level of optical clarity, or equivalently, a high level of optical ambiguity. A codeword 308 has “optical clarity” if, as presented to the reader 100, it is sufficiently clear and distinct (e.g., has high optical contrast) to be read very reliably by the symbol reader's optical system 230, 235. If a codeword 308 is not optically clear, for example due to poor printing, smudging or marking in the field, ripping or tearing, or other causes, then the codeword is deemed optically ambiguous; there is a significant probability that the data for an optically ambiguous codeword, as determined by a reader 100, will not match the intended data of the same codeword.



FIG. 5 presents a flow-chart of an exemplary method 500 for optically enhanced Reed-Solomon error-correction for a symbol 102. The steps of exemplary method 500 are generally performed via the processor(s) 240, memory 245, and other components of the symbol reader 100.


In step 505, the symbol reader 100 identifies the location of the symbol 102 and the appropriate parameters such as the size. For example, for a DataMatrix symbol, the reader 100 finds the “L-shape” 302 and finds the clock track 304 to identify the number of rows and columns in the symbol. The L-shape 302 and clock track 304 help the reader 100 determine the symbol's tilt and orientation, and provide reference points from which to decode the matrix of data cells.


In step 510, the symbol reader 100 creates a matrix or array of sample points (pixels), indicating the reflectances (bright or dark) of points within the symbol 102. These sample points are used to determine reflectance of cells 306 within the symbol. A single cell 306 may have multiple sample points measured, and together these may be used (for example, averaged) to determine the reflectance of each cell 306.
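A minimal sketch of this sampling step follows, assuming a grayscale image held in a numpy array and a caller-supplied list of the sample-point coordinates falling inside a given cell (both are assumptions of this sketch, since the patent does not specify a data layout):

```python
import numpy as np

def cell_reflectance(gray_image, cell_sample_points):
    """Average the grayscale sample points attributed to one cell 306.

    gray_image         -- 2D numpy array of gray levels (0 = black, 255 = white)
    cell_sample_points -- list of (row, col) pixel coordinates inside the cell
    """
    samples = [gray_image[r, c] for r, c in cell_sample_points]
    return float(np.mean(samples))
```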


As discussed above, the symbol 102 is composed of codewords 308 with standardized positions, that is, which are made up of standardized clusters of cells 306 with designated positions within the symbol matrix 102.


In step 515, the method 500 determines a level of optical clarity for each codeword 308. A high level of optical clarity, which is desirable, means the codeword's cells are distinctive and that the data value of the codeword can be read with a high probability of accuracy.


A low level of optical clarity—or equivalently, a high level of optical ambiguity—may result from physical damage to a symbol, or from dirt or grease marking the symbol, or other causes as discussed above. Low optical clarity, or high optical ambiguity, means that the codeword's cells are not distinctive and the codeword has a decoding disadvantage. The low level of optical clarity therefore means that the data value of the codeword can be ascertained only with a suboptimal degree of reliability.


Optical clarity/ambiguity may be determined in a variety of ways. In one embodiment of the present system and method, discussed in detail below, the optical clarity/ambiguity is determined based on an analysis of the contrast level between cells 306 within each codeword 308. Codewords 308 which exhibit the lowest internal contrast levels may be marked as optically ambiguous.


In an alternative embodiment, optical clarity/ambiguity may be determined based on analysis of the degree to which a codeword 308 is in-focus or not in-focus. In an alternative embodiment, optical clarity/ambiguity may be determined based on analysis of the definition or lack of definition of lines separating the dark cells 306 from light cells 306.


In an alternative embodiment, optical clarity/ambiguity may be determined based on a degree to which the horizontal and vertical lines of the codewords 308 are parallel to, or are not parallel to, the border-L shape. Other methods of assessing optical clarity of a codeword 308 may be envisioned as well, and fall within the scope and spirit of the present system and method.


In step 520, exemplary method 500 ranks the codewords 308 according to optical clarity, for example from highest to lowest in optical clarity. In step 525, method 500 identifies the lowest ranked codewords (those which are most optically ambiguous), up to the number of codewords 308 to be used as erasures.


In steps 530 and 535, the lowest-ranked codewords 308 identified in step 525—that is, the codewords with the highest optical ambiguity—are marked as erasures in the error-correction equations, and the Reed-Solomon error-correction equations are then executed. Steps 530 and 535 thereby reduce or eliminate the calculations discussed above for stage (I) of the Reed-Solomon error-correction process, and thereby also reduce or eliminate the use of EC codewords 308.2 to identify the locations of flawed codewords 308.F.
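Taken together, steps 520 through 535 suggest a decode path like the following sketch. The helper names, the clarity-score input, and the reedsolo erasure call are illustrative assumptions, not the patent's implementation:

```python
from reedsolo import RSCodec  # third-party library; assumed available

def decode_with_contrast_erasures(codeword_values, ec_count, clarity_scores,
                                  max_erasures):
    """codeword_values: data + EC codeword bytes as read from the symbol.
    clarity_scores: one optical-clarity score per codeword (higher = clearer).
    """
    # Step 520/525: rank codewords by optical clarity, keep the worst ones.
    ranked = sorted(range(len(codeword_values)), key=lambda i: clarity_scores[i])
    # Erasures cannot exceed the EC codeword count (2*errors + erasures <= ec_count).
    erasure_positions = ranked[:min(max_erasures, ec_count)]

    # Steps 530/535: mark those positions as erasures and run Reed-Solomon.
    rsc = RSCodec(ec_count)
    decoded, _, _ = rsc.decode(bytes(codeword_values),
                               erase_pos=erasure_positions)
    return decoded
```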


Gray-Scale Contrast Analysis Algorithm


In one embodiment, the present system and method identifies codewords 308 with high optical ambiguity (low optical clarity) via contrast analysis of the codewords within the symbol 102.


The present system and method employ a “matrix-cell contrast analysis algorithm,” “gray-scale contrast analysis algorithm,” or simply “contrast analysis algorithm” (CAA) for short. The contrast analysis algorithm of the present system and method determines the actual gray level of each cell 306 in the symbol 102. The CAA also identifies the black/white contrast threshold for the symbol 102. The black/white contrast threshold is the brightness level above which a cell 306 is considered to be white, and below which a cell is considered to be black. The algorithm then determines the difference between the contrast level of each cell 306 and the black/white threshold. If the differential is comparatively low for one or more cells 306 in a particular codeword 308, the codeword 308 may have a decoding disadvantage.


More generally, the CAA may identify a light/dark threshold level, which is a brightness level above which a cell 306 is considered to be of a first, lighter color (for example, white); and below which a cell is considered to be of a second, darker color (for example, black).


A scanner 100 will conventionally store, in memory 245, the “color” of each cell 306, for example, a red-green-blue (RGB) value or a hue-saturation-brightness (HSB) value. The present system and method will also store, in the memory (245) of the scanner 100, an actual, measured gray-scale level for each cell 306.



FIG. 6 presents a flow-chart of an exemplary method 600 for contrast analysis according to the present system and method. Steps 605 through 645 of exemplary method 600 collectively may be considered to be one exemplary embodiment of step 515 of method 500, already discussed above. (Step 515 determines an optical clarity for each codeword 308 in the symbol 102.) Step 650 of exemplary method 600 may be considered to be one exemplary embodiment of step 520 of method 500, that is, ranking the codewords for optical clarity.


Where exemplary method 500 was directed generally to determining and ranking codewords 308 by optical clarity, exemplary method 600 employs a particular contrast-analysis approach to determine and rank optical clarity. The steps of exemplary method 600 are generally performed via the processor(s) 240, memory 245, and other components of the symbol reader 100.


In step 605, the symbol reader 100 determines a local black/white contrast threshold (BWCT). The black/white contrast threshold (BWCT), as described above, is a reflectance level above which a cell 306 is considered white, and below which a cell 306 is considered black. This is typically determined by (i) identifying the reflectance of all the cells 306 in the symbol; (ii) identifying the highest reflectance value and the lowest reflectance value; and (iii) identifying a middle-value, such as the mean or the median, and using the middle-value as the BWCT. The present system and method refine this by employing a local BWCT for each cell 306. In an exemplary embodiment, a local BWCT for a given cell 306 may be determined by considering only those other cells local to the given cell 306, and then identifying the mean or median reflectance among those cells. In an embodiment, the number of local cells used to determine the local BWCT may be twenty (20). In an alternative embodiment the number of local cells used to determine the BWCT for a given cell may be higher or lower than twenty (20).
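A sketch of the local threshold computation in step 605 follows, assuming the per-cell reflectances are held in a 2D array and taking the median of the nearest cells. The neighborhood-selection details are assumptions of this sketch; the patent specifies only the local-cell count (e.g., twenty):

```python
import numpy as np

def local_bwct(reflectances, row, col, n_local=20):
    """Local black/white contrast threshold (BWCT) for the cell at (row, col):
    the median reflectance of the n_local cells nearest to that cell."""
    rows, cols = reflectances.shape
    coords = [(r, c) for r in range(rows) for c in range(cols)
              if (r, c) != (row, col)]
    # Sort the other cells by squared distance to the given cell.
    coords.sort(key=lambda rc: (rc[0] - row) ** 2 + (rc[1] - col) ** 2)
    nearest = coords[:n_local]
    return float(np.median([reflectances[r, c] for r, c in nearest]))
```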


In step 610, the method 600 selects a particular codeword 308 (as specified in the appropriate standards for the size and shape of the symbol 102), and identifies the contrast level (the grayscale level) of each cell in the codeword.


In step 615, the method 600 determines, for the particular codeword at hand, a bar cell (306.B) with a contrast value closest to the BWCT; and a space cell (306.S) with a contrast value closest to the BWCT; and then stores these two cell contrast values in a codeword contrast values array in memory (see FIG. 7 below for an example). The contrast values may be labeled as RSmin for the space cell (306.S) closest to the BWCT, and RBmax for the bar cell (306.B) closest to the BWCT. An equivalent phrasing: RSmin is the smallest space cell reflectance (darkest), and RBmax is the largest bar cell reflectance (lightest).


Steps 610 and 615 are repeated for all codewords 308 in the symbol 102. This results in a listing of RSmin and RBmax for each codeword 308 in the symbol.
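Steps 610 and 615 might be sketched as follows, assuming each codeword is represented by its cells' reflectances together with each cell's local threshold; classifying a cell as bar or space against its local threshold is an assumption of this sketch:

```python
def codeword_extremes(cell_reflectances, local_thresholds):
    """For one codeword 308, return (RSmin, RBmax):
    RSmin -- smallest (darkest) reflectance among its space (white) cells
    RBmax -- largest (lightest) reflectance among its bar (black) cells
    Cells are classified as space or bar against their local threshold."""
    spaces = [r for r, t in zip(cell_reflectances, local_thresholds) if r >= t]
    bars   = [r for r, t in zip(cell_reflectances, local_thresholds) if r < t]
    rs_min = min(spaces) if spaces else None  # codeword may lack space cells
    rb_max = max(bars) if bars else None      # ... or lack bar cells
    return rs_min, rb_max
```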


In step 620, the method 600 determines the erasure gap for spaces between RSmin and the local black/white contrast threshold for each codeword. (ESgap=RSmin−localBWCT)


In step 625, the method 600 determines the erasure gap for bars between RBmax and the local black/white contrast threshold for each codeword. (EBgap=localBWCT−RBmax)


In step 630, the method 600 identifies the largest space cell value in the entire array, that is the largest value for RSmin. This value, which may be labeled as RSmm, is used for normalization in the following step.


In step 635, the method 600 divides all space gap value entries (ESgap) by the largest space cell (“white cell”) value in the array, RSmm, generating an Sgap % value for each codeword. (Sgap %=ESgap/RSmm)


In step 640, the method 600 identifies the largest bar cell (“black cell”) value in the entire array, that is, the largest value for RBmax. This value, which may be labeled as RBmm, is used for normalization in the following step.


In step 645, the method 600 divides all bar gap value entries (EBgap) by the largest bar cell value in the array, RBmm, generating a Bgap % value for each codeword. (Bgap %=EBgap/RBmm)


Sgap % and Bgap %, then, are the percentage relative closeness of the deviant cell to the black/white contrast threshold. These percentage values, Sgap % and Bgap %, may also be referred to as the minimum interior contrast levels 702 for each codeword 308. The minimum interior contrast levels 702 are a measure of the optical clarity of the codewords 308 in the symbol 102. Specifically: those codewords 308 with the lowest values for Sgap % and/or the lowest values for Bgap % have the highest optical ambiguity (and therefore the least or worst optical clarity).
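Steps 620 through 650 can be sketched as below. The variable names follow the text (ESgap, EBgap, Sgap %, Bgap %, RSmm, RBmm); the handling of codewords that lack one cell type is an assumption of this sketch:

```python
def rank_erasure_candidates(rs_min, rb_max, local_bwcts, n_erasures):
    """rs_min, rb_max, local_bwcts: one value per codeword (see steps 610-615).
    Returns the indices of the n_erasures most optically ambiguous codewords."""
    # Steps 620/625: erasure gaps between the deviant cells and the threshold.
    es_gap = [s - t if s is not None else None for s, t in zip(rs_min, local_bwcts)]
    eb_gap = [t - b if b is not None else None for b, t in zip(rb_max, local_bwcts)]

    # Steps 630-645: normalize by the largest RSmin / RBmax in the array.
    rs_mm = max((s for s in rs_min if s is not None), default=1.0)
    rb_mm = max((b for b in rb_max if b is not None), default=1.0)
    s_gap_pct = [g / rs_mm if g is not None else float("inf") for g in es_gap]
    b_gap_pct = [g / rb_mm if g is not None else float("inf") for g in eb_gap]

    # A codeword's clarity is its worst (smallest) normalized gap.
    clarity = [min(s, b) for s, b in zip(s_gap_pct, b_gap_pct)]

    # Step 650: the lowest-gap codewords become the erasure candidates.
    order = sorted(range(len(clarity)), key=lambda i: clarity[i])
    return order[:n_erasures]
```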


As noted above, the preceding steps 605 through 645 of method 600 may collectively be considered to be one exemplary embodiment of step 515 of method 500, already discussed above, that is, determining an optical clarity for each codeword 308 in the symbol 102.


In step 650, and based on the Sgap % and Bgap % values determined in steps 635 and 645, the method 600 ranks the lowest gap percent values up to the number of error correction codewords to be used as erasures. Step 650 of exemplary method 600 may be considered to be one exemplary embodiment of step 520 of method 500, that is, ranking the codewords for optical clarity/ambiguity.


These lowest ranked, least clear codewords are the codewords with the lowest optical clarity (or highest ambiguity), which are then used as erasures in the Reed-Solomon equations (step 530 of method 500).


Sample Applications



FIG. 7 presents an exemplary codeword contrast values array 700 of the kind which may be constructed according to exemplary method 600, above. The array contains actual codeword measurements for the symbol image 102.D1/102.D2 of FIG. 4, above. In array 700, CW is the codeword number; and, as per discussion above:

    • RSmin is the smallest (darkest) space cell reflectance for each codeword 308;
    • RBmax is the largest (lightest) bar cell reflectance for each codeword 308;
    • ESgap is the erasure space gap, calculated as RSmin minus the threshold (75 hex in this case);
    • EBgap is the threshold minus RBmax for the bar cells;
    • Sgap % and Bgap % are the relative closeness of the deviant cell to the black/white threshold in percent, also referred to as the minimum interior contrast levels 702; and
    • Rank is a listing of the worst 12 codewords (those with the smallest gap percentage) in the symbol 102.



FIG. 8 illustrates an exemplary case-analysis demonstrating how poor cell contrast can identify a majority of flawed codewords 308.F. The damaged symbols shown in the figure are the same as the damaged symbol pictured and illustrated in FIG. 4, above. In FIG. 8, numbered codeword locations 805 are identified (by a standardized numbering scheme) for those codewords which are flawed or damaged.



Symbol 102.D2, reproduced here from FIG. 4 for convenience, is the damaged symbol as it was interpreted by an actual scanner 100 in the field.


Symbol 102.D3 is the same symbol as it was interpreted according to the exemplary contrast analysis algorithms discussed above in conjunction with FIG. 5 and FIG. 6.


As can be seen in FIG. 8, there are two codewords 815 which were assessed as being in error by the present system and method, but which were actually read correctly by the scanner 100. Of the latter codewords, one was in the damaged region 402 (codeword 27) and another was a codeword where there is a scratch through the dark cell, making it lighter (codeword 22).


As can also be seen from FIG. 8, there is one codeword 810 which was actually read in error by the scanner 100, but was not flagged by the gray-scale contrast analysis algorithm of the present system and method.


All the remaining, identified codewords 308 (a total of ten) which were flagged as being in error based on contrast analysis are codewords which were, in fact, read in error by the scanner 100.


The codeword that the analysis missed (codeword 27) is easily decoded using the 6 error correction codewords still remaining. This is an example of a symbol that was far from being decodable using standard decoding methods, yet using a gray-scale contrast analysis algorithm, the symbol can sustain this and slightly more damage and still be decodable.


Further Applications


The example shown (in FIGS. 4 and 8) clearly benefits from the gray-scale contrast analysis decoding since the damage to the symbol 102.D1/102.D2 is contrast based. However, the present system and method will also work with other types of damage such as matrix distortion, uniform dark or light damage and for DPM cell variation. When these types of distortion are present, there will be many sample points that rest on cell boundaries which will be recorded as reduced contrast values. As long as the matrix distortion (such as wrinkling) is localized or the dark/light damage is less than approximately one-third of the symbol, the present system and method will substantially improve decoding rates on all types of problem symbols 102.D.


SUMMARY

Improved matrix symbology decode performance is possible when there is some knowledge of potentially damaged codewords 308. One means of achieving improved decode performance is by measuring the gray-level contrast variation of every codeword, and marking those with contrast values that are closest to the black/white threshold as erasures. Using gray-level information and using erasure correction in matrix symbologies will allow successful decoding far into a damage region where current product decoding fails.


  • U.S. Patent Application Publication No. 2014/0270229;
  • U.S. Patent Application Publication No. 2014/0278387;
  • U.S. Patent Application Publication No. 2014/0278391;
  • U.S. Patent Application Publication No. 2014/0282210;
  • U.S. Patent Application Publication No. 2014/0284384;
  • U.S. Patent Application Publication No. 2014/0288933;
  • U.S. Patent Application Publication No. 2014/0297058;
  • U.S. Patent Application Publication No. 2014/0299665;
  • U.S. Patent Application Publication No. 2014/0312121;
  • U.S. Patent Application Publication No. 2014/0319220;
  • U.S. Patent Application Publication No. 2014/0319221;
  • U.S. Patent Application Publication No. 2014/0326787;
  • U.S. Patent Application Publication No. 2014/0332590;
  • U.S. Patent Application Publication No. 2014/0344943;
  • U.S. Patent Application Publication No. 2014/0346233;
  • U.S. Patent Application Publication No. 2014/0351317;
  • U.S. Patent Application Publication No. 2014/0353373;
  • U.S. Patent Application Publication No. 2014/0361073;
  • U.S. Patent Application Publication No. 2014/0361082;
  • U.S. Patent Application Publication No. 2014/0362184;
  • U.S. Patent Application Publication No. 2014/0363015;
  • U.S. Patent Application Publication No. 2014/0369511;
  • U.S. Patent Application Publication No. 2014/0374483;
  • U.S. Patent Application Publication No. 2014/0374485;
  • U.S. Patent Application Publication No. 2015/0001301;
  • U.S. Patent Application Publication No. 2015/0001304;
  • U.S. Patent Application Publication No. 2015/0003673;
  • U.S. Patent Application Publication No. 2015/0009338;
  • U.S. Patent Application Publication No. 2015/0009610;
  • U.S. Patent Application Publication No. 2015/0014416;
  • U.S. Patent Application Publication No. 2015/0021397;
  • U.S. Patent Application Publication No. 2015/0028102;
  • U.S. Patent Application Publication No. 2015/0028103;
  • U.S. Patent Application Publication No. 2015/0028104;
  • U.S. Patent Application Publication No. 2015/0029002;
  • U.S. Patent Application Publication No. 2015/0032709;
  • U.S. Patent Application Publication No. 2015/0039309;
  • U.S. Patent Application Publication No. 2015/0039878;
  • U.S. Patent Application Publication No. 2015/0040378;
  • U.S. Patent Application Publication No. 2015/0048168;
  • U.S. Patent Application Publication No. 2015/0049347;
  • U.S. Patent Application Publication No. 2015/0051992;
  • U.S. Patent Application Publication No. 2015/0053766;
  • U.S. Patent Application Publication No. 2015/0053768;
  • U.S. Patent Application Publication No. 2015/0053769;
  • U.S. Patent Application Publication No. 2015/0060544;
  • U.S. Patent Application Publication No. 2015/0062366;
  • U.S. Patent Application Publication No. 2015/0063215;
  • U.S. Patent Application Publication No. 2015/0063676;
  • U.S. Patent Application Publication No. 2015/0069130;
  • U.S. Patent Application Publication No. 2015/0071819;
  • U.S. Patent Application Publication No. 2015/0083800;
  • U.S. Patent Application Publication No. 2015/0086114;
  • U.S. Patent Application Publication No. 2015/0088522;
  • U.S. Patent Application Publication No. 2015/0096872;
  • U.S. Patent Application Publication No. 2015/0099557;
  • U.S. Patent Application Publication No. 2015/0100196;
  • U.S. Patent Application Publication No. 2015/0102109;
  • U.S. Patent Application Publication No. 2015/0115035;
  • U.S. Patent Application Publication No. 2015/0127791;
  • U.S. Patent Application Publication No. 2015/0128116;
  • U.S. Patent Application Publication No. 2015/0129659;
  • U.S. Patent Application Publication No. 2015/0133047;
  • U.S. Patent Application Publication No. 2015/0134470;
  • U.S. Patent Application Publication No. 2015/0136851;
  • U.S. Patent Application Publication No. 2015/0136854;
  • U.S. Patent Application Publication No. 2015/0142492;
  • U.S. Patent Application Publication No. 2015/0144692;
  • U.S. Patent Application Publication No. 2015/0144698;
  • U.S. Patent Application Publication No. 2015/0144701;
  • U.S. Patent Application Publication No. 2015/0149946;
  • U.S. Patent Application Publication No. 2015/0161429;
  • U.S. Patent Application Publication No. 2015/0169925;
  • U.S. Patent Application Publication No. 2015/0169929;
  • U.S. Patent Application Publication No. 2015/0178523;
  • U.S. Patent Application Publication No. 2015/0178534;
  • U.S. Patent Application Publication No. 2015/0178535;
  • U.S. Patent Application Publication No. 2015/0178536;
  • U.S. Patent Application Publication No. 2015/0178537;
  • U.S. Patent Application Publication No. 2015/0181093;
  • U.S. Patent Application Publication No. 2015/0181109;
  • U.S. patent application Ser. No. 13/367,978 for a Laser Scanning Module Employing an Elastomeric U-Hinge Based Laser Scanning Assembly, filed Feb. 7, 2012 (Feng et al.);
  • U.S. patent application Ser. No. 29/458,405 for an Electronic Device, filed Jun. 19, 2013 (Fitch et al.);
  • U.S. patent application Ser. No. 29/459,620 for an Electronic Device Enclosure, filed Jul. 2, 2013 (London et al.);
  • U.S. patent application Ser. No. 29/468,118 for an Electronic Device Case, filed Sep. 26, 2013 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/150,393 for Indicia-reader Having Unitary Construction Scanner, filed Jan. 8, 2014 (Colavito et al.);
  • U.S. patent application Ser. No. 14/200,405 for Indicia Reader for Size-Limited Applications filed Mar. 7, 2014 (Feng et al.);
  • U.S. patent application Ser. No. 14/231,898 for Hand-Mounted Indicia-Reading Device with Finger Motion Triggering filed Apr. 1, 2014 (Van Horn et al.);
  • U.S. patent application Ser. No. 29/486,759 for an Imaging Terminal, filed Apr. 2, 2014 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/257,364 for Docking System and Method Using Near Field Communication filed Apr. 21, 2014 (Showering);
  • U.S. patent application Ser. No. 14/264,173 for Autofocus Lens System for Indicia Readers filed Apr. 29, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/277,337 for MULTIPURPOSE OPTICAL READER, filed May 14, 2014 (Jovanovski et al.);
  • U.S. patent application Ser. No. 14/283,282 for TERMINAL HAVING ILLUMINATION AND FOCUS CONTROL filed May 21, 2014 (Liu et al.);
  • U.S. patent application Ser. No. 14/327,827 for a MOBILE-PHONE ADAPTER FOR ELECTRONIC TRANSACTIONS, filed Jul. 10, 2014 (Hejl);
  • U.S. patent application Ser. No. 14/334,934 for a SYSTEM AND METHOD FOR INDICIA VERIFICATION, filed Jul. 18, 2014 (Hejl);
  • U.S. patent application Ser. No. 14/339,708 for LASER SCANNING CODE SYMBOL READING SYSTEM, filed Jul. 24, 2014 (Xian et al.);
  • U.S. patent application Ser. No. 14/340,627 for an AXIALLY REINFORCED FLEXIBLE SCAN ELEMENT, filed Jul. 25, 2014 (Rueblinger et al.);
  • U.S. patent application Ser. No. 14/446,391 for MULTIFUNCTION POINT OF SALE APPARATUS WITH OPTICAL SIGNATURE CAPTURE filed Jul. 30, 2014 (Good et al.);
  • U.S. patent application Ser. No. 14/452,697 for INTERACTIVE INDICIA READER, filed Aug. 6, 2014 (Todeschini);
  • U.S. patent application Ser. No. 14/453,019 for DIMENSIONING SYSTEM WITH GUIDED ALIGNMENT, filed Aug. 6, 2014 (Li et al.);
  • U.S. patent application Ser. No. 14/462,801 for MOBILE COMPUTING DEVICE WITH DATA COGNITION SOFTWARE, filed on Aug. 19, 2014 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/483,056 for VARIABLE DEPTH OF FIELD BARCODE SCANNER filed Sep. 10, 2014 (McCloskey et al.);
  • U.S. patent application Ser. No. 14/513,808 for IDENTIFYING INVENTORY ITEMS IN A STORAGE FACILITY filed Oct. 14, 2014 (Singel et al.);
  • U.S. patent application Ser. No. 14/519,195 for HANDHELD DIMENSIONING SYSTEM WITH FEEDBACK filed Oct. 21, 2014 (Laffargue et al.);
  • U.S. patent application Ser. No. 14/519,179 for DIMENSIONING SYSTEM WITH MULTIPATH INTERFERENCE MITIGATION filed Oct. 21, 2014 (Thuries et al.);
  • U.S. patent application Ser. No. 14/519,211 for SYSTEM AND METHOD FOR DIMENSIONING filed Oct. 21, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/519,233 for HANDHELD DIMENSIONER WITH DATA-QUALITY INDICATION filed Oct. 21, 2014 (Laffargue et al.);
  • U.S. patent application Ser. No. 14/519,249 for HANDHELD DIMENSIONING SYSTEM WITH MEASUREMENT-CONFORMANCE FEEDBACK filed Oct. 21, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/527,191 for METHOD AND SYSTEM FOR RECOGNIZING SPEECH USING WILDCARDS IN AN EXPECTED RESPONSE filed Oct. 29, 2014 (Braho et al.);
  • U.S. patent application Ser. No. 14/529,563 for ADAPTABLE INTERFACE FOR A MOBILE COMPUTING DEVICE filed Oct. 31, 2014 (Schoon et al.);
  • U.S. patent application Ser. No. 14/529,857 for BARCODE READER WITH SECURITY FEATURES filed Oct. 31, 2014 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/398,542 for PORTABLE ELECTRONIC DEVICES HAVING A SEPARATE LOCATION TRIGGER UNIT FOR USE IN CONTROLLING AN APPLICATION UNIT filed Nov. 3, 2014 (Bian et al.);
  • U.S. patent application Ser. No. 14/531,154 for DIRECTING AN INSPECTOR THROUGH AN INSPECTION filed Nov. 3, 2014 (Miller et al.);
  • U.S. patent application Ser. No. 14/533,319 for BARCODE SCANNING SYSTEM USING WEARABLE DEVICE WITH EMBEDDED CAMERA filed Nov. 5, 2014 (Todeschini);
  • U.S. patent application Ser. No. 14/535,764 for CONCATENATED EXPECTED RESPONSES FOR SPEECH RECOGNITION filed Nov. 7, 2014 (Braho et al.);
  • U.S. patent application Ser. No. 14/568,305 for AUTO-CONTRAST VIEWFINDER FOR AN INDICIA READER filed Dec. 12, 2014 (Todeschini);
  • U.S. patent application Ser. No. 14/573,022 for DYNAMIC DIAGNOSTIC INDICATOR GENERATION filed Dec. 17, 2014 (Goldsmith);
  • U.S. patent application Ser. No. 14/578,627 for SAFETY SYSTEM AND METHOD filed Dec. 22, 2014 (Ackley et al.);
  • U.S. patent application Ser. No. 14/580,262 for MEDIA GATE FOR THERMAL TRANSFER PRINTERS filed Dec. 23, 2014 (Bowles);
  • U.S. patent application Ser. No. 14/590,024 for SHELVING AND PACKAGE LOCATING SYSTEMS FOR DELIVERY VEHICLES filed Jan. 6, 2015 (Payne);
  • U.S. patent application Ser. No. 14/596,757 for SYSTEM AND METHOD FOR DETECTING BARCODE PRINTING ERRORS filed Jan. 14, 2015 (Ackley);
  • U.S. patent application Ser. No. 14/416,147 for OPTICAL READING APPARATUS HAVING VARIABLE SETTINGS filed Jan. 21, 2015 (Chen et al.);
  • U.S. patent application Ser. No. 14/614,706 for DEVICE FOR SUPPORTING AN ELECTRONIC TOOL ON A USER'S HAND filed Feb. 5, 2015 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/614,796 for CARGO APPORTIONMENT TECHNIQUES filed Feb. 5, 2015 (Morton et al.);
  • U.S. patent application Ser. No. 29/516,892 for TABLE COMPUTER filed Feb. 6, 2015 (Bidwell et al.);
  • U.S. patent application Ser. No. 14/619,093 for METHODS FOR TRAINING A SPEECH RECOGNITION SYSTEM filed Feb. 11, 2015 (Pecorari);
  • U.S. patent application Ser. No. 14/628,708 for DEVICE, SYSTEM, AND METHOD FOR DETERMINING THE STATUS OF CHECKOUT LANES filed Feb. 23, 2015 (Todeschini);
  • U.S. patent application Ser. No. 14/630,841 for TERMINAL INCLUDING IMAGING ASSEMBLY filed Feb. 25, 2015 (Gomez et al.);
  • U.S. patent application Ser. No. 14/635,346 for SYSTEM AND METHOD FOR RELIABLE STORE-AND-FORWARD DATA HANDLING BY ENCODED INFORMATION READING TERMINALS filed Mar. 2, 2015 (Sevier);
  • U.S. patent application Ser. No. 29/519,017 for SCANNER filed Mar. 2, 2015 (Zhou et al.);
  • U.S. patent application Ser. No. 14/405,278 for DESIGN PATTERN FOR SECURE STORE filed Mar. 9, 2015 (Zhu et al.);
  • U.S. patent application Ser. No. 14/660,970 for DECODABLE INDICIA READING TERMINAL WITH COMBINED ILLUMINATION filed Mar. 18, 2015 (Kearney et al.);
  • U.S. patent application Ser. No. 14/661,013 for REPROGRAMMING SYSTEM AND METHOD FOR DEVICES INCLUDING PROGRAMMING SYMBOL filed Mar. 18, 2015 (Soule et al.);
  • U.S. patent application Ser. No. 14/662,922 for MULTIFUNCTION POINT OF SALE SYSTEM filed Mar. 19, 2015 (Van Horn et al.);
  • U.S. patent application Ser. No. 14/663,638 for VEHICLE MOUNT COMPUTER WITH CONFIGURABLE IGNITION SWITCH BEHAVIOR filed Mar. 20, 2015 (Davis et al.);
  • U.S. patent application Ser. No. 14/664,063 for METHOD AND APPLICATION FOR SCANNING A BARCODE WITH A SMART DEVICE WHILE CONTINUOUSLY RUNNING AND DISPLAYING AN APPLICATION ON THE SMART DEVICE DISPLAY filed Mar. 20, 2015 (Todeschini);
  • U.S. patent application Ser. No. 14/669,280 for TRANSFORMING COMPONENTS OF A WEB PAGE TO VOICE PROMPTS filed Mar. 26, 2015 (Funyak et al.);
  • U.S. patent application Ser. No. 14/674,329 for AIMER FOR BARCODE SCANNING filed Mar. 31, 2015 (Bidwell);
  • U.S. patent application Ser. No. 14/676,109 for INDICIA READER filed Apr. 1, 2015 (Huck);
  • U.S. patent application Ser. No. 14/676,327 for DEVICE MANAGEMENT PROXY FOR SECURE DEVICES filed Apr. 1, 2015 (Yeakley et al.);
  • U.S. patent application Ser. No. 14/676,898 for NAVIGATION SYSTEM CONFIGURED TO INTEGRATE MOTION SENSING DEVICE INPUTS filed Apr. 2, 2015 (Showering);
  • U.S. patent application Ser. No. 14/679,275 for DIMENSIONING SYSTEM CALIBRATION SYSTEMS AND METHODS filed Apr. 6, 2015 (Laffargue et al.);
  • U.S. patent application Ser. No. 29/523,098 for HANDLE FOR A TABLET COMPUTER filed Apr. 7, 2015 (Bidwell et al.);
  • U.S. patent application Ser. No. 14/682,615 for SYSTEM AND METHOD FOR POWER MANAGEMENT OF MOBILE DEVICES filed Apr. 9, 2015 (Murawski et al.);
  • U.S. patent application Ser. No. 14/686,822 for MULTIPLE PLATFORM SUPPORT SYSTEM AND METHOD filed Apr. 15, 2015 (Qu et al.);
  • U.S. patent application Ser. No. 14/687,289 for SYSTEM FOR COMMUNICATION VIA A PERIPHERAL HUB filed Apr. 15, 2015 (Kohtz et al.);
  • U.S. patent application Ser. No. 29/524,186 for SCANNER filed Apr. 17, 2015 (Zhou et al.);
  • U.S. patent application Ser. No. 14/695,364 for MEDICATION MANAGEMENT SYSTEM filed Apr. 24, 2015 (Sewell et al.);
  • U.S. patent application Ser. No. 14/695,923 for SECURE UNATTENDED NETWORK AUTHENTICATION filed Apr. 24, 2015 (Kubler et al.);
  • U.S. patent application Ser. No. 29/525,068 for TABLET COMPUTER WITH REMOVABLE SCANNING DEVICE filed Apr. 27, 2015 (Schulte et al.);
  • U.S. patent application Ser. No. 14/699,436 for SYMBOL READING SYSTEM HAVING PREDICTIVE DIAGNOSTICS filed Apr. 29, 2015 (Nahill et al.);
  • U.S. patent application Ser. No. 14/702,110 for SYSTEM AND METHOD FOR REGULATING BARCODE DATA INJECTION INTO A RUNNING APPLICATION ON A SMART DEVICE filed May 1, 2015 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/702,979 for TRACKING BATTERY CONDITIONS filed May 4, 2015 (Young et al.);
  • U.S. patent application Ser. No. 14/704,050 for INTERMEDIATE LINEAR POSITIONING filed May 5, 2015 (Charpentier et al.);
  • U.S. patent application Ser. No. 14/705,012 for HANDS-FREE HUMAN MACHINE INTERFACE RESPONSIVE TO A DRIVER OF A VEHICLE filed May 6, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 14/705,407 for METHOD AND SYSTEM TO PROTECT SOFTWARE-BASED NETWORK-CONNECTED DEVICES FROM ADVANCED PERSISTENT THREAT filed May 6, 2015 (Hussey et al.);
  • U.S. patent application Ser. No. 14/707,037 for SYSTEM AND METHOD FOR DISPLAY OF INFORMATION USING A VEHICLE-MOUNT COMPUTER filed May 8, 2015 (Chamberlin);
  • U.S. patent application Ser. No. 14/707,123 for APPLICATION INDEPENDENT DEX/UCS INTERFACE filed May 8, 2015 (Pape);
  • U.S. patent application Ser. No. 14/707,492 for METHOD AND APPARATUS FOR READING OPTICAL INDICIA USING A PLURALITY OF DATA SOURCES filed May 8, 2015 (Smith et al.);
  • U.S. patent application Ser. No. 14/710,666 for PRE-PAID USAGE SYSTEM FOR ENCODED INFORMATION READING TERMINALS filed May 13, 2015 (Smith);
  • U.S. patent application Ser. No. 29/526,918 for CHARGING BASE filed May 14, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 14/715,672 for AUGUMENTED REALITY ENABLED HAZARD DISPLAY filed May 19, 2015 (Venkatesha et al.);
  • U.S. patent application Ser. No. 14/715,916 for EVALUATING IMAGE VALUES filed May 19, 2015 (Ackley);
  • U.S. patent application Ser. No. 14/722,608 for INTERACTIVE USER INTERFACE FOR CAPTURING A DOCUMENT IN AN IMAGE SIGNAL filed May 27, 2015 (Showering et al.);
  • U.S. patent application Ser. No. 29/528,165 for IN-COUNTER BARCODE SCANNER filed May 27, 2015 (Oberpriller et al.);
  • U.S. patent application Ser. No. 14/724,134 for ELECTRONIC DEVICE WITH WIRELESS PATH SELECTION CAPABILITY filed May 28, 2015 (Wang et al.);
  • U.S. patent application Ser. No. 14/724,849 for METHOD OF PROGRAMMING THE DEFAULT CABLE INTERFACE SOFTWARE IN AN INDICIA READING DEVICE filed May 29, 2015 (Barten);
  • U.S. patent application Ser. No. 14/724,908 for IMAGING APPARATUS HAVING IMAGING ASSEMBLY filed May 29, 2015 (Barber et al.);
  • U.S. patent application Ser. No. 14/725,352 for APPARATUS AND METHODS FOR MONITORING ONE OR MORE PORTABLE DATA TERMINALS (Caballero et al.);
  • U.S. patent application Ser. No. 29/528,590 for ELECTRONIC DEVICE filed May 29, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 29/528,890 for MOBILE COMPUTER HOUSING filed Jun. 2, 2015 (Fitch et al.);
  • U.S. patent application Ser. No. 14/728,397 for DEVICE MANAGEMENT USING VIRTUAL INTERFACES CROSS-REFERENCE TO RELATED APPLICATIONS filed Jun. 2, 2015 (Caballero);
  • U.S. patent application Ser. No. 14/732,870 for DATA COLLECTION MODULE AND SYSTEM filed Jun. 8, 2015 (Powilleit);
  • U.S. patent application Ser. No. 29/529,441 for INDICIA READING DEVICE filed Jun. 8, 2015 (Zhou et al.);
  • U.S. patent application Ser. No. 14/735,717 for INDICIA-READING SYSTEMS HAVING AN INTERFACE WITH A USER'S NERVOUS SYSTEM filed Jun. 10, 2015 (Todeschini);
  • U.S. patent application Ser. No. 14/738,038 for METHOD OF AND SYSTEM FOR DETECTING OBJECT WEIGHING INTERFERENCES filed Jun. 12, 2015 (Amundsen et al.);
  • U.S. patent application Ser. No. 14/740,320 for TACTILE SWITCH FOR A MOBILE ELECTRONIC DEVICE filed Jun. 16, 2015 (Bandringa);
  • U.S. patent application Ser. No. 14/740,373 for CALIBRATING A VOLUME DIMENSIONER filed Jun. 16, 2015 (Ackley et al.);
  • U.S. patent application Ser. No. 14/742,818 for INDICIA READING SYSTEM EMPLOYING DIGITAL GAIN CONTROL filed Jun. 18, 2015 (Xian et al.);
  • U.S. patent application Ser. No. 14/743,257 for WIRELESS MESH POINT PORTABLE DATA TERMINAL filed Jun. 18, 2015 (Wang et al.);
  • U.S. patent application Ser. No. 29/530,600 for CYCLONE filed Jun. 18, 2015 (Vargo et al.);
  • U.S. patent application Ser. No. 14/744,633 for IMAGING APPARATUS COMPRISING IMAGE SENSOR ARRAY HAVING SHARED GLOBAL SHUTTER CIRCUITRY filed Jun. 19, 2015 (Wang);
  • U.S. patent application Ser. No. 14/744,836 for CLOUD-BASED SYSTEM FOR READING OF DECODABLE INDICIA filed Jun. 19, 2015 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/745,006 for SELECTIVE OUTPUT OF DECODED MESSAGE DATA filed Jun. 19, 2015 (Todeschini et al.);
  • U.S. patent application Ser. No. 14/747,197 for OPTICAL PATTERN PROJECTOR filed Jun. 23, 2015 (Thuries et al.);
  • U.S. patent application Ser. No. 14/747,490 for DUAL-PROJECTOR THREE-DIMENSIONAL SCANNER filed Jun. 23, 2015 (Jovanovski et al.); and
  • U.S. patent application Ser. No. 14/748,446 for CORDLESS INDICIA READER WITH A MULTIFUNCTION COIL FOR WIRELESS CHARGING AND EAS DEACTIVATION, filed Jun. 24, 2015 (Xie et al.).


In the specification and/or figures, typical embodiments of the invention have been disclosed. The present invention is not limited to such exemplary embodiments. The use of the term “and/or” includes any and all combinations of one or more of the associated listed items. The figures are schematic representations and so are not necessarily drawn to scale. Unless otherwise noted, specific terms have been used in a generic and descriptive sense and not for purposes of limitation.


The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flow charts, schematics, exemplary data structures, and examples. Insofar as such block diagrams, flow charts, schematics, exemplary data structures, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flow charts, schematics, exemplary data structures, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof.


In one embodiment, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs). However, those skilled in the art will recognize that the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more controllers (e.g., microcontrollers), as one or more programs running on one or more processors (e.g., microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of this disclosure.


In addition, those skilled in the art will appreciate that the control mechanisms taught herein are capable of being distributed as a program product in a variety of tangible forms, and that an illustrative embodiment applies equally regardless of the particular type of tangible instruction-bearing media used to actually carry out the distribution. Examples of tangible instruction-bearing media include, but are not limited to, the following: recordable-type media such as floppy disks, hard disk drives, CD-ROMs, digital tape, flash drives, and computer memory.


The various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the present systems and methods in light of the above detailed description. In general, in the following claims, the terms used should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims, but should be construed to include all machine-readable symbol scanning and processing systems and methods that operate in accordance with the claims. Accordingly, the invention is not limited by the disclosure, but instead its scope is to be determined entirely by the following claims.

Claims
  • 1. A method of error correction of a two-dimensional (2D) symbol, the method comprising:
    reading, by a hardware processor, a plurality of codewords in the 2D symbol, wherein each of the plurality of codewords is associated with a number that is indicative of a location of a codeword within the 2D symbol;
    identifying, by the hardware processor, an optically ambiguous codeword of the plurality of codewords in the 2D symbol, wherein the optically ambiguous codeword corresponds to a codeword having the lowest minimum interior contrast level amongst the minimum interior contrast levels of the other codewords in the plurality of codewords;
    determining, by the hardware processor, the location of the optically ambiguous codeword based on the number associated with the optically ambiguous codeword; and
    correcting, by the hardware processor, errors in the optically ambiguous codeword based on the location of the optically ambiguous codeword and an erroneous decoded value associated with the optically ambiguous codeword.
  • 2. The method of claim 1, wherein each of the plurality of codewords comprises a plurality of cells, wherein the plurality of cells comprises a set of space cells and a set of bar cells.
  • 3. The method of claim 2 further comprising determining, within a codeword of the plurality of codewords, a space cell having a smallest space cell reflectance, and a bar cell having a largest bar cell reflectance.
  • 4. The method of claim 3 further comprising determining, by the hardware processor, the minimum interior contrast level associated with the codeword based on the smallest space cell reflectance, the largest bar cell reflectance, and a black/white contrast threshold.
  • 5. The method of claim 4 further comprising determining a black/white contrast threshold associated with each codeword based on identifying a reflectance value of each of the plurality of cells in the codeword, identifying a highest reflectance value and a lowest reflectance value amongst the reflectance values associated with the plurality of cells, and determining a mean of the highest reflectance value and the lowest reflectance value as the black/white contrast threshold.
  • 6. The method of claim 1, wherein the location of the optically ambiguous codeword is determined independent of error correction codewords included in the 2D symbol.
  • 7. The method of claim 1, wherein the errors in the optically ambiguous codeword are corrected by utilizing an error correcting codeword.
  • 8. An electronic device for performing error correction of a two-dimensional (2D) symbol, the electronic device comprising:
    an optical scanner configured to optically read a plurality of codewords in the 2D symbol, wherein each of the plurality of codewords is associated with a number that is indicative of a location of a codeword within the 2D symbol;
    a memory device configured to store an error correction equation; and
    a hardware processor configured to:
    determine a location of a codeword of the plurality of codewords in the 2D symbol which is optically ambiguous, wherein determining the location comprises:
    performing a contrast analysis on the 2D symbol, wherein performing the contrast analysis comprises:
    identifying a respective minimum interior contrast level for each codeword in the 2D symbol, and
    flagging, as optically ambiguous, the codeword with a minimum interior contrast level that satisfies a minimum interior contrast level criterion; and
    determining the location of the optically ambiguous codeword based on the number associated with the optically ambiguous codeword; and
    execute the error correction equation based on the location of the optically ambiguous codeword in the 2D symbol to correct an error in the plurality of codewords.
  • 9. The electronic device of claim 8, wherein each of the plurality of codewords comprises a plurality of cells, wherein the plurality of cells comprises a set of space cells and a set of bar cells.
  • 10. The electronic device of claim 9, wherein the hardware processor is configured to determine, within a codeword of the plurality of codewords, a space cell having a smallest space cell reflectance, and a bar cell having a largest bar cell reflectance.
  • 11. The electronic device of claim 10, wherein the hardware processor is further configured to determine the minimum interior contrast level associated with the codeword based on the smallest space cell reflectance, the largest bar cell reflectance, and a black/white contrast threshold.
  • 12. The electronic device of claim 11, wherein the hardware processor is further configured to determine a black/white contrast threshold associated with each codeword based on identifying a reflectance value of each of the plurality of cells in the codeword, identifying a highest reflectance value and a lowest reflectance value amongst the reflectance values associated with the plurality of cells, and determining a mean of the highest reflectance value and the lowest reflectance value as the black/white contrast threshold.
  • 13. The electronic device of claim 8, wherein the location of the optically ambiguous codeword is determined independent of error correction codewords included in the 2D symbol.
  • 14. The electronic device of claim 8, wherein the errors in the optically ambiguous codeword are corrected by utilizing an error correcting codeword.
  • 15. A computer readable, non-transitory storage medium storing instructions that, when executed by a hardware processor of a symbol reader, cause the hardware processor to execute a method of error correction for a two-dimensional (2D) symbol, the method comprising:
    optically reading, by an optical scanner of the symbol reader, a plurality of codewords in the 2D symbol, wherein each of the plurality of codewords is associated with a number that is indicative of a location of a codeword within the 2D symbol;
    determining, via the hardware processor, a location of a codeword which is optically ambiguous, the determining comprising:
    performing a contrast analysis on the 2D symbol, the contrast analysis comprising:
    identifying a respective minimum interior contrast level for each codeword in the 2D symbol; and
    flagging, as optically ambiguous, the codeword that has a minimum interior contrast level below a minimum interior contrast level threshold; and
    determining, via the hardware processor, the location of the optically ambiguous codeword based on the number associated with the optically ambiguous codeword; and
    executing, via the hardware processor, an error correction equation based on the location of the optically ambiguous codeword in the 2D symbol to correct errors in the read plurality of codewords.
  • 16. The computer readable, non-transitory storage medium of claim 15, wherein each codeword of the plurality of codewords comprises a plurality of cells, wherein the plurality of cells comprises a set of space cells and a set of bar cells.
  • 17. The computer readable, non-transitory storage medium of claim 16, wherein the method further comprises determining, within a codeword of the plurality of codewords, a space cell having a smallest space cell reflectance, and a bar cell having a largest bar cell reflectance.
  • 18. The computer readable, non-transitory storage medium of claim 17, wherein the method further comprises determining, by the hardware processor, the minimum interior contrast level associated with the codeword based on the smallest space cell reflectance, the largest bar cell reflectance, and a black/white contrast threshold.
  • 19. The computer readable, non-transitory storage medium of claim 17, wherein the method further comprises determining a black/white contrast threshold associated with each codeword based on identifying a reflectance value of each of the plurality of cells in the codeword, identifying a highest reflectance value and a lowest reflectance value amongst the reflectance values associated with the plurality of cells, and determining a mean of the highest reflectance value and the lowest reflectance value as the black/white contrast threshold.
  • 20. The computer readable, non-transitory storage medium of claim 15, wherein the location of the optically ambiguous codeword is determined independent of error correction codewords included in the 2D symbol.
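
By way of illustration only, and not as part of the claimed subject matter, the following Python sketch shows one plausible reading of the contrast analysis recited in claims 1-5, 8-12, and 15-19 above. The function names (contrast_threshold, min_interior_contrast, flag_erasures) are illustrative, reflectance values are assumed to be normalized to the range 0 (black) to 1 (white), and the margin-from-threshold formulation of the minimum interior contrast level is an assumption consistent with, but not dictated by, the claim language.

    from typing import List

    def contrast_threshold(reflectances: List[float]) -> float:
        # Black/white contrast threshold for one codeword: the mean of the
        # highest and lowest cell reflectance in that codeword
        # (cf. claims 5, 12, and 19).
        return (max(reflectances) + min(reflectances)) / 2.0

    def min_interior_contrast(reflectances: List[float]) -> float:
        # Assumed formulation of the "minimum interior contrast level"
        # (cf. claims 4, 11, and 18): classify each cell against the
        # threshold, then measure how close the worst cell comes to it.
        # The worst space (light) cell has the smallest space cell
        # reflectance; the worst bar (dark) cell has the largest bar cell
        # reflectance (cf. claims 3, 10, and 17).
        t = contrast_threshold(reflectances)
        spaces = [r for r in reflectances if r >= t]
        bars = [r for r in reflectances if r < t]
        space_margin = min(spaces) - t if spaces else float("inf")
        bar_margin = t - max(bars) if bars else float("inf")
        return min(space_margin, bar_margin)

    def flag_erasures(symbol: List[List[float]], n_flags: int = 1) -> List[int]:
        # Rank codeword locations by minimum interior contrast level and
        # flag the n_flags lowest as optically ambiguous (cf. claim 1);
        # the returned indices serve as the erasure locations for the
        # error correction step.
        order = sorted(range(len(symbol)),
                       key=lambda i: min_interior_contrast(symbol[i]))
        return order[:n_flags]

For example, given three eight-cell codewords in which the second has faded to mid-gray, the faded codeword is the one flagged:

    symbol = [
        [0.05, 0.92, 0.08, 0.95, 0.10, 0.90, 0.07, 0.93],  # crisp codeword
        [0.30, 0.55, 0.35, 0.60, 0.40, 0.52, 0.33, 0.58],  # faded codeword
        [0.04, 0.96, 0.06, 0.94, 0.09, 0.91, 0.05, 0.97],  # crisp codeword
    ]
    print(flag_erasures(symbol))  # prints [1]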
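
The flagged locations then serve as erasure positions for the Reed-Solomon error correction of claims 1, 6-8, and 13-15, so that each flagged codeword consumes a single error correction codeword rather than two. The sketch below passes such a location to the third-party reedsolo Python package, which is not part of this disclosure and is used purely for illustration; the tuple returned by decode() may differ across reedsolo releases.

    from reedsolo import RSCodec

    rsc = RSCodec(4)                 # 4 EC codewords: corrects 2 codewords at
                                     # unknown locations, or up to 4 erasures
    encoded = rsc.encode(b"HELLO")   # 5 data codewords followed by 4 EC codewords
    damaged = bytearray(encoded)
    damaged[1] = 0                   # simulate a misread at a flagged location
    # Recent reedsolo releases return (message, full codeword, errata positions).
    message = rsc.decode(damaged, erase_pos=[1])[0]
    assert bytes(message) == b"HELLO"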
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation of U.S. Non-Provisional application Ser. No. 16/268,721, filed Feb. 6, 2019 (and published Jun. 13, 2019 as U.S. Patent Application Publication No. 2019/0180068), which is a continuation of and claims the benefit of U.S. patent application Ser. No. 15/006,561 for Enhanced Matrix Symbol Error Correction Method filed Jan. 26, 2016 (and published Jul. 27, 2017 as U.S. Patent Application Publication No. 2017/0213064), and subsequently issued as U.S. Pat. No. 10,235,547. Each of the foregoing patent applications, patent, and patent publications is hereby incorporated by reference in its entirety.

US Referenced Citations (538)
Number Name Date Kind
5553084 Ackley et al. Sep 1996 A
6330972 Wiklof et al. Dec 2001 B1
6832725 Gardiner et al. Dec 2004 B2
7128266 Zhu et al. Oct 2006 B2
7159783 Walczyk et al. Jan 2007 B2
7413127 Ehrhart et al. Aug 2008 B2
7726575 Wang et al. Jun 2010 B2
8177108 Kincaid et al. May 2012 B1
8294969 Plesko Oct 2012 B2
8317105 Kotlarsky et al. Nov 2012 B2
8322622 Liu Dec 2012 B2
8366005 Kotlarsky et al. Feb 2013 B2
8371507 Haggerty et al. Feb 2013 B2
8376233 Horn et al. Feb 2013 B2
8381979 Franz Feb 2013 B2
8390909 Plesko Mar 2013 B2
8408464 Zhu et al. Apr 2013 B2
8408468 Van et al. Apr 2013 B2
8408469 Good Apr 2013 B2
8424768 Rueblinger et al. Apr 2013 B2
8448863 Xian et al. May 2013 B2
8457013 Essinger et al. Jun 2013 B2
8459557 Havens et al. Jun 2013 B2
8469272 Kearney Jun 2013 B2
8474712 Kearney et al. Jul 2013 B2
8479992 Kotlarsky et al. Jul 2013 B2
8490877 Kearney Jul 2013 B2
8517271 Kotlarsky et al. Aug 2013 B2
8523076 Good Sep 2013 B2
8528818 Ehrhart et al. Sep 2013 B2
8544737 Gomez et al. Oct 2013 B2
8548420 Grunow et al. Oct 2013 B2
8550335 Samek et al. Oct 2013 B2
8550354 Gannon et al. Oct 2013 B2
8550357 Kearney Oct 2013 B2
8556174 Kosecki et al. Oct 2013 B2
8556176 Van et al. Oct 2013 B2
8556177 Hussey et al. Oct 2013 B2
8559767 Barber et al. Oct 2013 B2
8561895 Gomez et al. Oct 2013 B2
8561903 Sauerwein, Jr. Oct 2013 B2
8561905 Edmonds et al. Oct 2013 B2
8565107 Pease et al. Oct 2013 B2
8571307 Li et al. Oct 2013 B2
8579200 Samek et al. Nov 2013 B2
8583924 Caballero et al. Nov 2013 B2
8584945 Wang et al. Nov 2013 B2
8587595 Wang Nov 2013 B2
8587697 Hussey et al. Nov 2013 B2
8588869 Sauerwein et al. Nov 2013 B2
8590789 Nahill et al. Nov 2013 B2
8596539 Havens et al. Dec 2013 B2
8596542 Havens et al. Dec 2013 B2
8596543 Havens et al. Dec 2013 B2
8599271 Havens et al. Dec 2013 B2
8599957 Peake et al. Dec 2013 B2
8600158 Li et al. Dec 2013 B2
8600167 Showering Dec 2013 B2
8602309 Longacre et al. Dec 2013 B2
8608053 Meier et al. Dec 2013 B2
8608071 Liu et al. Dec 2013 B2
8611309 Wang et al. Dec 2013 B2
8615487 Gomez et al. Dec 2013 B2
8621123 Caballero Dec 2013 B2
8622303 Meier et al. Jan 2014 B2
8628013 Ding Jan 2014 B2
8628015 Wang et al. Jan 2014 B2
8628016 Winegar Jan 2014 B2
8629926 Wang Jan 2014 B2
8630491 Longacre et al. Jan 2014 B2
8635309 Berthiaume et al. Jan 2014 B2
8636200 Kearney Jan 2014 B2
8636212 Nahill et al. Jan 2014 B2
8636215 Ding et al. Jan 2014 B2
8636224 Wang Jan 2014 B2
8638806 Wang et al. Jan 2014 B2
8640958 Lu et al. Feb 2014 B2
8640960 Wang et al. Feb 2014 B2
8643717 Li et al. Feb 2014 B2
8646692 Meier et al. Feb 2014 B2
8646694 Wang et al. Feb 2014 B2
8657200 Ren et al. Feb 2014 B2
8659397 Vargo et al. Feb 2014 B2
8668149 Good Mar 2014 B2
8678285 Kearney Mar 2014 B2
8678286 Smith et al. Mar 2014 B2
8682077 Longacre, Jr. Mar 2014 B1
D702237 Oberpriller et al. Apr 2014 S
8687282 Feng et al. Apr 2014 B2
8692927 Pease et al. Apr 2014 B2
8695880 Bremer et al. Apr 2014 B2
8698949 Grunow et al. Apr 2014 B2
8702000 Barber et al. Apr 2014 B2
8717494 Gannon May 2014 B2
8720783 Biss et al. May 2014 B2
8723804 Fletcher et al. May 2014 B2
8723904 Marty et al. May 2014 B2
8727223 Wang May 2014 B2
8736909 Sato et al. May 2014 B2
8740082 Wilz, Sr. Jun 2014 B2
8740085 Furlong et al. Jun 2014 B2
8746563 Hennick et al. Jun 2014 B2
8750445 Peake et al. Jun 2014 B2
8752766 Xian et al. Jun 2014 B2
8756059 Braho et al. Jun 2014 B2
8757495 Qu et al. Jun 2014 B2
8760563 Koziol et al. Jun 2014 B2
8763909 Reed et al. Jul 2014 B2
8777108 Coyle Jul 2014 B2
8777109 Oberpriller et al. Jul 2014 B2
8779898 Havens et al. Jul 2014 B2
8781520 Payne et al. Jul 2014 B2
8783573 Havens et al. Jul 2014 B2
8789757 Barten Jul 2014 B2
8789758 Hawley et al. Jul 2014 B2
8789759 Xian et al. Jul 2014 B2
8794520 Wang et al. Aug 2014 B2
8794522 Ehrhart Aug 2014 B2
8794525 Amundsen et al. Aug 2014 B2
8794526 Wang et al. Aug 2014 B2
8798367 Ellis Aug 2014 B2
8807431 Wang et al. Aug 2014 B2
8807432 Van et al. Aug 2014 B2
8820630 Qu et al. Sep 2014 B2
8822848 Meagher Sep 2014 B2
8824692 Sheerin et al. Sep 2014 B2
8824696 Braho Sep 2014 B2
8842849 Wahl et al. Sep 2014 B2
8844822 Kotlarsky et al. Sep 2014 B2
8844823 Fritz et al. Sep 2014 B2
8849019 Li et al. Sep 2014 B2
D716285 Chaney et al. Oct 2014 S
8851383 Yeakley et al. Oct 2014 B2
8854633 Laffargue et al. Oct 2014 B2
8866963 Grunow et al. Oct 2014 B2
8868421 Braho et al. Oct 2014 B2
8868519 Maloy et al. Oct 2014 B2
8868802 Barten Oct 2014 B2
8868803 Caballero Oct 2014 B2
8870074 Gannon Oct 2014 B1
8879639 Sauerwein, Jr. Nov 2014 B2
8880426 Smith Nov 2014 B2
8881983 Havens et al. Nov 2014 B2
8881987 Wang Nov 2014 B2
8903172 Smith Dec 2014 B2
8908995 Benos et al. Dec 2014 B2
8910870 Li et al. Dec 2014 B2
8910875 Ren et al. Dec 2014 B2
8914290 Hendrickson et al. Dec 2014 B2
8914788 Pettinelli et al. Dec 2014 B2
8915439 Feng et al. Dec 2014 B2
8915444 Havens et al. Dec 2014 B2
8916789 Woodburn Dec 2014 B2
8918250 Hollifield Dec 2014 B2
8918564 Caballero Dec 2014 B2
8925818 Kosecki et al. Jan 2015 B2
8939374 Jovanovski et al. Jan 2015 B2
8942480 Ellis Jan 2015 B2
8944313 Williams et al. Feb 2015 B2
8944327 Meier et al. Feb 2015 B2
8944332 Harding et al. Feb 2015 B2
8950678 Germaine et al. Feb 2015 B2
D723560 Zhou et al. Mar 2015 S
8967468 Gomez et al. Mar 2015 B2
8971346 Sevier Mar 2015 B2
8976030 Cunningham et al. Mar 2015 B2
8976368 El et al. Mar 2015 B2
8978981 Guan Mar 2015 B2
8978983 Bremer et al. Mar 2015 B2
8978984 Hennick et al. Mar 2015 B2
8985456 Zhu et al. Mar 2015 B2
8985457 Soule et al. Mar 2015 B2
8985459 Kearney et al. Mar 2015 B2
8985461 Gelay et al. Mar 2015 B2
8988578 Showering Mar 2015 B2
8988590 Gillet et al. Mar 2015 B2
8991704 Hopper et al. Mar 2015 B2
8996194 Davis et al. Mar 2015 B2
8996384 Funyak et al. Mar 2015 B2
8998091 Edmonds et al. Apr 2015 B2
9002641 Showering Apr 2015 B2
9007368 Laffargue et al. Apr 2015 B2
9010641 Qu et al. Apr 2015 B2
9015513 Murawski et al. Apr 2015 B2
9016576 Brady et al. Apr 2015 B2
D730357 Fitch et al. May 2015 S
9022288 Nahill et al. May 2015 B2
9030964 Essinger et al. May 2015 B2
9033240 Smith et al. May 2015 B2
9033242 Gillet et al. May 2015 B2
9036054 Koziol et al. May 2015 B2
9037344 Chamberlin May 2015 B2
9038911 Xian et al. May 2015 B2
9038915 Smith May 2015 B2
D730901 Oberpriller et al. Jun 2015 S
D730902 Fitch et al. Jun 2015 S
D733112 Chaney et al. Jun 2015 S
9047098 Barten Jun 2015 B2
9047359 Caballero et al. Jun 2015 B2
9047420 Caballero Jun 2015 B2
9047525 Barber et al. Jun 2015 B2
9047531 Showering et al. Jun 2015 B2
9049640 Wang et al. Jun 2015 B2
9053055 Caballero Jun 2015 B2
9053378 Hou et al. Jun 2015 B1
9053380 Xian et al. Jun 2015 B2
9057641 Amundsen et al. Jun 2015 B2
9058526 Powilleit Jun 2015 B2
9064165 Havens et al. Jun 2015 B2
9064167 Xian et al. Jun 2015 B2
9064168 Todeschini et al. Jun 2015 B2
9064254 Todeschini et al. Jun 2015 B2
9066032 Wang Jun 2015 B2
9070032 Corcoran Jun 2015 B2
D734339 Zhou et al. Jul 2015 S
D734751 Oberpriller et al. Jul 2015 S
9082023 Feng et al. Jul 2015 B2
9224022 Ackley et al. Dec 2015 B2
9224027 Van et al. Dec 2015 B2
D747321 London et al. Jan 2016 S
9230140 Ackley Jan 2016 B1
9250712 Todeschini Feb 2016 B1
9258033 Showering Feb 2016 B2
9261398 Amundsen et al. Feb 2016 B2
9262633 Todeschini et al. Feb 2016 B1
9262664 Soule et al. Feb 2016 B2
9274806 Barten Mar 2016 B2
9282501 Wang et al. Mar 2016 B2
9292969 Laffargue et al. Mar 2016 B2
9298667 Caballero Mar 2016 B2
9310609 Rueblinger et al. Apr 2016 B2
9319548 Showering et al. Apr 2016 B2
D757009 Oberpriller et al. May 2016 S
9342724 Mccloskey et al. May 2016 B2
9342827 Smith May 2016 B2
9355294 Smith et al. May 2016 B2
9367722 Xian et al. Jun 2016 B2
9375945 Bowles Jun 2016 B1
D760719 Zhou et al. Jul 2016 S
9390596 Todeschini Jul 2016 B1
9396375 Qu et al. Jul 2016 B2
9398008 Todeschini et al. Jul 2016 B2
D762604 Fitch et al. Aug 2016 S
D762647 Fitch et al. Aug 2016 S
9405011 Showering Aug 2016 B2
9407840 Wang Aug 2016 B2
9412242 Van et al. Aug 2016 B2
9418252 Nahill et al. Aug 2016 B2
D766244 Zhou et al. Sep 2016 S
9443123 Hejl Sep 2016 B2
9443222 Singel et al. Sep 2016 B2
9448610 Davis et al. Sep 2016 B2
9478113 Xie et al. Oct 2016 B2
D771631 Fitch et al. Nov 2016 S
9507974 Todeschini Nov 2016 B1
D777166 Bidwell et al. Jan 2017 S
9582696 Barber et al. Feb 2017 B2
D783601 Schulte et al. Apr 2017 S
9616749 Chamberlin Apr 2017 B2
9618993 Murawski et al. Apr 2017 B2
D785617 Bidwell et al. May 2017 S
D785636 Oberpriller et al. May 2017 S
D790505 Vargo et al. Jun 2017 S
D790546 Zhou et al. Jun 2017 S
D790553 Fitch et al. Jun 2017 S
9715614 Todeschini et al. Jul 2017 B2
9734493 Gomez et al. Aug 2017 B2
9786101 Ackley Oct 2017 B2
9857167 Li et al. Jan 2018 B2
9891612 Charpentier et al. Feb 2018 B2
9892876 Bandringa Feb 2018 B2
9954871 Hussey et al. Apr 2018 B2
9978088 Pape May 2018 B2
10007112 Fitch et al. Jun 2018 B2
10019334 Caballero et al. Jul 2018 B2
10021043 Sevier Jul 2018 B2
10038716 Todeschini et al. Jul 2018 B2
10066982 Ackley et al. Sep 2018 B2
10235547 Ackley Mar 2019 B2
10327158 Wang et al. Jun 2019 B2
10360728 Venkatesha et al. Jul 2019 B2
10401436 Young et al. Sep 2019 B2
10410029 Powilleit Sep 2019 B2
10846498 Ackley Nov 2020 B2
20020041712 Roustaei Apr 2002 A1
20030009725 Reichenbach Jan 2003 A1
20040182930 Nojiri Sep 2004 A1
20040190085 Silverbrook et al. Sep 2004 A1
20040190092 Silverbrook et al. Sep 2004 A1
20050199721 Chang et al. Sep 2005 A1
20070051813 Kiuchi et al. Mar 2007 A1
20070063048 Havens et al. Mar 2007 A1
20080185432 Caballero et al. Aug 2008 A1
20080298688 Cheong et al. Dec 2008 A1
20090121024 Umeda May 2009 A1
20090134221 Zhu et al. May 2009 A1
20090212111 Krichi et al. Aug 2009 A1
20090230193 Al-Hussein et al. Sep 2009 A1
20090242649 Mizukoshi et al. Oct 2009 A1
20100014784 Silverbrook et al. Jan 2010 A1
20100177076 Essinger et al. Jul 2010 A1
20100177080 Essinger et al. Jul 2010 A1
20100177707 Essinger et al. Jul 2010 A1
20100177749 Essinger et al. Jul 2010 A1
20100265880 Rautiola et al. Oct 2010 A1
20110169999 Grunow et al. Jul 2011 A1
20110202554 Powilleit et al. Aug 2011 A1
20120111946 Golant May 2012 A1
20120168511 Kotlarsky et al. Jul 2012 A1
20120168512 Kotlarsky et al. Jul 2012 A1
20120193423 Samek Aug 2012 A1
20120203647 Smith Aug 2012 A1
20120223141 Good et al. Sep 2012 A1
20120228382 Havens et al. Sep 2012 A1
20120248188 Kearney Oct 2012 A1
20130033617 Schueler et al. Feb 2013 A1
20130043312 Van Horn Feb 2013 A1
20130075168 Amundsen et al. Mar 2013 A1
20130082104 Kearney et al. Apr 2013 A1
20130175341 Kearney et al. Jul 2013 A1
20130175343 Good Jul 2013 A1
20130182002 Macciola et al. Jul 2013 A1
20130257744 Daghigh et al. Oct 2013 A1
20130257759 Daghigh Oct 2013 A1
20130270346 Xian et al. Oct 2013 A1
20130287258 Kearney Oct 2013 A1
20130292475 Kotlarsky et al. Nov 2013 A1
20130292477 Hennick et al. Nov 2013 A1
20130293539 Hunt et al. Nov 2013 A1
20130293540 Laffargue et al. Nov 2013 A1
20130301870 Mow et al. Nov 2013 A1
20130306731 Thuries et al. Nov 2013 A1
20130307964 Bremer et al. Nov 2013 A1
20130308625 Park et al. Nov 2013 A1
20130313324 Koziol et al. Nov 2013 A1
20130313325 Wilz et al. Nov 2013 A1
20130342717 Havens et al. Dec 2013 A1
20140001267 Giordano et al. Jan 2014 A1
20140002828 Laffargue et al. Jan 2014 A1
20140008439 Wang Jan 2014 A1
20140025584 Liu et al. Jan 2014 A1
20140034734 Sauerwein, Jr. Feb 2014 A1
20140036848 Pease et al. Feb 2014 A1
20140039693 Havens et al. Feb 2014 A1
20140042814 Kather et al. Feb 2014 A1
20140049120 Kohtz et al. Feb 2014 A1
20140049635 Laffargue et al. Feb 2014 A1
20140061306 Wu et al. Mar 2014 A1
20140063289 Hussey et al. Mar 2014 A1
20140066136 Sauerwein et al. Mar 2014 A1
20140067692 Ye et al. Mar 2014 A1
20140070005 Nahill et al. Mar 2014 A1
20140071840 Venancio Mar 2014 A1
20140074746 Wang Mar 2014 A1
20140076974 Havens et al. Mar 2014 A1
20140078341 Havens et al. Mar 2014 A1
20140078342 Li et al. Mar 2014 A1
20140078345 Showering Mar 2014 A1
20140097249 Gomez et al. Apr 2014 A1
20140098792 Wang et al. Apr 2014 A1
20140100774 Showering Apr 2014 A1
20140100813 Showering Apr 2014 A1
20140103115 Meier et al. Apr 2014 A1
20140104413 Mccloskey et al. Apr 2014 A1
20140104414 Mccloskey et al. Apr 2014 A1
20140104416 Giordano et al. Apr 2014 A1
20140104451 Todeschini et al. Apr 2014 A1
20140106594 Skvoretz Apr 2014 A1
20140106725 Sauerwein, Jr. Apr 2014 A1
20140108010 Maltseff et al. Apr 2014 A1
20140108402 Gomez et al. Apr 2014 A1
20140108682 Caballero Apr 2014 A1
20140110485 Toa et al. Apr 2014 A1
20140114530 Fitch et al. Apr 2014 A1
20140121438 Long et al. May 2014 A1
20140121445 Fontenot et al. May 2014 A1
20140124577 Wang et al. May 2014 A1
20140124579 Ding May 2014 A1
20140125842 Winegar May 2014 A1
20140125853 Wang May 2014 A1
20140125999 Longacre et al. May 2014 A1
20140129378 Richardson May 2014 A1
20140131438 Kearney May 2014 A1
20140131441 Nahill et al. May 2014 A1
20140131443 Smith May 2014 A1
20140131444 Wang May 2014 A1
20140131445 Ding et al. May 2014 A1
20140131448 Xian et al. May 2014 A1
20140133379 Wang et al. May 2014 A1
20140136208 Maltseff et al. May 2014 A1
20140140585 Wang May 2014 A1
20140144996 Friedman et al. May 2014 A1
20140151453 Meier et al. Jun 2014 A1
20140152882 Samek et al. Jun 2014 A1
20140158770 Sevier et al. Jun 2014 A1
20140159869 Zumsteg et al. Jun 2014 A1
20140166755 Liu et al. Jun 2014 A1
20140166757 Smith Jun 2014 A1
20140166759 Liu et al. Jun 2014 A1
20140168787 Wang et al. Jun 2014 A1
20140175165 Havens et al. Jun 2014 A1
20140175172 Jovanovski et al. Jun 2014 A1
20140191644 Chaney Jul 2014 A1
20140191913 Ge et al. Jul 2014 A1
20140197238 Liu et al. Jul 2014 A1
20140197239 Havens et al. Jul 2014 A1
20140197304 Feng et al. Jul 2014 A1
20140203087 Smith et al. Jul 2014 A1
20140204268 Grunow et al. Jul 2014 A1
20140214631 Hansen Jul 2014 A1
20140217166 Berthiaume et al. Aug 2014 A1
20140217180 Liu Aug 2014 A1
20140231500 Ehrhart et al. Aug 2014 A1
20140232930 Anderson Aug 2014 A1
20140247315 Marty et al. Sep 2014 A1
20140263493 Amurgis et al. Sep 2014 A1
20140263645 Smith et al. Sep 2014 A1
20140267609 Laffargue Sep 2014 A1
20140270196 Braho et al. Sep 2014 A1
20140270229 Braho Sep 2014 A1
20140278387 Digregorio Sep 2014 A1
20140278391 Braho et al. Sep 2014 A1
20140282210 Bianconi Sep 2014 A1
20140284384 Lu et al. Sep 2014 A1
20140288933 Braho et al. Sep 2014 A1
20140297058 Barker et al. Oct 2014 A1
20140299665 Barber et al. Oct 2014 A1
20140312121 Lu et al. Oct 2014 A1
20140319220 Coyle Oct 2014 A1
20140319221 Oberpriller et al. Oct 2014 A1
20140326787 Barten Nov 2014 A1
20140332590 Wang et al. Nov 2014 A1
20140344943 Todeschini et al. Nov 2014 A1
20140346233 Liu et al. Nov 2014 A1
20140351317 Smith et al. Nov 2014 A1
20140353373 Van et al. Dec 2014 A1
20140361073 Qu et al. Dec 2014 A1
20140361082 Xian et al. Dec 2014 A1
20140362184 Jovanovski et al. Dec 2014 A1
20140363015 Braho Dec 2014 A1
20140369511 Sheerin et al. Dec 2014 A1
20140374483 Lu Dec 2014 A1
20140374485 Xian et al. Dec 2014 A1
20150001301 Ouyang Jan 2015 A1
20150001304 Todeschini Jan 2015 A1
20150003673 Fletcher Jan 2015 A1
20150009338 Laffargue et al. Jan 2015 A1
20150009610 London et al. Jan 2015 A1
20150014416 Kotlarsky et al. Jan 2015 A1
20150021397 Rueblinger et al. Jan 2015 A1
20150028102 Ren et al. Jan 2015 A1
20150028103 Jiang Jan 2015 A1
20150028104 Ma et al. Jan 2015 A1
20150029002 Yeakley et al. Jan 2015 A1
20150032709 Maloy et al. Jan 2015 A1
20150039309 Braho et al. Feb 2015 A1
20150039878 Barten Feb 2015 A1
20150040378 Saber et al. Feb 2015 A1
20150048168 Fritz et al. Feb 2015 A1
20150049347 Laffargue et al. Feb 2015 A1
20150051992 Smith Feb 2015 A1
20150053766 Havens et al. Feb 2015 A1
20150053768 Wang et al. Feb 2015 A1
20150053769 Thuries et al. Feb 2015 A1
20150060544 Feng et al. Mar 2015 A1
20150062366 Liu et al. Mar 2015 A1
20150063215 Wang Mar 2015 A1
20150063676 Lloyd et al. Mar 2015 A1
20150069130 Gannon Mar 2015 A1
20150071818 Scheuren et al. Mar 2015 A1
20150071819 Erik Mar 2015 A1
20150083800 Li et al. Mar 2015 A1
20150086114 Erik Mar 2015 A1
20150088522 Hendrickson et al. Mar 2015 A1
20150096872 Woodburn Apr 2015 A1
20150099557 Pettinelli et al. Apr 2015 A1
20150100196 Hollifield Apr 2015 A1
20150102109 Huck Apr 2015 A1
20150115035 Meier et al. Apr 2015 A1
20150127791 Kosecki et al. May 2015 A1
20150128116 Chen et al. May 2015 A1
20150129659 Feng et al. May 2015 A1
20150133047 Smith et al. May 2015 A1
20150134470 Hejl et al. May 2015 A1
20150136851 Harding et al. May 2015 A1
20150136854 Lu et al. May 2015 A1
20150142492 Kumar May 2015 A1
20150144692 Hejl May 2015 A1
20150144698 Teng et al. May 2015 A1
20150144701 Xian et al. May 2015 A1
20150149946 Benos et al. May 2015 A1
20150161429 Tao Jun 2015 A1
20150169925 Chen et al. Jun 2015 A1
20150169929 Williams et al. Jun 2015 A1
20150178523 Gelay et al. Jun 2015 A1
20150178534 Jovanovski et al. Jun 2015 A1
20150178535 Bremer et al. Jun 2015 A1
20150178536 Hennick et al. Jun 2015 A1
20150178537 El et al. Jun 2015 A1
20150181093 Zhu et al. Jun 2015 A1
20150181109 Gillet et al. Jun 2015 A1
20150186703 Chen et al. Jul 2015 A1
20150193644 Kearney et al. Jul 2015 A1
20150193645 Colavito et al. Jul 2015 A1
20150199957 Funyak et al. Jul 2015 A1
20150204671 Showering Jul 2015 A1
20150210199 Payne Jul 2015 A1
20150220753 Zhu et al. Aug 2015 A1
20150221077 Kawabata et al. Aug 2015 A1
20150254485 Feng et al. Sep 2015 A1
20150327012 Bian et al. Nov 2015 A1
20160014251 Hejl Jan 2016 A1
20160040982 Li et al. Feb 2016 A1
20160042241 Todeschini Feb 2016 A1
20160057230 Todeschini et al. Feb 2016 A1
20160109219 Ackley et al. Apr 2016 A1
20160109220 Laffargue et al. Apr 2016 A1
20160109224 Thuries et al. Apr 2016 A1
20160112631 Ackley et al. Apr 2016 A1
20160112643 Laffargue et al. Apr 2016 A1
20160124516 Schoon et al. May 2016 A1
20160125217 Erik May 2016 A1
20160125342 Miller et al. May 2016 A1
20160125873 Braho et al. May 2016 A1
20160133253 Braho et al. May 2016 A1
20160171720 Todeschini Jun 2016 A1
20160178479 Goldsmith Jun 2016 A1
20160180678 Ackley et al. Jun 2016 A1
20160189087 Morton et al. Jun 2016 A1
20160227912 Oberpriller et al. Aug 2016 A1
20160232891 Pecorari Aug 2016 A1
20160267369 Picard et al. Sep 2016 A1
20160292477 Bidwell Oct 2016 A1
20160294779 Yeakley et al. Oct 2016 A1
20160306769 Kohtz et al. Oct 2016 A1
20160314276 Wilz et al. Oct 2016 A1
20160314294 Kubler et al. Oct 2016 A1
20160377414 Thuries et al. Dec 2016 A1
Foreign Referenced Citations (13)
Number Date Country
1402191 Mar 2003 CN
103593695 Feb 2014 CN
103903225 Jul 2014 CN
104715221 Jun 2015 CN
10-111904 Apr 1998 JP
H10-171912 Jun 1998 JP
2001-167222 Jun 2001 JP
2013-148981 Aug 2013 JP
10-2011-0002833 Jan 2011 KR
2013163789 Nov 2013 WO
2013173985 Nov 2013 WO
2014019130 Feb 2014 WO
2014110495 Jul 2014 WO
Non-Patent Literature Citations (44)
Entry
CN Notice of Allowance dated Jan. 12, 2022 for CN Application No. 201710140373, 2 pages.
English translation of CN Notice of Allowance dated Jan. 12, 2022 for CN Application No. 201710140373, 3 pages.
CN Office Action dated Aug. 4, 2021 for CN Application No. 201710140373, 6 pages.
Decision to grant a European patent received for European Application No. 17151685.9, dated Jul. 1, 2021, 2 pages.
English Translation of CN Office Action dated Aug. 4, 2021 for CN Application No. 201710140373, 9 pages.
English Translation of JP Decision to Grant dated Feb. 22, 2021 for JP Application No. 2017007324, 2 pages.
JP Decision to Grant dated Feb. 22, 2021 for JP Application No. 2017007324, 3 pages.
Extended European search report and written opinion dated Nov. 22, 2021 for EP Application No. 21186794, 6 pages.
Advisory Action (PTOL-303) dated Feb. 21, 2018 for U.S. Appl. No. 15/006,561.
Advisory Action (PTOL-303) dated Sep. 24, 2018 for U.S. Appl. No. 15/006,561.
Annex to the communication dated Feb. 28, 2019 for EP Application No. 17151685.
Communication from the Examining Division dated Feb. 28, 2019 for EP Application No. 17151685.
English Translation of JP Office Action dated Nov. 17, 2020 for JP Application No. 2017007324.
English translation of JP Search report dated Nov. 11, 2020 for JP Application No. 2017007324.
European search opinion dated Jun. 16, 2017 for EP Application No. 17151685.
European Search Report and Search Opinion Received for EP Application No. 17151685.9, dated Jun. 16, 2017, 7 pages.
European search report dated Jun. 16, 2017 for EP Application No. 17151685.
Examination Report in related European Application No. 17151685.9 dated Feb. 28, 2019, pp. 1-4.
Extended Search Report in related European Application No. 17151685.9 dated Jun. 16, 2017, pp. 1-7.
Final Rejection dated Dec. 4, 2017 for U.S. Appl. No. 15/006,561.
Final Rejection dated Jul. 11, 2018 for U.S. Appl. No. 15/006,561.
Furumoto, K., et al., Recognition Accuracy Improvement of QR Code by Using GMD Decoding, IEICE Technical Report IT2014-27(Jul. 2014), Jul. 10, 2014, vol. 114, No. 138, pp. 89-94, The Institute of Electronics Information and Communication Engineers, Japan. (Abstract Only).
Hahn, H.I., et al., “Implementation of Algorithm to Decode Two-Dimensional Barcode PDF-417”, 6th International Conference on Signal Processing, 2002, Conference Location: Beijing, China, Date of Conference: Aug. 26-30, 2002, pp. 1791-1794. (Year:2002).
JP Office Action dated Nov. 17, 2020 for JP Application No. 2017007324.
JP Search report dated Nov. 11, 2020 for JP Application No. 2017007324.
Non-Final Rejection dated Apr. 9, 2020 for U.S. Appl. No. 16/268,721.
Non-Final Rejection dated May 18, 2017 for U.S. Appl. No. 15/006,561.
Notice of Allowance and Fees Due (PTOL-85) dated Jul. 23, 2020 for U.S. Appl. No. 16/268,721.
Notice of Allowance and Fees Due (PTOL-85) dated Nov. 1, 2018 for U.S. Appl. No. 15/006,561.
Office Action received for European Patent Application No. 17151685.9, dated Feb. 28, 2019, 4 pages.
U.S. Appl. No. 13/367,978 for a Laser Scanning Module Employing an Elastomeric U-Hinge Based Laser Scanning Assembly, filed Feb. 7, 2012 (Feng et al.), now abandoned.
U.S. Appl. No. 14/277,337 for Multipurpose Optical Reader, filed May 14, 2014 (Jovanovski et al.); 59 pages; now abandoned.
U.S. Appl. No. 14/283,282 for Terminal Having Illumination and Focus Control filed May 21, 2014 (Liu et al.); 31 pages; now abandoned.
U.S. Appl. No. 14/446,391 for Multifunction Point of Sale Apparatus With Optical Signature Capture filed Jul. 30, 2014 (Good et al.); 37 pages; now abandoned.
U.S. Appl. No. 14/676,109 for Indicia Reader filed Apr. 1, 2015 (Huck); now abandoned.
U.S. Appl. No. 16/268,721, filed Feb. 6, 2019, U.S. Pat. No. 10,846,498, Patented.
U.S. Appl. No. 15/006,561, filed Jan. 26, 2016, U.S. Pat. No. 10,235,547, Patented.
CN Office Action, including Search Report, dated Jan. 21, 2021 for CN Application No. 201710140373, 11 pages.
Communication about intention to grant a European patent dated Feb. 23, 2021 for EP Application No. 17151685, 5 pages.
English Translation of CN Office Action, including Search Report, dated Jan. 21, 2021 for CN Application No. 201710140373, 14 pages.
English Translation of JP Search report dated Apr. 21, 2022 for JP Application No. 2021053085, 21 pages.
JP Search report dated Apr. 21, 2022 for JP Application No. 2021053085, 19 pages.
English Translation of JP Office Action dated Jun. 1, 2022 for JP Application No. 2021053085, 6 pages.
JP Office Action dated Jun. 1, 2022 for JP Application No. 2021053085, 6 pages.
Related Publications (1)
Number Date Country
20210073501 A1 Mar 2021 US
Continuations (2)
Number Date Country
Parent 16268721 Feb 2019 US
Child 17077658 US
Parent 15006561 Jan 2016 US
Child 16268721 US