1. Technical Field
Aspects of the present invention relate to data capturing. More particularly, aspects of the invention relate to capturing information and associating it with a document.
2. Description of Related Art
Computer users are accustomed to using a mouse and keyboard as a way of interacting with a personal computer. While personal computers provide a number of advantages over written documents, most users continue to perform certain functions using printed paper. Some of these functions include reading and annotating written documents. In the case of annotations, the printed document assumes a greater significance because of the annotations placed on it by the user.
One technology that can help users is an image-capturing pen, where the pen attempts to determine the location of the pen's tip based on a captured image. Current systems do not adequately associate a captured stroke with an electronic version of a document.
Aspects of the invention address one or more problems described above, thereby providing an improved association between a captured stroke and an electronic document.
Aspects of the present invention are illustrated by way of example and not by way of limitation in the accompanying figures, in which like reference numerals indicate similar elements.
Aspects of the present invention relate to associating (or binding) a stroke performed by a camera-enabled pen with a document.
It is noted that various connections are set forth between elements in the following description. These connections, in general and unless specified otherwise, may be direct or indirect, and this specification is not intended to be limiting in this respect.
The following description is divided into sections to assist the reader.
Terms
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
With reference to
Computer 110 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 110. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer readable media.
The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. A basic input/output system 133 (BIOS), containing the basic routines that help to transfer information between elements within computer 110, such as during start-up, is typically stored in ROM 131. RAM 132 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 120. By way of example, and not limitation,
The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180. The remote computer 180 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 110, although only a memory storage device 181 has been illustrated in
When used in a LAN networking environment, the computer 110 is connected to the LAN 171 through a network interface or adapter 170. When used in a WAN networking environment, the computer 110 typically includes a modem 172 or other means for establishing communications over the WAN 173, such as the Internet. The modem 172, which may be internal or external, may be connected to the system bus 121 via the user input interface 160, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 110, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
In some aspects, a pen digitizer 165 and accompanying pen or stylus 166 are provided in order to digitally capture freehand input. Although a direct connection between the pen digitizer 165 and the user input interface 160 is shown, in practice, the pen digitizer 165 may be coupled to the processing unit 110 directly, via a parallel port or other interface, and to the system bus 130 by any technique, including wirelessly. Also, the pen 166 may have a camera associated with it and a transceiver for wirelessly transmitting image information captured by the camera to an interface interacting with bus 130. Further, the pen may have other sensing systems in addition to or in place of the camera for determining strokes of electronic ink, including accelerometers, magnetometers, and gyroscopes.
It will be appreciated that the network connections shown are illustrative and other techniques for establishing a communications link between the computers can be used. The existence of any of various well-known protocols such as TCP/IP, Ethernet, FTP, HTTP and the like is presumed, and the system can be operated in a client-server configuration to permit a user to retrieve web pages from a web-based server. Any of various conventional web browsers can be used to display and manipulate data on web pages.
Image Capturing Pen
Aspects of the present invention include placing an encoded data stream in a displayed form. The displayed form may be printed paper (or other physical medium) or may be a display projecting the encoded data stream in conjunction with another image or set of images. For example, the encoded data stream may be represented as a physical encoded image on the paper or an encoded image overlying the displayed image or may be a physical encoded image on a display screen (so any image portion captured by a pen is locatable on the display screen).
This determination of the location of a captured image may be used to determine the location of a user's interaction with the paper, medium, or display screen. In some aspects of the present invention, the pen may be an ink pen writing on paper. In other aspects, the pen may be a stylus with the user writing on the surface of a computer display. Any interaction may be provided back to the system with knowledge of the encoded image on the document or supporting the document displayed on the computer screen. By repeatedly capturing the location of the camera, the system can track movement of the stylus being controlled by the user.
The input to the pen 201 from the camera 203 may be defined as a sequence of image frames {Ii}, i=1, 2, . . . , A, where Ii is captured by the pen 201 at sampling time ti. The sampling rate is determined by the maximum motion frequency of the pen tip, which may be the same as the frequency of the hand when one writes. That frequency is known to be from 0 up to 20 Hz. By the Nyquist-Shannon sampling theorem, the minimum sampling rate should be 40 Hz; typically 100 Hz is used. In one example, the sampling rate is 110 Hz. The size of the captured image frame may be large or small, depending on the size of the document and the degree of exactness required. Also, the camera image size may be determined based on the size of the document to be searched.
The image captured by camera 203 may be used directly by the processing system or may undergo pre-filtering. This pre-filtering may occur in pen 201 or may occur outside of pen 201 (for example, in a personal computer).
The image size of
The image sensor 211 may be large enough to capture the image 210. Alternatively, the image sensor 211 may be large enough to capture an image of the pen tip 202 at location 212. For reference, the image at location 212 is referred to as the virtual pen tip. It is noted that the virtual pen tip location with respect to image sensor 211 is fixed because of the constant relationship between the pen tip, the lens 208, and the image sensor 211. Because the transformation from the location of the virtual pen tip 212 (represented by Lvirtual-pentip) to the location of the real pen tip 202 (represented by Lpentip) is also fixed, one can determine the location of the real pen tip in relation to a captured image 210.
The following transformation FS→P transforms the image captured by camera to the real image on the paper:
Lpaper=FS→P(LSensor)
During writing, the pen tip and the paper are on the same plane. Accordingly, the transformation from the virtual pen tip to the real pen tip is also FS→P:
Lpentip=FS→P(Lvirtual-pentip).
The transformation FS→P may be referred to as a perspective transformation. It may first be approximated by F′S→P as:
in which θ, sx, and sy are the rotation and scale of two orientations of the pattern captured at location 204. Further, one can refine F′S→P to FS→P by matching the captured image with the corresponding background image on paper. “Refine” means to get a more precise perspective matrix FS→P (8 parameters) by a kind of optimization algorithm referred to as a recursive method. The recursive method treats the matrix F′S→P as the initial value. FS→P describes the transformation between S and P more precisely than F′S→P.
Next, one can determine the location of virtual pen tip by calibration.
One places the pen tip 202 on a known location Lpentip on paper. Next, one tilts the pen, allowing the camera 203 to capture a series of images with different pen poses. For each image captured, one may receive the transform FS→P. From this transform, one can obtain the location of the virtual image of pen tip Lvirtual-pentip:
Lvirtual-pentip=FP→S(Lpentip),
and,
FP→S=[FS→P]−1.
By averaging the Lvirtual-pentip received from every image, an accurate location of the virtual pen tip Lvirtual-pentip may be determined.
The location of the virtual pen tip Lvirtual-pentip is now known. One can also obtain the transformation FS→P from image captured. Finally, one can use this information to determine the location of the real pen tip Lpentip:
Lpentip=FS→P(Lvirtual-pentip).
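As a rough illustration of the calibration just described, the following Python sketch averages the virtual-pen-tip estimates obtained from several calibration images and then recovers the real pen-tip location for a new frame. It assumes each FS→P is available as a 3×3 homography matrix acting on homogeneous coordinates; the function and variable names are illustrative only.

import numpy as np

def apply_homography(H, point):
    # Apply a 3x3 homography to a 2-D point via homogeneous coordinates.
    x, y = point
    v = H @ np.array([x, y, 1.0])
    return v[:2] / v[2]

def calibrate_virtual_pen_tip(F_s2p_list, L_pentip_known):
    # Estimate L_virtual-pentip by averaging F_P->S(L_pentip) over the calibration images.
    estimates = []
    for F_s2p in F_s2p_list:
        F_p2s = np.linalg.inv(F_s2p)            # F_P->S = [F_S->P]^-1
        estimates.append(apply_homography(F_p2s, L_pentip_known))
    return np.mean(estimates, axis=0)

def recover_pen_tip(F_s2p, L_virtual_pentip):
    # For a new captured image, map the virtual pen tip back onto the paper plane.
    return apply_homography(F_s2p, L_virtual_pentip)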
Overview
Aspects of the invention relate to an architecture for associating a captured stroke and binding it to an electronic version of a document. While a user performs a stroke (or action) using a pen, a camera on the pen captures a location of the pen in relation to a surface. The surface may include codes that permit a computer to determine where the pen is in relation to the surface. The surface may include a computer monitor, printed document, and the like.
Aspects of the invention include processing a pattern in a captured image, decoding symbols extracted from the captured image and determining a location of a point in the stroke, and transforming the location with other locations to a stroke of information. The stroke is then associated with an electronic document.
Most pens use ink. Some pens also include a camera to capture an image. The image may be used to determine the location of the pen tip. In accordance with aspects of the present invention, some systems can determine the location of the pen tip based on embedded codes that are captured by the camera.
Different embedded codes may be used. For instance, codes comprising a combination of dots arranged along a grid may be used. Alternatively, a maze of perpendicular lines may form the embedded interaction codes. As shown in
In the illustrative examples shown herein, the EIC elements lie on grid lines in EIC symbol array.
An m-array may be used to represent X, Y position in an array and multiple m-arrays may be used to represent metadata. These multiple m-arrays may be encoded in EIC symbols. In each EIC symbol, one bit from each m-array may be encoded. EIC symbols in place of the bits of the m-array representing X, Y position form an EIC symbol array (visually, the tiling of EIC symbols forms the EIC pattern).
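The packing of one bit from each m-array into every EIC symbol can be pictured with a short Python sketch. The array names, sizes, and the number of m-arrays below are illustrative assumptions rather than part of any particular EIC symbol design.

import numpy as np

def build_symbol_array(position_bits, metadata_bit_planes):
    # Combine one bit from each m-array into the bit tuple carried by each EIC symbol.
    # position_bits: 2-D 0/1 array from the m-array representing X, Y position.
    # metadata_bit_planes: list of 2-D 0/1 arrays, one per metadata m-array.
    planes = [position_bits] + list(metadata_bit_planes)
    return np.stack(planes, axis=-1)

# Tiny example: a 4x4 region with one position plane and one metadata plane.
pos = np.random.randint(0, 2, size=(4, 4))
meta = np.random.randint(0, 2, size=(4, 4))
symbols = build_symbol_array(pos, [meta])
print(symbols[0, 0])   # the bits carried by the symbol at array position (0, 0)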
EIC pattern analysis includes two main steps. First, images may be preprocessed, for example to improve contrast. Next, features of an effective EIC pattern in the image are analyzed. A digital pen as shown in
One can consider the EIC symbol array as being a large map, covering all pages of a digital document. When digital pen is used to write on these pages, a small segment of EIC symbol array is captured in an image taken by the pen (such as the image shown in
This can be done by analyzing each image obtained.
As can be seen from
Images therefore can first be normalized for illumination. Then, images of EIC dots, referred to as an effective EIC pattern, and images of document content are identified. An effective EIC pattern mask and a document content mask can specify which regions of the normalized image are effective EIC pattern and which regions are document content.
Once the grid lines are determined, black dots on the grid lines are identified. Positions of the black dots help to determine which grid cells correspond to EIC symbols and which direction is the correct orientation of EIC symbols.
The grid cells formed by grid lines may or may not correspond to EIC symbols. As can be seen in
It is also important to determine the correct orientation of EIC symbols. EIC symbols captured in an image may be rotated due to pen rotation. Only when EIC symbols are at the correct orientation (i.e., oriented the same as EIC symbols in the EIC symbol array) can the segment of EIC symbols captured in an image be matched against the EIC symbol array, i.e., bits extracted from EIC symbols can be matched against the m-array.
Once one knows which grid cells correspond to EIC symbols and the correct orientation of the symbols, the EIC symbols captured in an image can be recognized. One can then consider a large enough section of the EIC symbol array that encompasses all the grid lines and corresponding EIC symbols of the image. See
In
H′, V′ is the coordinate system of the grid, with the top (relative to image) intersection point of the farthest grid lines in image, CH′V′, as the origin, and grid cells as the unit of measure. The H′, V′ coordinate system is determined in relation to the image. The rotation angle from X to H′ is always smaller than that from X to V′, and all intersections of grid lines in image have non-negative coordinates in the H′, V′ coordinate system.
Note that what is depicted inside the image in
X′, Y′ is the coordinate system of the section of EIC symbol array encompassing all the grid lines and corresponding EIC symbols of the image, with the top-left corner of the section, CX′Y′, as the origin, and EIC symbols as the unit of measure. Note that X′, Y′ is always in the direction of EIC symbol array, and the origin is always at the top-left corner of a symbol.
whereas it was
in
Given a particular EIC symbol design, and the identified correct orientation of EIC symbols in an image, a transformation from the section of EIC symbol array (that encompasses all the grid lines and corresponding EIC symbols of the image) to grid, i.e. from X′, Y′ to H′, V′, can be obtained. For example, with EIC symbol 8-a-16, the scale from the unit of measure in H′, V′ to that of X′, Y′ is √2, and the rotation from H′, V′ to X′, Y′ may be
depending on the correct orientation of EIC symbols in image (
From a previous step, a homography matrix describing the perspective transform from grid to image, i.e. from H′, V′ to X, Y, HGrid→Image, is known.
Thus, a homography matrix, HSymbol→Image, describing the transformation from X′, Y′ to X, Y can be obtained as:
HSymbol→Image=HGrid→Image·HSymbol→Grid
The homography matrix HSymbol→Image specifies the transformation of every point in the section of EIC symbol array encompassing the image to a point in the image coordinate system. Its inverse, HSymbol→Image−1, specifies the transformation of every point in the image coordinate system to a point in the section of EIC symbol array encompassing the image.
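The two relationships above can be exercised directly. The following sketch composes HSymbol→Image from HGrid→Image and HSymbol→Grid and uses its inverse to map an image point back into the section of EIC symbol array; the matrix values are purely illustrative.

import numpy as np

def transform(H, point):
    # Map a 2-D point through a homography using homogeneous coordinates.
    v = H @ np.array([point[0], point[1], 1.0])
    return v[:2] / v[2]

# Illustrative matrices only; real values come from EIC pattern analysis and from
# the EIC symbol design (e.g. a 45-degree rotation with a sqrt(2) scale change).
H_grid_to_image = np.array([[12.0, 0.5, 40.0],
                            [-0.4, 12.2, 30.0],
                            [0.0008, 0.0003, 1.0]])
H_symbol_to_grid = np.array([[1.0, -1.0, 0.0],
                             [1.0, 1.0, 0.0],
                             [0.0, 0.0, 1.0]])

H_symbol_to_image = H_grid_to_image @ H_symbol_to_grid       # HSymbol->Image
image_point = transform(H_symbol_to_image, (2.0, 3.0))       # symbol array -> image
symbol_point = transform(np.linalg.inv(H_symbol_to_image), image_point)  # image -> symbol array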
From recognized EIC symbols in the section of EIC symbol array encompassing the image, EIC bits are extracted. For each m-array, a stream of bits is extracted. Any bit can be chosen as the bit whose position in m-array is decoded.
For convenience, one can choose the top-left corner of the section of EIC symbol array encompassing the image, CX′Y′, as the position to decode. In the bit stream starting from CX′Y′, some of the bits are known (bits extracted from recognized symbols), and some are unknown (bits that can't be extracted or EIC symbols are not captured in image). As long as the number of extracted bits is more than the order of the m-array, decoding can be done.
EIC decoding obtains a location vector r by solving bt=rtM, where b is a vector of extracted bits, and M is a coefficient matrix obtained by cyclically shifting the m-sequence. Note that t in the equation stands for transpose. Location of extracted bits in m-sequence can be obtained from r by discrete logarithm. Position of extracted bits in m-array is then obtained based on how m-array is generated from m-sequence.
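A minimal sketch of the linear solve bt=rtM over GF(2) follows, assuming the coefficient matrix M and the vector of extracted bits are already available as numpy arrays of 0/1 values. Gaussian elimination modulo 2 is used; the discrete-logarithm step that converts r into a position in the m-sequence is not shown.

import numpy as np

def solve_gf2(A, b):
    # Solve A x = b over GF(2) by Gaussian elimination; returns one solution or None.
    A = np.array(A, dtype=np.uint8) % 2
    b = np.array(b, dtype=np.uint8) % 2
    n_rows, n_cols = A.shape
    pivot_cols = []
    row = 0
    for col in range(n_cols):
        pivot = next((r for r in range(row, n_rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[row, pivot]] = A[[pivot, row]]
        b[[row, pivot]] = b[[pivot, row]]
        for r in range(n_rows):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivot_cols.append(col)
        row += 1
    x = np.zeros(n_cols, dtype=np.uint8)
    for r, col in enumerate(pivot_cols):
        x[col] = b[r]       # free variables are left at 0
    if np.any((A.astype(int) @ x.astype(int)) % 2 != b):
        return None         # inconsistent system: decoding fails for this image
    return x

def decode_location_vector(M, extracted_bits):
    # b^t = r^t M is equivalent to M^t r = b over GF(2).
    return solve_gf2(np.asarray(M).T, extracted_bits)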
The position obtained from an m-array representing metadata is used to calculate the metadata. Metadata is encoded using the same m-array as the one representing X, Y position, but shifted according to the value of the metadata. Therefore, positions obtained from the two m-arrays representing X, Y position and metadata respectively, are different. The difference (or distance) between the two positions, however, is always the same, and is the value of the metadata. If multiple m-arrays are used to encode a global metadata such as a document ID, values of metadata from each of the multiple m-arrays are combined to get the document ID.
The position obtained from the m-array representing X, Y position is the coordinates of CX′Y′ in EIC symbol array. For example, in
To recover ink stroke, one needs to find the position of the pen tip in EIC symbol array. To do this, the “virtual pen tip,” which is simply the image of the real pen tip on the image sensor plane, can be used as shown in
Location of virtual pen tip on the image sensor plane is position of the pen tip in the image coordinate system. Therefore, using the homography matrix HSymbol→Image−1, one can obtain position of the pen tip in X′, Y′ coordinate system.
Given position of the pen tip in X′, Y′ coordinate system and coordinates of CX′Y′ in EIC symbol array, position of the pen tip in EIC symbol array can be obtained by summing the two. With a series of images captured for an ink stroke, from each image successfully decoded, position of the pen tip in EIC symbol array is obtained. These positions are filtered, interpolated and smoothed to generate the ink stroke.
With the document ID, the corresponding digital document can be found. How EIC symbol array is allocated to each page of the document is known. Therefore, position of the pen tip in a document page can be obtained by subtracting position of the top-left corner of the page in EIC symbol array from position of the pen tip in EIC symbol array. Thus, the ink stroke is bound to a document page.
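The final coordinate bookkeeping is simple vector arithmetic, as the following sketch shows; the numeric values are placeholders only.

import numpy as np

# One decoded frame: coordinates of CX'Y' plus the pen tip's offset inside the section.
c_xy = np.array([1000.0, 2500.0])        # CX'Y' in the EIC symbol array (from decoding)
tip_offset = np.array([3.4, 5.1])        # pen tip in the X', Y' coordinate system
tip_in_array = c_xy + tip_offset         # pen tip in the EIC symbol array

# Rebase onto the document page by subtracting the page's top-left corner.
page_origin = np.array([960.0, 2400.0])  # top-left corner of the page in the symbol array
tip_in_page = tip_in_array - page_origin
print(tip_in_page)                       # pen tip in page coordinates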
Ink strokes written with digital pen on printed document are now recovered in the corresponding electronic document.
The above process is implemented in EIC core algorithms. EIC core algorithms include pre-processing, EIC pattern analysis, EIC symbol recognition, EIC decoding and EIC document mapping.
The input of EIC core algorithms is a series of captured images. For each captured image, pre-processing segments effective EIC pattern. EIC pattern analysis analyzes effective EIC pattern and segments effective EIC symbols. EIC symbol recognition recognizes the EIC symbols and extracts EIC bits. EIC decoding obtains the location of the extracted bits in m-array. After all the images of a stroke are processed, EIC document mapping calculates metadata and generates ink stroke in EIC document.
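Structurally, the pipeline is a per-image chain of stages followed by a per-stroke mapping stage. The sketch below is only an outline: the stage callables are assumed to implement the corresponding EIC core algorithms and are not defined here.

def process_stroke(captured_images, stages, eic_documents):
    # stages maps a stage name to a callable implementing that EIC core algorithm.
    decoded = []
    for image in captured_images:
        pattern = stages["pre_process"](image)                  # effective EIC pattern
        symbols, h_grid = stages["pattern_analysis"](pattern)   # effective EIC symbols
        bits, h_symbol = stages["symbol_recognition"](symbols, h_grid)
        position = stages["decoding"](bits)                     # location in the m-array
        if position is not None:
            decoded.append((position, h_symbol))
    # EIC document mapping: metadata calculation and ink stroke generation.
    return stages["document_mapping"](decoded, eic_documents)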
Pre-processing step 704 receives the captured image 702 in data in 705 and outputs through data out 706 an effective EIC pattern 707. The effective EIC pattern is received through data in 709 at the EIC pattern analysis step 708, which outputs through data out 710 an effective EIC symbol and homography matrix 712, both of which are received in data in 714 of EIC symbol recognition 713.
EIC symbol recognition 713 outputs through data out 715 EIC bits and an upgraded homography matrix with orientation information 717.
Next, in the EIC decoding section, EIC decoding is performed in step 718. Here, EIC bits 716 are received in data in 719. The location in an array of bits (m-array) is determined. The output 721 is provided at data out 720.
The process then moves 722 to the next image 703. Once all images have been captured and processed, the process transforms the collection of image locations into strokes and associates the strokes with a document in the EIC document mapping step 723. Here, the locations in the m-array 721 are received through data in interface 724 and collected into one or more strokes. The EIC document mapping step 723 outputs through data out 725 the strokes in the EIC document 726, any local metadata 727, and any global metadata 728. Finally, the process ends in step 729.
The following describes the various sections in greater detail.
Pre-Processing
The process starts in step 701 with captured image 702. In step 801, the illumination of the captured image 702 (from data in 802) is normalized. The normalized image 804 is output from data out 803. The normalized image 804 is then passed to the EIC pattern segmentation step 805, which receives the normalized image through data in 806. An effective EIC pattern 808 is output from data out 807.
The illumination normalization step 801 is based on the following: illumination in the field of view is typically non-uniform and changes with pen rotation and tilting. This step 801 estimates the illumination distribution of a captured image and normalizes the image for further processing.
The EIC pattern segmentation step 805 performs the following: both EIC pattern and document content are captured in an image. This step 805 segments an EIC pattern from document content by thresholding the image.
Thresholding can be done in two steps (but can be done in more or less steps depending on the implementation). First, high-contrast areas are identified. These are areas that have a large difference in gray levels. Thresholding in these areas identifies a threshold for segmenting document content. After the document content areas are identified, thresholding in non-document content areas segments an effective EIC pattern.
Once the brightness of the captured image has been normalized, the image brightness normalization module 905 provides the normalized image to the pattern determination module 909. As will also be described in more detail below, the pattern determination module 909 analyzes the normalized image to identify areas having differences in brightness above a threshold level, in order to distinguish those areas in the normalized image that represent content from those areas in the normalized image that represent the information pattern. In this manner, the information pattern can be more accurately distinguished from the remainder of the captured image. The preprocessed image is then provided to the pattern analysis module 911 for further processing to determine the portion of the information pattern captured in the image, and thus the location of the pen/camera device 901 when the image was obtained.
As discussed herein, a code symbol may be considered as the smallest unit of visual representation of an information pattern. Generally, a code symbol will include the pattern data represented by the symbol. As shown in the illustrated example, one or more bits may be encoded in one code symbol. Thus, for a code symbol with 1 bit represented, the represented data may be “0” or “1”; for a code symbol representing 2 bits, the represented data may be “00”, “01”, “10” or “11”. Thus, a code symbol can represent any desired amount of data for the information pattern. The code symbol also will have a physical size. When the information pattern is, for example, printed on paper, the size of a code symbol can be measured by printed dots. For example, the illustrated code symbol is 16×16 printed dots. With a 600 dpi printer, the diameter of a printed dot will be about 0.04233 mm.
Still further, a code symbol will have a visual representation. For example, if a code symbol represents 2 bits, the visual representation refers to the number and position distribution of the black dots used to represent the data values “00”, “01”, “10” or “11”. Thus, the code symbol illustrated in
Brightness Normalization
Turning now to
Next, in step 1203, the image segmentation module 1101 segments the image 1301 into blocks of areas. In the illustrated example, the image brightness normalization module 905 uses pixels as the areas upon which operations are performed. It should be appreciated, however, that alternate embodiments of the invention may use other units for the area. For example, with larger images, some embodiments of the invention may use groups of four adjacent pixels as the areas upon which operations are performed, while still other embodiments of the invention may use groups of six, eight, nine, sixteen, or any other number of pixels as the areas upon which operations are performed.
More particularly, the image segmentation module 1101 segments the image into blocks starting from the top of the image 1301, as shown in
Because the image 1301 in the illustrated example has a height of 100 pixels and the blocks 1401 are formed from 16×16 groups of pixels, there is a small region 1403 at the bottom of the image 1301 in which the pixels are not segmented into blocks 1401. As will be apparent from the detailed explanation provided below, this discrepancy may skew the accuracy of the brightness normalization process. Accordingly, as shown in
Next, in step 1205, the block brightness estimation module 1103 estimates the brightness value for each block 1401 and 1501. That is, the block brightness estimation module 1103 estimates an overall representative brightness value for each block 1401 and 1501 based upon the gray level of each individual pixel making up the block. In the illustrated example, the block brightness estimation module 1103 estimates the brightness value of a block 1401 or 1501 by creating a histogram of the number of pixels in the block at each gray-level.
It also should be noted that the illustrated example relates to a black-and-white image. Accordingly, the brightness level corresponds to a gray scale level. Various embodiments of the invention alternately may be used to process color images. With these embodiments, the block brightness estimation module 1103 will operate based upon the color brightness level of each pixel in the image.
After the block brightness estimation module 1103 has estimated the brightness value for each block 1401 and 1501, the area brightness distribution determination module 1105 performs a bilinear fitting of the brightness distribution for each area in step 1207. As previously noted, there is a region 1403 at the bottom of image 1301 that has not been segmented into any of the blocks 1401. The brightness distribution values for the pixels in these regions thus are determined using the blocks 1501 rather than the blocks 1401. Accordingly, the brightness distribution values are determined in a two-step process. The pixels that are primarily within blocks 1401 (i.e., the pixels having a y coordinate value of 0-87) are determined using the estimated brightness values of the blocks 1401, while the pixels that are primarily within blocks 1501 (i.e., the pixels having a y coordinate value of 88-99) are determined using the estimated brightness values of the blocks 1501.
With the illustrated embodiment, for each pixel (x, y), where y=0, 1, . . . 87, the brightness distribution value of that pixel D(x,y) is estimated by using a bilinear fitting method as:
D(x,y)=(1−ηy)·[(1−ηx)·IB(m,n)+ηx·IB(m+1,n)]+ηy·[(1−ηx)·IB(m,n+1)+ηx·IB(m+1,n+1)]
where IB(m,n)=G90th(m, n), s is the size of a block (in the illustrated example, s=16),
It should be noted that int(x) is a function that returns the largest integer less than or equal to x. For example, int(1.8)=1, int(−1.8)=−2.
The brightness value information employed to determine the brightness distribution value of a pixel using this process is graphically illustrated in
Similarly, for each pixel (x, y), where y=88, 89, . . . 99, the brightness distribution value of that pixel D(x, y) is estimated by the same bilinear fitting, with IB taken from the estimated brightness values of the blocks 1501, s the size of a block (s=16 in the illustrated example), and height the height of the image sensor. In the illustrated example, height=100.
Again, some pixels will fall along the image border outside of any region that can be equally distributed among four adjacent blocks 1501. For these pixels in border regions, the above equations may still be applied to determine their brightness distribution values, except that extrapolation will be used instead of interpolation. The different regions are graphically illustrated in
Once the area brightness distribution determination module 1105 has determined the brightness distribution value for each area, the area brightness normalization module 1107 determines the normalized gray level value for each area in step 1209. More particularly, the area brightness normalization module 1107 determines the normalized gray level value for each area by dividing the area's original gray level value by the brightness distribution value for that area. Next, in step 1211, the area brightness normalization module 1107 obtains an adjusted normalized gray level value for each area by multiplying the normalized gray level value for each area by a uniform brightness level G0. In the illustrated example, the value of uniform brightness level G0 is 200, but alternate embodiments of the invention may employ different values for the uniform brightness level G0. The uniform brightness level G0 represents the supposed gray level of the captured image in a blank area for an ideal situation (i.e., a uniform illumination with an ideal image sensor). Thus, in an ideal case, the gray level of all pixels of a captured image from a blank area should be equal to the uniform brightness level G0.
Lastly, in step 1213, the area brightness normalization module 1107 selects a final normalized gray level value for each pixel by assigning each pixel a new gray level value that is the lesser of its adjusted normalized gray level value and the maximum gray level value. Thus, with the illustrated example, the final normalized gray level value for each pixel is determined as a gray level G(x, y)=min(G0·Goriginal(x, y)/D(x, y), 255), where G0=200, Goriginal(x, y) is the pixel's original gray level, D(x, y) is its brightness distribution value, and 255 is the maximum gray level (i.e., white). Then, in step 1215, the area brightness normalization module 1107 outputs a normalized image using the final normalized gray level value for each pixel.
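A compact sketch of the whole normalization is given below, assuming a grayscale image as a numpy array and using the 90th-percentile gray level of each 16×16 block as its brightness estimate (following the G90th notation above). Boundary handling is simplified to clamping, rather than the separate bottom blocks and extrapolation described in the text.

import numpy as np

def normalize_brightness(image, block=16, g0=200.0, percentile=90):
    # Estimate an illumination map from per-block statistics and divide it out.
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    nby, nbx = h // block, w // block
    est = np.empty((nby, nbx))
    for by in range(nby):
        for bx in range(nbx):
            blk = img[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            est[by, bx] = np.percentile(blk, percentile)   # per-block brightness estimate
    # Bilinear interpolation of the block estimates at every pixel (block centers are
    # the sample points; image borders are handled by clamping).
    ys = np.clip((np.arange(h) + 0.5) / block - 0.5, 0, nby - 1)
    xs = np.clip((np.arange(w) + 0.5) / block - 0.5, 0, nbx - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, nby - 1); x1 = np.minimum(x0 + 1, nbx - 1)
    ey = (ys - y0)[:, None]; ex = (xs - x0)[None, :]
    D = ((1 - ey) * ((1 - ex) * est[y0][:, x0] + ex * est[y0][:, x1]) +
         ey * ((1 - ex) * est[y1][:, x0] + ex * est[y1][:, x1]))
    # Normalize to the uniform brightness level G0 and clip to the maximum gray level.
    return np.minimum(g0 * img / np.maximum(D, 1e-6), 255).astype(np.uint8)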
Pattern Determination
After the image brightness normalization module 905 normalizes the image captured by the pen/camera device 901, the pattern determination module 909 distinguishes the areas of the normalized image that represent content in a document from the areas of the normalized image that represent the information pattern.
The pattern determination module 909 also includes a content brightness threshold determination module 2205, a content identification module 2207, a pattern brightness threshold determination module 2209, and a pattern identification module 2211. As will be discussed in greater detail below, for a black-and-white image, the content brightness threshold determination module 2205 determines a first gray level value threshold that the content identification module 2207 then uses to identify areas of the image representing content. Similarly, for a black-and-white image, the pattern brightness threshold determination module 2209 determines a second gray level value threshold that the pattern identification module 2211 uses to identify areas of the image that represent an information pattern.
The pattern determination module 909 takes advantage of the fact that, in an image of a document containing both content (e.g., printed text, pictures, etc.) and an information pattern, the information pattern, document content and document background tend to have different brightness levels. Thus, with a black-and-white image, the areas representing the information pattern, document content and document background will typically have different gray levels, with the areas representing the document content being the darkest, the areas representing the information pattern being the second darkest, and the areas representing the document background being the least dark. Thus, the pattern determination module 909 can distinguish the three different areas by thresholding.
In order to more efficiently determine the appropriate thresholds to separate the three brightness levels, the pattern determination module 909 first identifies high-contrast regions. For black-and-white images, these are regions that have a relatively large difference in gray levels between adjacent image areas (e.g., such as pixels). Thus, the threshold for segmenting the areas representing document content from other areas in the image can be more effectively identified in the high-contrast areas. Once the threshold is found, regions that are darker than the threshold are identified as representing document content. These regions can then be marked as being made up of areas representing content. For example, the areas in a content region may be assigned a value of 1 in a document content mask.
After the regions representing document content have been identified, the brightness values of the remaining areas can then be analyzed. Those regions having a gray level value below a second threshold are then identified as representing the information pattern. These regions can then be marked as being made up of areas representing the information pattern. For example, the areas in a pattern region may be assigned a value of 1 in an information pattern mask. Thus distinguished from the rest of the image, the areas representing the information pattern can be more accurately analyzed by the pattern analysis module 911.
The operation of the pattern determination module 909 will now be described with reference to
Initially, high contrast areas are identified to more efficiently locate regions that represent content, as previously noted. Because the regions representing the information pattern may also have a large difference in brightness levels, however, the image areas are first filtered to reduce the brightness level value difference in the regions surrounding the information pattern. More particularly, in step 2301, the area average filtering module 2201 applies an averaging filter to each area in the image. For black-and-white images, this filtering operation replaces the gray level of each pixel by an average of the gray levels of the surrounding eight pixels and the gray level of the pixel itself. That is, for every pixel (x, y)
where G(x,y) is the gray level of pixel (x,y). It should be noted that G(x,y) is the brightness-normalized gray level.
Next, in step 2303, the high-contrast region determination module 2203 identifies the high-contrast regions in the image using the averaged gray level of each pixel. In particular, for each pixel, the high-contrast region determination module 2203 identifies the maximum and the minimum averaged gray level values in the 17×17 pixel neighborhood surrounding the pixel. That is, for every pixel (x, y),
Gmax(x,y)=max(Gaverage(p,q)|max(x−8, 0)≦p≦min(x+8, 127), max(y−8, 0)≦q≦min(y+8, 127))
Gmin(x,y)=min(Gaverage(p,q)|max(x−8, 0)≦p≦min(x+8, 127), max(y−8, 0)≦q≦min(y+8, 127))
It should be appreciated that the determination described above is based upon the specific number of pixels of the image used in the illustrated example. A similar determination, using different pixels coordinate values, would be employed for embodiments of the invention used to process images of different sizes. Next, the high-contrast region determination module 2203 defines a high-contrast region as
High Contrast Region={(x,y)|[Gmax(x,y)−Gmin(x,y)]>D0}
where D0 is a predetermined threshold. The value of D0 is determined empirically. In the illustrated example, D0=140, but it should be appreciated that other embodiments of the invention may employ different threshold values depending, e.g., upon the contrast quality provided by the pen/camera device 901.
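The averaging filter and the 17×17 max/min comparison translate directly into standard neighborhood filters. The sketch below uses scipy.ndimage and the empirical threshold D0=140 mentioned above; the synthetic 128×100 image merely mirrors the sensor size used in the examples, and the border handling is simplified.

import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def high_contrast_region(normalized_gray, d0=140):
    # Replace each gray level by its 3x3 neighborhood average, then compare the
    # max-min spread over a 17x17 neighborhood against the empirical threshold D0.
    g = np.asarray(normalized_gray, dtype=float)
    g_avg = uniform_filter(g, size=3, mode="nearest")
    g_max = maximum_filter(g_avg, size=17, mode="nearest")
    g_min = minimum_filter(g_avg, size=17, mode="nearest")
    return (g_max - g_min) > d0

# Example on a synthetic image (100 rows x 128 columns, as in the text).
img = np.full((100, 128), 200, dtype=np.uint8)
img[40:60, 50:70] = 20          # a dark patch standing in for document content
print(high_contrast_region(img).sum(), "high-contrast pixels")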
Next, in step 2305, the content brightness threshold determination module 2205 determines a threshold for separating areas representing document content from the other areas of the image. To determine the threshold, the content brightness threshold determination module 2205 creates a gray-level histogram for the high-contrast regions. An example of such a histogram 2501 is illustrated in
Once the threshold value T0 has been determined, the content identification module 2207 uses the threshold T0 to identify the areas of the image representing content in step 2307. First, given T0, pixels in the image that are darker than T0 are identified as images representing the document content and are assigned a value of 1 in a document content mask. Thus, for every pixel (x, y), if
Gaverage(x,y)≦T0,
then Document Content Mask (x, y)=1, else Document Content Mask (x, y)=0.
After the document content mask has been created, those regions Rt of pixels (xi,yi), where t=1, 2, . . . , T, are identified as follows:
Rt={(xi,yi) | Document Content Mask (xi,yi)=1, (xi,yi) are neighbors}.
Two pixels are neighbors if they are directly below, above or next to each other, as shown in
Next, in step 2309, the pattern brightness threshold determination module 2209 determines a second threshold for separating the areas representing the information pattern from the remaining areas of the image (i.e., the non-content areas). Initially, the pattern brightness threshold determination module 2209 segments the image into 8×8 pixel blocks. For black-and-white images, the pattern brightness threshold determination module 2209 then creates a gray-level value histogram for each 8×8 pixel block, such as the histogram 2701 in
From the histogram, a second threshold T0 is identified to distinguish information pattern areas from the remaining background areas. The second threshold T0 is empirically chosen, based on the size of the camera sensor in the pen/camera device 901 and the size of code symbol, to be approximately equal to the ratio of black dots in the code symbol. In the illustrated example, the code symbol is the 8-a-16 code symbol illustrated in
Once the second threshold T0 is determined, the pattern identification module 2211 identifies the areas of the image representing the information pattern in step 2311. More particularly, for every pixel (x,y) in a block, if Document Content Mask (x,y)=0 and G(x,y)≦T0, then the pattern identification module 2211 assigns Pattern Mask (x,y)=1, else, Pattern Mask (x,y)=0.
For the bottom pixels (i.e., the 4×128 pixel region along the bottom border of the image), the 4×128 pixel area directly above may be used to form 8×8 pixel blocks. Within each of these bottom blocks, the second threshold is determined using the same method described in detail above. Only those pixels in the bottom region are compared against the threshold, however, as the pixels “borrowed” from the region directly above will already have been analyzed using the second threshold established for their original blocks. Those bottom pixels that are darker than the threshold are identified as representing the information pattern.
After all of the pixels having a gray level below their respective second threshold values have been identified, those identified pixels that are adjacent to pixels representing document content are removed from the information pattern mask. That is, for every pixel (x,y), if Pattern Mask (x,y)=1 and a pixel among 8 neighbors of (x,y) has been identified as representing document content (i.e., there exists i, j, where i=−1, 0, 1, j=−1, 0, 1, such that Document Content Mask (x+i,y+j)=1), then Pattern Mask (x,y)=0. In this manner, the pixels making up the information pattern can be accurately distinguished from the other pixels in the image. Further, the image preprocessing system 903 according to various examples of the invention can output a new image that clearly distinguishes an information pattern from the remainder of the image.
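The final cleanup, removing candidate pattern pixels that touch document content, is a small morphological operation. A sketch with scipy.ndimage, assuming boolean masks of equal shape, follows.

import numpy as np
from scipy.ndimage import binary_dilation

def clean_pattern_mask(pattern_mask, content_mask):
    # Clear pattern pixels that have a document-content pixel among their 8 neighbors.
    neighborhood = np.ones((3, 3), dtype=bool)          # the pixel and its 8 neighbors
    near_content = binary_dilation(content_mask, structure=neighborhood)
    return pattern_mask & ~near_content

# Toy example: a pattern pixel adjacent to content is removed, an isolated one is kept.
content = np.zeros((5, 5), dtype=bool); content[2, 2] = True
pattern = np.zeros((5, 5), dtype=bool); pattern[2, 3] = True; pattern[0, 0] = True
print(clean_pattern_mask(pattern, content).astype(int))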
Pattern Analysis
The EIC pattern analysis step 708 includes an EIC pattern feature extraction step 2802 from start 2801 with data in 2803 and data out 2804, receiving an effective EIC pattern 2809. It also includes an EIC symbol segmentation step 2806 with data in 2807 and data out 2808.
The EIC pattern feature extraction step 2802 obtains the rotation, scale (distance between parallel lines), and translation (distance between the origins) of an assumed affine transform from grid to image by analyzing all directions formed by pairs of connected effective EIC pattern regions and projecting the effective EIC pattern to the two main directions. This results in features 2805 containing the rotation, scale, and translation information.
The EIC symbol segmentation step 2806 obtains a perspective transform from grid to image by fitting the effective EIC pattern to affine transformed grid lines. The perspective transform is described by a homography matrix HGrid→Image 2812, with which, grid lines in image are obtained. The grid lines in the image are associated with expected grid lines through the perspective transform of the homography matrix 2812. Grid cells thus obtained can be referred to as effective EIC symbols 2811. The effective EIC symbol 2811 and homography matrix 2812 are sent to the EIC symbol recognition process as described with respect to
The above process is referred to as an EIC pattern analysis.
Next, in step 2806, input data 2807 (namely features 2805) is processed by EIC symbol segmentation. Data output 2808 from EIC symbol segmentation 2806 results in an effective EIC symbol 2811 and homography matrix (having a perspective transformation) 2812 as shown in
Feature Extraction
EIC pattern feature extraction obtains an affine transform to convert a grid to an image by analyzing an effective EIC pattern in a captured image. An affine transform keeps evenly spaced parallel lines evenly spaced and parallel, but perpendicular lines may not be perpendicular anymore. This step obtains the rotation, scale (distance between parallel lines) and translation (distance between the origins) of the affine transform. Output of this step is a homography matrix that describes the affine transform.
First, the system finds two main directions of EIC symbols. This step looks at all the directions formed by pairs of connected effective EIC pattern regions and finds two directions that occur the most often.
First, given effective EIC pattern mask, regions Rt, where t=1, 2, . . . , T, of pixels (xi,yi) are identified:
Rt={(xi,yi)|EIC Pattern Mask (xi,yi)=1, (xi,yi) are neighbors}.
Two pixels are neighbors if they are directly below, above or next to each other.
Next, gray-level centroids of the regions are identified. For each region Rt, where t=1, 2, . . . , T, the gray-level centroid (x̄t, ȳt) is:
where (xi,yi) is a pixel in region Rt, G(xi,yi) is the gray-level of the pixel, and Nt is the total number of pixels in region Rt.
Third, for each pair of regions, Ru and Rv, a direction of the pair is obtained:
where 0≦θu,v<180.
Once all the directions are obtained, a histogram of directions can be created. The X axis is θ. The Y axis is the frequency count of θ.
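The direction histogram can be sketched as follows, assuming the effective EIC pattern mask and the normalized gray image are numpy arrays of the same shape. Connected regions are found with 4-connectivity to match the neighbor definition above; the gray-level-weighted centroid used here is one plausible reading of the centroid formula.

import numpy as np
from scipy.ndimage import label

def direction_histogram(eic_pattern_mask, gray, bins=180):
    # Histogram of directions formed by pairs of effective EIC pattern regions.
    labels, n = label(eic_pattern_mask)          # 4-connected regions R_t
    centroids = []
    for t in range(1, n + 1):
        ys, xs = np.nonzero(labels == t)
        w = gray[ys, xs].astype(float)           # gray levels used as centroid weights
        total = max(w.sum(), 1e-9)
        centroids.append((np.sum(w * xs) / total, np.sum(w * ys) / total))
    hist = np.zeros(bins)
    for u in range(len(centroids)):
        for v in range(u + 1, len(centroids)):
            dx = centroids[v][0] - centroids[u][0]
            dy = centroids[v][1] - centroids[u][1]
            theta = np.degrees(np.arctan2(dy, dx)) % 180.0   # direction in [0, 180)
            hist[int(theta) % bins] += 1
    return hist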
Next, as shown in
mod(x, y) is a function that returns the positive remainder of x divided by y. For example, mod(3,2)=1, mod(−3,2)=1.
Next, as shown in
From the four candidates, two pairs of nearly perpendicular directions are identified. That is, for a candidate xi, select another candidate xj, such that abs(90−abs(xi−xj)) is minimized. abs(x) is a function that returns the absolute value of x. For example, abs(1.8)=1.8, abs(−1.8)=1.8.
Now, one can select (xi,xj) such that Y(xi)+Y(xj) is maximized.
Given the pair selected, (xi,xj), centroid of a small area near xi and xj is calculated:
The two centroids are the two main directions. That is, suppose x̄i<x̄j; then θh=x̄i and θv=x̄j.
Next, the system determines the scale and translation for the EIC symbols.
In this step, one looks for the scale and translation of the affine transform. Scale is the distance between two adjacent parallel lines. Translation is the distance between the image center and the origin of the coordinate system formed by the grid lines. Both scale and translation are measured in pixels.
Note that the H, V coordinate system shown in
The X, Y coordinate system shown in
To obtain the two scales Sh,Sv, the image may be rotated counterclockwise with θh,θv respectively.
In the middle region of the rotated image (shown as the shadowed area in
For every pixel (x,y), where x=0, 1, . . . , 127, y=0, 1, . . . , 99, coordinates of the pixel in the new coordinate system X′, Y′ are:
x′=int((x−xC)·cos θ+(y−yC)·sin θ+width·0.3),
y′=int(−(x−xC)·sin θ+(y−yC)·cos θ+width·0.3),
where (xC, yC) are coordinates of the image center in the pixel index coordinate system depicted in
where width is the width of the image sensor, height is the height of the image sensor. This is because in the pixel index coordinate system depicted in
In one implementation, width=128, height=100, xC=63.5, yC=49.5. Of course, other values may be used as well.
Let Rotate Mask (x′,y′)=EIC Pattern Mask (x,y).
Now, the effective EIC pattern in the middle region is projected to the Y′ axis to create a histogram. See
Next, one attempts to obtain scale and translation information from the histogram.
First, one finds all the Y values that are local maximums. That is, find all the Y values that satisfy Y(x)>Y(x−1) and Y(x)>Y(x+1), where x=1, 2, . . . , 74. The Y values are kept and other Y values are set to 0. Next, the process then sets Y(0)=0 and Y(75)=0. If two local maximums are too close, for example, if both Y(x1) and Y(x2) are local maximums, and abs(x1−x2)<5, then the system keeps the larger Y value, i.e. if Y(x1)>Y(x2), then the system keeps the value of Y(x1) and set Y(x2)=0.
Next, the system finds the global maximum (xmax,ymax) in the histogram. If ymax=0, EIC pattern analysis fails for this image. If ymax≠0, the local maximums are compared with the global maximum. If the local maximum is less than ⅓ of the global maximum, the local maximum is set to 0.
Suppose the system has found a total of n local maximums, and xi, where i=0, 1, . . . , n−1, are the X values of the local maximums. Let di, where i=0, 1, . . . , n−2, be the distance between xi and xi+1, i.e. di=xi+1−xi. The system obtains the first estimate of scale S by averaging all the distances, i.e.
Next, the system finds the distance di, where di≠0, i=0, 1, . . . , n−2, that differs from S the most, i.e.
If dj is not that different from S, i.e. if
then S is the best scale. If dj is too much bigger than S, for example, if
then dj may be multiples of the actual scale and will affect calculating the average of the distances. Therefore the system sets dj=0. If dj is too much smaller than S, for example, if
the system combines dj with the next distance dj+1, if dj+1>0; if dj+1=0, the system sets dj=0. The system calculates S again by averaging the non-zero di's, and goes back to the beginning (where distance di is found). The output is a best scale S.
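The scale-estimation loop over local-maximum spacings can be condensed into the following sketch. The tolerance for accepting S is written as an illustrative fraction of S, since the exact criteria are not reproduced here.

import numpy as np

def estimate_scale(peak_positions, max_iterations=10):
    # Estimate grid spacing S from the X positions of local maxima in the projection histogram.
    d = np.diff(np.asarray(peak_positions, dtype=float))   # distances between neighboring peaks
    for _ in range(max_iterations):
        valid = d[d > 0]
        if valid.size == 0:
            return None
        s = valid.mean()                                    # current estimate of the scale
        j = int(np.argmax(np.where(d > 0, np.abs(d - s), -np.inf)))  # distance most unlike S
        if abs(d[j] - s) < 0.25 * s:     # d_j close enough to S: accept S (illustrative tolerance)
            return s
        if d[j] > s:                     # probably a multiple of the true spacing: discard it
            d[j] = 0.0
        elif j + 1 < d.size and d[j + 1] > 0:
            d[j + 1] += d[j]             # probably a split peak: merge with the next distance
            d[j] = 0.0
        else:
            d[j] = 0.0
    return float(valid.mean())

print(estimate_scale([3, 11, 19, 27, 34, 43, 59]))   # example local-maximum positions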
With the best scale S obtained, the system finds the X value, which is an integer multiple of S away from xmax, i.e. xstart=mod(xmax,S). Translation Δ is: Δ=S−mod((xcenter−xstart),S). Here, since the size of the middle region is width·0.6=76.8,
in this example.
Next, the system obtains an initial homography matrix. This step obtains a homography matrix, H, that describes the affine transform. The homography matrix transforms a point in the coordinate system of the grid, i.e. the H, V coordinate system, to a point in the coordinate system of the image, i.e. the X, Y coordinate system (see
Given the rotation, scale and translation obtained, the homography matrix describing the affine transform is:
In the next step, the system uses H as an initial value to obtain a homography matrix that describes a perspective transform from grid to image. That is, grid lines drawn in image may not be evenly spaced or parallel anymore. Instead, they may appear to converge to a vanishing point.
Symbol Segmentation
The next step refines the initial homography matrix by fitting effective EIC pattern to the affine transformed grid lines. The output is a homography matrix H that transforms lines from H, V to X, Y.
First, one can find the relationship between the homography matrix H that transforms lines from H, V to X, Y and the one H that transforms points from H, V to X, Y.
In X, Y coordinate system, a line L can be represented as:
x cos θ+y sin θ+R=0,
where θ is the angle between the normal line of L and the X axis, −R is the distance from the origin to line L. See
Given this representation, the distance from any point (x1,y1) to line L is: D=x1 cos θ+y1 sin θ+R.
In other words, a line can be represented as:
When c2+s2=1, distance of any point (x1,y1) to the line is:
The system uses these representations to represent grid lines in image. Suppose a grid line in the H, V coordinate system is
In X, Y coordinate system, the same line is
Since
this leaves,
Hence (H−1)t transforms a line from H, V to X, Y and therefore H=(H−1)t.
The homography matrix obtained from EIC pattern feature extraction gives an initial value of H, i.e.
The system may refine H by the least squares regression. In H, V coordinate system, the grid lines can be represented as:
h·0+v·1+kih=0
h·1+v·0+kiv=0
where kih and kiv are indexes of the grid lines along the H and V directions respectively (one can refer to these as H and V lines, respectively), and are positive or negative integers.
Suppose
Then in X, Y coordinate system, the H and V lines are:
cih·x+sih·y+Rih=0,
civ·x+siv·y+Riv=0,
where
are scalars that make (cih)2+(sih)2=1 and (civ)2+(siv)2=1.
Now, given the grid lines represented in the X, Y coordinate system, the system looks for all effective EIC pattern pixels close enough to each line. These effective EIC pattern pixels can be used to refine the lines.
If an effective EIC pattern pixel is within a distance of D to a line, it is considered associated with the line. See
That is, for every pixel (x,y), where x=0, 1, 2, . . . , 127, y=0, 1, 2, . . . , 99, If
EIC Pattern Mask (x,y)=1 and abs(cih·(x−xC)+sih·(y−yC)+Rih)<D,
then, (x,y) is considered associated with the i-th H line. If
EIC Pattern Mask (x,y)=1 and abs(civ·(x−xC)+siv·(y−yC)+Riv)<D,
then (x,y) is considered associated with the i-th V line. Again, (xC, yC) are coordinates of the image center in the pixel index coordinate system, and xC=63.5, yC=49.5.
Suppose one has identified that effective EIC pattern pixels (xijh,yijh) are associated with the i-th H line, where i=1, 2 . . . , mh, j=1, 2, . . . , mih. mh is the total number of H lines in the image and mih is the total number of effective EIC pattern pixels associated with the i-th H line. Effective EIC pattern pixels (xijv,yijv) are associated with the i-th V line, where i=1, 2, . . . , mv, j=1, 2, . . . , miv. mv is the total number of V lines in the image and miv is the total number of effective EIC pattern pixels associated with the i-th V line.
Next, one wants to find the optimal homography matrix H, such that it minimizes the distance between the effective EIC pattern pixels and their associated lines, i.e. one wants to minimize
where
i.e. N is the total number of effective EIC pattern pixels associated with all the lines, γijh and γijv are weights. In one implementation, γijh=1 and γijv=1.
Define:
one may re-write g(H) as:
After H is initialized, to get more accurate estimates of H, suppose one wants to update the current H by δH, then the increment δH should minimize g(H+δH).
Since
gijh(H+δH)≈gijh(H)+(∇gijh)tδH, gijv(H+δH)≈gijv(H)+(∇gijv)tδH,
one has,
By making the above variation 0 in order to minimize g(H+δH), one has:
After solving for δH, if g(H+δH)<g(H), one may update H, i.e. H=H+δH. If g(H+δH)≧g(H), the system stops and the last H is the final H.
Given the new H, one can repeat the process outlined in this section by updating the points that are associated with the updated grid lines. This process continues until it converges or until a maximum number of iterations has been performed, for instance 30 iterations.
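One way to realize this refinement without deriving the closed-form update is to hand the point-to-line distances to a generic least-squares solver. The sketch below is a simplified stand-in for the recursive update described above, not the exact algorithm: it holds the pixel-to-line associations fixed during the solve, parameterizes the point-transform homography by its first eight entries (with the last entry fixed to 1), and assumes pixel coordinates are already expressed relative to the image center.

import numpy as np
from scipy.optimize import least_squares

def line_residuals(params, h_lines, v_lines):
    # params: first 8 entries of the 3x3 point-transform homography H (H[2,2] fixed to 1).
    # h_lines / v_lines: lists of (k, pts) where k is the grid-line index and pts is an
    # (n, 2) array of associated pixel coordinates in the X, Y (image-center) system.
    H = np.append(params, 1.0).reshape(3, 3)
    H_line = np.linalg.inv(H).T                 # transforms line coefficients from H, V to X, Y
    res = []
    for lines, base in ((h_lines, np.array([0.0, 1.0, 0.0])),
                        (v_lines, np.array([1.0, 0.0, 0.0]))):
        for k, pts in lines:
            l = H_line @ (base + np.array([0.0, 0.0, float(k)]))
            l = l / np.hypot(l[0], l[1])        # make (c, s) a unit normal
            res.extend(l[0] * pts[:, 0] + l[1] * pts[:, 1] + l[2])   # point-to-line distances
    return np.asarray(res)

def refine_homography(H_init, h_lines, v_lines):
    # Refine the affine initial guess into a perspective transform by least squares.
    x0 = np.asarray(H_init, dtype=float).ravel()[:8]
    fit = least_squares(line_residuals, x0, args=(h_lines, v_lines))
    return np.append(fit.x, 1.0).reshape(3, 3)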
Finally, convert homography matrix for line transform (H) to homography matrix for point transform (H):
H=(Ht)−1.
This is referred to as the homography matrix HGrid→Image, which is the final output of EIC pattern analysis.
Further Details for Homography Computation
The following provides further details for fast homography computation by reusing the intermediate computation results. The summations may be computed directly as written above. Alternatively, one can reduce repeated computation by reformulating the summations in the following manner.
The above variables can be computed and stored so that they can be reused. Some of the variables are computed from previously defined ones. This makes the computation efficient.
Further define:
ωijh=xijhφ123,ih+yijhφ223,ih+φ323,ih, ωijv=xijvφ113,iv+yijvφ213,iv+φ313,iv,
then
gijh(H)=√ρih·ωijh, gijv(H)=√ρiv·ωijv.
Here, one may want to express the gradients in (*) with the variables defined in (1)˜(4) so that repeated computation can be minimized.
First, the components of ∇gijh and ∇gijv can be represented explicitly:
Therefore, at this point, two observations are made:
Further, one can reduce the computation by additional efforts.
The remaining terms are not described herein because they are zeros. Moreover, certain other terms are not described either, because they can be obtained by multiplying kih or kiv with terms given above, according to the previous observations.
The other entries are not shown, as they are either 0 or differ from one of the above entries by kih or kiv.
One observation of the long list of variables from equations (1) to (4) is that the complexity of update only depends on the number of lines. This reduces the complexity of the processes that are performed to determine the actual mapping of the image on an objective plane to the captured image.
Aspects of the present invention may be applied to other environments as well. For example, one may capture images with a camera having different levels of magnification. With each new level of magnification (or range of levels as determined by a developer), a homography between a captured image and an expected image may be determined. Here, the camera may photograph or otherwise obtain an image having roughly two sets of parallel lines. The sets of parallel lines may be used to determine a homography for a given level of magnification. For instance, the homography matrix may be stored in a memory of the camera and/or memory in a lens (if the lens has a memory) to be used to adjust a received image. As a user then changes the magnification for the camera, a different homography matrix may be applied to the received image to transform it into a better image, reducing distortion present in the lens system and/or imaging system of the camera.
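As a hypothetical sketch of this idea (the lookup table, magnification keys, and 3x3 matrices below are illustrative assumptions, not part of the described system), a stored homography could be applied to points of a received image as follows:

```python
import numpy as np

# Hypothetical per-magnification homographies, e.g. calibrated from the two
# sets of parallel lines mentioned above (matrix values are placeholders).
HOMOGRAPHY_BY_MAGNIFICATION = {
    1.0: np.eye(3),
    2.0: np.array([[1.01, 0.00, -0.5],
                   [0.00, 1.02, -0.3],
                   [0.00, 0.00,  1.0]]),
}

def correct_point(x, y, magnification):
    """Map an image point through the homography stored for the current
    magnification level (the nearest stored level is used)."""
    level = min(HOMOGRAPHY_BY_MAGNIFICATION, key=lambda m: abs(m - magnification))
    H = HOMOGRAPHY_BY_MAGNIFICATION[level]
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]
```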
Symbol Recognition
Next, the symbol recognition process of step 713 of
Next, the EIC bit extraction step 5113 extracts bits from the rotated EIC dots 5111 as received through data in 5114. EIC bits 5116 are output through data out 5115. Optionally, the updated homography matrix 5112 may be output as homography matrix 5117, now containing orientation information.
One objective of EIC symbol recognition is to obtain EIC bits encoded in EIC symbols and obtain a homography matrix HSymbol→Image, which transforms every point in the section of EIC symbol array encompassing the image to a point in the image plane.
EIC symbol recognition includes the following components.
The EIC dot detection 5104 is used to detect black dots at EIC data dot positions and the orientation of the dot positions on each edge of the grid cells in image. Dot detection depends on relative (instead of absolute) gray levels of positions on each edge. This increases the robustness of dot detection.
EIC symbol orientation determination step 5108 is used to determine which grid cells correspond to EIC symbols and the correct orientation of the symbols by counting the number of detected black dots at orientation dot positions given different assumptions. The assumption under which the total count is the smallest is accepted; of course, other criteria may be used. The section of EIC symbol array encompassing the image is determined. A homography matrix HSymbol→Image, which describes the transformation of every point in the section to a point in the image plane, is obtained.
The EIC bit extraction step 5113 is used to extract bits based on the position of the black dots in EIC symbols.
Bit representation on each edge is a Gray code, i.e. only one bit changes from one position to the next, for example, 00, 01, 11, 10. Here, the Gray code minimizes the number of error bits.
From EIC pattern analysis, HGrid→Image is obtained, with which grid lines in image are obtained. Grid cells thus obtained are effective EIC symbols. Given effective EIC symbols, the next step is to recognize the symbols. The goal of EIC symbol recognition is to obtain bits encoded in EIC symbols and obtain a homography matrix HSymbol→Image, which describes the transformation from the section of EIC symbol array encompassing the image to image. Input of EIC symbol recognition is homography matrix obtained from EIC pattern analysis HGrid→Image, normalized image, and document content mask. Example input to EIC symbol recognition is shown in
The EIC symbol recognition system shown in
EIC Dot Detection
The EIC-dot-detection module 5104 detects black dots on each edge. First, the origin of H, V is moved to get the H′, V′ coordinate system. By moving the origin of H, V, all grid intersections in the image have non-negative coordinates. The new coordinate system is called H′, V′, as shown in
Suppose C′ has coordinates (h′,v′) in H, V coordinate system. After moving, its coordinates are (0, 0).
Suppose the homography matrix obtained from EIC pattern analysis is:
the homography matrix that transforms a point in the H′, V′ coordinate system to a point in the X, Y coordinate system is:
This homography matrix is referred to herein as the final HGrid→Image.
With homography matrix HGrid→Image all the grid lines in image are obtained (by transforming the grid lines in EIC symbol array using the homography matrix) and form the H′, V′ coordinate system, as shown in
These grid lines are referred to as H lines and V lines. Grid cells are indexed by the H′, V′ coordinates of the top corner of the cell. Edges of the cells are identified as either on the H lines or on the V lines. For example, in
Next, gray levels are obtained of selected positions on each edge. For EIC symbol 8-a-16, for example, there are 5 EIC dot positions on each edge, as shown in
Gray levels of the 5 positions on each edge, as shown in
For each position s on each edge (i, j) on the V line, where s=1, 2, . . . , 5, i=0, 1, . . . , Nh, j=0, 1, . . . , Nv−1, the H′, V′ coordinates are:
Next, with the homography matrix HGrid→Image, coordinates of the positions in the X, Y coordinate system are obtained by applying HGrid→Image to the homogeneous H′, V′ coordinates of each position. For each position s on each edge (i, j) on the H line, where s=1, 2, . . . , 5, i=0, 1, . . . , Nh−1, j=0, 1, . . . , Nv, the X, Y coordinates are: (xsh,i,j,ysh,i,j,1)t=HGrid→Image·(h′,v′,1)t, where (h′,v′) are the H′, V′ coordinates of the position.
For each position s on each edge (i,j) on the V line, where s=1, 2, . . . , 5, i=0, 1, . . . , Nh, j=0, 1, . . . , Nv−1, the X, Y coordinates are: (xsv,i,j,ysv,i,j,1)t=HGrid→Image·(h′,v′,1)t.
Gray levels of the positions are calculated using bilinear sampling of the pixels surrounding the positions. Other sampling approaches may be used. For each position s on edge (i, j) on the H line, where s=1, 2, . . . , 5, i=0, 1, . . . , Nh−1, j=0, 1, . . . , Nv, get the index of the first pixel for bilinear sampling: x1=int(xsh,i,j+63.5), y1=int(ysh,i,j+49.5).
If
0≦x1≦126
0≦y1≦98
Document Content Mask (x1,y1)=0
Document Content Mask (x1+1,y1)=0
Document Content Mask (x1,y1+1)=0
Document Content Mask (x1+1,y1+1)=0
then the position is valid, and
ηx=decimal(xsh,i,j+63.5)
ηy=decimal(ysh,i,j+49.5)
Gsh,i,j=(1−ηy)·[(1−ηx)·G(x1,y1)+ηx·G(x1+1,y1)]+ηy·[(1−ηx)·G(x1,y1+1)+ηx·G(x1+1,y1+1)];
else, the position is not valid, and
Gsh,i,j=null.
The function decimal(x) returns the decimal fraction part of x, where x≧0. For example, decimal(1.8)=0.8. (x1,y1), (x1+1,y1), (x1,y1+1) and (x1+1,y1+1) are indexes of the pixels used for bilinear sampling, defined by the coordinate system shown in
Similarly, for each position s on edge (i, j) on the V line, where s=1, 2, . . . , 5, i=0, 1, . . . , Nh, j=0, 1, . . . , Nv−1, get the index of the first pixel for bilinear sampling:
x1=int(xsv,i,j+63.5)
y1=int(ysv,i,j+49.5)
If
0≦x1≦126
0≦y1≦98
Document Content Mask (x1,y1)=0
Document Content Mask (x1+1,y1)=0
Document Content Mask (x1,y1+1)=0
Document Content Mask (x1+1,y1+1)=0
then the position is valid, and
ηx=decimal(xsv,i,j+63.5)
ηy=decimal(ysv,i,j+49.5)
Gsv,i,j=(1−ηy)·[(1−ηx)·G(x1,y1)+ηx·G(x1+1,y1)]+ηy·[(1−ηx)·G(x1,y1+1)+ηx·G(x1+1,y1+1)];
else, the position is not valid, and
Gsv,i,j=null.
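The sampling rule above can be summarized with the following sketch. The gray-level image G and the document content mask are assumed to be indexed as [x][y] with x in 0..127 and y in 0..99 (pixel index coordinates); this is an illustration of the bilinear formula, not the actual implementation.

```python
def decimal(v):
    """Return the decimal fraction part of v, where v >= 0; e.g. decimal(1.8) = 0.8."""
    return v - int(v)

def sample_gray_level(x, y, G, mask):
    """Bilinear sampling of the gray level at (x, y) in the X, Y coordinate system.

    Returns None when the position is not valid (outside the image or when
    any of the four sampling pixels is covered by document content).
    """
    x1, y1 = int(x + 63.5), int(y + 49.5)
    if not (0 <= x1 <= 126 and 0 <= y1 <= 98):
        return None
    if any(mask[x1 + dx][y1 + dy] for dx in (0, 1) for dy in (0, 1)):
        return None
    ex, ey = decimal(x + 63.5), decimal(y + 49.5)
    return ((1 - ey) * ((1 - ex) * G[x1][y1] + ex * G[x1 + 1][y1]) +
            ey * ((1 - ex) * G[x1][y1 + 1] + ex * G[x1 + 1][y1 + 1]))
```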
Next, black dots are detected.
Based on the relative gray levels of the positions, black dots are determined. First, the five positions on each edge are named as follows (see
For each edge, let the count of valid positions be VDk,i,j, where k=h, v. If there are at least two valid positions on an edge, i.e. VDk,i,j≧2, let u1 be the valid position with the smallest gray level and u2 be the valid position with the second smallest gray level,
i.e., u1 is the darkest position and u2 is the second darkest position. If the gray level difference between the darkest and the second darkest position is large enough, i.e. exceeds a threshold (e.g., T0=20), the darkest position is considered a black dot.
For each edge (i, j) on the H line, where i=0, 1, . . . , Nh−1, j=0, 1, . . . , Nv and mod(i+j,2)=0,
If (Gu2h,i,j−Gu1h,i,j)>T0, then,
heu1i,j=1
hesi,j=0, where s=1, 2, . . . , 5 and s≠u1
Dsh,i,j=hesi,j
Diffh,i,j=Gu2h,i,j−Gu1h,i,j
else,
hesi,j=0, where s=1, 2, . . . , 5
Dsh,i,j=null
Diffh,i,j=null
For each edge (i, j) on the H line, where i=0, 1, . . . , Nh−1, j=0, 1, . . . , Nv and mod(i+j,2)=1,
If (Gu2h,i,j−Gu1h,i,j)>T0, then,
hou1i,j=1
hosi,j=0, where s=1, 2, . . . , 5 and s≠u1
Dsh,i,j=hosi,j
Diffh,i,j=Gu2h,i,j−Gu1h,i,j
else,
hosi,j=0, where s=1, 2, . . . , 5
Dsh,i,j=null
Diffh,i,j=null
For each edge (i, j) on the V line, where i=0, 1, . . . , Nh, j=0, 1, . . . , Nv−1 and mod(i+j,2)=0,
If (Gu2v,i,j−Gu1v,i,j)>T0, then,
veu1i,j=1
vesi,j=0, where s=1, 2, . . . , 5 and s≠u1
Dsv,i,j=vesi,j
Diffv,i,j=Gu2v,i,j−Gu1v,i,j
else,
vesi,j=0, where s=1, 2, . . . , 5
Dsv,i,j=null
Diffv,i,j=null
For each edge (i, j) on the V line, where i=0, 1, . . . , Nh, j=0, 1, . . . , Nv−1 and mod(i+j,2)=1,
If (Gu2v,i,j−Gu1v,i,j)>T0, then,
vou1i,j=1
vosi,j=0, where s=1, 2, . . . , 5 and s≠u1
Dsv,i,j=vosi,j
Diffv,i,j=Gu2v,i,j−Gu1v,i,j
else,
vosi,j=0, where s=1, 2, . . . , 5
Dsv,i,j=null
Diffv,i,j=null
By now, substantially all of the black dots are detected. hesi,j, hosi,j, vesi,j and vosi,j will be used to determine which grid cells correspond to EIC symbols and the correct orientation of the symbols. Dsh,i,j and Dsv,i,j will be used for bit extraction.
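A compact sketch of the per-edge decision just described follows. Gray levels that are None are treated as invalid, T0=20 follows the example threshold above, and the bookkeeping of the separate he/ho/ve/vo arrays is omitted; this is illustrative only, not the actual implementation.

```python
T0 = 20  # example threshold from the text

def detect_black_dot(gray_levels):
    """gray_levels: list of 5 gray levels for one edge (None for invalid positions).

    Returns (dots, diff): dots is a list of 5 values with 1 at the detected
    black-dot position (or None if no dot is detected), and diff is the
    gray-level margin, used later as a confidence value.
    """
    valid = [(g, s) for s, g in enumerate(gray_levels) if g is not None]
    if len(valid) < 2:                      # need at least two valid positions
        return None, None
    valid.sort()                            # darkest (smallest gray level) first
    (g1, u1), (g2, _) = valid[0], valid[1]
    if g2 - g1 > T0:                        # darkest clearly darker than the rest
        dots = [0] * 5
        dots[u1] = 1
        return dots, g2 - g1
    return None, None
```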
EIC Symbol Orientation Determination
Now that the black dots are detected, the EIC-symbol-orientation-determination module 5108, which accepts EIC dots 5107 as input, determines which grid cells correspond to EIC symbols and which direction is the correct orientation of the symbols, as illustrated in
The orientation dot positions are designed to help to determine the correct orientation of a symbol. When EIC symbols are rotated, the locations of the orientation dot positions are different, as illustrated in FIGS. 60A-D.
Since there should be no black dots at orientation dot positions, the total number of detected black dots at orientation dot positions assuming no rotation, rotated 90 degrees clockwise, rotated 180 degrees clockwise, and rotated 270 degrees clockwise, can be obtained. The assumption (of a correct orientation) is accepted if the total count under the assumption is the smallest.
Therefore, the EIC-symbol-orientation-determination module first obtains the total number of black dots at orientation dot positions under different assumptions about which grid cells correspond to EIC symbols and the correct orientation of the symbols. Then, based on the smallest count, which grid cells correspond to EIC symbols and the correct orientation of the symbols are determined.
The section of EIC symbol array encompassing the image, i.e. the X′, Y′ coordinate system discussed above in connection with
Finally, given HSymbol→Grid and HGrid→Image obtained from EIC pattern analysis, a homography matrix HSymbol→Image, which describes the transformation from the section of EIC symbol array encompassing the image to image, i.e. from the X′, Y′ coordinate system to the X, Y coordinate system, is obtained.
The total number of black dots at orientation dot positions is determined as follows.
Here Qi, where i=0, 1, . . . , 7, represent the total number of detected black dots at orientation dot positions, given different assumptions about which grid cells correspond to EIC symbols and the correct orientation of the symbols.
Q0 is the total number of detected black dots at orientation dot positions if grid cell (i, j) is a symbol and (i, j) is the top corner of the symbol (assuming mod(i+j,2)=0, see
Q4 is the total number of detected black dots at orientation dot positions if grid cell (i+1, j) is a symbol, and (i+1, j) is the top corner of the symbol. Q5 is the total number of detected black dots at orientation dot positions if grid cell (i+1, j) is a symbol, and (i+2, j) is the top corner of the symbol. Q6 is the total number of detected black dots at orientation dot positions if grid cell (i+1, j) is a symbol, and (i+2, j+1) is the top corner of the symbol. Q7 is the total number of detected black dots at orientation dot positions if grid cell (i+1, j) is a symbol, and (i+1, j+1) is the top corner of the symbol.
Next, determinations are made with respect to which grid cells correspond to EIC symbols and what the correct orientation is for the symbols.
Let O=int(j/4), where j is the index of the smallest count Qj obtained above. O represents which grid cells correspond to EIC symbols. If O=0, grid cell (0, 0) is a symbol. If O=1, grid cell (1, 0) is a symbol. See
Let Q=mod(j,4). Q represents the correct orientation of the symbols. EIC symbols in image are rotated Q·90 degrees clockwise.
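The selection logic can be sketched as follows (the counts Q[0..7] are assumed to have been accumulated as described above; illustration only):

```python
def determine_orientation(Q):
    """Q: list of 8 counts of detected black dots at orientation dot
    positions under the 8 assumptions described above.

    Returns (O, rotation_degrees): O selects which grid cells correspond to
    EIC symbols, and the symbols in the image are rotated rotation_degrees
    clockwise.
    """
    j = min(range(8), key=lambda i: Q[i])   # assumption with the smallest count
    O = j // 4                              # which grid cells correspond to symbols
    q = j % 4                               # orientation index
    return O, q * 90
```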
Next, the homography matrix, which transforms symbol to image, is obtained.
Now that which grid cells correspond to EIC symbols and the correct orientation of the symbols are known, the section of EIC symbol array encompassing the image, i.e. the X′, Y′ coordinate system 1408, can be determined. Next, the homography matrix HSymbol→Grid, which describes the transformation from X′, Y′ to H′, V′, is obtained.
First, the H″, V″ coordinate system may now be used. H″, V″ is H′, V′ rotated, with the origin moved to the corner of the grid lines that correspond to the top corner of a symbol.
When Q=0, the top corner of the H′, V′ grid lines corresponds to the top corner of a symbol. H″, V″ is the same as H′, V′. X′, Y′ is the section of EIC symbol array encompassing the image. See
When Q=1, the far right corner of the H′, V′ grid lines corresponds to the top corner of a symbol. H″, V″ is H′, V′ rotated 90 degrees clockwise, with the origin moved to the far right corner of the H′, V′ grid lines. X′, Y′ is the section of EIC symbol array encompassing the image. See
When Q=2, the bottom corner of the H′, V′ grid lines corresponds to the top corner of a symbol. H″, V″ is H′, V′ rotated 180 degrees clockwise, with the origin moved to the bottom corner of the H′, V′ grid lines. X′, Y′ is the section of EIC symbol array encompassing the image. See
When Q=3, the far left corner of the H′, V′ grids corresponds to the top corner of a symbol. H″, V″ is H′, V′ rotated 270 degrees clockwise, with the origin moved to the far left corner of the H′, V′ grid lines. X′, Y′ is the section of EIC symbol array encompassing the image. See
Let the rotation angle from H′, V′ to H″, V″ be θQ:
Let θx be the angle from H′, V′ to X′, Y′:
Let the origin of the H″, V″ coordinate system, CH″V″, have the coordinates (h′C
Let the transform from H″, V″ to H′, V′ be ΔHQ, i.e.
This results in
Now, ΔH0 is obtained. ΔH0 is the transform from X′, Y′ to H″, V″, i.e.
Let O0 be the offset in H″, V″ coordinate system. The result is
Let Nh0+1 and Nv0+1 be the total number of H and V lines in H″, V″ coordinate system. The next result is
Let the origin of the X′, Y′ coordinate system, CX′Y′ have the coordinates (h″C
Since the rotation from H″, V″ to X′, Y′ is −π/4, and the scale is √{square root over (2)} from the unit of measure in H″, V″ to X′, Y′, the result is
Therefore, the transform from X′, Y′ to H′, V′ is:
HSymbol→Grid=ΔHQ·ΔH0.
From EIC pattern analysis, HGrid→Image is obtained, i.e.
Therefore, a transform from the coordinate system of the section of EIC symbol array encompassing the image (X′, Y′ coordinate system) to the coordinate system of the image (the X, Y coordinate system), HSymbol→Image, can be obtained:
i.e.,
HSymbol→Image=HGrid→Image·HSymbol→Grid.
An output of this step is HSymbol→Image, i.e. the updated homography matrix with orientation information 5112 in
Rotated EIC Dots 5111 (i.e., D0 and Diff0) are also output of 5108 in
For each position s on edge (i, j) on the H line in H″, V″ coordinate system, where s=1, 2, . . . , 5, i=0, 1, . . . , Nh0−1, j=0, 1, . . . , Nv0,
For each position s on edge (i, j) on the V line in H″, V″ coordinate system, where s=1, 2, . . . , 5, i=0, 1, . . . , Nh0, j=0, 1, . . . , Nv0−1,
Here, 2 bits are encoded on each edge of an EIC symbol. Let Blh,i,j and Blv,i,j be the two bits, where l=0, 1.
EIC Bit Extraction
Now that it is known which grid cells correspond to EIC symbols and the correct orientation of the symbols, bits can be extracted based on the positions of black dots on each edge of a symbol. The EIC-bit-extraction module 5113 takes as input the rotated EIC dots 5111 and produces EIC bits 5116.
Bit extraction is done in H″, V″ coordinate system, i.e. EIC symbols are oriented at the correct orientation.
For each edge, if there is a black dot detected, and all 5 positions on the edge are valid, bits are extracted. Otherwise, bits are not extracted.
For each edge (i, j) on the H line in H″, V″ coordinate system, where i=0, 1, . . . , Nh0−1, j=0, 1, . . . , Nv0,
If there exists w such that D0,wh,i,j=1, where wε{1,2,3,4}, and VDh,i,j=5, then,
Similarly, for each edge (i, j) on the V line in H″, V″ coordinate system, where i=0, 1, . . . , Nh0, j=0, 1, . . . , Nv0−1, let q=mod(i+j+O0,2),
If there exists w such that D0,w+qv,i,j=1, where wε{1,2,3,4}, and VDv,i,j=5, then,
The bits extracted are B1h,i,j B0h,i,j, i.e. if the 1st position on the edge is a black dot, the bits are 00; if the 2nd position on the edge is a black dot, the bits are 01; if the 3rd position on the edge is a black dot, the bits are 11; if the 4th position on the edge is a black dot, the bits are 10. Note that 00, 01, 11, 10 is a Gray code, which ensures that the number of error bits is at most 1 if the position of the black dot is incorrect. See
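The position-to-bits mapping just described amounts to a 2-bit Gray code; a short sketch (illustrative only):

```python
# Gray code for the four data-dot positions on an edge: adjacent positions
# differ in exactly one bit, so a dot detected one position off costs at
# most one error bit.
POSITION_TO_BITS = {1: (0, 0), 2: (0, 1), 3: (1, 1), 4: (1, 0)}

def extract_edge_bits(dots):
    """dots: mapping {position: 0/1} for positions 1..4 on one edge.
    Returns (B1, B0) or None when no single black dot was detected."""
    hits = [w for w in (1, 2, 3, 4) if dots.get(w) == 1]
    return POSITION_TO_BITS[hits[0]] if len(hits) == 1 else None
```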
Here, a total of 8 bits are encoded in an EIC symbol. Each bit is a bit from an m-array (one dimension). Bits are now obtained from each dimension.
Let Bbm,n be the bit of dimension b, where b=0, 1, . . . , 7, encoded in EIC symbol (m,n), where (m,n) are the coordinates of the symbol in X′, Y′ coordinate system. Let Cbm,n be the confidence of bit Bbm,n as shown in
Note that Bbm,n is a matrix in which substantially all the bits encoded in all the EIC symbols in the section of EIC symbol array encompassing the image, are stored. Each element (m,n) in matrix Bbm,n corresponds to a square (formed by the horizontal and vertical dashed lines in
For EIC symbols not captured in image, values of the corresponding elements in Bbm,n will be null. Even if EIC symbols are captured in image, if one is unable to extract the bits encoded in the symbols, values of the corresponding elements in Bbm,n will also be null. Only when bits are extracted do the corresponding elements in Bbm,n have the value of the bits.
One can now store all the extracted bits in Bbm,n, and their confidence values in Cbm,n.
For each dimension b, where b=0, 1, . . . , 7, initialize Bbm,n and Cbm,n as:
Bbm,n=null,
Cbm,n=null.
For each bit l on edge (i, j) on H line, where i=0, 1, . . . , Nh0−1, j=0, 1, . . . , Nv0, l=0, 1, find the corresponding b, m and n, and assign values to Bbm,n and Cbm,n:
For each bit l on edge (i, j) on V line, where i=0, 1, . . . , Nh0, j=0, 1, . . . ,Nv0−1, l=0, 1, find the corresponding b, m and n, and assign values to Bbm,n and Cbm,n:
One can now normalize the confidence values. Let Cmax=max(Cbm,n), where Bbm,n≠null. The normalized confidence values are:
This completes EIC symbol recognition in accordance with embodiments of the invention. Output of EIC symbol recognition is homography matrix HSymbol→Image, which is shown as homography matrix 5117 in
Decoder and Decoding
The following describes EIC decoding in step 718 of
Multidimensional Arrays
Some codes relate to one-dimensional arrays, where each bit corresponds to a single position in the array. Various examples of the invention, however, may employ multi-dimensional arrays. With multi-dimensional arrays, each position in the array includes a group of bits. For example, in the multi-dimensional array 6900 shown in
As discussed herein, a code symbol is the smallest unit of visual representation of a location pattern. Generally, a code symbol will include the pattern data represented by the symbol. As shown in the illustrated example, one or more bits may be encoded in one code symbol. Thus, for a code symbol with 1 bit represented, the represented data may be “0” or “1”, for a code symbol representing 2 bits, the represented data may be “00”, “01”, “10” or “11.” Thus, a code symbol can represent any desired amount of data for the location pattern. The code symbol also will have a physical size. When the location pattern is, for example, printed on paper, the size of a code symbol can be measured by printed dots. For example, the illustrated code symbol is 16×16 printed dots. With a 600 dpi printer, the diameter of a printed dot will be about 0.04233 mm.
Still further, a code symbol will have a visual representation. For example, if a code symbol represents 2 bits, the visual representation refers to the number and position distribution of the black dots used to represent the data values “00”, “01”, “10” or “11”. Thus, the code symbol illustrated in
The bit values for the additional dimensions in a multidimensional array may conveniently be generated by cyclically shifting an original m-sequence to create a multidimensional m-array. More particularly, multiplying Q(x)/Pn(x) by xk will result in an m-sequence that is the k-th cyclical shift of m. That is, letting Q′(x)=xkQ(x), if the order of Q′(x) is still less than n, then the m-sequence m′ generated by Q′(x)/Pn(x) is the k-th cyclic shift of m, i.e. m′=σk(m). Here σk(m) means cyclically-shifting m to the right by k times. For example, referring to the generation of the m-sequence described in detail above, if Q2 (x)=x+x2+x3=xQ1(x), the division Q2 (x)/Pn(x) will generate an m-sequence m2=010110010001111, which is the first cyclical shift of m, i.e. m2=σ1(m1)
Accordingly, cyclically shifted m-sequences may be formed into a multidimensional m-array. That is, the first bit in each group of bits may belong to a first m-sequence. The second bit in each group may then belong to a second m-sequence that is cyclically shifted by a value k, from the first m-sequence. The third bit in each group may then belong to a third m-sequence that is cyclically shifted by a value k2 from the first m-sequence, and so on to form a multidimensional m-array.
As shown in
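As a toy illustration of cyclically shifted m-sequences (a small order-4 register is used here for brevity; this is an assumption for illustration only, since the implementation described in this document uses order 28):

```python
def lfsr_m_sequence(taps, n):
    """Generate one period (2**n - 1 bits) of an m-sequence from a Fibonacci
    LFSR of length n whose feedback taps correspond to a primitive polynomial."""
    state = [1] + [0] * (n - 1)          # any non-zero seed works
    out = []
    for _ in range(2 ** n - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

def cyclic_shift(seq, k):
    """Shift the sequence right by k positions (sigma^k in the text)."""
    k %= len(seq)
    return seq[-k:] + seq[:-k]

# Toy example: an order-4 m-sequence (period 15) and its cyclic shifts,
# which could serve as the additional dimensions of a multidimensional m-array.
m1 = lfsr_m_sequence(taps=[1, 4], n=4)
m2 = cyclic_shift(m1, 1)                 # second dimension: shift by k
m3 = cyclic_shift(m1, 2)                 # third dimension: shift by k2
```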
Decoding An M-Array
In order to determine the position of an image relative to a document using an m-array, it is important to determine the position of a bit captured in the image relative to the m-array. That is, it is necessary to determine if the bit is the first bit, second bit, etc. in the m-sequence to determine the position of the bit in the m-array.
For any number s, where 0≦s<2n−1, there exists a unique polynomial r(x)=r0+r1x+ . . . +rn−1xn−1,
whose order is less than n, such that xs≡r(x)(mod Pn(x)), and vice versa. In other words, there is a one-to-one relationship between s and r(x). Thus, xs/Pn(x) and r(x)/Pn(x) will generate the same m-sequence. For convenience, setting Q(x)=1, m can be assumed to be the m-sequence generated by 1/Pn(x). If a bit is the s′-th bit of m, where 0≦s′<2n−1, the m-sequence that starts from that bit is R=σ−s′(m)=σ2n−1−s′(m).
As previously noted, there exists r(x)=r0+r1x+ . . . +rn−1xn−1
that satisfies r(x)≡xs(mod Pn(x)). R also corresponds to the division r(x)/Pn(x). Letting m=(m0 m1 . . . mi . . . m2n−2)t denote the m-sequence written as a vector,
With R corresponding to the division r(x)/Pn(x), and σi(m) corresponding to xi·1/Pn(x), then,
Rt=rt{circumflex over (M)}
where R is the m-sequence that starts from the s′-th bit of m, r=(r0 r1 r2 . . . rn-1)t are the coefficients of r(x), and
Again, the addition and multiplication operations are binary operations, i.e. addition is XOR and multiplication is AND.
If an image captures K bits b=(b0 b1 b2 . . . bK−1)t of m (K≧n), and the relative distances between the positions of the bits in the m-sequence are: si=d(bi,b0), where i=0, 1, . . . , K−1 and s0=0, selecting the si+1-th bits of R and the si+1-th columns of M will result in:
bt=rtM
where bt is the transpose of b, M is a sub-matrix of {circumflex over (M)} and consists of the si+1-th columns of {circumflex over (M)}, where i=0, 1, 2, . . . , K−1.
If M is a non-degenerate matrix and b does not contain error bits, then r can be solved by selecting n bits from b by solving for:
rt={tilde over (b)}t{tilde over (M)}−1
where {tilde over (M)} is any non-degenerate n×n sub-matrix of M, and {tilde over (b)} is the corresponding sub-vector of b consisting of the selected n bits.
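A minimal sketch of solving rt={tilde over (b)}t{tilde over (M)}−1 over GF(2) by Gaussian elimination follows (illustrative only; the actual module operates on the bM_Matrix layout described later in this document):

```python
def solve_gf2(M, b):
    """Solve r^t M = b^t over GF(2), i.e. M^t r = b.

    M: n x n list of 0/1 rows (a non-degenerate sub-matrix of the coefficient
    matrix), b: list of n bits. Returns r as a list of bits, or None if the
    chosen sub-matrix is degenerate (in which case other bits must be chosen).
    """
    n = len(b)
    # Build the augmented system [M^t | b].
    A = [[M[c][r] for c in range(n)] + [b[r]] for r in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col]), None)
        if pivot is None:
            return None
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [x ^ y for x, y in zip(A[r], A[col])]   # XOR = GF(2) addition
    return [A[r][n] for r in range(n)]
```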
Stochastic Decoding of an M-Array
In most cases, however, an image cannot capture a set of bits b that does not contain error bits. For example, improper illumination, document content, dust and creases can all obscure the visual representation of bits in an image, preventing these bits from being recognized or causing the value of these bits to be improperly recognized. The solution of r becomes difficult when there are error bits in b. Further, decoding becomes even more difficult because the coefficient matrix M is not fixed when the pen moves, changing the image from frame to frame. Moreover, the structure of M is irregular. Therefore, traditional decoding algorithms cannot effectively be applied to solve r under practical circumstances.
To address these difficulties, various embodiments of the invention provide stochastic solution techniques that provide a high decoding accuracy under practical conditions. As will be described in more detail, these techniques solve the equation bt=rtM incrementally so that many solution candidates are readily available without having to solve this equation exactly.
According to various examples of the invention, n independent bits (i.e., the sub-matrix consisting of the corresponding columns of M is non-degenerate) are randomly selected from the group of bits b that are captured in an image of a document. Supposing that b(0) are the n bits chosen, a solution for r can then be obtained as:
[r(0)]t=[b(0)]t[M(0)]−1
where M(0) contains the corresponding columns of the array M for the chosen bits.
For simplicity, the n bits chosen from b to make up b(0) can be moved to the beginning of b, with the remaining bits moved to the end of b. This leads to the relationship
([b(0)]t, [{overscore (b)}(0)]t)=[r(0)]t(M(0),{overscore (M)}(0))+(0nt,[e(0)]t)
where b(0) are the chosen n bits, {overscore (b)}(0) are the remaining bits from the set b, M(0) is the corresponding columns of M for the chosen bits, {overscore (M)}(0) is the corresponding columns of M for the remaining bits, 0nt=(0 0 . . . 0)1×n, [r(0)]t=[b(0)]t[M(0)]−1, and [e(0)]t=[{overscore (b)}(0)]t+[r(0)]t{overscore (M)}(0).
The value (0nt,[e(0)]t) refers to the “difference vector” between ([b(0)]t,[{overscore (b)}(0)]t) and [r(0)]t(M(0),{overscore (M)}(0)), or simply the difference vector of r(0), and the number of 1's in (0nt,[e(0)]t) is called the number of different bits. The vector containing different bits between ([b(0)]t,[{overscore (b)}(0)]t) and [r(0)]t(M(0),{overscore (M)}(0)) alternatively can be identified as D(0). If D(0)=(0nt,[e(0)]t), then the number d(0) of 1's in D(0) is d(0)=HammingWeight(D(0))=HammingWeight(e(0)). That is, d(0) is the number of different bits between ([b(0)]t, [{overscore (b)}(0)]t) and [r(0)]t(M(0), {overscore (M)}(0)).
Next, some of the chosen n bits from the set b are switched with some of the remaining bits from the set b. In particular, J bit pairs (kj,lj) are switched between the originally chosen n bits and the remaining bits from the set of bits b, where k1≠k2≠ . . . ≠kJ≦n, n<l1≠l2≠ . . . ≠lJ≦K. It should be noted that the bit order is redefined in ([b(0)]t,[{overscore (b)}(0)]t), and these bits are not maintained in their original order. The relationship between the bits before and after switching is:
[e(1)]t=[e(0)]t+[e(0)]tEl-n[PRj(0)]−1(EktP(0)+El-nt).
[r(1)]t=[r(0)]t+[e(0)]tEl-n[PRj(0)]−1Ekt[M(0)]−1.
P(1)=P(0)+(Ek+P(0)El-n)[PRj(0)]−1(EktP(0)+El-nt).
[M(1)]−1=[M(0)]−1+(Ek+P(0)El-n)[PRj(0)]−1Ekt[M(0)]−1.
where
Ek=(ek
If the choice of (kj,lj) is to make:
[e(0)]tEl-n[PR
where 1jt=(1 1 . . . 1)1×J, then
[e(1)]t=[e(0)]t+1jt(EktP(0)+El-nt)
[r(1)]t=[r(0)]t+1jtEkt[M(0)]−1.
In view of [e(0)]tEl-n[PR
With the above choice of l1,l2, . . . ,lJ, the number of different bits in e(i+1) is:
Thus, the decoding steps can be summarized as follows. First, an independent n-bit combination is generated from the group of bits b captured in an image. It should be noted that, with various embodiments of the invention, the selection of the n-bits can be combined with bit recognition confidence techniques, to help ensure that the most accurately recognized bits are selected for the n-bit combination.
Next, the relationship ([b(0)]t,[{overscore (b)}(0)]t)=[r(0)]t(M(0),{overscore (M)}(0))+(0nt,[e(0)]t) is solved to determine d(0)=HammingWeight(D(0))=HammingWeight(e(0)). If the number of different bits d(0) is 0, then the process is stopped and the solution r(0) is output. Otherwise, all J (J=1 and 2) bit pairs are switched, and the number of different bits d is again determined using the relationship d=HammingWeight([e(0)]t+1JtEktP(0))+J. It should be noted, however, that this relationship can only be evaluated when the rank of EktP(0)El-n is J. In this case there is no need to specify l1,l2, . . . ,lJ. Next, the minimal number d of different bits is determined.
The above process has to be repeated several times in order to ensure a sufficiently high probability of successful decoding. To estimate the number of times the n bits b(0) are selected from b, the number of error bits in b is first predicted to be d. If this number is changed, then
is computed, which is the probability that the chosen n bits contain s error bits, where
is the combinatory number,
is the probability that the chosen n bits contain fewer than s+1 error bits. In practice, s=2 in order to minimize the computation load. Next, s2 is computed, such that 1−(1−P2)s2
Decoding Using “Bit-Flipping”
While the above-described technique can be used to determine the number of a bit in an m-sequence, this technique can be further simplified using “bit-flipping.” As used herein, the term “bit flipping” refers to changing a bit with a value of “1” to a new value of “0,” or changing a bit with a value of “0” to a new value of “1.”
Supposing [b(1)]t is [b(0)]t with J bits flipped, and the J bits are the ki-th bits of [b(0)]t, where i=1, 2, . . . , J, 1≦k1<k2< . . . <kJ≦n, then the relationship
[r(1)]t=[b(1)]t[M(0)]−1
can be used to solve for a new r. It can be proven that:
([b(1)]t,[{overscore (b)}(0)]t)=[r(1)]t(M(0),{overscore (M)}(0))+(EJ,[e(0)]t+EJP(0))
and
[r(1)]t=[r(0)]t+EJ[M(0)]−1
where
Now, D(1)=(EJ,[e(0)]t+EJP(0)), and the number of different bits d(1) is: d(1)=HammingWeight(D(1))=HammingWeight([e(0)]t+EJP(0))+J. If d(1)<d(0), then r(1) is a better solution of r than r(0).
The vector r is referred to as a location vector. Since division xs/Pn(x) and division r(x)/Pn(x) generates the same m-sequence R, once r, i.e. the coefficients of r(x), is solved, s can be obtained by using a discrete logarithm. Therefore, s′, the location of R in the original m-sequence m, can be obtained. Methods for solving a discrete logarithm are well known in the art. For example, one technique for solving a discrete logarithm is described in “Maximal and Near-Maximal Shift Register Sequences: Efficient Event Counters and Easy Discrete Logarithms,” Clark, D. W. and Weng, L-J., IEEE Transactions on Computers, 43(5), (1994), pp. 560-568, which is incorporated entirely herein by reference.
Thus, this simplified decoding process can be summarized by the following steps. First, n independent bits b(0) are randomly selected from the total set of bits b captured in an image of a document. The n bits may be randomly selected using, for example, Gaussian elimination. Once the n bits are selected, then the relationship ([b(0)]t,[{overscore (b)}(0)]t)=[r(0)]t(M(0),{overscore (M)}(0))+(0nt,[e(0)]t) is solved to determine r. If the HammingWeight value d(0) is 0, then the value of r is output and used to determine s′ as described above, giving the position of this bit in the document.
If the value d(0) is not 0, then J bits of the chosen n bits are flipped, where 1≦J<n, and the number of different bits using the equation d(1)=HammingWeight([e(0)]t+EJP(0))+J is computed. Next, another set of n independent bits is selected, and the process is repeated. The new b(0) is different from all previous sets. Finally, the value of r is output that corresponds to the smallest d, i.e. the least number of different bits. In various implementations of the invention, up to two bits are flipped, and b(0) is only selected once.
Tool for Decoding an M-Array
This process is performed N times, where N is the number of dimensions (for instance, N=8 here), as shown by steps 6719 and 6703, and then ends in step 6721.
Also,
Coefficient Matrix M Preparation
In order to solve for r as discussed above, the arrays b and M are configured. First, all of the bits extracted for one dimension are stored in a matrix called Extracted_Bits_Array. For dimension b, where b=0, 1, . . . , 7, the Extracted_Bits_Array (m, n)=Bbm,n. As illustrated in
Once an Extracted_Bits_Array is created for a dimension, the total number of non-FF bits is counted. If the number is fewer than n, where n is the order of the m-array (in the illustrated example, n=28), then too few bits have been obtained to decode the array, and the decoding fails for this dimension. If the number is more than 2n, up to the 2n bits that have the highest recognition confidence values are kept, and “FF” is assigned to all other elements in the Extracted_Bits_Array.
In the illustrated example, it should be noted that the size of Extracted_Bits_Array is 20×20. This size is considered large enough to account for all possible positions of the extracted bits for a pattern encoded using an 8-a-16 symbol. That is, given the 128×100 pixel image sensor and the size of the symbol 8-a-16, a size 20×20 matrix is considered large enough to hold the bits in the image, regardless of how the image is rotated.
To obtain M, the coefficient matrix M preparation module 703 creates a matrix called M_Const_Matrix as a constant table. The size of M_Const_Matrix is the same as the size of Extracted_Bits_Array, i.e. 20×20 in the illustrated implementation. The M_Const_Matrix table is constructed in the following manner. For every i and j, where 1≦i≦20, 1≦j≦20,
M(i,j)T=(A(i,j),A(i+1,j+1), . . . ,A(i+26,j+26),A(i+27,j+27))T
where A(i,j) is element (i,j) of the m-array based on the m-sequence m.
Next, the bM matrix preparation module 6805 constructs matrix bM_Matrix to contain b and M. For every non-FF bit in the Extracted_Bits_Array, the bM matrix preparation module 6805 places the bit in the last column of bM_Matrix. Next, the corresponding element in M_Const_Matrix is retrieved (which is a vector), and that element is placed in the first n columns of the same row of bM_Matrix. With various examples of the invention, the bM matrix preparation module 6805 may reorder the rows of bM_Matrix according to the recognition confidence of the corresponding bits, from highest to lowest.
Stochastic Decoding
Next, the stochastic decoder module 6807 obtains a solution for r. More particularly, a first solution for r may be obtained with Gaussian elimination. In the bM_Matrix, through Gaussian elimination, n linearly independent bits are selected to solve for r. The process proceeds as follows. In bM_Matrix, starting from the first row down, a row is located that has a “1” in the first column. If it is not the first row of bM_Matrix, the row is switched with the first row of bM_Matrix. Next, in the bM_Matrix, the new first row (with a “1” in the first column) is used to perform an XOR operation with all the remaining rows that have a “1” in the first column, and the result of the operation replaces the value of the original row. Now, all of the rows in bM_Matrix have a “0” in the first column except the first row, which has a “1” in the first column.
Next, starting from the second row down in the bM_Matrix, a row is identified that has a “1” in the second column. If it is not the second row of the bM_Matrix, this row is switched with the second row of bM_Matrix. In bM_Matrix, the new second row (with a “1” in the second column) is used to perform an XOR operation with all the remaining rows (including the first row of bM_Matrix) that have a “1” in the second column, letting the result replace the original value for the row. Now, all the rows in bM_Matrix have a “0” in the second column except the second row which has a “1” in the second column. This process continues until there is a “1” along the diagonal of the first n rows of bM_Matrix, as shown in
The first n rows of bM_Matrix correspond to the n bits selected for solving r, i.e. b(0) as described above. The rest of the rows of bM_Matrix correspond to the rest of the bits, i.e. {overscore (b)}(0) also described above. Further, the last column of the first n rows of the bM_Matrix is the solution for r(0) noted above, which will be referred to as r_Vector here. The last column of the rest of the rows is e(0) noted above, which will be referred to as e_Vector here. Letting d be the number of 1's in e_Vector, d is the number of different bits, d(0), described above. If d=0, it means there are no error bits. The process is stopped, and r_Vector is output as the solution of r. If d>0, however, then there are error bits, and the process is continued.
In bM_Copy (a copy of bM_Matrix made before Gaussian elimination), the same row switching is done as in bM_Matrix, but no XOR operation is performed. The first n rows and n columns of bM_Copy is M(0) (transposed) as described above, which will be referred to as M_Matrix here. The rest of the rows and the first n columns of bM_Copy is the {overscore (M)}(0) (transposed) described above, which will be referred to as MB_Matrix here. From M_Matrix and MB_Matrix, MR_Matrix is obtained, which is [M(0)]−1 (transposed), and P_Matrix, which is P(0) described above:
MR_Matrix=M_Matrix−1
P_Matrix=MB_Matrix·MR_Matrix
Because there may be error bits in b, it can be assumed that each of the n bits selected for solving r may be wrong, and its value “flipped” (i.e., the value changed from 0 to 1 or from 1 to 0) to solve for r again. If the new r results in a smaller d, the new r is a better solution for r. To track the best solution found so far, dmin is initialized as the d obtained with no bit flipping.
For every flipped bit, to calculate the new d, it is not necessary to repeat the process of Gaussian elimination. As previously discussed, d(1)=HammingWeight([e(0)]t+EJP(0))+J, therefore if [e(0)]t+EJP(0) can be obtained, then a new d is obtained.
Accordingly, each of the n selected bits is flipped in turn. For every column of P_Matrix, the XOR operation is performed between the column and e_Vector. The result is e_Vector_Flip. As illustrated in
Let d=HammingWeight(e_Vector_Flip)+1, where d is the new count of different bits. If d<dmin, then let dmin=d, and i1=index of the corresponding column in P_Matrix. This process continues until all columns in P_Matrix have been processed. If dmin=1, the process is stopped, as the error bit has been located. As discussed in detail above, [r(1)]t=[r(0)]t+EJ[M(0)]−1, where J=1. Therefore, the new r_Vector is calculated by performing the XOR operation on the i1-th row of MR_Matrix and the original r_Vector (the one from Gaussian elimination), as shown in
If dmin≠1, it means that there is more than one error bit. Accordingly, two of the n selected bits are flipped to determine whether a smaller d can be obtained. For every pair of columns of P_Matrix, the two columns are obtained and the XOR operation is performed with e_Vector. As shown in
If dmin=2, then the process is stopped, as it indicates that the two error bits have been identified. As discussed above, [r(1)]t=[r(0)]t+EJ[M(0)]−1, where J=2. Therefore, the new r_Vector is calculated by performing the XOR operation on the i1-th and i2-th row of MR_Matrix and the original r_Vector (the one from Gaussian elimination). As shown in
Thus, if dmin is the d obtained with no bit flipping, the original r_Vector (the one from Gaussian elimination) is output as the solution to r. If dmin is the d obtained with one-bit flipping, the new r_Vector is calculated by performing the XOR operation on the i1-th row of MR_Matrix and the original r_Vector. The new r_Vector is output as the solution to r. If dmin is the d obtained with two-bit flipping, the new r_Vector is calculated by performing the XOR operation on the i1-th and i2-th rows of MR_Matrix and the original r_Vector. The new r_Vector is output as the solution to r. Thus, the output of the stochastic decoding process is the location vector r.
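For illustration, the one- and two-bit flipping search over P_Matrix and e_Vector might look like the following sketch. Vectors are 0/1 lists, P_columns[i] is the i-th column of P_Matrix, and the construction of e_Vector, P_Matrix, and MR_Matrix is as described above; this is not the actual module.

```python
from itertools import combinations

def hamming_weight(v):
    return sum(v)

def best_flip(e_vector, P_columns, max_flips=2):
    """Search for the 1- or 2-bit flip of the chosen n bits that gives the
    smallest count of different bits d.

    Returns (d_min, flipped_indices); flipped_indices is empty when no flip
    improves on the Gaussian-elimination solution.
    """
    d_min, flips = hamming_weight(e_vector), ()
    for J in range(1, max_flips + 1):
        for idx in combinations(range(len(P_columns)), J):
            # e_Vector XOR the selected columns of P_Matrix, plus J
            e_flip = list(e_vector)
            for i in idx:
                e_flip = [a ^ b for a, b in zip(e_flip, P_columns[i])]
            d = hamming_weight(e_flip) + J
            if d < d_min:
                d_min, flips = d, idx
        if d_min == J:        # J error bits located; stop early
            break
    return d_min, flips
```

The final r_Vector would then be obtained, as in the text, by XOR-ing the rows of MR_Matrix indicated by flipped_indices into the original r_Vector.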
Calculation of L by Discrete Logarithm
Given location vector r, the discrete logarithm determination module 6809 can obtain L (referred to as the bit “s” above in paragraphs 42 and 43) by a discrete logarithm determination technique. L is the location in the m-sequence of the first element of the Extracted_Bits_Array, and Lε{0, 1, . . . , 2n−2}, where n is the order of the m-sequence. r can be viewed as an element of the finite field F2n, and it can be expressed as:
r=αL
where α is a primitive element of the finite field F2n.
Letting n be the order of the m-sequence, m be the period of the m-sequence, i.e. m=2n−1, mi be the prime factors of m=2n−1, and w be the number of mi's. For each mi, vi is chosen such that
where i=1, . . . , w.
In the illustrated implementation, n=28, so α=(1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1)t (correspondingly, the primitive polynomial in the division 1/Pn(x) that generates the m-sequence is Pn(x)=1+x3+x28), and m=228−1. There are 6 prime factors of m, i.e., w=6, and the prime factors are: 3, 43, 127, 5, 29, 113. Correspondingly, vi are: 2, 25, 32, 1, 1, 30. All of these are stored in constant tables.
For each mi, qε{0, 1, 2 . . . , mi−1} is found such that
Note that again, these are multiplications over the finite field F2n.
Localization in the M-Array
Based on the method used in generating the m-array from the m-sequence, the position of the first element in Extracted_Bits_Array in m-array can be obtained:
x=mod(L,m1)
y=mod(L,m2)
where m1 is the width of the m-array and m2 is the height of the m-array. When the order of the m-sequence is n=28 (as in the illustrated implementation), m1=214+1 and m2=214−1.
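A small sketch of this localization step (with the order-28 dimensions used in the text; illustration only):

```python
def locate_in_m_array(L, n=28):
    """Map the m-sequence location L of the first extracted bit to its
    (x, y) position in the m-array, for an order-n m-sequence folded into
    an array of width 2**(n//2) + 1 and height 2**(n//2) - 1."""
    m1 = 2 ** (n // 2) + 1    # width of the m-array
    m2 = 2 ** (n // 2) - 1    # height of the m-array
    return L % m1, L % m2
```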
For each dimension, the decoding process described above outputs position (x, y). Letting (xp, yp) be the output of the dimension representing the X, Y position in Cartesian coordinates, as illustrated above, (xp,yp) are the coordinates of point CX′Y′ in the symbol pattern array.
Solving Multiple Dimensions of m-Arrays Simultaneously
As discussed in detail above, a document may have multiple (e.g., 8) dimensions of m-arrays. Supposing that the dimensions are bi, i=1, 2, . . . , C, and the metadata are encoded by the relative shift dj between bj and b1, where b1 is the position dimension and j=2, 3, . . . , C. The metadata are the same no matter where the image is obtained. Therefore, the metadata can be extracted sometime before the error-correcting decoding starts. When dj, j=2, 3, . . . , C, are known, bi, i=1, 2, . . . , C, can be jointly used for the decoding of position. The process is as follows.
Supposing bit=[rbi]tMbi, i=1, 2, . . . , C, then the relationship between rb
(b1t b2t . . . bCt)=[rb
The procedure to solve this equation is the same as solving bit=[rbi]tMbi, i=1, 2, . . . , C, separately. However, solving them jointly is more efficient in two ways. First, the speed can be nearly C times faster because only one linear system is solved instead (but with some overhead to compute Q−d
The goal of EIC decoding is to obtain position of the extracted bits in m-array.
For each dimension, EIC decoding may include one or more of the following:
After all the images of a stroke are processed, EIC document mapping starts.
Document Mapping
An ink stroke is generated in step 7505 using the above information, and ink stroke 7508 is output. The location in the m-array 7502 is also used to calculate global/local metadata. The output includes a document ID 7512, global metadata 7513, and local metadata 7514.
The document ID 7512 may then be used in step 7515 to find an EIC document. The EIC document 7518 is made available and combined with ink stroke 7508 in step 7519 to provide a mapping of the coordinates in the EIC array space to the document page space, which results in the strokes being located within the document 7522. Global metadata 7513 and local metadata 7514 may also be made available.
It is appreciated that the strokes may be added into the document. Alternatively, the strokes may be associated with the document but added to another document (for instance filling out a form) where the strokes may be maintained separately from the first document (here the form).
The following describes how global meta data and local meta data may be encoded into and obtained from a document. Global meta data may or may not include a document ID. Local meta data may include specific information of a location where the ink stroke is located (for example, a field in a form—a telephone number field or a social security number field among other fields). Other information may be encoded as meta data (global or local or both) as well.
In accordance with embodiments of the invention, regardless of whether a region is embedded with local metadata, the regions may be encoded using a combined m-array, where the local-metadata m-array may be the same as the m-array that represents X, Y position information, and the metadata m-array may be shifted according to the value of the metadata. If a region is not embedded with local metadata, 0 may be chosen as the value of its local metadata, i.e., the metadata m-array is not shifted. Therefore, 0 is not used as the value of local metadata in regions that are selected to embed local metadata.
The two m-arrays may be combined, in accordance with embodiments of the invention, to encode two bits in one EIC symbol. An example of an EIC symbol is depicted in
When the position m-array 9302 and the local metadata m-array 9304 are combined, based on the value of the local metadata (e.g., 11), the start of the local metadata m-array 9304 is shifted to position (xd,yd), as depicted at 9330 in
where n is the order of the m-array and 0≦local metadata≦2n−2.
In
As shown in the partially combined m-array 9308, the local metadata m-array 9304 starts at position (2,1) of the position m-array 9302. Since the position m-array 9302 and the local metadata m-array 9304 repeat themselves, a combined m-array with encoded local metadata 9310, which is shown in the lower right corner of
The value of the metadata is the distance in the combined array between the position m-array 9302 and the local metadata m-array 9304. The distance is kept the same in every pair of bits in the combined array 9310. Therefore, if the position of each bit in its corresponding m-array is obtained, the distance in the combined array 9310 can be determined.
Local Metadata Decoding
To decode local metadata, the m-arrays that have been combined to form the combined array 1008 are each separately decoded. For example, referring to the example shown in
The value of the local metadata may then be calculated as follows:
where n is the order of the combined m-array 7608.
In the example shown in
local metadata=mod(3−2,23−1)·(23+1)+mod(4−2,23+1)=11.
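The distance computation can be checked with a short sketch. One consistent reading of the worked example (the figure itself is not reproduced here) is that the position m-array decodes to (3, 4) and the local metadata m-array to (2, 2); which coordinate pairs with which modulus below simply follows that worked example, so this is an illustration rather than the actual implementation.

```python
def metadata_value(dp, dm, n):
    """Compute the metadata value from the decoded position dp of the
    position m-array and the decoded position dm of the metadata m-array,
    for a combined m-array of order n."""
    half = n // 2
    return ((dp[0] - dm[0]) % (2 ** half - 1)) * (2 ** half + 1) + \
           ((dp[1] - dm[1]) % (2 ** half + 1))

# Worked example from the text (order-6 combined m-array):
# mod(3-2, 2**3-1)*(2**3+1) + mod(4-2, 2**3+1) = 9 + 2 = 11
assert metadata_value((3, 4), (2, 2), 6) == 11
```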
Metadata Solutions
In accordance with embodiments of the invention, local metadata may be embedded via multiple independent channels. For example, an EIC local metadata embedding solution for resolving local metadata conflicts, in accordance with embodiments of the invention, may be based on an 8-bit embedded interaction code (EIC) symbol (such as EF-diamond-8 bit-a-16 and EF-diamond-8 bit-i-14). As previously described, an example of an 8-bit EIC symbol is shown in
A potential metadata allocation method for an 8-dimension EIC symbol is 1:6:1 (1:1:1:1:1:1:1:1)—one share is used for position, six shares are used for global metadata and one share is used for local metadata. Each of the 8 shares constitutes a physical data channel, each of order 28 in this example (i.e., the width of each m-array used to encode a share is 214+1, and the height is 214−1).
A metadata allocation method in accordance with embodiments of the invention allocates the 8 shares as follows: 1:5:0.5:0.5:0.5:0.5, in which 1 share of order 28 is used for position, five shares of order 28 are used for global metadata, and four 0.5 shares (also referred to as half shares), each of order 14, are used for four independent local metadata values. Due to this bit-proportion change, an m-array of order 14 may be used in each 0.5-share data channel to construct the EIC array.
An independent local metadata channel with 0.5 shares may be implemented in accordance with embodiments of the invention as follows.
Now that four independent local metadata values are available, each of four potentially conflicting local metadata fields, namely, Field A 8002, Field B 8004, Field C 8006, and Field D 8008, may be assigned a respective local metadata channel, as shown in
Various considerations and/or rules (also referred to as a set of local-metadata conflict-resolution rules) may be applied when embedding and decoding local metadata in potentially conflicting regions. For example:
After decoding the pen-tip position and the local metadata, a local metadata conflict may occur, which means that there are multiple potential local metadata results for one captured image. Then, some considerations/rules may be applied for resolving potential conflicts regarding the local metadata decoding result. These considerations may include:
Certain regions of a document may have no local metadata conflict. For instance, suppose that c=1023 values of each independent local metadata channel are reserved for allocating local metadata values to conflict regions in a single document. Therefore, for each of four local metadata channels, there are l=(214−1)−c values that can be used for conflict-free regions. These four 0.5 shares may be unified and allocated together. The number of unique local metadata values available for conflict-free regions may be expressed as L=l4. Then the range of L is c≦L<(l4+c), and local-metadata values within this range may be allocated to conflict free regions.
In the preceding example, the local metadata address space L is larger than 55 bits (approximately 55.6 bits), but less than the optimization maximum of 4×14=56 bits, which means that no more than a reasonable address space is used for addressing the situation in which up to four local metadata regions have potential conflict areas that overlap.
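The figure of approximately 55.6 bits can be verified directly (a quick check of the arithmetic above):

```python
import math

c = 1023                      # values reserved per channel for conflict regions
l = (2 ** 14 - 1) - c         # conflict-free values per half-share channel: 15360
L = l ** 4                    # unified across the four half-share channels
print(math.log2(L))           # ~55.63 bits, below the 4 x 14 = 56-bit maximum
```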
A local metadata embedding solution in accordance with embodiments of the invention is extensible such that other types of conflicts may also be resolved. The description above relates to a single case of an EIC-array solution, and there are more extensible designs of EIC arrays behind the EIC-array solution described above. For example, to resolve potential conflicts of three overlapped enlarged regions in the horizontal and/or the vertical directions, the bits in the EIC array may be allocated in 1:5:0.33:0.33:0.33:0.33:0.33:0.33 proportion. Then, rules, which are similar to those described above, may be applied when embedding and decoding the local metadata. Accordingly, various types of partial shares, such as half shares, one-third shares, one-fourth shares, and the like, may be used in accordance with embodiments of the invention.
Universal local metadata, which is local metadata reserved by application or system, may be used in various documents and forms in accordance with embodiments of the invention. For a particular system, such as a university's files for a particular student, the student information will occur in various documents and/or forms and their various versions. Substantially all of the forms' fields that have the same information, such as student name, ID, and major, may be assigned a common local metadata value. The values assigned to information fields of this type may be synchronized with the university's student-information database.
Local-metadata embedding and decoding techniques in accordance with embodiments of the invention may support use of local metadata as described above as follows. When a local-metadata-decoding conflict occurs, a mapping table may be built from conflict-reserved local metadata to unique local metadata. The mapping table may be saved with the EIC document, and a copy of the mapping table may be saved by an image-capturing pen so that the local-metadata-decoding conflict may be efficiently resolved while the image-capturing pen is being used for interacting with the EIC document.
Global Metadata Encoding
Global metadata in a particular region of an EIC document may be encoded using the same m-array as the m-array that represents X, Y position information. The metadata m-array may be shifted, however, according to the value of the metadata.
The position m-array and the global metadata m-array may contain repeating bit sequences that are the same length but that have different bit sequences relative to each other. Stated differently, different primitive polynomials of order n may be used to generate different m-arrays, which will then contain different repeating bit sequences.
The two m-arrays may be combined, in accordance with embodiments of the invention, to encode two bits in one EIC symbol. An example of an EIC symbol is depicted in
When the position m-array 9602 and the global metadata m-array 9604 are combined, based on the value of the global metadata (e.g., 11), the start of the global metadata m-array 9604 is shifted to position (xd,yd), as depicted at 9630 in
where n is the order of the m-array and 0≦global metadata≦2n−2.
In
As shown in the partially combined m-array 9608, the global metadata m-array 9604 starts at position (2,1) of the position m-array 9602. Since the position m-array 9602 and the global metadata m-array 9604 repeat themselves, a combined m-array with encoded global metadata 9610, which is shown in the lower right corner of
The value of the metadata is the distance in the combined array between the position m-array 9602 and the global metadata m-array 9604. The distance is kept the same in every pair of bits in the combined array 9610. Therefore, if the position of each bit in its corresponding m-array is obtained, the distance in the combined array 9610 can be determined.
Global Metadata Decoding
To decode global metadata, the m-arrays that have been combined to form the combined array 9408 are each separately decoded. For example, referring to the example shown in
The value of the global metadata may then be calculated as follows:
where n is the order of the combined m-array 9408.
In the example shown in
global metadata=mod(3−2,23−1)·(23+1)+mod(4−2,23+1)=11.
For real-world applications, there may be multi-dimensional global metadata. For example, suppose there are 1 position dimension and 7 dimensions for global metadata. Then the overall global metadata may be calculated as follows.
After decoding for each dimension, position (xp,yp) is the output of the dimension representing X, Y position and (xmi,ymi) are the output of the remaining 7 global metadata dimensions, where i=0, 1, 2, . . . , 6. Therefore, metadata encoded in each dimension can be obtained:
where n is the order of the m-array.
For dimension i, where i=0, 1, 2, . . . , 6, a value of global metadata portions is obtained from each image successfully decoded for that dimension. For all images, the value that occurs most often may be considered the value of that portion of the global metadata.
Now that the metadata encoded in each of the 7 dimensions representing a document ID is obtained, the document ID may be calculated as:
where n is the order of the m-array. As will be apparent, any suitable number of dimensions may be used for embedding global metadata.
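The per-dimension voting described above can be sketched as follows. The combination of the 7 per-dimension values into a single document ID is not reproduced here; this is an illustration of the voting step only.

```python
from collections import Counter

def vote_dimension_value(per_image_values):
    """per_image_values: for one global-metadata dimension, the list of values
    decoded from each successfully decoded image of the stroke.
    Returns the value that occurs most often (taken as that dimension's metadata)."""
    return Counter(per_image_values).most_common(1)[0][0]

# One value per dimension, e.g. for the 7 dimensions carrying the document ID:
# dims = [vote_dimension_value(vals) for vals in all_dimension_values]
```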
Embedding global metadata in multiple dimensions allows surprisingly large global-metadata values to be encoded. For instance, suppose there are 8 dimensions available, one dimension is used for X,Y position data and 7 dimensions are used for global metadata. For the 7 dimensions of global metadata, suppose that an order 28 m-array is used (i.e., the m-array will have 214+1 columns and 214−1 rows). Then the number of possible values of global metadata that can be encoded in seven dimensions is (228−2)7.
The goal of EIC document mapping is to bind ink strokes to EIC documents.
EIC document mapping includes one or more of the following:
Aspects of the present invention have been described in terms of preferred and illustrative embodiments thereof. Numerous other embodiments, modifications and variations within the scope and spirit of the appended claims will occur to persons of ordinary skill in the art from a review of this disclosure.