This patent application relates to apparatuses and methods that process an image from a camera of a mobile device, to identify symbols therein.
Handheld devices and mobile devices, such as a cell phone 108, typically include a camera that can be used to capture an image of a scene of the real world (such as an image 107, also called a "natural image" or "handheld camera captured image").
Recognition of text in such an image 107, captured by a handheld camera, is significantly more difficult than recognition of text in a document image generated by use of a flatbed scanner, for the reasons discussed below.
A block 121 which is to be subject to recognition may be formed to fit tightly around a maximally stable extremal region (MSER), e.g. so that each of the four sides of the block touches a boundary of the region. In some examples, such blocks are identified within a rectangular portion 103 of the image.
A block 121 that is a candidate for recognition may be further divided up, by use of a predetermined grid, into a number of sub-blocks (such as a sub-block 121I referred to below).
Optical character recognition (OCR) methods of the prior art originate in the field of document processing, wherein a document's image, obtained by use of a flatbed scanner, contains a series of lines of text (e.g. 20 lines of text). Such prior art OCR methods may extract a vector (called a "feature vector") from binary values of pixels in each sub-block 121I. For a block 121 that is subdivided into Z sub-blocks, Z such feature vectors are obtained, and these Z vectors may be stacked to form a block-level vector that represents the entirety of block 121. It is this block-level vector that is then compared with a library of reference vectors generated ahead of time (based on training images of letters of an alphabet to be recognized). Next, a letter of the alphabet represented by the reference vector in the library that most closely matches the vector of block 121 is identified as recognized, which concludes the OCR ("document" OCR) of a character in block 121 in portion 103 of a document's image.
One feature vector of such prior art has four dimensions, each dimension representing a gradient based on a count of transitions in intensity between the two binary values, along a row or a column of a sub-block. Specifically, two dimensions in the feature vector keep count of black-to-white and white-to-black transitions in the horizontal direction (e.g. left to right) along rows of pixels in the sub-block, and two additional dimensions keep count of black-to-white and white-to-black transitions in the vertical direction (e.g. bottom to top) along columns of the sub-block. Exactly four counts are formed. In forming the four counts, block 121 is assumed to be surrounded by a white boundary, and any transition at the boundary is counted as a half transition. These four counts are divided by the total number of pixels N in each sub-block, even though the sum of these four counts does not add up to N.
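For concreteness, the following Python sketch illustrates this prior-art transition counting under stated assumptions: pixels are binary (0 for black, 1 for white) in a NumPy array, rows are traversed left to right and columns bottom to top, and the function name prior_art_feature is illustrative rather than taken from any reference.

```python
import numpy as np

def prior_art_feature(sub_block):
    """Sketch of the prior-art 4-dimensional feature: per-direction counts
    of black-to-white and white-to-black transitions, with an assumed white
    boundary around the block, half weight for transitions at the boundary,
    and final division by N (the number of pixels per sub-block)."""
    def transition_counts(lines):
        b2w = w2b = 0.0
        for line in lines:
            padded = np.concatenate(([1], line, [1]))   # assumed white boundary
            for i in range(len(padded) - 1):
                # a transition at the boundary counts as half a transition
                weight = 0.5 if i in (0, len(padded) - 2) else 1.0
                if padded[i] == 0 and padded[i + 1] == 1:
                    b2w += weight                       # black-to-white
                elif padded[i] == 1 and padded[i + 1] == 0:
                    w2b += weight                       # white-to-black
        return b2w, w2b

    b2w_h, w2b_h = transition_counts(sub_block)          # rows, left to right
    b2w_v, w2b_v = transition_counts(sub_block[::-1].T)  # columns, bottom to top
    return np.array([b2w_h, w2b_h, b2w_v, w2b_v]) / sub_block.size
```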
In one example, traversal of a sub-block 121I of nine pixels yields no black-to-white transitions and three white-to-black transitions in the horizontal direction, and one black-to-white transition and no white-to-black transitions in the vertical direction. Hence, a histogram of the above-described intensity transitions in sub-block 121I has the following four values: (0, 3, 1, 0).
In some prior art methods, an 80 dimension vector of the type described above is compared with reference vectors (each of which also has 80 dimensions) in the library, by use of a Euclidean distance metric (square root of the sum of the squares of the differences in each dimension), or a simplified version thereof (e.g. sum of the absolute values of the differences in each dimension). One issue that the current inventors find in use of such distance metrics to identify characters is that the above-described division by N, which is used to generate a four dimensional vector 121V as described above, affects accuracy, because the sum of the four elements prior to division by N does not add up to N (in the example described above, the four counts 0, 3, 1 and 0 sum to 4, although sub-block 121I has 9 pixels).
Moreover, the current inventors note that ambiguity can arise in the use of four counts to represent nine pixels, which can increase the difficulty of recognizing, from a handheld camera captured image, letters of an alphabet whose rules permit ambiguity, such as Devanagari, wherein, for example, a left half portion of a letter can be combined with another letter, and/or a letter may or may not have an accent mark at the bottom or the top of that letter. Furthermore, the current inventors note that use of just four counts may be insufficient to represent the details necessary to uniquely characterize regions of text in certain scripts, such as Devanagari, that have a large number of characters in their alphabet. Therefore, the current inventors believe that use of an 80 element feature vector (obtained by cascading groups of 4 counts for 20 sub-blocks) can result in false positives and/or false negatives that render prior art techniques impractical.
Hence, the current inventors believe there is a need for a new vector that is more representative of the pixels in an image, and for use of the new vector with a new comparison measure that provides a better match to a reference vector in a library, as described below.
In several embodiments, an image of the real world (or a frame of video, also simply called an image) is processed to identify one or more portions therein, for use as candidates to be compared with a set of symbols that have predetermined shapes (also called "reference symbols"), such as logos, traffic signs, and/or letters of a predetermined alphabet or script such as Devanagari. Each such image portion is traversed to generate counts in a group of counts. The group of counts has a predetermined size, e.g. 6 counts or 8 counts. Each count in the group indicates either that there is a change of at least a predetermined amount between intensity values of two pixels (in a given direction of traversal) or that the change in intensity values of two pixels (in the given direction) is smaller than the predetermined amount (e.g. the change may be absent or zero, when binary values of pixel intensities are identical). Hence, each pixel in an image portion contributes to at least one of the counts. Moreover, the group of counts does not encode positions at which the changes occur in the image (so the group of counts results in a lossy compression).
Depending on the embodiment, the counts in such a group may be normalized, e.g. based on a number of pixels in the image portion (and in embodiments that traverse the image portion in multiple directions, also based on the number of directions of traversal), so that the sum of the counts becomes 1. One or more vector(s) based on such counts may be compared with multiple predetermined vectors of reference symbols (e.g. letters of an alphabet), using any measure of difference between probability distributions, such as the Jensen-Shannon divergence metric. Whichever reference symbol (e.g. letter of the alphabet) has a predetermined vector that most closely matches the vector(s) for the image portion is thereby identified and stored in memory, as being recognized to be present in the image.
It is to be understood that several other aspects of the described embodiments will become readily apparent to those skilled in the art from the description herein, wherein various aspects are shown and described by way of illustration. The drawings and detailed description are to be regarded as illustrative in nature.
In several aspects of the described embodiments, one or more portions of a natural image (also called "handheld camera captured image") of a scene of the real world are received in an act 201, e.g. by one or more processors of a mobile device 401.
In some embodiments, the image portions received in act 201 are blocks that are rectangular in shape, identified from a rectangular portion 103 of the image in the manner described above (e.g. as blocks fitted around MSERs).
In one illustrative example, a predetermined test is used to detect pixels that form a header line or shiro-rekha 192, i.e. the horizontal line that connects the tops of letters in a word written in Devanagari.
In act 202 of several embodiments, an image portion (e.g. block 121) which is received in act 201 is traversed (e.g. by one or more processors in a mobile device 401, such as a smart phone), to identify changes in intensity of pixels in the image. Then, in act 203, the intensity changes are used (e.g. by one or more processors) to generate a group of counts for the image portion, such as a group 321 of counts described below.
Regardless of the specific size that is predetermined for a group, an intensity of each pixel in the portion of the image is used in act 203 (e.g. by one or more processors) to generate at least one count in the group, by checking (as per act 203A) how an intensity change between adjacent pixels compares with a predetermined threshold, as follows.
Specifically, in act 203B, a count (“first count”) is incremented when an intensity change, in traversing from a current pixel to a next pixel, exceeds the predetermined threshold. An intensity change in grayscale values may exceed the threshold of 100, for example, when the current pixel is at grayscale value 10 and the next pixel is at grayscale value 120. Similarly, an intensity change in binary values may exceed the threshold of 0, for example, when the current pixel is of binary value 0 and the next pixel is of binary value 1.
When the intensity change does not exceed the predetermined threshold, act 203C may be performed to increment another count (“second count”), when the intensity change is positive. When the intensity change is not positive, yet another count in the group may be incremented as per act 203D. The intensity change is negative, for example, when the current pixel is of grayscale value 255 and the next pixel is of grayscale value 0 (with intensity change being positive when the values are the opposite, e.g. current pixel is of grayscale value 0 and the next pixel is of grayscale value 255).
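The three-way test of acts 203B-203D can be sketched as follows in Python; the function name and the use of the signed (rather than absolute) intensity change in the threshold test are assumptions made for illustration, following the literal reading of the acts above.

```python
def count_intensity_changes(pixels, threshold):
    """Minimal sketch of acts 203B-203D for a single traversal (a 1-D
    sequence of pixel intensities): a change larger than the threshold
    increments the first count; otherwise a positive change increments
    the second count and a non-positive change increments the third."""
    first = second = third = 0
    for cur, nxt in zip(pixels, pixels[1:]):
        change = nxt - cur                # intensity change, in traversal order
        if change > threshold:
            first += 1                    # act 203B: change exceeds threshold
        elif change > 0:
            second += 1                   # act 203C: positive, below threshold
        else:
            third += 1                    # act 203D: change not positive
    return first, second, third

# Grayscale examples from the text, with threshold 100:
print(count_intensity_changes([10, 120], 100))  # (1, 0, 0): change +110 exceeds 100
print(count_intensity_changes([255, 0], 100))   # (0, 0, 1): change -255 is not positive
```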
After any count in the group is incremented (e.g. by one or more processors), act 204 is performed, to check whether all pixels in the image portion have been traversed, and if not, the one or more processors loop back to act 202 (described above). When all pixels in the image portion have been traversed, the one or more processors of some aspects perform act 205, to recognize a symbol (e.g. a character in Devanagari script), as follows. Specifically, act 205 compares a vector (also called a "feature" vector) that is automatically derived based on at least the group of counts, with predetermined vectors for corresponding symbols in a set, to identify a specific symbol, and the specific symbol 354 so identified is then stored in one or more memories 329 of the mobile device, as being recognized in the image.
Note that act 205 uses a measure of difference between two probability distributions to perform the comparison in several embodiments. In order to use such a measure, the above-described feature vector is obtained in some embodiments by dividing each count in a group by N to obtain an element of the vector, wherein N is the number of pixels in the image portion. In several embodiments, the elements in a feature vector are fractions that, when added together, sum to the value 1 (e.g. after rounding). A feature vector of the type described above, with elements that add to 1, enables comparison of such a vector with other similarly-generated vectors (whose elements also add up to 1), by use of a measure of difference between probability distributions, such as the Jensen-Shannon divergence or any other metric of divergence. The measure that is used to compare vectors of the type described above may or may not be symmetric, depending on the embodiment.
The above-described method is modified in some embodiments to operate on sub-blocks of a block, as follows. Specifically, in act 211, a block received in act 201 is subdivided into a number of sub-blocks, e.g. by use of a predetermined grid of the type described above.
Then, in act 212, each sub-block is traversed, in one or more directions, to generate a corresponding group of counts for that sub-block, as follows.
Independent of how each sub-block is traversed, each count in the group identifies either a number of times that changes in intensity (of pixels adjacent to one another in the sub-block) are sufficiently large to exceed a threshold (also referred to as a “gradient count”), or a number of times that the intensity changes are sufficiently small (below the threshold) to be treated as absent (also referred to as a “constant count”). Both kinds of the just-described counts (namely at least one gradient count and at least one constant count) are maintained in each group for each sub-block in some embodiments of the type described herein, thereby to ensure that a sum of counts (for a sub-group in each traversal direction respectively) is equal to a number of pixels in each sub-block.
Note that the counts being generated in act 212 as described above do not encode the positions in a sub-block (e.g. a specific row among the multiple rows and a specific column among the multiple columns) at which changes in pixel intensity occur. Hence, such counts indicate the frequency of intensity changes at the sub-block level, not at the pixel level (these counts are common across all pixels in a sub-block). So, counts generated in act 212 ("sub-block level counts") are insufficient to reconstruct a specific intensity value of any specific pixel in a sub-block. Therefore, generation of counts in act 212 may be conceptually thought of as averaging across pixel positions in the two dimensions of a sub-block, thereby performing a "lossy" compression of pixel intensities in the sub-block (in contrast to "lossless" compression, which encodes information sufficient to reconstruct the intensity of each pixel in a sub-block).
Referring to the sub-block 121I described above, a traversal in the horizontal direction generates, in some embodiments, a group with two gradient counts of values 0 and 3 and two constant counts of values 5 and 1.
Note that the sum of these four values (described in the preceding paragraph) is 0+3+5+1, which is 9, identical to the number of pixels in sub-block 121I. Hence, a contribution of each pixel in a sub-block is automatically included by act 212 in the group of counts generated for that sub-block.
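A minimal Python sketch of one such directional traversal follows, assuming binary pixels (0 for black, 1 for white). How the last pixel of each row contributes is not spelled out above, so treating the pixel beyond the edge as a repeat of the edge pixel is an assumption made here so that every pixel contributes exactly one count and the four counts sum to N.

```python
import numpy as np

def directional_4s_counts(sub_block):
    """Sketch of one direction of traversal in act 212: two gradient
    counts and two constant counts, such that every pixel contributes
    exactly one count and the four counts sum to N (sub_block.size)."""
    b2w = w2b = const_black = const_white = 0
    for row in sub_block:
        extended = np.append(row, row[-1])        # assumed: repeat edge pixel
        for cur, nxt in zip(extended, extended[1:]):
            if cur == 0 and nxt == 1:
                b2w += 1                           # black-to-white gradient
            elif cur == 1 and nxt == 0:
                w2b += 1                           # white-to-black gradient
            elif cur == 0:
                const_black += 1                   # no change, black pixel
            else:
                const_white += 1                   # no change, white pixel
    return b2w, w2b, const_black, const_white      # sums to sub_block.size
```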
Some embodiments perform traversals in more than one direction, and hence act 214 checks whether all directions of traversal have been completed for a sub-block and, if not, returns to act 212 to traverse the sub-block in another direction.
In some embodiments that perform two traversals in act 212 of sub-block 121I, e.g. in the horizontal and vertical directions, a group of eight counts 301-308 is generated (four counts for each direction of traversal). Such a group, with four counts in each of S directions of traversal, is also referred to below as a "4S" vector.
In this method, the groups of counts generated for the individual sub-blocks of a block are used to form corresponding vectors, which are then combined into a block-level vector for use in recognition, as follows.
Depending on the embodiment, the elements of the vectors of each sub-block may be divided by the number of sub-blocks Z in each block, so that the results, when used as elements of a block-level vector, add up to 1. Next, in act 217, one or more processors of mobile device 401 check whether all blocks received in act 201 have been processed and, if not, return to act 211 (described above). When the result of act 217 is that symbols in all blocks have been recognized, the symbols that have just been recognized by act 216 may be displayed on a screen of mobile device 401, such as screen 403 or 402.
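The assembly of a block-level vector just described can be sketched as follows in Python; the representation of each sub-block as a (counts, pixel count) pair and the function name are assumptions made for illustration. Because each direction's counts sum to the sub-block's pixel count, dividing by the pixel count and the number of directions makes each sub-block vector sum to 1, and the further division by Z makes the stacked block-level vector sum to 1.

```python
import numpy as np

def block_level_vector(sub_block_groups, num_directions):
    """Sketch: normalize each sub-block's group of counts by its pixel
    count and the number of traversal directions (so each sub-block
    vector sums to 1), divide by the number of sub-blocks Z, and stack,
    so the block-level vector also sums to 1."""
    z = len(sub_block_groups)
    parts = []
    for counts, n_pixels in sub_block_groups:      # one entry per sub-block
        v = np.asarray(counts, dtype=float) / (n_pixels * num_directions)
        parts.append(v / z)
    return np.concatenate(parts)

# e.g. the horizontal counts (0, 3, 5, 1) plus the vertical counts of a
# 9-pixel sub-block, repeated for each of the Z sub-blocks of a block
```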
Certain described embodiments use a variant of the 4S vector, also referred to as a 3S vector, that is stroke width invariant, as illustrated by the example described below.
To improve stroke width invariance, certain embodiments maintain, in each direction of traversal, a single constant count for no transition in intensities, regardless of whether the intensity at the current pixel is black or white (also called a white-and-black constant count). Therefore, in the example of the 4S vector described above, the black constant count and the white constant count in the horizontal direction (of values 5 and 1) are replaced by a single constant count of value 6.
In the 3S vector as well, contributions of each of the N pixels of a sub-block are included S times (once per direction of traversal) in the counts of these embodiments. Accordingly, the just-described counts 301, 302, 309, 305, 306 and 310 may be normalized (e.g. based on N and the number of directions of traversal S) so that the elements of the 3S vector sum to 1.
In some embodiments that use the 3S vector, the value of the constant count in a given direction is additionally obtained by subtracting the gradient counts from the total number of pixels N in a sub-block. So, in the example described above, the constant count in the horizontal direction is obtained as 9−0−3, which is 6 (the sum of the black constant count 5 and the white constant count 1).
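A short sketch of this 3S computation for one direction of traversal; the helper name three_s_counts is chosen for illustration.

```python
def three_s_counts(b2w, w2b, n_pixels):
    """Sketch of the 3S variant for one direction of traversal: the black
    and white constant counts are merged into a single constant count,
    obtained (as described above) by subtracting the gradient counts
    from the number of pixels N in the sub-block."""
    constant = n_pixels - b2w - w2b   # merged white-and-black constant count
    return b2w, w2b, constant

# Example from the text: gradient counts 0 and 3 in a 9-pixel sub-block
# give a merged constant count of 9 - 0 - 3 = 6.
print(three_s_counts(0, 3, 9))        # (0, 3, 6)
```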
In several of the described embodiments, act 216 described above uses a metric of divergence of probability density functions (PDFs), because the vector(s) are deliberately generated (in act 212, e.g. by a vector generator 351 in mobile device 401) to have the properties of probability distributions, namely non-negative elements that sum to 1.
The above-described use of a metric of PDF divergence in act 511 of the described embodiments provides a more accurate comparison than the prior art Euclidean distance or its simplified version. As will be readily apparent in view of this detailed description, any metric of divergence of probability density functions can be used in act 511, and some embodiments use the Jensen-Shannon divergence metric, as described below.
Specifically, a value of the Jensen-Shannon divergence metric is computed (e.g. in act 511) as follows in some embodiments. A predetermined vector is hereinafter P, and the vector of normalized counts for a block is hereinafter Q. In act 511, one or more processors in a mobile device 401 compute a mean vector as

M = (P + Q) / 2

followed by computing the metric as

JS(P, Q) = (1/2) Σ_i P_i ln(P_i / M_i) + (1/2) Σ_i Q_i ln(Q_i / M_i)

wherein ln is the natural logarithm, and wherein i represents an index to the elements of vectors P, Q and M.
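As a sketch, the metric above can be computed in Python as follows for two normalized vectors; skipping zero elements (on the convention 0 · ln(0/m) = 0) is an assumption made here about how zero counts are handled.

```python
import numpy as np

def jensen_shannon(p, q):
    """Sketch of the Jensen-Shannon divergence of act 511, for two
    vectors P and Q whose elements are non-negative and sum to 1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = (p + q) / 2.0                      # mean vector M

    def weighted_log_sum(x):               # sum_i x_i * ln(x_i / m_i)
        mask = x > 0                       # 0 * ln(0 / m) treated as 0
        return np.sum(x[mask] * np.log(x[mask] / m[mask]))

    return 0.5 * weighted_log_sum(p) + 0.5 * weighted_log_sum(q)

# The reference vector in the library whose divergence from the block's
# vector is smallest identifies the recognized symbol, e.g.:
# best = min(library.items(), key=lambda kv: jensen_shannon(kv[1], block_vec))
```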
Note that although the Jensen-Shannon divergence metric is used in some embodiments as described above, other embodiments use other metrics of divergence between probability density functions (PDFs), as will be readily apparent in view of this detailed description.
Mobile device 401 of some embodiments that performs the methods described above includes a camera to capture images, one or more processors 404, and a memory 329, among other components described below.
In some embodiments, vector generator 351 implements means for traversing a portion of the image to: identify changes in intensities of pixels in the image, and generate a group of counts based on the changes, without encoding positions at which the changes occur in the image, as described above. During operation, vector generator 351 of such embodiments uses an intensity of each pixel in the portion of the image and the positions of pixels (e.g. identified in a list of coordinates of pixels), to generate at least one count in the group of counts 355, wherein at least a first count in the group of counts is incremented when an intensity change between two pixels, in a direction of traversal, exceeds a predetermined threshold, and wherein at least a second count in the group of counts is incremented when the intensity change is positive and does not exceed the predetermined threshold. In several such embodiments, the PDF comparator 352 implements means for using a measure of difference between two probability distribution functions (PDFs). During operation, PDF comparator 352 of such embodiments compares a vector based on at least the group of counts 355 (e.g. received from vector generator 351) with vectors of corresponding symbols in a library 353, to identify a specific symbol therein.
In addition to memory 329, mobile device 401 may include one or more other types of memory such as flash memory (or SD card) 1008 and/or a hard disk and/or an optical disk (also called “secondary memory”) to store data and/or software for loading into memory 329 (also called “main memory”) and/or for use by processor(s) 404. Mobile device 401 may further include a wireless transmitter and receiver in transceiver 1010 and/or any other communication interfaces 1009. It should be understood that mobile device 401 may be any portable electronic device such as a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop, camera, smartphone, tablet (such as iPad available from Apple Inc) or other suitable mobile platform that is capable of creating an augmented reality (AR) environment.
A mobile device 401 of the type described above may include other position determination methods such as object recognition using "computer vision" techniques. The mobile device 401 may also include means for remotely controlling a real world object, which may be a toy, in response to user input on mobile device 401, e.g. by use of a transmitter in transceiver 1010, which may be an IR or RF transmitter or a wireless transmitter enabled to transmit one or more signals over one or more types of wireless communication networks such as the Internet, WiFi, a cellular wireless network or another network. The mobile device 401 may further include, in a user interface, a microphone and a speaker (not labeled). Of course, mobile device 401 may include other elements unrelated to the present disclosure, such as a read-only-memory 1007 which may be used to store firmware for use by processor 404.
Also, depending on the embodiment, a mobile device 401 may perform reference free tracking and/or reference based tracking using a local detector in mobile device 401 to detect predetermined symbols in images, in implementations that execute the OCR software 1014 to identify, e.g. characters of text in an image. The above-described identification of blocks for use by OCR software 1014 may be performed in software (executed by one or more processors or processor cores) or in hardware or in firmware, or in any combination thereof.
In some embodiments of mobile device 401, the above-described vector generator 351 and PDF comparator 352 are included in OCR software 1014 that is implemented by a processor 404 executing the software 320 in memory 329 of mobile device 401, although in other embodiments any one or more of vector generator 351 and PDF comparator 352 are implemented in any combination of hardware circuitry and/or firmware and/or software in mobile device 401. Hence, depending on the embodiment, various functions of the type described herein of OCR software may be implemented in software (executed by one or more processors or processor cores) or in dedicated hardware circuitry or in firmware, or in any combination thereof.
Accordingly, depending on the embodiment, any one or more of vector generator 351 and PDF comparator 352 can, but need not necessarily include, one or more microprocessors, embedded processors, controllers, application specific integrated circuits (ASICs), digital signal processors (DSPs), and the like. The term processor is intended to describe the functions implemented by the system rather than specific hardware. Moreover, as used herein the term “memory” refers to any type of computer storage medium, including long term, short term, or other memory associated with the mobile platform, and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
Hence, methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in firmware 1013, in software 320, in hardware, or in any combination thereof.
In some embodiments, an apparatus as described herein is implemented by the following: a camera implements means for receiving a portion of an image of a scene of real world and a processor is programmed to implement means for traversing at least the portion of the image to identify changes in intensities of pixels in the image and generate a group of counts based on the changes, without encoding positions at which the changes occur in the image. Depending on the embodiment, the just-described processor or another processor in the apparatus is programmed to implement means for using a measure of difference between two probability distributions, to compare a vector based on at least the group of counts with multiple predetermined vectors of corresponding symbols in a set, to identify a specific symbol therein. Furthermore, any of the just-described processors may be configured to implement means for storing the specific symbol in a memory, as being recognized in the image.
Any machine-readable medium tangibly embodying software instructions (also called "computer instructions") may be used in implementing the methodologies described herein. For example, software 320 may include program codes stored in memory 329 and executed by processor 404.
Non-transitory computer-readable storage media include physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such non-transitory computer-readable media can comprise RAM, ROM, Flash Memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other non-transitory medium that can be used to store program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of non-transitory computer-readable media.
Although illustrated in connection with specific embodiments for instructional purposes, the embodiments are not limited thereto. Hence, although an item is shown and described herein in connection with a mobile device 401, in other embodiments such an item may be implemented in a device with a different form factor.
Depending on the embodiment and on the specific symbol recognized in a handheld camera captured image, a user can receive different types of feedback. Additionally, haptic feedback (e.g. by vibration of mobile device 401) may be provided by triggering haptic feedback circuitry 1018 in some embodiments.
Various adaptations and modifications may be made without departing from the scope of the embodiments. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description. It is to be understood that several other aspects of the embodiments will become readily apparent to those skilled in the art from the description herein, wherein various aspects are shown and described by way of illustration. Numerous such embodiments are encompassed by the attached claims.
This application claims priority under 35 USC §119 (e) from U.S. Provisional Application No. 61/673,677 filed on Jul. 19, 2012 and entitled “Feature Extraction And Use With A Probability Density Function (PDF) Divergence Metric”, which is incorporated herein by reference in its entirety.