This application is also related to U.S. application Ser. No. 13/748,539, filed concurrently herewith, entitled “Identifying Regions Of Text To Merge In A Natural Image or Video Frame” which is assigned to the assignee hereof and which is incorporated herein by reference in its entirety.
This application is also related to U.S. application Ser. No. 13/748,574, filed concurrently herewith, entitled “Rules For Merging Blocks Of Connected Components In Natural Images” which is assigned to the assignee hereof and which is incorporated herein by reference in its entirety.
This patent application relates to devices and methods for detection and manual correction of skew in regions of natural images or video frames that are not yet classified as text or non-text.
Identification of text regions in papers that are scanned (e.g. by a flatbed scanner of a copier) is significantly easier (e.g. due to upright orientation, large size and slow speed) than detecting text regions in images of scenes in the real world (also called “natural images”) captured in real time by a handheld device (such as a smartphone) having a built-in digital camera.
Hence, detection of text regions in a real-world image is performed using different techniques. For additional information on techniques used in the prior art to identify text regions in natural images, see the articles listed in the citations below, which are incorporated by reference herein in their entirety as background.
When a natural image 107 (
In several aspects of described embodiments, an electronic device and method use a camera to capture an image of an environment outside the electronic device, followed by identification of regions of pixels in the image based on pixel intensities. Each region is identified to include pixels that are contiguous with one another (e.g. a connected component), and in some embodiments each region is identified to include therein a local extremum of intensity in the image.
After one or more regions in the image are identified, corresponding one or more values of an indicator of skew are automatically computed. One or more regions for which a skew indicator is computed may be automatically selected, e.g. based on a geometric property (such as an aspect ratio of a region, or stroke width), and/or based on presence of a line of pixels of a common binary value in the region. For example, some embodiments compute, for use as an indicator of skew of a region i, a ratio Mi of (a) an area Hi*Wi of a minimum bounding box of height Hi and width Wi around region i to (b) a count Ni of pixels identified to be included in the region Qi (e.g. in a list of positions).
Then a predetermined test is applied, to multiple values of the skew indicator (corresponding to multiple regions identified in the image), to determine whether skew is unacceptable globally, in the image as a whole. For example, in some embodiments, a counter is incremented each time a value of skew of a region exceeds a threshold (also called “first threshold”), and the predetermined test is found to be met when the counter exceeds another threshold (also called “second threshold”).
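By way of illustration only, the following Python sketch shows one possible form of the per-region metric and the counter-based test just described; the pixel-list representation and the example thresholds t1 and t2 are assumptions for illustration, not limitations of the described embodiments.

```python
# A minimal sketch (not the claimed implementation) of the per-region
# skew metric Mi and the two-threshold ("first" and "second") test.

def region_skew_metric(pixels):
    """Mi = (Hi * Wi) / Ni for one region, given its list of (x, y) pixels."""
    xs = [x for x, y in pixels]
    ys = [y for x, y in pixels]
    height = max(ys) - min(ys) + 1   # Hi of the minimum bounding box
    width = max(xs) - min(xs) + 1    # Wi of the minimum bounding box
    return (height * width) / len(pixels)  # Ni = count of pixels in region

def image_skew_unacceptable(regions, t1=3.0, t2=10):
    """Skew is unacceptable globally when more than t2 regions each have a
    metric exceeding the first threshold t1 (t1, t2 are example values)."""
    count = sum(1 for pixels in regions if region_skew_metric(pixels) > t1)
    return count > t2
```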
When skew is determined to be unacceptable, user input is automatically requested, to correct skew of the image. For example, some embodiments display a symbol on a screen, to prompt user input. User input may be received as rotation of an area of touch on the screen, through an angle between a direction of the symbol and a direction of the image. User input may alternatively or additionally be received via rotation of a camera, in order to align the direction of the symbol with the direction of the image. Then the above-described process may be repeated in some embodiments, e.g. after receipt of user input of the type just described, or alternatively after passage of a predetermined duration of time (also called “timeout”), to check whether the predetermined test is satisfied. Certain embodiments may capture a skew-corrected image (based on the user input), detect skew again, this time in the skew-corrected image, and if necessary request user input once again. At any stage, when skew is found to be less than a preset limit (e.g. 5 degrees) in some embodiments, the image may be further processed in the normal manner (e.g. subjected to OCR). In several embodiments, the above-described process is performed prior to classification (e.g. by a neural network) of the image's regions (whose skew is being determined and corrected) as text or non-text.
It is to be understood that several other aspects of the described embodiments will become readily apparent to those skilled in the art from the description herein, wherein various aspects are shown and described by way of illustration. The drawings and detailed description below are to be regarded as illustrative in nature and not as restrictive.
Regions in an image of a scene in real world are initially identified in described embodiments, in the normal manner. For example, as described above in reference to
The specific manner in which pixels of a region differ from surrounding pixels at its boundary may be identified in some embodiments by use of an MSER method in a predetermined manner, with input parameters obtained from a lookup table in memory. Such a lookup table may supply one or more specific combinations of values for the parameters Δ and Max Variation, which are input to an MSER method (also called MSER input parameters). Such a lookup table may be populated ahead of time with specific values for Δ and Max Variation, e.g. determined by experimentation to generate contours that are appropriate for recognition of text in a natural image, such as value 8 for Δ and value 0.07 for Max Variation.
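For illustration, a minimal sketch using OpenCV's MSER implementation with the example parameter values noted above (value 8 for Δ and 0.07 for Max Variation); the single-entry lookup table keyed by an imaging condition is a hypothetical stand-in for the lookup table described here.

```python
# Illustrative only: configure OpenCV's MSER from a parameter lookup table.
import cv2

MSER_PARAMS = {"default": {"delta": 8, "max_variation": 0.07}}

def detect_candidate_regions(gray_image, condition="default"):
    params = MSER_PARAMS[condition]          # lookup of MSER input parameters
    mser = cv2.MSER_create()
    mser.setDelta(params["delta"])           # Δ = 8 in the example above
    mser.setMaxVariation(params["max_variation"])  # Max Variation = 0.07
    regions, _bboxes = mser.detectRegions(gray_image)
    return regions                           # each region is a list of pixels
```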
In some embodiments, pixels are identified in a set (which may be implemented in a list) that in turn identifies a region Qi which includes a local extremum of intensity (such as a local maximum or a local minimum) in the image 107. Such a region Qi may be identified in act 212 (
Regions may be identified in act 213 by use of a method described in an article entitled “Robust Wide Baseline Stereo from Maximally Stable Extremal Regions” by J. Matas, O. Chum, M. Urban, and T. Pajdla, BMVC 2002, 10 pages, that is incorporated by reference herein in its entirety. Alternatively, other methods can be used to perform connected component analysis and identification of regions in act 212, e.g. as described in an article entitled “Application of Floyd-Warshall Labelling Technique: Identification of Connected Pixel Components In Binary Image” by Hyunkyung Shin and Joong Sang Shin, published in Kangweon-Kyungki Math. Jour. 14 (2006), No. 1, pp. 47-55, that is incorporated by reference herein in its entirety, or as described in an article entitled “Fast Connected Component Labeling Algorithm Using A Divide and Conquer Technique” by Jung-Me Park, Carl G. Looney and Hui-Chuan Chen, published in Matrix (2000), Volume: 4, Issue: 1, Publisher: Elsevier Ltd, pages 4-7, that is also incorporated by reference herein in its entirety.
A specific manner in which regions of an image 107 are identified in act 213 by mobile device 200 in described embodiments can be different, depending on the embodiment. In several embodiments, each region of image 107 that is identified by use of an MSER method of the type described above is represented by act 213 in the form of a list of pixels, with two coordinates for each pixel, namely the x-coordinate and the y-coordinate in two dimensional space (of the image).
After identification of regions, each region is initially included in a single rectangular block which may be automatically identified by mobile device 200 of some embodiments in act 212, e.g. as a minimum bounding rectangle of a region, by identification of a largest x-coordinate, a largest y-coordinate, a smallest x-coordinate and a smallest y-coordinate of all pixels within the region. The just-described four coordinates may be used in act 212, or subsequently when needed, to identify the corners of a rectangular block that tightly fits the region. As discussed below, such a block (and therefore its four corners) may be used in checking whether a clustering rule 503 (
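As an illustration of the bounding-rectangle computation just described, the following sketch derives the four extreme coordinates from a region's pixel list; the list-of-tuples representation is an assumption.

```python
# Minimum bounding rectangle of a region from its pixel list: the four
# extreme coordinates identify the block that tightly fits the region.

def bounding_block(pixels):
    """Return (x_min, y_min, x_max, y_max) for a list of (x, y) pixels."""
    xs = [x for x, y in pixels]
    ys = [y for x, y in pixels]
    return min(xs), min(ys), max(xs), max(ys)

# The four corners of the block follow directly:
# (x_min, y_min), (x_max, y_min), (x_min, y_max), (x_max, y_max)
```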
The above-described acts 211 and 213 are performed in several embodiments, in an initialization operation 210 (
After the regions are identified, a mobile device 200 of many described embodiments performs skew presence detection in an operation 220 (see
During operation 220, processor 1013 of some embodiments performs acts 221 and 222 as follows. In act 221, mobile device 200 calculates a value of an indicator of skew locally, in a specific region identified in act 213. The indicator of skew that is computed in act 221 can be different, depending on the embodiment. Some embodiments of processor 1013 compute a value of the indicator of skew in act 221 for each region Qi, by using (a) an area of the rectangle that tightly fits the region Qi (also called “minimum bounding rectangle”) and (b) a count of pixels in the region Qi, to obtain a metric Mi, which may be used to determine skew of the region i. In several such embodiments, the metric Mi is compared with a threshold t1 to determine whether or not skew in the region Qi is acceptable (e.g. not acceptable when the skew angle of a region is greater than ±5 degrees), thereby to obtain a binary-valued indicator of skew in each region Qi. In other such embodiments, the metric Mi is directly used, as a real-valued indicator of skew in each region i.
A value of an indicator of skew that is computed in act 221 for each region is stored either individually (for each region) or in aggregate (across multiple regions), at a specific location in memory. Some embodiments increment in the memory a skew count for the entire image each time a region is marked as skew-present. Other embodiments label each region individually in memory as either skew-present or skew-absent. Note that it is not known at this stage in such embodiments whether or not a feature formed by the region is text or non-text, although a value of an indicator of skew is being determined for the region. In several aspects of the described embodiments, after act 221 (
The predetermined test, in which multiple values of the indicator of skew of different regions are used in aggregate in act 222 by processor 1013 can be different, in different embodiments. For example, certain embodiments of the predetermined test of act 222 may use statistical methods to compute mean or median of the multiple values, followed by filtering outliers among the multiple values, followed by re-computation of mean or median of the filtered values and comparison to a threshold (e.g. greater than ±5 degrees) to determine whether or not skew in the image as a whole is acceptable.
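For example, a sketch of one such statistical test, using Python's statistics module, follows; it is an assumption for illustration (not the claimed test), and the per-region values are taken here to be skew angles in degrees.

```python
# Illustrative aggregate test: filter outliers around the median of the
# per-region values, re-compute the median, and compare to a ±5° limit.
import statistics

def image_skew_acceptable(skew_angles, spread=2.0, limit=5.0):
    med = statistics.median(skew_angles)
    dev = statistics.median(abs(a - med) for a in skew_angles) or 1e-9
    filtered = [a for a in skew_angles if abs(a - med) <= spread * dev]
    return abs(statistics.median(filtered)) <= limit  # within ±5 degrees
```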
After operation 220, when skew is found to be acceptable across multiple regions of an image, processor 1013 performs an operation 240 (
Operation 250 is followed by an operation 252 (
Operations 240, 250, 252, 260 and 270 described above (in the two preceding paragraphs) are performed by processor 1013 when skew is acceptable, as illustrated by block 304 (
After operation 230, skew is corrected by user action, although the specific action to be performed by a user to correct skew of an image can be different, depending on the embodiment. For example, a user may rotate mobile device 200 in a counterclockwise direction, as shown by arrow 205 in
As another example (not shown), the user may rotate an object 100 that carries text in the scene being imaged (such as a newspaper) so that skew of the image is reduced, relative to a field of view of a camera 204 in mobile device 200. Accordingly, after operation 230, processor 1013 waits for a preset time period and then returns to act 211 to repeat the above-described process. When metrics Mi . . . Mj of skew in corresponding regions i . . . j become sufficiently small, act 222 indicates that skew globally in the image is acceptable, and then processor 1013 performs operations 240, 250, 252, 260 and 270 described above.
As will be readily apparent in view of this detailed description, skew can be reduced by user action in other ways, as illustrated in
In act 231, processor 1013 of some embodiments changes orientation of one or more regions in image 203 based on user input (as a measurement of rotation of an area of touch, as shown by arrow 208 in
In another example (see
Accordingly, accuracy in identifying text regions of a natural image (or video frame) is higher when using blocks that do not have skew or whose skew has been corrected, as illustrated in
A specific manner in which act 221 (
Mi=(Hi*Wi)/Ni
wherein Hi is the height of the block in number of pixels, e.g. H1 in
The above-described formula for skew metric Mi is based on an observation that the number Ni of pixels of the region i remains unchanged as an angle of skew is changed, but the area of the block (or minimum bounding rectangle) around the region changes (e.g. the block becomes larger with increasing skew, and the block becomes smaller with decreasing skew or increasing alignment). A value obtained by use of the above formula for skew metric Mi of a region Qi is then stored in memory 1012 (which is a non-transitory memory) in location 303 therein (
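To illustrate with a hypothetical example: a horizontal stroke of 60 pixels that is 1 pixel thick has Hi=1, Wi=60 and Ni=60, so that Mi=(1*60)/60=1. When the same stroke is skewed by 45 degrees, its minimum bounding rectangle grows to roughly 43 pixels on a side while Ni remains 60, so that Mi is approximately (43*43)/60, i.e. about 31. Hence the metric Mi grows with increasing skew, even though the region itself is unchanged.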
When skew is corrected based on receipt of user input (e.g. sensing of movement of an external object carrying text thereon, such as newspaper 100, or sensing of movement of mobile device 200, or sensing of movement of both relative to one another), the same region 301 in a new image (after rotation) may now be contained in a block 304 that is smaller (
The above-described skew metric Mi is used in some embodiments to decide whether or not an image has skew, followed by skew correction as illustrated in
First threshold t1 is predetermined in several embodiments, based on empirical data as follows. Four examples of empirical data are illustrated in
In several of the above-described examples, first threshold t1 depends on font size, and therefore all regions of an image may be sorted by relative size, followed by use of a median to pick the appropriate threshold t1 (e.g. value 3 for regions of size larger than median and value 4 for regions of size smaller than median).
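A minimal sketch of the median-based selection of threshold t1 just described follows; taking a region's “size” to be its block height is an assumption for illustration.

```python
# Pick the first threshold t1 for a region by comparing its size to the
# median size of all regions (example values 3 and 4 from the text above).

def threshold_for_region(region_height, all_heights):
    heights = sorted(all_heights)
    median = heights[len(heights) // 2]
    return 3.0 if region_height > median else 4.0
```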
Referring back to
In act 315 (
In act 316, when the answer is yes, the image is identified as having skew present in an act 317, followed by an operation 230 of skew correction as follows. Specifically, in act 321, mobile device 200 displays, on screen 201, a symbol 202 (
Symbol 202 may be displayed on screen 201 with image 203 in the background, as shown in
In certain embodiments, after performance of act 323, mobile device 200 uses the user input to rotate the image relative to screen 201, which is sensitive to touch (or normal screen 1002, depending on the embodiment), as per act 324. As noted above, depending on the embodiment, the user input may be received from a sensor that senses movement of an area of touch on screen 201 of mobile device 200, as the user reduces the skew angle by aligning a first direction of a symbol (e.g. “+” sign) relative to a second direction of the image. In this example, the user input is in the form of a signal from a touch screen to a processor 1013 in mobile device 200. Mobile device 200 may use such input (indicative of skew angle) in any manner, e.g. to correct skew in image 203 without physical movement of mobile device 200.
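By way of illustration, the following sketch (an assumption, using OpenCV) applies a user-indicated skew angle to rotate the image in software, without physical movement of the device; the assumption here is that the touch-screen handler reports the angle in degrees.

```python
# Rotate the image through the negative of the indicated skew angle,
# about the image center, to undo the skew in software.
import cv2

def correct_skew(image, skew_angle_degrees):
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)
    matrix = cv2.getRotationMatrix2D(center, -skew_angle_degrees, 1.0)
    return cv2.warpAffine(image, matrix, (w, h))
```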
In several embodiments, a skew metric of the type described above is computed after testing a block for presence of a peak in a histogram, e.g. within a specific region of the block (and on finding the test to be satisfied). A histogram of some embodiments is of counts of black pixels (alternatively counts of white pixels), as a function of height of the block. Presence of the just-described peak in such a histogram of a block typically occurs due to presence in the block of a line of pixels of a common binary value (e.g. value 1 for black pixels), which may form a header line (also called shiro-rekha) in a block that contains text in the Hindi language written in the Devanagari script (see line 399 in
Hence, one or more blocks of an image that are determined to have a line of pixels present therein (e.g. determined by performing a test of the type described in the immediately preceding paragraph) may then be subject to skew metric computation in operation 220, followed by skew correction in operation 230, as described above. As will be readily apparent in view of this disclosure, specific criteria that are used to test for presence of a pixel line that connects multiple characters of text in an image may be different, e.g. depending on the language and script of text to be detected in an image.
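For illustration, a sketch of one such test for a header line follows: count pixels of the common binary value in each row of a binarized block and test for a dominant peak. The ratio 1.75 and the upper-30% location criterion used here anticipate the example values discussed further below.

```python
# Test a binarized block (2-D array of 0/1 values, 1 = black, row 0 = top)
# for a header line: a dominant peak in the per-row black-pixel histogram,
# located in the upper 30% of the block.

def has_header_line(block):
    counts = [sum(row) for row in block]       # histogram over block height
    peak = max(counts)
    mean = sum(counts) / len(counts)
    if mean == 0 or peak / mean <= 1.75:       # no dominant peak
        return False
    peak_row = counts.index(peak)
    return peak_row < 0.3 * len(block)         # peak in upper 30% of block
```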
Although in some embodiments, skew correction in operation 230 is based on prompting for and receiving user input on tilt, other embodiments (described in the next paragraph, below) automatically search coarsely for a tilt angle, followed by searching finely within the coarsely determined range of tilt angles. After automatic skew correction as just described, the skew-corrected blocks are subjected to a merger operation wherein one or more blocks are merged with one another, followed by checking for the presence of a line of pixels in a block 504 (
As noted above, a specific manner in which skew is corrected in operation 230 can be different in different embodiments. In some embodiments, processor 1013 is programmed to automatically detect skew as follows. Processor 1013 checks whether, at a candidate angle, one or more attributes of a histogram of counts of pixels of a common binary value meet at least one test for presence of a straight line of pixels. Some embodiments detect a peak of the histogram of a block at the candidate angle by comparing a highest value Np in the counters to a mean Nm of all values in the counters, e.g. by forming a ratio therebetween as Np/Nm, followed by comparing that ratio against a predetermined limit (e.g. ratio>1.75 indicates a peak).
When a peak is found (e.g. the predetermined limit is exceeded by the ratio), some embodiments of processor 1013 perform an additional test wherein a y-coordinate of the peak is compared with a height of the block, to determine whether the peak occurs in an upper 30% of the block. If the additional test is found to be satisfied, processor 1013 of some embodiments selects the candidate angle (at which the pixel line is determined to be present) for use in a voting process, and increments a counter associated with the candidate angle. Processor 1013 repeats the process described in this paragraph with additional blocks of the image, and after a sufficient number of such votes have been counted (e.g. 10 votes), the candidate angle of the counter which has the largest number of votes is used as the skew angle, which is then used to automatically correct skew in each block (e.g. by rotating each block through the negative of the skew angle).
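The voting process just described may be sketched as follows; the candidate-angle grid, the rotate and has_line helpers (e.g. the header-line test sketched earlier), and the limit of one vote per block are simplifying assumptions for illustration.

```python
# Candidate-angle voting: each block votes for an angle at which a pixel
# line is detected; the most-voted angle is taken as the skew angle.
from collections import Counter

def estimate_skew_angle(blocks, candidate_angles, rotate, has_line,
                        min_votes=10):
    votes = Counter()
    for block in blocks:
        for angle in candidate_angles:
            if has_line(rotate(block, angle)):
                votes[angle] += 1        # block votes for this angle
                break                    # at most one vote per block
        if sum(votes.values()) >= min_votes:
            break                        # enough votes counted (e.g. 10)
    # Blocks may then be rotated through the negative of this angle.
    return votes.most_common(1)[0][0] if votes else 0.0
```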
While various examples described herein use Devanagari to illustrate certain concepts, such as detecting a peak as noted above, those of skill in the art will appreciate that these concepts may be applied to languages or scripts other than Devanagari. For example, the peak-location preset criterion for Arabic may be 0.4≦Hp/H≦0.6, to test for presence of a peak in a middle 20% region of a block, based on profiles for Arabic text shown and described in an article entitled “Techniques for Language Identification for Hybrid Arabic-English Document Images” by Ahmed M. Elgammal and Mohamed A. Ismail, believed to be published 2001 in Proc. of IEEE 6th International Conference on Document Analysis and Recognition, pages 1100-1104, which is incorporated by reference herein in its entirety. Note that although certain criteria are described for Arabic and English, other similar criteria may be used for text in other languages wherein a horizontal line is used to interconnect letters of a word, e.g. text in the language Bengali (or Bangla). Moreover, embodiments described herein may also be used to detect and correct skew in Korean, Chinese, Japanese, Greek, Hebrew and/or other languages.
Several operations and acts of the type described herein are implemented by a processor 1013 (
Also, mobile device 200 may additionally include a graphics engine 1004, an image processor 1005, and a position processor. Mobile device 200 may also include a disk 1008 to store data and/or software for use by processor 1013. Mobile device 200 may further include wireless transmitter and receiver circuitry (in circuit 1010) and/or any other communication interfaces 1009. A transmitter in circuit 1010 may be an IR or RF transmitter, or a wireless transmitter, enabled to transmit one or more signals over one or more types of wireless communication networks, such as the Internet, WiFi, cellular wireless network or other network.
Note that input to mobile device 200 can be in video mode, where each frame in the video is equivalent to the image input which is used to identify regions, and to compute a skew metric as described herein. Also, the image used to compute a skew metric as described herein can be fetched from a pre-stored file in a memory 1012 of mobile device 200.
It should be understood that mobile device 200 may be any portable electronic device such as a cellular or other wireless communication device, personal communication system (PCS) device, personal navigation device (PND), Personal Information Manager (PIM), Personal Digital Assistant (PDA), laptop, camera, or other suitable mobile device that is capable of augmented reality (AR).
A mobile device 200 of the type described above may include an optical character recognition (OCR) system as well as software that uses “computer vision” techniques. The mobile device 200 may further include, in a user interface, a microphone and a speaker (not labeled) in addition to screen 201 (which is a touch screen), or normal screen 1002 for displaying captured images. Of course, mobile device 200 may include other elements unrelated to the present disclosure, such as a read-only-memory 1007 which may be used to store firmware for use by processor 1013.
Mobile device 200 of some embodiments includes, in memory 1012 (
In several embodiments, skew correction module 1230 (
In some embodiments, memory 1012 may include instructions for a classifier that when executed by processor 1013 classifies the blocks 302, 304 that are unmerged (
Depending on the embodiment, various functions of the type described herein may be implemented in software (executed by one or more processors or processor cores) or in dedicated hardware circuitry or in firmware, or in any combination thereof. Accordingly, depending on the embodiment, any one or more of skew presence detection module 1220, skew correction module 1230, and user interface 181U illustrated in
Accordingly, in some embodiments, skew presence detection module 1220 implements means for computing a plurality of values of an indicator of skew in a plurality of regions in an image. Moreover, skew presence detection module 1220 of several such embodiments also implements means for determining whether skew of the image is unacceptable, by applying a predetermined test to the plurality of values of the indicator. Furthermore, user interface 181U implements means for requesting user input to correct skew of the image, in response to skew being determined to be unacceptable by the means for determining. Additionally, skew correction module 1230 implements means for correcting skew, based on a user-input skew angle received from user interface 181U. In certain embodiments, a storage module implements means for storing in at least one memory, information related to a skew-corrected block which may be merged with an adjacent block by a block merging module.
Hence, methodologies described herein may be implemented by various means depending upon the application. For example, these methodologies may be implemented in firmware in read-only-memory 1007 (
Any machine-readable medium tangibly embodying computer instructions may be used in implementing the methodologies described herein. For example, merger software 141 and rectification software 181 (
One or more non-transitory computer readable media include physical computer storage media. A computer readable medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, non-transitory computer readable storage media can comprise RAM, ROM, Flash Memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store program code in the form of software instructions (also called “processor instructions” or “computer instructions”) or data structures and that can be accessed by a computer; disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of one or more non-transitory computer readable storage media.
Although several aspects are illustrated in connection with specific embodiments for instructional purposes, the present invention is not limited thereto. For example, although mobile device 200 is shown in
This application claims priority under 35 USC §119(e) from U.S. Provisional Application No. 61/590,966 filed on Jan. 26, 2012 and entitled “Identifying Regions Of Text To Merge In A Natural Image or Video Frame”, which is assigned to the assignee hereof and which is incorporated herein by reference in its entirety. This application claims priority under 35 USC §119(e) from U.S. Provisional Application No. 61/590,983 filed on Jan. 26, 2012 and entitled “Detecting and Correcting Skew In Regions Of Text In Natural Images”, which is assigned to the assignee hereof and which is incorporated herein by reference in its entirety. This application claims priority under 35 USC §119(e) from U.S. Provisional Application No. 61/590,973 filed on Jan. 26, 2012 and entitled “Rules For Merging Blocks Of Connected Components In Natural Images”, which is assigned to the assignee hereof and which is incorporated herein by reference in its entirety. This application claims priority under 35 USC §119(e) from U.S. Provisional Application No. 61/673,703 filed on Jul. 19, 2012 and entitled “Automatic Correction of Skew In Natural Images and Video”, which is assigned to the assignee hereof and which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3710321 | Rubenstein | Jan 1973 | A |
4654875 | Srihari et al. | Mar 1987 | A |
5321768 | Fenrich et al. | Jun 1994 | A |
5459739 | Handley et al. | Oct 1995 | A |
5519786 | Courtney et al. | May 1996 | A |
5633954 | Gupta et al. | May 1997 | A |
5764799 | Hong et al. | Jun 1998 | A |
5768451 | Hisamitsu et al. | Jun 1998 | A |
5805747 | Bradford | Sep 1998 | A |
5835633 | Fujisaki et al. | Nov 1998 | A |
5844991 | Hochberg et al. | Dec 1998 | A |
5978443 | Patel | Nov 1999 | A |
6023536 | Visser | Feb 2000 | A |
6393443 | Rubin et al. | May 2002 | B1 |
6473517 | Tyan et al. | Oct 2002 | B1 |
6674919 | Ma et al. | Jan 2004 | B1 |
6678415 | Popat et al. | Jan 2004 | B1 |
6687421 | Navon | Feb 2004 | B1 |
6738512 | Chen et al. | May 2004 | B1 |
7263223 | Irwin | Aug 2007 | B2 |
7333676 | Myers et al. | Feb 2008 | B2 |
7403661 | Curry et al. | Jul 2008 | B2 |
7724957 | Abdulkader | May 2010 | B2 |
7738706 | Aradhye et al. | Jun 2010 | B2 |
7783117 | Liu et al. | Aug 2010 | B2 |
7817855 | Yuille et al. | Oct 2010 | B2 |
7889948 | Steedly et al. | Feb 2011 | B2 |
7984076 | Kobayashi et al. | Jul 2011 | B2 |
8009928 | Manmatha et al. | Aug 2011 | B1 |
8189961 | Nijemcevic et al. | May 2012 | B2 |
8194983 | Al-Omari et al. | Jun 2012 | B2 |
20030026482 | Dance | Feb 2003 | A1 |
20030099395 | Wang et al. | May 2003 | A1 |
20040179734 | Okubo | Sep 2004 | A1 |
20040240737 | Lim et al. | Dec 2004 | A1 |
20050041121 | Steinberg et al. | Feb 2005 | A1 |
20050123199 | Mayzlin et al. | Jun 2005 | A1 |
20050238252 | Prakash et al. | Oct 2005 | A1 |
20060215231 | Borrey et al. | Sep 2006 | A1 |
20060291692 | Nakao et al. | Dec 2006 | A1 |
20070110322 | Yuille et al. | May 2007 | A1 |
20070116360 | Jung et al. | May 2007 | A1 |
20080008386 | Anisimovich et al. | Jan 2008 | A1 |
20080112614 | Fluck et al. | May 2008 | A1 |
20090202152 | Takebe et al. | Aug 2009 | A1 |
20090232358 | Cross | Sep 2009 | A1 |
20090252437 | Li et al. | Oct 2009 | A1 |
20090316991 | Geva et al. | Dec 2009 | A1 |
20090317003 | Heilper et al. | Dec 2009 | A1 |
20100049711 | Singh et al. | Feb 2010 | A1 |
20100067826 | Honsinger et al. | Mar 2010 | A1 |
20100080462 | Miljanic et al. | Apr 2010 | A1 |
20100128131 | Tenchio et al. | May 2010 | A1 |
20100141788 | Hwang et al. | Jun 2010 | A1 |
20100144291 | Stylianou et al. | Jun 2010 | A1 |
20100172575 | Lukac et al. | Jul 2010 | A1 |
20100195933 | Nafarieh | Aug 2010 | A1 |
20100232697 | Mishima et al. | Sep 2010 | A1 |
20100239123 | Funayama et al. | Sep 2010 | A1 |
20100245870 | Shibata | Sep 2010 | A1 |
20100272361 | Khorsheed et al. | Oct 2010 | A1 |
20100296729 | Mossakowski | Nov 2010 | A1 |
20110052094 | Gao et al. | Mar 2011 | A1 |
20110081083 | Lee et al. | Apr 2011 | A1 |
20110188756 | Lee et al. | Aug 2011 | A1 |
20110249897 | Chaki et al. | Oct 2011 | A1 |
20110274354 | Nijemcevic | Nov 2011 | A1 |
20110280484 | Ma et al. | Nov 2011 | A1 |
20110285873 | Showering et al. | Nov 2011 | A1 |
20120051642 | Berrani et al. | Mar 2012 | A1 |
20120066213 | Ohguro | Mar 2012 | A1 |
20120092329 | Koo et al. | Apr 2012 | A1 |
20120114245 | Lakshmanan et al. | May 2012 | A1 |
20120155754 | Chen et al. | Jun 2012 | A1 |
20130194448 | Baheti et al. | Aug 2013 | A1 |
20130195315 | Baheti et al. | Aug 2013 | A1 |
20130195360 | Kumar et al. | Aug 2013 | A1 |
20140023270 | Baheti et al. | Jan 2014 | A1 |
20140023271 | Baheti et al. | Jan 2014 | A1 |
20140023273 | Baheti et al. | Jan 2014 | A1 |
20140023274 | Barman et al. | Jan 2014 | A1 |
20140023275 | Kumar et al. | Jan 2014 | A1 |
20140023278 | Kumar et al. | Jan 2014 | A1 |
Number | Date | Country |
---|---|---|
1146478 | Oct 2001 | EP |
1840798 | Oct 2007 | EP |
2192527 | Jun 2010 | EP |
2453366 | Apr 2009 | GB |
2468589 | Sep 2010 | GB |
2004077358 | Sep 2004 | WO |
Entry |
---|
Chaudhuri, B.B. et al. “Skew Angle Detection of Digitized Indian Script Documents”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, No. 2, Feb. 1997, pp. 182-186. |
Chen, X. et al. “Detecting and Reading Text in Natural Scenes,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04), 2004, pp. 1-8. |
Epshtein, B. et al. “Detecting text in natural scenes with stroke width transform,” Computer Vision and Pattern Recognition (CVPR) 2010, pp. 1-8, (as downloaded from “http://research.microsoft.com/pubs/149305/1509.pdf”). |
Jain, A. K. et al. “Automatic text location in images and video frames”, Pattern Recognition, vol. 31, No. 12, 1998, pp. 2055-2076. |
Jayadevan, R. et al. “Offline Recognition of Devanagari Script: A Survey”, IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, 2010, pp. 1-15. |
Kapoor, R. et al. “Skew angle detection of a cursive handwritten Devanagari script character image”, Indian Institute of Science, May-Aug. 2002, pp. 161-175. |
Lee, S-W. et al. “A new methodology for gray-scale character segmentation and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, No. 10, Oct. 1996, pp. 1045-1050. |
Li, H. et al. “Automatic Text Detection and Tracking in a Digital Video”, IEEE Transactions on Image Processing, vol. 9 No. 1, Jan. 2000, pp. 147-156. |
Matas, J. et al. “Robust Wide Baseline Stereo from Maximally Stable Extremal Regions”, Proc. of British Machine Vision Conference, 2002, pp. 384-393. |
Mikulik, A. et al. “Construction of Precise Local Affine Frames,” Center for Machine Perception, Czech Technical University in Prague, Czech Republic, Abstract and second paragraph of Section 1; Algorithms 1 & 2 of Section 2 and Section 4, International Conference on Pattern Recognition, 2010, pp. 1-5. |
Pal, U. et al. “Indian script character recognition: a survey”, Pattern Recognition Society, published by Elsevier Ltd, 2004, pp. 1887-1899. |
Shin, H. et al. “Application of Floyd-Warshall Labelling Technique: Identification of Connected Pixel Components in Binary Image”, Kangweon-Kyungki Math. Jour. 14 (2006), No. 1, pp. 47-55. |
Nister, D. et al. “Linear Time Maximally Stable Extremal Regions”, ECCV, 2008, Part II, LNCS 5303, pp. 183-196, published by Springer-Verlag Berlin Heidelberg. |
Park, J-M. et al. “Fast Connected Component Labeling Algorithm Using a Divide and Conquer Technique”, believed to be published in Matrix (2000), vol. 4, Issue: 1, Publisher: Elsevier Ltd, pp. 4-7. |
Elgammal, A. M. et al. “Techniques for Language Identification for Hybrid Arabic-English Document Images”, believed to be published in 2001 in Proceedings of IEEE 6th International Conference on Document Analysis and Recognition, pp. 1-5. |
Pardo, M. et al. “Learning From Data: A Tutorial With Emphasis on Modern Pattern Recognition Methods,” IEEE Sensors Journal, vol. 2, No. 3, Jun. 2002, pp. 203-217. |
Holmstrom, L. et al. “Neural and Statistical Classifiers—Taxonomy and Two Case Studies,” IEEE Transactions on Neural Networks, vol. 8, No. 1, Jan. 1997, pp. 5-17. |
Machine Learning, retrieved from http://en.wikipedia.org/wiki/Machine_learning, May 7, 2012, pp. 1-8. |
Moving Average, retrieved from http://en.wikipedia.org/wiki/Moving_average, Jan. 23, 2013, pp. 1-5. |
Chen, H. et al. “Robust Text Detection in Natural Images With Edge-Enhanced Maximally Stable Extremal Regions”, believed to be published in IEEE International Conference on Image Processing (ICIP), Sep. 2011, pp. 1-4. |
Dlagnekov, L. et al. “Detecting and Reading Text in Natural Scenes”, Oct. 2004, pp. 1-22. |
Vedaldi, A. “An Implementation of Multi-Dimensional Maximally Stable Extremal Regions”, Feb. 7, 2007, pp. 1-7. |
VLFeat—Tutorials—MSER, retrieved from http://www.vlfeat.org/overview/mser.html, Apr. 30, 2012, pp. 1-2. |
Renold, M. “Detecting and Reading Text in Natural Scenes”, Master's Thesis, May 2008, pp. 1-59. |
Jain, A. K. et al. “Automatic Text Location in Images and Video Frames,” believed to be published in Proceedings of Fourteenth International Conference on Pattern Recognition, vol. 2, Aug. 1998, pp. 1497-1499. |
Chen Y.L., “A knowledge-based approach for textual information extraction from mixed text/graphics complex document images”, Systems Man and Cybernetics (SMC), 2010 IEEE International Conference on, IEEE, Piscataway, NJ, USA, Oct. 10, 2010, pp. 3270-3277, XP031806156, ISBN: 978-1-4244-6586-6. |
Co-pending U.S. Appl. No. 13/831,237, filed Mar. 14, 2013, (34 pages). |
Co-pending U.S. Appl. No. 13/842,985, filed Mar. 15, 2013 (53 pages). |
Song Y., et al., “A Handwritten Character Extraction Algorithm for Multi-language Document Image”, 2011 International Conference on Document Analysis and Recognition, Sep. 18, 2011, pp. 93-98, XP055068675, DOI: 10.1109/ICDAR.2011.28 ISBN: 978-1-45-771350-7. |
Wu V., et al., “TextFinder: An Automatic System to Detect and Recognize Text in Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21. No. 11, Nov. 1, 1999, pp. 1224-1229, XP055068381. |
Agrawal, et al., “Generalization of Hindi OCR Using Adaptive Segmentation and Font Files,” V. Govindaraju, S. Setlur (eds.), Guide to OCR for Indic Scripts, Advances in Pattern Recognition, DOI 10.1007/978-1-84800-330-9_10, Springer-Verlag London Limited 2009, pp. 181-207. |
Chaudhuri B., Ed., “Digital Document Processing—Major Directions and Recent Advances”, 2007, Springer-Verlag London Limited, XP002715747, ISBN : 978-1-84628-501-1 pp. 103-106, p. 106, section “5.3.5 Zone Separation and Character Segmentation”, paragraph 1. |
Chaudhuri B.B., et al., “An OCR system to read two Indian language scripts: Bangla and Devnagari (Hindi)”, Proceedings of the 4th International Conference on Document Analysis and Recognition (ICDAR). Ulm, Germany, Aug. 18-20, 1997; [Proceedings of the ICDAR], Los Alamitos, IEEE Comp. Soc, US, vol. 2, Aug. 18, 1997, pp. 1011-1015, XP010244882, DOI: 10.1109/ICDAR.1997.620662 ISBN: 978-0-8186-7898-1 the whole document. |
Chaudhury S (Eds.): “OCR Technical Report for the project Development of Robust Document Analysis and Recognition System for Printed Indian Scripts”, 2008, pp. 149-153, XP002712777, Retrieved from the Internet: URL:http://researchweb.iiit.ac.in/~jinesh/ocrDesignDoc.pdf [retrieved on Sep. 5, 2013]. |
Dalal N., et al., “Histograms of oriented gradients for human detection”, Computer Vision and Pattern Recognition, 2005 IEEE Computer Society Conference on, IEEE, Piscataway, NJ, USA, Jun. 25, 2005, pp. 886-893 vol. 1, XP031330347, ISBN: 978-0-7695-2372-9 Section 6.3. |
Forssen P.E., et al., “Shape Descriptors for Maximally Stable Extremal Regions”, Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, IEEE, PI, Oct. 1, 2007, pp. 1-8, XP031194514, ISBN: 978-1-4244-1630-1 abstract Section 2. Multi-resolution MSER. |
Minoru, M., Ed., “Character Recognition”, Aug. 2010, Sciyo, XP002715748, ISBN: 978-953-307-105-3 pp. 91-95, p. 92, section “7.3 Baseline Detection Process”. |
Pal U et al., “Multi-skew detection of Indian script documents” Document Analysis and Recognition, 2001. Proceedings. Sixth International Conference on Seattle, WA, USA Sep. 10-13, 2001, Los Alamitos, CA, USA, IEEE Comput. Soc. US, Sep. 10, 2001, pp. 292-296, XP010560519, DOI:10.1109/ICDAR.2001.953801, ISBN: 978-0-7695-1263-1. |
Pal U., et al., “OCR in Bangla: an Indo-Bangladeshi language”, Pattern Recognition, 1994. vol. 2—Conference B: Computer Vision & Image Processing., Proceedings of the 12th IAPR International Conference on, Jerusalem, Israel Oct. 9-13, 1994, Los Alamitos, CA, USA, IEEE Comput. Soc, vol. 2, Oct. 9, 1994, pp. 269-273, XP010216292, DOI: 10.1109/ICPR.1994.576917 ISBN: 978-0-8186-6270-6 the whole document. |
Premaratne H.L., et al., “Lexicon and hidden Markov model-based optimisation of the recognised Sinhala script”, Pattern Recognition Letters, Elsevier, Amsterdam, NL, vol. 27, No. 6, Apr. 15, 2006 , pp. 696-705, XP027922538, ISSN: 0167-8655. |
Ray A.K et al., “Information Technology—Principles and Applications”, 2004, Prentice-Hall of India Private Limited, New Delhi, XP002712579, ISBN: 81-203-2184-7, pp. 529-531. |
Senda S., et al., “Fast String Searching in a Character Lattice,” IEICE Transactions on Information and Systems, Information & Systems Society, Tokyo, JP, vol. E77-D, No. 7, Jul. 1, 1994, pp. 846-851, XP000445299, ISSN: 0916-8532. |
Senk V., et al., “A new bidirectional algorithm for decoding trellis codes,” EUROCON' 2001, Trends in Communications, International Conference on Jul. 4-7, 2001, Piscataway, NJ, USA, IEEE, Jul. 4, 2001, pp. 34-36, vol. I, XP032155513, DOI :10.1109/EUROCON.2001.937757 ISBN : 978-0-7803-6490-5. |
Sinha R.M.K., et al., “On Devanagari document processing”, Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century., IEEE International Conference on Vancouver, BC, Canada Oct. 22-25, 1995, New York NY, USA,IEEE, US, vol. 2, Oct. 22, 1995, pp. 1621-1626, XP010194509, DOI: 10.1109/ICSMC.1995.538004 ISBN: 978-0-7803-2559-3 the whole document. |
Uchida S et al., “Skew Estimation by Instances”, 2008 The Eighth IAPR International Workshop on Document Analysis Systems, Sep. 1, 2008, pp. 201-208, XP055078375, DOI: 10.1109/DAS.2008.22, ISBN: 978-0-76-953337-7. |
Unser M., “Sum and Difference Histograms for Texture Classification”, Transactions on Pattern Analysis and Machine Intelligence, IEEE, Piscataway, USA, vol. 30, No. 1, Jan. 1, 1986, pp. 118-125, XP011242912, ISSN: 0162-8828 section A; p. 122, right-hand col. p. 123. |
Written Opinion of the International Preliminary Examining Authority—PCT/US2013/023003—IPEA/EPO—Feb. 13, 2014. |
“4.1 Points and patches” In: Szeliski Richard: “Computer Vision—Algorithms and Applications”, 2011, Springer-Verlag, London, XP002696110, p. 195, ISBN: 978-1-84882-934-3. |
Agrawal M., et al., “2 Base Devanagari OCR System” In: Govindaraju V, Srirangataj S (Eds.): “Guide to OCR for Indic Scripts—Document Recognition and Retrieval”, 2009, Springer Science+Business Media, London, XP002696109, pp. 184-193, ISBN: 978-1-84888-329-3. |
Chowdhury A.R., et al., “Text Detection of Two Major Indian Scripts in Natural Scene Images”, Sep. 22, 2011, Camera-Based Document Analysis and Recognition, Springer Berlin Heidelberg, pp. 42-57, XP019175802, ISBN: 978-3-642-29363-4. |
Ghoshal R., et al., “Headline Based Text Extraction from Outdoor Images”, 4th International Conference on Pattern Recognition and Machine Intelligence, Springer LNCS, vol. 6744, Jun. 27, 2011, pp. 446-451, XP055060285. |
International Search Report and Written Opinion—PCT/US2013/023003—ISA/EPO—May 16, 2013, pp. 1-11. |
Papandreou A. et al., “A Novel Skew Detection Technique Based on Vertical Projections”, International Conference on Document Analysis and Recognition, Sep. 18, 2011, pp. 384-388, XP055062043, DOI: 10.1109/ICDAR.2011.85, ISBN: 978-1-45-771350-7. |
Setlur, et al., “Creation of data resources and design of an evaluation test bed for Devanagari script recognition”, Research Issues in Data Engineering: Multi-lingual Information Management, RIDE-MLIM 2003. Proceedings. 13th International Workshop, 2003, pp. 55-61. |