Field
This disclosure generally relates to techniques for recognizing text from captured images, and specifically to deciding on a word based on a trellis structure and possible character decisions resulting from optical character recognition.
Background
Image and text recognition provides important functionality to today's mobile devices. In particular, a user may travel to a country where the national language is unknown to the user, and then may translate signs, menus, or other text included in a camera image into the user's home language (i.e., from a first language to a second language). With some systems, a user may access additional information once the text in the image is recognized.
Word recognition is especially difficult for some languages because of the wide range of character combinations, a large dictionary, and limited power. What is needed is a means for increasing the efficiency of optical word recognition, more quickly determining a valid word, and reducing power consumption.
Systems, apparatuses, and methods disclosed herein provide for efficient and accurate recognition of text in images.
According to some aspects, disclosed is a method to relate images of words to a list of words in an optical character recognition (OCR) system, the method comprising: receiving a plurality of OCR characters corresponding to an image of a word, wherein the plurality of OCR characters are from an OCR system; determining a most likely (ML) path based on the plurality of OCR characters applied to a loaded forward trellis and a loaded reverse trellis, thereby forming a decoded word; and displaying the decoded word.
According to some aspects, disclosed is a mobile device to relate images of words to a list of words in an optical character recognition (OCR) system, the mobile device comprising: a camera configured to capture an image of a word; a processor coupled to the camera, the processor comprising code to: receive a plurality of OCR characters corresponding to the image of the word; and determine a most likely (ML) path based on the plurality of OCR characters applied to a loaded forward trellis and a loaded reverse trellis, thereby forming a decoded word; and a display coupled to the processor and configured to display the decoded word.
According to some aspects, disclosed is a mobile device to relate images of words to a list of words in an optical character recognition (OCR) system, the mobile device comprising: means for receiving a plurality of OCR characters corresponding to an image of a word, wherein the plurality of OCR characters are from an OCR system; means for determining a most likely (ML) path based on the plurality of OCR characters applied to a loaded forward trellis and a loaded reverse trellis, thereby forming a decoded word; and means for displaying the decoded word.
According to some aspects, disclosed is a device to relate images of words to a list of words in an optical character recognition (OCR) system, the device comprising a processor and a memory wherein the memory includes software instructions to: receive a plurality of OCR characters corresponding to an image of a word, wherein the plurality of OCR characters are from an OCR system; determine a most likely (ML) path based on the plurality of OCR characters applied to a loaded forward trellis and a loaded reverse trellis, thereby forming a decoded word; and display the decoded word.
According to some aspects, disclosed is a non-transitory computer-readable storage medium including program code stored thereon, for a method to relate images of words to a list of words in an optical character recognition (OCR) system, comprising program code to: receive a plurality of OCR characters corresponding to an image of a word, wherein the plurality of OCR characters are from an OCR system; determine a most likely (ML) path based on the plurality of OCR characters applied to a loaded forward trellis and a loaded reverse trellis, thereby forming a decoded word; and display the decoded word.
According to some aspects, disclosed is a method to prepare a forward trellis and a reverse trellis in an optical character recognition (OCR) system, the method comprising: accessing a list of words; loading the forward trellis using the list of words to form a loaded forward trellis; and loading the reverse trellis using the list of words to form a loaded reverse trellis.
According to some aspects, disclosed is a server to prepare a forward trellis and a reverse trellis in an optical character recognition (OCR) system, the server comprising: a list of words; the forward trellis; the reverse trellis; a processor coupled to receive the list of words and coupled to load the forward trellis and the reverse trellis, wherein the processor comprises program code to: access the list of words; load the forward trellis using the list of words to form a loaded forward trellis; and load the reverse trellis using the list of words to form a loaded reverse trellis.
According to some aspects, disclosed is a mobile device to relate images of words to a list of words in an optical character recognition (OCR) system, the mobile device comprising: means for accessing the list of words; means for loading a forward trellis using the list of words to form a loaded forward trellis; and means for loading a reverse trellis using the list of words to form a loaded reverse trellis.
According to some aspects, disclosed is a server to prepare a forward trellis and a reverse trellis in an optical character recognition (OCR) system, the server comprising a processor and a memory, wherein the memory includes software instructions to: access a list of words; load the forward trellis using the list of words to form a loaded forward trellis; and load the reverse trellis using the list of words to form a loaded reverse trellis.
According to some aspects, disclosed is a non-transitory computer-readable storage medium including program code stored thereon for a server to prepare a forward trellis and a reverse trellis, the non-transitory computer-readable storage medium comprising program code to: access a list of words; load the forward trellis using the list of words to form a loaded forward trellis; and load the reverse trellis using the list of words to form a loaded reverse trellis.
The features and advantages of the disclosed method and apparatus will become more apparent to those skilled in the art after considering the following detailed description in connection with the accompanying drawings.
After processing an image through text detection and Optical Character Recognition (OCR), a word decoder (that leverages a dictionary or list of words) is used to determine a most likely valid word. The importance of an efficient word decoder increases with the increase of dictionary size. Systems and techniques described herein provide a trellis based word decoder that efficiently searches through a list of words to find a valid word. The techniques may also include an additional reverse pass, where the list of words is processed in a reverse order to create the trellis. The forward pass and reverse pass results may then be combined to obtain the final output.
At 126, some embodiments begin with the processor determining a most likely path through a trellis based on the set of characters 124. The processor then forms a word decision 129. The set of characters 124 may be represented by a plurality of possible characters for each position with a corresponding probability (as shown in
In
The trellis loading process begins with a first word, for example, containing three letters. Either the letters may be individually loaded or links connecting two letters may be loaded. At a first stage, a first letter is read. For the first example word “BAD,” a link from letter B at a first stage (B1) to letter A at a second stage (A2) is shown. A second link from A2 to D3 completes the path for the first example word “BAD.” Therefore, the word “BAD” may be saved as two links: B1→A2 and A2→D3. The process continues with example words CAB, DAD and BED with corresponding links C1 to A2 and A2 to B3 for “CAB,” D1 to A2 and A2 to D3 for “DAD,” and B1 to E2 and E2 to D3 for “BED.” In sum, for each node after the first column of nodes (i.e., representing a letter after the first letter position), an input linking vector is shown that identifies valid paths to that node.
Pre-processing may also compute probabilities associated with each node for entering and/or exiting the node. For the example given, node B1 has an equal 50% probability between two links: B1→A2 and B1→E2. Similarly, node A2 has an even probability from three links to enter node A2 (shown as 33%) and from two links to exit node A2 (shown as 50%). Node D3 has a ⅔ probability (67%) of entering from a previous ‘A’ and a ⅓ probability (33%) of entering from a previous ‘E.’
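The loading and probability computations described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function names are invented, and weighting each incoming link by the number of words that use it is an assumption chosen to reproduce the example percentages (33%, 50%, 67%).

```python
from collections import Counter, defaultdict

def load_trellis(words):
    """Map (stage, letter) to a Counter of previous letters linking in."""
    links = defaultdict(Counter)
    for word in words:
        for i in range(1, len(word)):
            # Save the link previous-letter -> current-letter at stage i
            links[(i, word[i])][word[i - 1]] += 1
    return links

def entry_probabilities(links, stage, letter):
    """Probability of entering a node from each possible previous letter."""
    counts = links[(stage, letter)]
    total = sum(counts.values())
    return {prev: n / total for prev, n in counts.items()}

links = load_trellis(["BAD", "CAB", "DAD", "BED"])
# A2 has three equally likely incoming links (from B1, C1, D1): 33% each
print(entry_probabilities(links, 1, "A"))
# D3 is entered from a previous 'A' by BAD and DAD (67%) and 'E' by BED (33%)
print(entry_probabilities(links, 2, "D"))
```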
Words may be loaded into a trellis by creating input linking vectors as described above. During runtime, the input linking vectors are examined when checking for a valid link and possible valid paths. The input linking vectors may be created offline, for example, by a server during a pre-processing period. The input linking vector includes one bit for each character in the alphabet. For example, for Roman letters, the length of the input linking vector may be 52 for 26 lower case letters and 26 upper case letters. The length may be increased to include 10 digits and various punctuation marks.
For Devanagari script, the length of the input linking vector may vary depending on the complexity of the system deployed. Devanagari vowels may be written independently or be used to modify a basic consonant with diacritical marks, which are written above, below, before or after the consonant they belong to. When a vowel modifies a basic consonant, the vowel is referred to as a “modifier.” The character formed by modifying a basic consonant with a vowel is referred to as a “conjunct.” When concatenating two or more consonants together, the new character is referred to as a “composite character” or a “compound character.” Compound characters may include basic consonants, modified consonants and half characters. Of the 33 consonants, 24 have half forms. Characters in a word are joined by a horizontal bar referred to as a “Shiro-rekha” or a headline. The many forms of compound characters lead to a large number of resulting characters, which makes OCR of Devanagari script very difficult.
Considering Devanagari script, the length of the input linking vector may be 44 for the 44 basic characters: 33 consonants and 11 vowels. Alternatively, the length of the input linking vector may be increased to include 10 modifiers. Additionally, the length of the input linking vector may be increased to include 24 half characters. Considering 44 basic characters (33 consonants referred to as “vyanjan” and 11 vowels referred to as “svar”), 10 modifiers and 24 half characters, the length of the input linking vector may be 78. Again, the length may be increased to include 10 digits and various punctuation marks. The input linking vector may be made shorter by including only common characters and excluding infrequently used characters.
In this example, the input linking vector is length five. A first node in the second column (denoted as A2 representing a letter ‘A’ in the second position of a word) is shown having an input linking vector [01110]. The input linking vector [01110] may be interpreted from the individual binary values ‘0’ (which means a first link is not valid), ‘1’ ‘1’ ‘1’ (the next three links are valid) and ‘0’ (the last link is also not valid). A link may be considered valid if a word has a transition between a previous node and the current node. Nodes representing the letters B, C, and D have no valid input links so the vector is shown as [00000]. Only one valid input link is shown for E2 (the letter E in the second position of a word) from B1; therefore, the node's vector is [01000], indicating that the only valid input link to E2 is from B1. For the final stage, representing a third letter in a word, input linking vectors are shown as [00000], [10000], [00000], [10001] and [00000], respectively for A3, B3, C3, D3 and E3, representing transitions to that node in column 3 by the previous stage from column 2.
Also shown, a set of words includes a letter represented by each node. That is, a word-node set is computed while building the trellis to show what words pass through that particular node. For example, regarding the first stage (a first letter in a word) no word in the list of words 202 starts with A1, so an empty set { } is shown. Two words start with B1, which are shown as {BAD, BED}. One word starts with C1, which is shown as {CAB}. Another word starts with D1, which is shown as {DAD}. No word starts with E1, so an empty set { } is shown. Regarding a second stage, three words contain A2; therefore, the set is shown as {BAD, CAB, DAD}. No words contain B2, C2 or D2 in the list of words 202; therefore, empty sets { } are shown. A single word includes E2, so the set shows {BED}. For the final stage, letters A3, B3, C3, D3 and E3 include sets { }, {CAB}, { }, {BAD, DAD, BED} and { }, respectively.
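The two per-node tables described above, the input linking vectors and the word-node sets, can be built in one pass over the word list. The sketch below is illustrative (the function name and data layout are assumptions) and uses the five-letter alphabet A–E from the example:

```python
ALPHABET = "ABCDE"

def build_node_tables(words):
    """Return per-node input linking vectors and word-node sets."""
    vectors, word_sets = {}, {}
    for word in words:
        for i, ch in enumerate(word):
            key = (i, ch)                       # node = (stage, letter)
            word_sets.setdefault(key, set()).add(word)
            if i > 0:                           # first column has no input links
                bits = vectors.setdefault(key, [0] * len(ALPHABET))
                bits[ALPHABET.index(word[i - 1])] = 1   # mark a valid input link
    return vectors, word_sets

vectors, word_sets = build_node_tables(["BAD", "CAB", "DAD", "BED"])
print(vectors[(1, "A")])            # [0, 1, 1, 1, 0] -> the [01110] vector for A2
print(sorted(word_sets[(1, "A")]))  # ['BAD', 'CAB', 'DAD']
print(vectors[(2, "D")])            # [1, 0, 0, 0, 1] -> the [10001] vector for D3
```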
The process of building the trellis by creating the input linking vectors and word-node sets may be performed during a pre-processing period by the mobile device or alternatively by a remote server and downloaded to the mobile device.
In the example shown, B1 is assigned a value of 0.6 and D1 is assigned a value of 0.4 to correspond to the table in
At 302, a processor considers a first pair of characters. The first pair of characters contains a first character and a second character represented as a previous character and a current character, respectively. That is, the current character is initialized from the second character and the previous character is initialized from the first character.
At 304, the processor finds the highest probability for the current character and initializes an end of a possibly selected link. In the example above, the current character may be A2 with probability 0.4 or D2 with probability 0.6. The previous character may be B1 with probability 0.6 or D1 with probability 0.4. Thus, the highest probability for the current character is D2 with probability 0.6.
At 306, a beginning of an outer loop starts. The processor selects the highest probability for the previous character. In the example above, the highest probability for the previous character is B1 with probability 0.6.
At 308, a beginning of an inner loop starts. The processor makes a determination whether the selected current character and the selected previous character form a valid link in the trellis. For example, the processor examines the appropriate input linking vector. In the example case, the input linking vector is [01010], which indicates that B1→A2 and D1→A2 are valid links. If the current and previous characters form a valid link from the trellis, processing continues at 310. If not, processing continues at 312.
At 310, if the link exists in the trellis, a link between the current and previous characters is selected as the best link. The process continues at 302 with new current and previous characters to find the next best link between the next stages.
At 312, if no link exists in the trellis, the process advances to examine the next highest probability as the previous character at 314 and then returns to 308. Otherwise, at 318, a check is made as to whether a next character exists and may be set as the current character. If a next character exists, the next highest probability for the current character is selected at 316, then the process continues at 306. At 320, if no possible links exist, then the process selects the first link (having the highest probability) even though the link is not in the trellis.
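The loop structure of steps 302 through 320 can be sketched as follows, under stated assumptions: `select_best_link` is an invented name, candidates are (character, probability) pairs, and the input linking vector is abstracted as a set of valid (previous, current) pairs:

```python
def select_best_link(prev_candidates, cur_candidates, valid_links):
    """Pick the best valid link between two positions, as in steps 302-320."""
    # 304/306: outer loop over current characters, highest probability first
    for cur, _ in sorted(cur_candidates, key=lambda t: -t[1]):
        # 308/314: inner loop over previous characters, highest first
        for prev, _ in sorted(prev_candidates, key=lambda t: -t[1]):
            if (prev, cur) in valid_links:   # link exists in the trellis
                return prev, cur             # 310: select as the best link
    # 320: no valid link anywhere; keep the most probable pair anyway
    best_cur = max(cur_candidates, key=lambda t: t[1])[0]
    best_prev = max(prev_candidates, key=lambda t: t[1])[0]
    return best_prev, best_cur

# OCR offers B1 (0.6) / D1 (0.4) at position one, A2 (0.4) / D2 (0.6) at position
# two; only B1->A2 and D1->A2 are valid links (the [01010] vector for A2).
link = select_best_link([("B", 0.6), ("D", 0.4)],
                        [("A", 0.4), ("D", 0.6)],
                        {("B", "A"), ("D", "A")})
print(link)   # ('B', 'A'): D2 is likelier but has no valid input link
```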
For the second stage, the word-node set {BAD, CAB, DAD} is shown along the best path. The counters for these words are incremented. The count is now BAD=2, CAB=1, DAD=1 and BED=1. Finally, the last stage passes through D3. The word-node set includes {BAD, DAD, BED} for this node. Counters for these words are similarly incremented. The count is now BAD=3, CAB=1, DAD=2 and BED=2. The word having the highest count (e.g., BAD=3) is the word having the minimum Hamming distance to the best path. Therefore, the processor selects the word having the highest count as the selected word for this forward pass.
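The counting step above amounts to tallying, for each node on the best path, every word in that node's word-node set; the most-counted word is the one with the minimum Hamming distance to the path. A minimal sketch (function name invented, word-node sets hard-coded from the example):

```python
from collections import Counter

def select_word(best_path, word_sets):
    """best_path: [(stage, letter), ...]; word_sets: node -> set of words."""
    counts = Counter()
    for node in best_path:
        counts.update(word_sets.get(node, ()))
    # The most-counted word matches the path in the most positions.
    return counts.most_common(1)[0][0], counts

word_sets = {
    (0, "B"): {"BAD", "BED"},
    (1, "A"): {"BAD", "CAB", "DAD"},
    (2, "D"): {"BAD", "DAD", "BED"},
}
word, counts = select_word([(0, "B"), (1, "A"), (2, "D")], word_sets)
print(word, counts["BAD"])   # BAD 3 -> matches the running count in the text
```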
Similar to a forward pass, pre-processing may also create a reverse-order dictionary where a list of words 202 is sorted from last character position to first character position. At 504, the processor performs a reverse pass with the reverse-ordered list of words 202 to form an ML path 128-2 from the reverse trellis. At 508, the processor computes a probability of the selected path. At 510, the processor compares the probabilities of the forward and reverse paths and then selects the path with the greater probability as the ML path 128. In some test results with noisy images, a 5% improvement was found by adding a reverse pass to a forward pass system.
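The forward/reverse combination can be sketched as below. This is an assumed harness, not the disclosed implementation: `decode` stands in for the trellis search, and the stub decoder exists only to make the example runnable.

```python
def decode_with_reverse_pass(chars, decode, forward_trellis, reverse_trellis):
    """Run the decoder forward and reversed; keep the likelier path."""
    fwd_word, fwd_prob = decode(chars, forward_trellis)
    rev_word, rev_prob = decode(chars[::-1], reverse_trellis)
    if rev_prob > fwd_prob:
        # The reverse-pass output is reversed back into reading order.
        return rev_word[::-1], rev_prob
    return fwd_word, fwd_prob

# Stand-in decoder for demonstration: echoes its input with a fixed score.
def fake_decode(chars, trellis):
    return "".join(chars), trellis["score"]

word, prob = decode_with_reverse_pass(list("BAD"), fake_decode,
                                      {"score": 0.4}, {"score": 0.6})
print(word, prob)   # BAD 0.6 -> the reverse pass won but reads forward
```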
In the examples above, a simple form of the English language using an alphabet of five characters was used. The method may be expanded to a 26-character alphabet or to an alphabet that includes both upper and lower case letters, numbers and punctuation. In the examples below, Devanagari is used to illustrate the method for more complicated character sets. Most North Indic scripts (e.g., the Devanagari script, also called Nāgarī, which is used in India and Nepal among other countries) are written from left to right, do not have distinct character cases, and are recognizable by a horizontal bar or line that runs along the top of characters. Devanagari script is commonly used to write standard Hindi, Marathi, Nepali and Sanskrit. The Devanagari script may be used for many other languages as well, including Bhojpuri, Gujari, Pahari, Garhwali, Kumaoni, Konkani, Magahi, Maithili, Marwari, Bhili, Newari, Santhali, Tharu, Sindhi, Dogri and Sherpa.
In Devanagari in general, modifiers (e.g., upper and lower modifiers) add a great deal of complexity to the basic character set due to their large variety. In fact, over 1,000 character combinations and contractions are possible. Currently, OCR systems have difficulty identifying words with such a complex set of character variations. An OCR system may be simplified to the 100 most commonly used characters in the Devanagari alphabet.
In some embodiments, each word is considered as a sequence of characters with a unique identifier. As an example (India) is represented as +++. A dictionary of words is a list of valid words, where each word is represented as a sequence of characters (as shown in the example above). An OCR unit outputs one, two, three, four or more possibilities for each OCR character with their corresponding likelihood. An OCR character is a character recognized by an OCR system. The table of
In some implementations, forward and reverse trellises may be created offline (away from a mobile device) during a pre-processing period. A list of words may be provided. The list of words can be processed from left to right to generate a forward trellis, and processed from right to left to generate a reverse trellis. The information indicative of the forward and reverse trellises can then be stored as a trellis database for later access.
In order to identify text included in image data, a candidate sequence of decoded characters 124 is identified (e.g., at the mobile device or a network resource such as a server). The candidate sequence is processed using a trellis based decoder, which accesses the trellis database. Additionally, a reverse candidate sequence is generated that includes the decoded characters 124 in reverse order, which is also processed using the trellis based decoder. Based on processing the candidate sequence and the reverse candidate sequence, the candidate sequence may be matched to a dictionary word having the highest confidence or highest probability. The “highest confidence” may be indicated by a confidence score, which may be the fraction of match between the winning or ML path 128 and the decoded word. In some implementations, if a match cannot be determined with a minimum confidence, the algorithm may provide an indication that the search has failed or is not reliable.
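The “fraction of match” confidence score mentioned above can be sketched as a per-position comparison between the decoded path and a candidate dictionary word. The function name and the normalization by the longer length are assumptions:

```python
def confidence(path_word, dictionary_word):
    """Fraction of character positions where the path and the word agree."""
    matches = sum(a == b for a, b in zip(path_word, dictionary_word))
    return matches / max(len(path_word), len(dictionary_word))

# Path BAD against dictionary word BED: two of three positions match.
print(confidence("BAD", "BED"))
```

A minimum-confidence threshold on this score could then trigger the “search failed or is not reliable” indication described above.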
For each node of the trellis, the following two quantities may be maintained for pre-processing. First, a (binary) characteristic vector or input linking vector is N bits long and represents the list of characters from the previous level connected to the node. For the node highlighted with a dotted circle, the input linking vector is [0101000]. Second, a list of words that visit the node in the trellis may be maintained as a word-node set. The highlighted node contains two words {, } passing through the node.
During runtime, the trellis structure may also contain a counter for each word at each character position. An OCR unit provides a set of decoded characters 124 and associated probabilities for matching within the trellis. The OCR unit may provide a single possible character for each position. Alternatively, the OCR unit provides multiple possible characters for each position along with an associated probability for each character. Passing a set of decoded characters 124 from the OCR unit through the trellis results in a best path. The word from the list of words that is closest to the ML path 128 is selected as the word decision.
Assume for this example an OCR unit provides three characters at each position. For the first character position, the OCR unit provides three possible characters and associated probabilities. For the second character position, the OCR unit provides another three possible characters with associated probabilities. Looking back from the second position to the first position, the link having the highest probability is saved for each of the possible second characters. That is, up to three links are saved. For each node in the second character position identified by the OCR unit, a link may be formed from each of the three possible characters from the first character position to the second character position. That is, the number of possibilities at each position is squared to determine how many possible links exist (e.g., 3²=9).
Instead of saving every link between the positions, only the link with the highest probability is saved. For example, the first character position has possibilities of 0.7, 0.2 and 0.1 for its first, second and third possible characters, respectively. The second character position has possibilities of 0.5, 0.3 and 0.2 for its first, second and third possible characters, respectively. The different permutations from the first to the second character position result in nine links: three links to the first node of the second position with link probabilities of (0.7*0.5=0.35, 0.2*0.5=0.10 and 0.1*0.5=0.05); three links to the second node of the second position with link probabilities of (0.7*0.3=0.21, 0.2*0.3=0.06 and 0.1*0.3=0.03); and three links to the third node of the second position with link probabilities of (0.7*0.2=0.14, 0.2*0.2=0.04 and 0.1*0.2=0.02).
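The arithmetic above can be checked directly: each of the nine candidate links has a probability equal to the product of its endpoint probabilities.

```python
first = [0.7, 0.2, 0.1]    # possibilities at the first character position
second = [0.5, 0.3, 0.2]   # possibilities at the second character position

# link_probs[(i, j)] = probability of the link from node i of the first
# position to node j of the second position
link_probs = {(i, j): round(p * q, 2)
              for j, q in enumerate(second) for i, p in enumerate(first)}

# Links into the first node of the second position:
print([link_probs[(i, 0)] for i in range(3)])   # [0.35, 0.1, 0.05]
```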
Of the three links from nodes of the first position to the first node of the second position, only one link is saved. The saved link has the highest probability and is also a valid link in the trellis. If the highest probability link is an invalid link, it is discarded. Therefore, the link with the highest probability to the first node in the second position that is found to be a valid link from the input linking vector is saved. Similarly, the highest probability valid link to the second node in the second position determined from the input linking vector is saved. Finally, the link with the highest valid probability to the third node in the second position is saved.
If the most probable link is determined to be an invalid link, the next highest probability is considered as long as it is a valid link. If that link is also invalid, the next highest probability is considered, and so on. If no valid links are found, the link with the highest probability is used.
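The pruning rule just described, keeping one link per node, the most probable valid one, with a fallback when no link is valid, can be sketched as follows (the function name and data layout are assumptions):

```python
def prune_links(candidate_links, is_valid):
    """candidate_links: {target: [(prob, source), ...]}; keep one link each."""
    kept = {}
    for target, incoming in candidate_links.items():
        ordered = sorted(incoming, reverse=True)          # highest probability first
        valid = [(p, s) for p, s in ordered if is_valid(s, target)]
        # Keep the best valid link; if none is valid, keep the best link anyway.
        kept[target] = valid[0] if valid else ordered[0]
    return kept

valid_pairs = {("B", "A"), ("E", "D")}
kept = prune_links(
    {"A": [(0.35, "B"), (0.10, "C")], "D": [(0.21, "C"), (0.06, "E")]},
    lambda s, t: (s, t) in valid_pairs,
)
# 'A' keeps its most probable link (valid); 'D' skips the invalid 0.21 link
# and keeps its only valid one, (0.06, 'E').
print(kept)
```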
As we progress from the first and second positions to the second and third positions, the same strategy is applied: for each possible node, the highest-probability valid link is saved. We then progress to the next pair of positions and so on, keeping track of the highest valid links and therefore of an ML path 128 entering a node at the current level. At the final level, three paths are formed; one path into each OCR node. At this final stage, the node having the highest path likelihood is selected and referred to as the winning path, best path or most likely (ML) path. That is, the forward trellis process results in one winning path. Similarly, the reverse trellis also results in one winning path. The processor compares the likelihood of the forward path to the likelihood of the reverse path and selects the overall winning path as the ML path 128 having the higher likelihood of the two paths. Often the forward path and the reverse path are identical. In cases when they differ, though, comparing the likelihoods and selecting the path with the greater likelihood as the ML path 128 improves performance over a system that only examines a single path.
There may be some issues with the ML path. For example, the final overall winning path may correspond to an invalid word (i.e., a word not in the list of words). As shown in
Another issue is that the selected word could be longer than the input word. This happens when the initial segment of a longer word “matches” the path; e.g., if the input word is , the final word could be . This issue may be mitigated by imposing a length constraint on the output word.
As noted above, each word in the list of words is read in reverse order, and another trellis is created (referred to as a reverse trellis). Upon receiving OCR output, we reverse the input string and search through the reverse trellis to obtain the most likely path. This step is referred to herein as the reverse pass.
If the reverse pass returns a word with higher confidence than the forward pass, then its output can be retained. If not, the output of the forward pass is retained. Note, in some embodiments the input string need not be completely reversed to create the reverse trellis. We can instead start somewhere in between, and traverse both forward and reverse to create the trellis.
The following process may be performed to incorporate a reverse pass in the word decoder. Each word in the list of words can be read in the reverse order and another trellis constructed. Upon receiving the OCR output, the input string can be reversed and the reverse trellis searched to obtain the ML path 128-2 (called “reverse pass”). As noted above, if the reverse pass returns a word with higher confidence than the forward pass, its output is retained. If not, the output of the forward pass can be retained.
As shown in
In
Various wireless communication networks based on infrared, radio, and/or microwave technology can be used to implement described techniques. Such networks can include, for example, a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000, Wideband-CDMA (W-CDMA), and so on. Cdma2000 includes IS-95, IS-2000, and IS-856 standards. A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named “3rd Generation Partnership Project” (3GPP). Cdma2000 is described in documents from a consortium named “3rd Generation Partnership Project 2” (3GPP2). 3GPP and 3GPP2 documents are publicly available. A WLAN may be an IEEE 802.11x network, and a WPAN may be a Bluetooth network, an IEEE 802.15x, or some other type of network. The techniques may also be used for any combination of WWAN, WLAN and/or WPAN.
Those skilled in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example: data, information, signals, bits, symbols, chips, instructions, and commands may be referenced throughout the above description. These may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
In one or more exemplary embodiments, the functions and processes described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. The term “control logic” used herein applies to software (in which functionality is implemented by instructions stored on a machine-readable medium to be executed using a processor), hardware (in which functionality is implemented using circuitry, such as logic gates, where the circuitry is configured to provide particular output for particular input), and firmware (in which functionality is implemented using re-programmable circuitry), and also applies to combinations of one or more of software, hardware, and firmware.
For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. Any machine readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory, for example the memory of a mobile station, and executed by a processor, for example the microprocessor of a modem. Memory may be implemented within the processor or external to the processor. As used herein the term “memory” refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of media upon which memory is stored.
Moreover, the previous description of the disclosed implementations is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these implementations will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the features shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
This application is a divisional of and claims priority from U.S. application Ser. No. 13/829,960, filed on Mar. 14, 2013, titled “Trellis based word decoder with reverse pass,” which claims priority from U.S. Provisional Application No. 61/673,606, filed on Jul. 19, 2012, titled “Trellis based word decoder with reverse pass,” both of which are incorporated herein by reference in their entireties. U.S. application Ser. No. 13/829,960 is related to U.S. Provisional Application No. 61/677,291, filed on Jul. 30, 2012, titled “Method of handling complex variants of words through prefix-tree based decoding for Devanagiri OCR,” which is incorporated herein by reference in its entirety. U.S. application Ser. No. 13/829,960 is also related to U.S. application Ser. No. 13/828,060, filed on Mar. 14, 2013, titled “Method of handling complex variants of words through prefix-tree based decoding for Devanagiri OCR,” which is incorporated herein by reference in its entirety.
References Cited

U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
3710321 | Rubenstein | Jan 1973 | A |
4654875 | Srihari | Mar 1987 | A |
5321768 | Fenrich et al. | Jun 1994 | A |
5459739 | Handley et al. | Oct 1995 | A |
5465304 | Cullen et al. | Nov 1995 | A |
5519786 | Courtney et al. | May 1996 | A |
5563403 | Bessho et al. | Oct 1996 | A |
5633954 | Gupta et al. | May 1997 | A |
5669007 | Tateishi | Sep 1997 | A |
5751850 | Rindtorff | May 1998 | A |
5764799 | Hong et al. | Jun 1998 | A |
5768451 | Hisamitsu et al. | Jun 1998 | A |
5805747 | Bradford | Sep 1998 | A |
5835633 | Fujisaki et al. | Nov 1998 | A |
5844991 | Hochberg et al. | Dec 1998 | A |
5883986 | Kopec | Mar 1999 | A |
5978443 | Patel | Nov 1999 | A |
6023536 | Visser | Feb 2000 | A |
6092045 | Stubley et al. | Jul 2000 | A |
6128606 | Bengio | Oct 2000 | A |
6266439 | Pollard et al. | Jul 2001 | B1 |
6393443 | Rubin et al. | May 2002 | B1 |
6424983 | Schabes | Jul 2002 | B1 |
6473517 | Tyan et al. | Oct 2002 | B1 |
6674919 | Ma et al. | Jan 2004 | B1 |
6678415 | Popat et al. | Jan 2004 | B1 |
6687421 | Navon | Feb 2004 | B1 |
6738512 | Chen et al. | May 2004 | B1 |
6954795 | Takao et al. | Oct 2005 | B2 |
7031530 | Driggs et al. | Apr 2006 | B2 |
7110621 | Greene et al. | Sep 2006 | B1 |
7142727 | Notovitz et al. | Nov 2006 | B2 |
7263223 | Irwin | Aug 2007 | B2 |
7333676 | Myers et al. | Feb 2008 | B2 |
7403661 | Curry et al. | Jul 2008 | B2 |
7450268 | Martinez et al. | Nov 2008 | B2 |
7471830 | Lim et al. | Dec 2008 | B2 |
7724957 | Abdulkader | May 2010 | B2 |
7738706 | Aradhye et al. | Jun 2010 | B2 |
7783117 | Liu et al. | Aug 2010 | B2 |
7817855 | Yuille et al. | Oct 2010 | B2 |
7889948 | Steedly et al. | Feb 2011 | B2 |
7961948 | Katsuyama | Jun 2011 | B2 |
7984076 | Kobayashi et al. | Jul 2011 | B2 |
8005294 | Kundu et al. | Aug 2011 | B2 |
8009928 | Manmatha et al. | Aug 2011 | B1 |
8189961 | Nijemcevic et al. | May 2012 | B2 |
8194983 | Al-Omari et al. | Jun 2012 | B2 |
8285082 | Heck | Oct 2012 | B2 |
8306325 | Chang | Nov 2012 | B2 |
8417059 | Yamada | Apr 2013 | B2 |
8542926 | Panjwani et al. | Sep 2013 | B2 |
8644646 | Heck | Feb 2014 | B2 |
8831381 | Baheti et al. | Sep 2014 | B2 |
8881005 | Al Badrashiny | Nov 2014 | B2 |
9014480 | Baheti et al. | Apr 2015 | B2 |
20020037104 | Myers et al. | Mar 2002 | A1 |
20030026482 | Dance | Feb 2003 | A1 |
20030099395 | Wang et al. | May 2003 | A1 |
20030215137 | Wnek | Nov 2003 | A1 |
20040086179 | Ma | May 2004 | A1 |
20040179734 | Okubo | Sep 2004 | A1 |
20050041121 | Steinberg et al. | Feb 2005 | A1 |
20050123199 | Mayzlin et al. | Jun 2005 | A1 |
20050238252 | Prakash et al. | Oct 2005 | A1 |
20060039605 | Koga | Feb 2006 | A1 |
20060215231 | Borrey et al. | Sep 2006 | A1 |
20060291692 | Nakao et al. | Dec 2006 | A1 |
20070116360 | Jung et al. | May 2007 | A1 |
20070217676 | Grauman et al. | Sep 2007 | A1 |
20080008386 | Anisimovich et al. | Jan 2008 | A1 |
20080063273 | Shimodaira | Mar 2008 | A1 |
20080112614 | Fluck et al. | May 2008 | A1 |
20090060335 | Rodriguez Serrano et al. | Mar 2009 | A1 |
20090202152 | Takebe et al. | Aug 2009 | A1 |
20090232358 | Cross | Sep 2009 | A1 |
20090252437 | Li et al. | Oct 2009 | A1 |
20090316991 | Geva et al. | Dec 2009 | A1 |
20090317003 | Heilper et al. | Dec 2009 | A1 |
20100049711 | Singh et al. | Feb 2010 | A1 |
20100067826 | Honsinger et al. | Mar 2010 | A1 |
20100080462 | Miljanic et al. | Apr 2010 | A1 |
20100128131 | Tenchio et al. | May 2010 | A1 |
20100141788 | Hwang et al. | Jun 2010 | A1 |
20100144291 | Stylianou et al. | Jun 2010 | A1 |
20100172575 | Lukac et al. | Jul 2010 | A1 |
20100195933 | Nafarieh | Aug 2010 | A1 |
20100232697 | Mishima et al. | Sep 2010 | A1 |
20100239123 | Funayama et al. | Sep 2010 | A1 |
20100245870 | Shibata | Sep 2010 | A1 |
20100272361 | Khorsheed et al. | Oct 2010 | A1 |
20100296729 | Mossakowski | Nov 2010 | A1 |
20110052094 | Gao et al. | Mar 2011 | A1 |
20110081083 | Lee et al. | Apr 2011 | A1 |
20110188756 | Lee et al. | Aug 2011 | A1 |
20110215147 | Goncalves et al. | Sep 2011 | A1 |
20110222768 | Galic et al. | Sep 2011 | A1 |
20110249897 | Chaki et al. | Oct 2011 | A1 |
20110274354 | Nijemcevic | Nov 2011 | A1 |
20110280484 | Ma et al. | Nov 2011 | A1 |
20110285873 | Showering et al. | Nov 2011 | A1 |
20120051642 | Berrani et al. | Mar 2012 | A1 |
20120066213 | Ohguro | Mar 2012 | A1 |
20120092329 | Koo et al. | Apr 2012 | A1 |
20120114245 | Lakshmanan et al. | May 2012 | A1 |
20120155754 | Chen et al. | Jun 2012 | A1 |
20130001295 | Goncalves | Jan 2013 | A1 |
20130058575 | Koo et al. | Mar 2013 | A1 |
20130129216 | Tsai et al. | May 2013 | A1 |
20130194448 | Baheti et al. | Aug 2013 | A1 |
20130195315 | Baheti et al. | Aug 2013 | A1 |
20130195360 | Krishna Kumar et al. | Aug 2013 | A1 |
20130308860 | Mainali et al. | Nov 2013 | A1 |
20140003709 | Ranganathan et al. | Jan 2014 | A1 |
20140022406 | Baheti et al. | Jan 2014 | A1 |
20140023270 | Baheti et al. | Jan 2014 | A1 |
20140023273 | Baheti et al. | Jan 2014 | A1 |
20140023274 | Barman et al. | Jan 2014 | A1 |
20140023275 | Krishna Kumar et al. | Jan 2014 | A1 |
20140023278 | Krishna Kumar et al. | Jan 2014 | A1 |
20140161365 | Acharya et al. | Jun 2014 | A1 |
20140168478 | Baheti et al. | Jun 2014 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
1146478 | Oct 2001 | EP |
1840798 | Oct 2007 | EP |
2192527 | Jun 2010 | EP |
2453366 | Apr 2009 | GB |
2468589 | Sep 2010 | GB |
2004077358 | Sep 2004 | WO |
Other Publications

Entry |
---|
Bansal, et al., “Partitioning and Searching Dictionary for Correction of Optically Read Devanagari Character Strings,” International Journal on Document Analysis and Recognition manuscript, IJDAR 4(4): 269-280 (2002). |
Kompalli, et al., “Devanagari OCR using a recognition driven segmentation framework and stochastic language models,” IJDAR (2009) 12, pp. 123-138. |
Kristensen, F., et al., “Real-Time Extraction of Maximally Stable Extremal Regions on an FPGA,” IEEE International Symposium on Circuits and Systems 2007 (ISCAS 2007), New Orleans, LA, May 27-30, 2007, pp. 165-168. |
Lehal, et al., “Feature Extraction and Classification for OCR of Gurmukhi Script,” Journal of Vivek, 12, pp. 2-12, 1999. |
Vedaldi A., “An Implementation of Multi-Dimensional Maximally Stable Extremal Regions” Feb. 7, 2007, pp. 1-7. |
VLFeat—Tutorials—MSER, retrieved from http://www.vlfeat.org/overview/mser.html, Apr. 30, 2012, pp. 1-2. |
Wikipedia, “Connected-Component Labeling,” retrieved from http://en.wikipedia.org/wiki/Connected-component_labeling on May 14, 2012, 7 pages. |
Wikipedia, “Histogram of Oriented Gradients,” retrieved from http://en.wikipedia.org/wiki/Histogram_of_oriented_gradients on Apr. 30, 2015, 7 pages. |
Wu V., et al., “TextFinder: An Automatic System to Detect and Recognize Text in Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, No. 11, Nov. 1, 1999 (Nov. 1, 1999), pp. 1224-1229, XP055068381. |
“4.1 Points and patches” In: Szeliski Richard: “Computer Vision—Algorithms and Applications”, 2011, Springer-Verlag, London, XP002696110, p. 195, ISBN: 978-1-84882-934-3. |
Agrawal, et al., “Generalization of Hindi OCR Using Adaptive Segmentation and Font Files,” V. Govindaraju, S. Setlur (eds.), Guide to OCR for Indic Scripts, Advances in Pattern Recognition, DOI 10.1007/978-1-84800-330-9_10, Springer-Verlag London Limited 2009, pp. 181-207. |
Agrawal M., et al., “2 Base Devanagari OCR System” In: Govindaraju V, Srirangataj S (Eds.): “Guide to OCR for Indic Scripts—Document Recognition and Retrieval”, 2009, Springer Science+Business Media, London, XP002696109, pp. 184-193, ISBN: 978-1-84800-329-3. |
Chaudhuri B., Ed., “Digital Document Processing—Major Directions and Recent Advances”, 2007, Springer-Verlag London Limited, XP002715747, ISBN : 978-1-84628-501-1 pp. 103-106, p. 106, section “5.3.5 Zone Separation and Character Segmentation”, paragraph 1. |
Chaudhuri B.B., et al., “An OCR system to read two Indian language scripts: Bangla and Devnagari (Hindi)”, Proceedings of the 4th International Conference on Document Analysis and Recognition. (ICDAR). Ulm, Germany, Aug. 18-20, 1997; [Proceedings of the ICDAR], Los Alamitos, IEEE Comp. Soc, US, vol. 2, Aug. 18, 1997 (Aug. 18, 1997), pp. 1011-1015, XP010244882, DOI: 10.1109/ICDAR.1997.620662 ISBN: 978-0-8186-7898-1 the whole document. |
Chaudhuri et al., “Skew Angle Detection of Digitized Indian Script Documents”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Feb. 1997, pp. 182-186, vol. 19, No. 2. |
Chaudhury S (Eds.): “OCR Technical Report for the project Development of Robust Document Analysis and Recognition System for Printed Indian Scripts”, 2008, pp. 149-153, XP002712777, Retrieved from the Internet: URL:http://researchweb.iiit.ac.in/~jinesh/ocrDesignDoc.pdf [retrieved on Sep. 5, 2013]. |
Chen, et al., “Detecting and reading text in natural scenes,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04), 2004, pp. 1-8. |
Chen H., et al., “Robust Text Detection in Natural Images With Edge-Enhanced Maximally Stable Extremal Regions,” believed to be published in IEEE International Conference on Image Processing (ICIP), Sep. 2011, pp. 1-4. |
Chen Y.L., “A knowledge-based approach for textual information extraction from mixed text/graphics complex document images”, Systems Man and Cybernetics (SMC), 2010 IEEE International Conference on, IEEE, Piscataway, NJ, USA, Oct. 10, 2010 (Oct. 10, 2010), pp. 3270-3277, XP031806156, ISBN: 978-1-4244-6586-6. |
Chowdhury A.R., et al., “Text Detection of Two Major Indian Scripts in Natural Scene Images”, Sep. 22, 2011 (Sep. 2, 2011), Camera-Based Document Analysis and Recognition, Springer Berlin Heidelberg, pp. 42-57, XP019175802, ISBN: 978-3-642-29363-4. |
Dalal N., et al., “Histograms of oriented gradients for human detection”, Computer Vision and Pattern Recognition, 2005 IEEE Computer Society Conference on, IEEE, Piscataway, NJ, USA, Jun. 25, 2005 (Jun. 25, 2005), pp. 886-893 vol. 1, XP031330347, ISBN: 978-0-7695-2372-9 Section 6.3. |
Dlagnekov L., et al., “Detecting and Reading Text in Natural Scenes,” Oct. 2004, pp. 1-22. |
Elgammal A.M., et al., “Techniques for Language Identification for Hybrid Arabic-English Document Images,” believed to be published in 2001 in Proceedings of IEEE 6th International Conference on Document Analysis and Recognition, pp. 1-5. |
Epshtein B., et al.,“Detecting text in natural scenes with stroke width transform”, 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) , Jun. 13-18, 2010, San Francisco, CA, USA, IEEE, Piscataway, NJ, USA, Jun. 13, 2010 (Jun. 13, 2010), pp. 2963-2970, XP031725870, ISBN: 978-1-4244-6984-0. |
Forssen P.E., et al., “Shape Descriptors for Maximally Stable Extremal Regions”, Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on, IEEE, PI, Oct. 1, 2007 (Oct. 1, 2007), pp. 1-8, XP031194514, ISBN: 978-1-4244-1630-1 abstract Section 2. Multi-resolution MSER. |
Ghoshal R., et al., “Headline Based Text Extraction from Outdoor Images”, 4th International Conference on Pattern Recognition and Machine Intelligence, Springer LNCS, vol. 6744, Jun. 27, 2011 (Jun. 27, 2011), pp. 446-451, XP055060285. |
Holmstrom L., et al., “Neural and Statistical Classifiers—Taxonomy and Two Case Studies,” IEEE Transactions on Neural Networks, Jan. 1997, pp. 5-17, vol. 8 (1). |
International Search Report and Written Opinion—PCT/US2013/047572—ISA/EPO—Oct. 22, 2013. |
Jain A.K., et al., “Automatic Text Location in Images and Video Frames,” believed to be published in Proceedings of Fourteenth International Conference on Pattern Recognition, vol. 2, Aug. 1998, pp. 1497-1499. |
Jain, et al., “Automatic text location in images and video frames”, Pattern Recognition, 1998, pp. 2055-2076, vol. 31, No. 12. |
Jayadevan, et al., “Offline Recognition of Devanagari Script: A Survey”, IEEE Transactions on Systems, Man, and Cybernetics—Part C: Applications and Reviews, 2010, pp. 1-15. |
Kapoor et al., “Skew angle detection of a cursive handwritten Devanagari script character image”, Indian Institute of Science, May-Aug. 2002, pp. 161-175. |
Lee, et al., “A new methodology for gray-scale character segmentation and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Oct. 1996, pp. 1045-1050, vol. 18, No. 10. |
Li et al., “Automatic Text Detection and Tracking in a Digital Video”, IEEE Transactions on Image Processing, Jan. 2000, pp. 147-156, vol. 9, No. 1. |
Lowe, D.G., “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, Jan. 5, 2004, 28 pp. |
Machine Learning, retrieved from http://en.wikipedia.org/wiki/Machine_learning, May 7, 2012, pp. 1-8. |
Matas, et al., “Robust Wide Baseline Stereo from Maximally Stable Extremal Regions”, 2002, pp. 384-393. |
Mikulik, et al., “Construction of Precise Local Affine Frames,” Center for Machine Perception, Czech Technical University in Prague, Czech Republic, pp. 1-5, Abstract and second paragraph of Section 1; Algorithms 1 & 2 of Section 2 and Section 4, Aug. 23-26, 2010. |
Minoru M., Ed., “Character Recognition”, Aug. 2010 (Aug. 2010), Sciyo, XP002715748, ISBN: 978-953-307-105-3 pp. 91-95, p. 92, section “7.3 Baseline Detection Process.” |
Moving Average, retrieved from http://en.wikipedia.org/wiki/Moving_average, Jan. 23, 2013, pp. 1-5. |
Newell, et al., “Multiscale histogram of oriented gradient descriptors for robust character recognition”, Document Analysis and Recognition (ICDAR), 2011 International Conference on, IEEE, 2011. |
Nister D., et al., “Linear Time Maximally Stable Extremal Regions,” ECCV, 2008, Part II, LNCS 5303, pp. 183-196, published by Springer-Verlag Berlin Heidelberg. |
Pal, et al., “Indian script character recognition: a survey”, Pattern Recognition Society, Published by Elsevier Ltd, 2004, pp. 1887-1899. |
Pal U et al., “Multi-skew detection of Indian script documents” Document Analysis and Recognition, 2001. Proceedings. Sixth International Conference on Seattle, WA, USA Sep. 10-13, 2001, Los Alamitos, CA, USA, IEEE Comput. Soc. US, Sep. 10, 2001 (Sep. 10, 2001), pp. 292-296, XP010560519, DOI: 10.1109/ICDAR.2001.953801, ISBN: 978-0-7695-1263-1. |
Pal U., et al., “OCR in Bangla: an Indo-Bangladeshi language”, Pattern Recognition, 1994. vol. 2—Conference B: Computer Vision & Image Processing., Proceedings of the 12th IAPR International Conference on Jerusalem, Israel Oct. 9-13, 1994, Los Alamitos, CA, USA, IEEE Comput. Soc, vol. 2, Oct. 9, 1994 (Oct. 9, 1994), pp. 269-273, XP010216292, DOI: 10.1109/ICPR.1994.576917 ISBN: 978-0-8186-6270-6 the whole document. |
Papandreou A. et al., “A Novel Skew Detection Technique Based on Vertical Projections”, International Conference on Document Analysis and Recognition, Sep. 18, 2011, pp. 384-388, XP055062043, DOI: 10.1109/ICDAR.2011.85, ISBN: 978-1-45-771350-7. |
Pardo M., et al., “Learning From Data: A Tutorial With Emphasis on Modern Pattern Recognition Methods,” IEEE Sensors Journal, Jun. 2002, pp. 203-217, vol. 2 (3). |
Park, J-M. et al., “Fast Connected Component Labeling Algorithm Using a Divide and Conquer Technique,” believed to be published in Matrix (2000), vol. 4 (1), pp. 4-7, Publisher: Elsevier Ltd. |
Premaratne H.L., et al., “Lexicon and hidden Markov model-based optimisation of the recognised Sinhala script”, Pattern Recognition Letters, Elsevier, Amsterdam, NL, vol. 27, No. 6, Apr. 15, 2006 (Apr. 15, 2006) , pp. 696-705, XP027922538, ISSN: 0167-8655. |
Ray A.K et al., “Information Technology—Principles and Applications”, 2004, Prentice-Hall of India Private Limited, New Delhi, XP002712579, ISBN: 81-203-2184-7, pp. 529-531. |
Renold M., “Detecting and Reading Text in Natural Scenes,” Master's Thesis, May 2008, pp. 1-59. |
Senda S., et al., “Fast String Searching in a Character Lattice,” IEICE Transactions on Information and Systems, Information & Systems Society, Tokyo, JP, vol. E77-D, No. 7, Jul. 1, 1994 (Jul. 1, 1994), pp. 846-851, XP000445299, ISSN: 0916-8532. |
Senk V., et al., “A new bidirectional algorithm for decoding trellis codes,” EUROCON' 2001, Trends in Communications, International Conference on Jul. 4-7, 2001, Piscataway, NJ, USA, IEEE, Jul. 4, 2001 (Jul. 4, 2001), pp. 34-36, vol. I, XP032155513, DOI: 10.1109/EURCON.2001.937757, ISBN: 978-0-7803-6490-5. |
Setlur, et al., “Creation of data resources and design of an evaluation test bed for Devanagari script recognition”, Research Issues in Data Engineering: Multi-lingual Information Management, RIDE-MLIM 2003. Proceedings. 13th International Workshop, 2003, pp. 55-61. |
Shin H., et al., “Application of Floyd-Warshall Labelling Technique: Identification of Connected Pixel Components in Binary Image,” Kangweon-Kyungki Math. Jour. 14 (2006), No. 1, pp. 47-55. |
Sinha R.M.K., et al., “On Devanagari document processing”, Systems, Man and Cybernetics, 1995. Intelligent Systems for the 21st Century., IEEE International Conference on Vancouver, BC, Canada Oct. 22-25, 1995, New York, NY, USA,IEEE, US, vol. 2, Oct. 22, 1995 (Oct. 22, 1995), pp. 1621-1626, XP010194509, DOI: 10.1109/ICSMC.1995.538004 ISBN: 978-0-7803-2559-3 the whole document. |
Song Y., et al., “A Handwritten Character Extraction Algorithm for Multi-language Document Image”, 2011 International Conference on Document Analysis and Recognition, Sep. 18, 2011 (Sep. 18, 2011), pp. 93-98, XP055068675, DOI: 10.1109/ICDAR.2011.28 ISBN: 978-1-45-771350-7. |
Uchida S et al., “Skew Estimation by Instances”, 2008 The Eighth IAPR International Workshop on Document Analysis Systems, Sep. 1, 2008 (Sep. 1, 2008), pp. 201-208, XP055078375, DOI: 10.1109/DAS.2008.22, ISBN: 978-0-76-953337-7. |
Unser M., “Sum and Difference Histograms for Texture Classification”, Transactions on Pattern Analysis and Machine Intelligence, IEEE, Piscataway, USA, vol. 30, No. 1, Jan. 1, 1986 (Jan. 1, 1986), pp. 118-125, XP011242912, ISSN: 0162-8828 section A; p. 122, right-hand column p. 123. |
Number | Date | Country | |
---|---|---|---|
20150242710 A1 | Aug 2015 | US |
Number | Date | Country | |
---|---|---|---|
61673606 | Jul 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13829960 | Mar 2013 | US |
Child | 14698528 | US |