Character Detection Apparatus and Method

Information

  • Patent Application
  • Publication Number
    20150371399
  • Date Filed
    June 17, 2015
  • Date Published
    December 24, 2015
Abstract
According to one embodiment, a character detection apparatus includes a feature extractor, a determiner and an integrator. The feature extractor extracts a feature value of an image including character strings. The determiner determines each priority of a plurality of different character detection schemes in accordance with character detection accuracy with respect to an image region having a feature corresponding to the feature value. The integrator integrates text line candidates of the character detection schemes, and selects, as a text line, one of the text line candidates detected by the character detection scheme with the highest priority if a superimposition degree indicating a ratio of a superimposed region among the text line candidates is no less than a first threshold value.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2014-126576, filed Jun. 19, 2014, the entire contents of which are incorporated herein by reference.


FIELD

Embodiments described herein relate generally to a character detection apparatus and method.


BACKGROUND

With the widespread use of smartphones and wearable devices, there has been a demand for detecting character strings from images of real space, such as character strings on signboards, signs, and restaurant menus photographed by a camera. Character strings in images photographed by a camera vary in appearance depending on the lighting conditions and the effects of shadows. Methods of detecting character strings from such images include, for example, a technique using connected components obtained by connecting pixels in an image, and a technique using a detector based on machine learning.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram showing a character detection apparatus.



FIG. 2 is a diagram showing detection processing executed by a character candidate region detector.



FIG. 3 is a diagram showing a result of the detection processing executed by the character candidate region detector.



FIG. 4 is a diagram showing text line generation processing executed by a second text line generator.



FIG. 5 is a diagram explaining a method of calculating a matching rate in a priority determiner.



FIG. 6 is a diagram showing examples of correspondence between character detection schemes and feature values.



FIG. 7 is a diagram explaining the concept of a length and a width of a text line.



FIG. 8 is a flowchart showing integration processing executed by an integrator.



FIG. 9 is a diagram showing an example of an integration processing result obtained by the integrator.



FIG. 10 is a diagram showing another example of the integration processing result obtained by the integrator.



FIG. 11 is a graph showing an evaluation result of detection accuracy of the character detection apparatus.





DETAILED DESCRIPTION

A method using connected components fails to detect a character string if a connected component is detected incorrectly. For example, when the characters and part of the background are similar in color, or when the colors of the characters change significantly due to reflections or shade, the connected components are not detected correctly and the character string cannot be detected properly. Furthermore, when using a detector based on machine learning, detection of the character string depends on the learned data, so a character string whose appearance, such as a specific logo, handwriting, or ornamental writing, differs significantly from the learned data will not be detected.


In general, according to one embodiment, a character detection apparatus includes a feature extractor, a determiner and an integrator. The feature extractor extracts a feature value of an image including one or more character strings. The determiner determines each priority of a plurality of different character detection schemes in accordance with character detection accuracy with respect to an image region having a feature corresponding to the feature value. The integrator integrates text line candidates of the character detection schemes, and selects, as a text line, one of the text line candidates detected by the character detection scheme with the highest priority if a superimposition degree indicating a ratio of a superimposed region among the text line candidates is no less than a first threshold value, the text line candidates being obtained as a result of detecting the character string using the plurality of character detection schemes and being a candidate of a region including the character string.


In the following, the character detection apparatus, method and program according to the present embodiment will be described in detail with reference to the drawings. In the embodiment described below, elements specified by the same reference numbers carry out the same operations, and a duplicate description of such elements will be omitted.


The character detection apparatus according to the present embodiment will be explained with reference to the block diagram shown in FIG. 1.


The character detection apparatus 100 according to the present embodiment includes an image acquirer 101, a first text line detector 102 (a first detector), a second text line detector 103 (a second detector), a feature extractor 104, a priority determiner 105, and an integrator 106.


The first text line detector 102 includes a connected component extractor 107 and a first text line generator 108. The second text line detector 103 comprises a character candidate region detector 109 and a second text line generator 110.


The image acquirer 101 acquires an image including one or more character strings. Here, the image is assumed to be an image of character strings in real space such as signboards, signs, and menus in restaurants photographed by a camera. However, the image may be any image that includes at least one character string.


The first text line detector 102 receives an image from the image acquirer 101 and uses a first character detection scheme, which is a scheme for detecting a character string, to detect one or more text line candidates (also referred to as a first text line candidate). A text line candidate in the present embodiment is a candidate of a region including a character string, for example a region expressed by a rectangle, a trapezoid, another quadrangle, or a closed polygon. A text line candidate expressed by a rectangle, a trapezoid, or another quadrangle may be described by coordinate values defining the region, by coordinate values of a starting point and an ending point, or by the center line and width of the character string, and so on. In the present embodiment, the character string is assumed to be written horizontally. However, the character string may also be written vertically, in which case the text line becomes a region elongated in the vertical direction, in line with the direction of the character string.
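As a concrete illustration of the rectangular representation, a minimal sketch in Python follows; the class name, fields, and pixel-coordinate convention are assumptions for illustration and are not prescribed by the embodiment.

```python
from dataclasses import dataclass

@dataclass
class TextLineCandidate:
    """One possible representation of a text line candidate: an
    axis-aligned rectangle given by its top-left corner and size,
    in pixel coordinates (a hypothetical convention)."""
    x: float       # left edge
    y: float       # top edge
    width: float   # extent along the writing direction
    height: float

    @property
    def area(self) -> float:
        return self.width * self.height

# A horizontally written candidate 200 px long and 40 px tall.
candidate = TextLineCandidate(x=10, y=120, width=200, height=40)
print(candidate.area)  # -> 8000
```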


The second text line detector 103 receives the image from the image acquirer 101 and uses a second character detection scheme, which is different from the first character detection scheme, to detect one or more text line candidates (also referred to as a second text line candidate). Since the first text line candidate and the second text line candidate are detected from the same image, they share the same coordinate system, and both may be detected for the same character string.


The feature extractor 104 receives the first text line candidate from the first text line detector 102 and the second text line candidate from the second text line detector 103, and extracts a feature value of the image. As the feature value of the image, for example, the luminance or the length of the text line may be extracted.


The priority determiner 105 receives the feature value of the image from the feature extractor 104 and determines a priority that indicates which one of the first character detection scheme and the second character detection scheme should be prioritized in accordance with the character detection accuracy with respect to a region in the image (also referred to as an image region) having a feature corresponding to the feature value. The method of determining priority will be explained later on with reference to FIG. 5.


The integrator 106 receives the first text line candidate from the first text line detector 102, the second text line candidate from the second text line detector 103, and the priority from the priority determiner 105. The integrator 106 selects from the first text line candidate and the second text line candidate in accordance with a superimposition degree, which indicates the ratio of the region in which the first text line candidate and the second text line candidate are superimposed, the feature value of the image, and the priority, and integrates them to generate a text line. Details of the processing executed by the integrator 106 will be explained later with reference to FIG. 8.


Now, the first text line detector 102 will be explained in detail.


The connected component extractor 107 receives the image from the image acquirer 101 and connects adjacent pixels having similar characteristics, such as color information, to generate one or more connected components. Here, the pixels in the image are binarized into white and black. Where two or more black pixels among the binarized pixels are consecutively adjacent, the set of consecutive pixels is generated as a connected component.
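A minimal sketch of this extraction, assuming OpenCV is available; the Otsu binarization and 8-connectivity are illustrative choices rather than requirements of the embodiment.

```python
import cv2
import numpy as np

def extract_connected_components(image_bgr: np.ndarray):
    """Binarize the image into white and black, then group consecutively
    adjacent black pixels into connected components, as the connected
    component extractor 107 does. Returns bounding boxes (x, y, w, h)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # THRESH_BINARY_INV makes dark (black) pixels the foreground to label.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary, connectivity=8)
    # Label 0 is the background, so it is skipped.
    boxes = [tuple(int(v) for v in stats[i, :4]) for i in range(1, n_labels)]
    return boxes, centroids[1:]
```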


The first text line generator 108 receives the connected components from the connected component extractor 107, and combines connected components aligned approximately on the same straight line, in accordance with the positional relationship between the connected components and the degree of similarity of the connected components, to generate the first text line candidate. Specifically, a feature vector is generated for each connected component, and the positional relationship and the degree of similarity between two connected components are defined by the distance between their feature vectors. If the distance between the feature vectors is below a threshold value, the two connected components are considered similar and aligned on the same straight line, and are therefore connected. Examples of feature vector elements include the x- and y-coordinates of the center point of the connected component, the average color of the connected component, and its size (height, width, circumference length, etc.). The center point may, for example, be the center point of a quadrangle circumscribing the connected component. The text line candidate may also be generated by using the method disclosed in Neumann L., Matas J.: Text Localization in Real-world Images using Efficiently Pruned Exhaustive Search, ICDAR 2011 (Beijing, China), "C. Exhaustive search." The processing executed by the first text line generator 108 is also referred to as line detection using connected components (connected component line detection: CC line detection).
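The combining step might be sketched as follows. Position is part of each feature vector, so a small vector distance implies both similarity and rough collinearity, as the description states; the threshold value and the union-find grouping are illustrative assumptions.

```python
import numpy as np

def component_feature(cx, cy, mean_color, width, height):
    """Feature vector of a connected component: center point, average
    color, and size, per the elements enumerated above."""
    return np.array([cx, cy, *mean_color, width, height], dtype=float)

def group_components(features, threshold=50.0):
    """Connect every pair of components whose feature-vector distance is
    below the threshold; the resulting chains are first text line
    candidates. The threshold here is an assumed value."""
    n = len(features)
    parent = list(range(n))                 # union-find over components

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(features[i] - features[j]) < threshold:
                parent[find(i)] = find(j)   # similar and nearby: connect

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```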


Next, the second text line detector 103 will be explained in detail.


The character candidate region detector 109 receives the image from the image acquirer 101 and, having learned character image data beforehand, detects image regions having specific shapes to generate character candidate regions. Since learning the image data is achieved by a general learning process, its explanation is omitted here.


The second text line generator 110 receives the character candidate regions from the character candidate region detector 109, and combines character candidate regions of similar size that align approximately on the same straight line to generate the second text line candidate. Here, the processing executed by the second text line generator 110 is regarded as line detection using character candidate regions.


The detection processing executed by the character candidate region detector 109 will be explained with reference to FIGS. 2 and 3.


As an example of the character candidate region extraction processing, as shown in FIG. 2, the entire image 201 is scanned using windows 202 of various sizes, and each region of the image 201 presumed to contain a character is extracted as a character candidate region. By changing the size of the window 202, characters of various sizes can be detected as character candidate regions. In other words, even a character that does not fit in a window 202 of a certain size will fit when the window 202 is enlarged, enabling it to be detected as a character candidate region.
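A sketch of the multi-scale scan described above; `is_character` stands in for the learned detector (hypothetical, since the embodiment leaves the learning process to a general method), and the window sizes and stride are assumed values.

```python
import numpy as np

def scan_character_candidates(image: np.ndarray, is_character,
                              window_sizes=((16, 16), (32, 32), (64, 64)),
                              stride=8):
    """Scan the entire image with windows of various sizes (FIG. 2) and
    keep every window the detector accepts as a character candidate
    region (x, y, w, h)."""
    height, width = image.shape[:2]
    candidates = []
    for win_h, win_w in window_sizes:  # enlarging the window lets larger
        for y in range(0, height - win_h + 1, stride):  # characters fit
            for x in range(0, width - win_w + 1, stride):
                patch = image[y:y + win_h, x:x + win_w]
                if is_character(patch):
                    candidates.append((x, y, win_w, win_h))
    return candidates
```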


The result of extracting the character candidate region in the method shown in FIG. 2 is shown in FIG. 3. A character candidate region 301 for the character in the image 201 can be extracted in the manner shown in FIG. 3.


The text line generation processing executed by the second text line generator 110 will be explained with reference to FIG. 4.



FIG. 4 shows the concept of line detection using a Hough transform (Hough line detection). The image plane 401 represents the image as a coordinate plane in which the vertical axis is x and the horizontal axis is y. A character candidate region 402 in the image plane 401 is voted into a voting space 403. In the voting space 403, the vertical axis is ρ and the horizontal axis is θ, and the space is extended into a three-dimensional parameter space by a scale parameter s corresponding to the size of the character candidate region 402. As shown in FIG. 4, a small character candidate region 402 is voted into the voting space where s is small, and a large character candidate region 402 is voted into the voting space where s is large. The second text line candidate is generated by taking the coordinate values with the largest number of votes in each voting space as the parameters of a straight line.
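A sketch of this voting, assuming the (ρ, θ, s) space is discretized into bins and that the scale bin is chosen from the region's larger side; all bin counts and ranges are illustrative assumptions.

```python
import numpy as np

def hough_vote_text_lines(regions, image_diag, n_rho=200, n_theta=180,
                          n_scale=4, max_char_size=128.0):
    """Vote each character candidate region (x, y, w, h) into a
    three-dimensional (ρ, θ, s) accumulator: the lines through the
    region's center vote in (ρ, θ), while the region's size selects the
    scale slice s, so small and large regions vote in different slices."""
    acc = np.zeros((n_rho, n_theta, n_scale), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for x, y, w, h in regions:
        cx, cy = x + w / 2.0, y + h / 2.0
        s = min(int(max(w, h) / max_char_size * n_scale), n_scale - 1)
        for ti, theta in enumerate(thetas):
            rho = cx * np.cos(theta) + cy * np.sin(theta)
            ri = int((rho + image_diag) / (2.0 * image_diag) * n_rho)
            if 0 <= ri < n_rho:
                acc[ri, ti, s] += 1
    # The (ρ, θ) cell with the most votes in each slice parameterizes a
    # straight line, i.e. a second text line candidate.
    peaks = [np.unravel_index(acc[:, :, s].argmax(), (n_rho, n_theta))
             for s in range(n_scale)]
    return acc, peaks
```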


The priority determination processing executed by the priority determiner 105 will be explained with reference to FIG. 5.


An image for learning (hereinafter referred to as a learning image), for which the position of the text line has been ascertained in advance, is prepared. For the learning image, the first text line detector 102 generates the first text line candidate by the first character detection scheme, and the second text line detector 103 generates the second text line candidate by the second character detection scheme. Provided the character detection schemes are the same, it is also permissible to use first and second text line candidates processed in advance by the first and second character detection schemes, instead of performing the processing in the first text line detector 102 and the second text line detector 103.


The priority determiner 105 calculates the matching rate between the first text line candidate and the text line whose character string position is ascertained in advance (hereinafter referred to as a reference text line). In the same manner, the priority determiner 105 calculates the matching rate between the second text line candidate and the reference text line. Comparing the two matching rates, the text line candidate with the higher matching rate is considered to have been produced by the scheme with the higher character detection accuracy, so the priority of that scheme is set higher than that of the other scheme.


As the matching rate, a value obtained by dividing the area of the region in which the text line candidate and the reference text line are superimposed by the entire area covered by the text line candidate and the reference text line may be used. The matching rate can be calculated, for example, by the following equation (1).





Matching rate=S(s1∩s2)/S(s1∪s2)  (1)


Here, S( ) denotes an area, s1 is the reference text line, s2 is the first text line candidate or the second text line candidate, ∩ denotes the intersection (product set), and ∪ denotes the union (sum set).


In the example of FIG. 5, when comparing the reference text line 501 and the first text line candidate 502, the matching rate becomes higher as the region 504 indicated by hatching becomes larger relative to the region 503, indicated by dashed lines, covering the entire area of the reference text line 501 and the first text line candidate 502.
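For axis-aligned rectangular candidates, equation (1) reduces to the following sketch; the (x, y, width, height) representation is an assumption. The same computation can serve as the superimposition degree used later by the integrator 106.

```python
def matching_rate(s1, s2):
    """Equation (1): S(s1 ∩ s2) / S(s1 ∪ s2) for two axis-aligned
    rectangles given as (x, y, width, height)."""
    ax, ay, aw, ah = s1
    bx, by, bw, bh = s2
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))  # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))  # overlap height
    intersection = ix * iy
    union = aw * ah + bw * bh - intersection
    return intersection / union if union > 0 else 0.0

# A candidate that covers the reference text line well scores near 1.0.
print(matching_rate((0, 0, 100, 20), (10, 0, 100, 20)))  # -> 0.8181...
```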


For example, the feature extractor 104 calculates a feature value of the region of the reference text line in the learning image, and the priority determiner 105 correlates the feature value with the priority. Then, by referring to the priority correlated with the feature value that matches or is similar to the feature value of the image to be processed, it can be ascertained which of the first character detection scheme and the second character detection scheme should be prioritized.
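This correlation might be sketched as follows, under the assumptions that the feature value is the luminance variance of the reference text line region and that matching rates for both schemes have already been computed on the learning images; the bucket boundary is an assumed value.

```python
import statistics

def learn_priorities(samples, variance_threshold=400.0):
    """`samples` holds (luminance_variance, rate_scheme1, rate_scheme2)
    tuples from the learning images. For each feature bucket, the scheme
    whose candidates match the reference text lines better on average
    receives the higher priority."""
    buckets = {"small_variation": [], "large_variation": []}
    for variance, rate1, rate2 in samples:
        key = ("large_variation" if variance >= variance_threshold
               else "small_variation")
        buckets[key].append((rate1, rate2))

    priorities = {}
    for key, rates in buckets.items():
        if not rates:
            continue
        mean1 = statistics.mean(r[0] for r in rates)
        mean2 = statistics.mean(r[1] for r in rates)
        priorities[key] = "scheme1" if mean1 >= mean2 else "scheme2"
    return priorities
```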


An example of correlating the character detection scheme and the feature value will be explained with reference to FIG. 6.



FIG. 6 shows the detection results of each of the first character detection scheme and the second character detection scheme under a condition 601. The condition 601 is a condition regarding a feature value. Here, a difference in luminance is assumed.


For example, under the condition 601 “luminance variation is small”, the background and the character string are each, for example, a flat, uniform color, and the detection accuracy tends to be higher for the first character detection scheme 602 than for the second character detection scheme 603. Conversely, under the condition 601 “luminance variation is large”, the character string is, for example, an outline character, and the detection accuracy tends to be higher for the second character detection scheme 603 than for the first character detection scheme 602.


Therefore, when determining the priority of the character detection schemes for the image to be processed, the luminance distribution of each of the regions of the first text line candidate generated by the first character detection scheme 602 and the second text line candidate generated by the second character detection scheme 603 is calculated as the feature value. If the variance of the luminance distribution is equal to or greater than a threshold value, the condition 601 “luminance variation is large” applies, and the priority of the second character detection scheme 603 is set high.


If the variance of the luminance distribution is below the threshold value, the condition 601 “luminance variation is small” applies, and the priority of the first character detection scheme 602 is set high. Instead of calculating the luminance of the regions of the first text line candidate and the second text line candidate, it is also permissible to calculate and refer to the luminance of the entire image. In that case, the feature extractor 104 receives the image from the image acquirer 101 and calculates the luminance of the entire image, which is then used when determining the priority.
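At processing time, the luminance-based decision might look like this sketch; the variance threshold is an assumed value.

```python
import numpy as np

def priority_by_luminance(gray_image: np.ndarray, candidate_regions,
                          variance_threshold=400.0):
    """Pool the luminance values over the candidate regions of both
    schemes and compare the dispersion with the threshold: large
    variation (e.g. outline characters) favors the second scheme, small
    variation favors the first."""
    values = []
    for x, y, w, h in candidate_regions:
        values.extend(gray_image[y:y + h, x:x + w].ravel().tolist())
    variance = float(np.var(values)) if values else 0.0
    return "second_scheme" if variance >= variance_threshold \
        else "first_scheme"
```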


Furthermore, as a condition regarding the feature value, it is also permissible to use the length, the width, and the area of the text line candidate.


The concept of the length 701 and the width 702 of a text line candidate is shown in FIG. 7. As the length 701 of the text line becomes longer, the detection accuracy of the second character detection scheme becomes higher than that of the first character detection scheme. Accordingly, for example, the average length of the first text line candidates generated by the first character detection scheme and the second text line candidates generated by the second character detection scheme is calculated as the feature value. If the average length is equal to or greater than a threshold value, the priority of the second character detection scheme is set high; if the average length is less than the threshold value, the priority of the first character detection scheme is set high.
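A corresponding sketch for the length condition; the length threshold is again an assumed value.

```python
def priority_by_length(candidate_regions, length_threshold=150.0):
    """Average the length 701 over the candidates of both schemes; long
    text lines favor the second character detection scheme."""
    if not candidate_regions:
        return "first_scheme"
    average_length = (sum(w for (_, _, w, _) in candidate_regions)
                      / len(candidate_regions))
    return "second_scheme" if average_length >= length_threshold \
        else "first_scheme"
```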


Next, the integration processing executed by the integrator 106 is explained with reference to the flowchart of FIG. 8.


In step S801, it is determined whether or not the superimposition degree of the first text line candidate and the second text line candidate is equal to or greater than a threshold value. The superimposition degree may be calculated in the same manner as the matching rate between a text line candidate and the reference text line in the priority determiner 105; that is, the value obtained by dividing the area of the region in which the first text line candidate and the second text line candidate are superimposed by the entire area covered by the first text line candidate and the second text line candidate may be used. If the superimposition degree is equal to or greater than the threshold value, the processing proceeds to step S802; if it is less than the threshold value, the processing proceeds to step S803.


In step S802, the text line candidate generated by the character detection scheme with high priority is selected as the text line.


In step S803, the existence of an inclusive region, i.e., whether or not an inclusive relationship exists, is determined. An inclusive relationship exists when the second text line candidate is included in the first text line candidate, or the first text line candidate is included in the second text line candidate. Specifically, if the ratio of the superimposed region to the entire area of the smaller of the two text line candidates (also referred to as the minimum text line candidate) is equal to or greater than a threshold value, an inclusive relationship is determined to exist. If an inclusive region exists, the processing proceeds to step S804; if not, the processing proceeds to step S805.


In step S804, between the text line candidates in an inclusive relationship, the text line candidate with a larger region (maximum text line candidate) is selected as a text line. For example, if the second text line candidate is included in the first text line candidate, the first text line candidate is selected as the text line.


In step S805, since the first text line candidate and the second text line candidate are either not superimposed on each other or the superimposed portion is too small to constitute an inclusive relationship, both the first text line candidate and the second text line candidate are selected as text lines. The integration processing is completed in the above manner.
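Steps S801 through S805, applied to one pair of candidates, might be sketched as follows; both threshold values are assumptions, and rectangles are again (x, y, width, height).

```python
def integrate_pair(first, second, first_has_priority=True,
                   overlap_threshold=0.5, inclusion_threshold=0.9):
    """Integration flow of FIG. 8 for a first and a second text line
    candidate; returns the selected text line(s)."""
    fx, fy, fw, fh = first
    sx, sy, sw, sh = second
    ix = max(0.0, min(fx + fw, sx + sw) - max(fx, sx))
    iy = max(0.0, min(fy + fh, sy + sh) - max(fy, sy))
    intersection = ix * iy
    union = fw * fh + sw * sh - intersection

    # Step S801: superimposition degree, computed like equation (1).
    if union > 0 and intersection / union >= overlap_threshold:
        # Step S802: keep the candidate of the higher-priority scheme.
        return [first] if first_has_priority else [second]

    # Step S803: inclusive relationship, judged against the smaller
    # (minimum) text line candidate.
    smaller, larger = sorted([first, second], key=lambda r: r[2] * r[3])
    if intersection >= inclusion_threshold * smaller[2] * smaller[3]:
        return [larger]           # Step S804: keep the larger region.

    return [first, second]        # Step S805: keep both candidates.
```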


An example of the integration processing result of the integrator 106 is explained with reference to FIG. 9.



FIG. 9(a) shows, in a state before integration, both the first text line candidate and the second text line candidate generated with respect to the image to be processed, displayed in one image. The dashed line indicates the first text line candidate 901, and the dashed-dotted line indicates the second text line candidate 902.


As shown in FIG. 9(a), in region 903 in the middle part of the image, the superimposition degree is equal to or greater than the threshold value. In region 904 in the lowest part of the image, the first text line candidate 901 is included in the second text line candidate 902, i.e., the two are in an inclusive relationship. Assume here that the first character detection scheme has the higher priority.


As shown in FIG. 9(b), after integration, the first text line candidate 901, detected by the scheme with the higher priority, is selected as the text line in region 903, since the superimposition degree there is equal to or greater than the threshold value. In region 904, since an inclusive relationship exists, the second text line candidate 902, which has the larger region, is selected as the text line.


Another example of the integration processing result will be explained with reference to FIG. 10.



FIG. 10(a) shows the second text line candidate, FIG. 10(b) shows the first text line candidate, and FIG. 10(c) shows the integration result.


As in FIG. 9, if, for example, the superimposition degree of the text line candidates for the character string “Tiredness” is equal to or greater than the threshold value, the first text line candidate shown in FIG. 10(b), which has the higher priority, is selected as the text line. For the character string “your life,” since the first text line candidate is included in the second text line candidate, the larger second text line candidate is selected as the text line.


The evaluation result of the detection accuracy will be explained with reference to FIG. 11.


The graph shown in FIG. 11 evaluates the detection accuracy of the different text line detection methods. The vertical axis indicates precision, and the horizontal axis indicates recall. Point 1101 indicates the case of using only the first character detection scheme, point 1102 indicates the case of using only the second character detection scheme, and point 1103 indicates the case of processing by the character detection apparatus according to the present embodiment.


As shown in FIG. 11, in the first character detection scheme of point 1101, recall is approximately 62%, and precision is approximately 82%. In the second character detection scheme of point 1102, recall is approximately 59%, and precision is approximately 85%. According to the character detection apparatus of the present embodiment, recall is approximately 68%, and precision is approximately 87%, which indicates that the recall and the precision have both improved.


In the present embodiment, the case of detecting characters using the two schemes of the first character detection scheme and the second character detection scheme is assumed. However, it is also permissible to use three or more character detection schemes.


For example, in the case of using three or more character detection schemes, the priority determiner 105 calculates the matching rate between the text line candidates of each of the plurality of character detection schemes and the reference text line, and the character detection scheme having the highest matching rate is determined to have the highest priority.


The integrator 106 may also perform the same processing as in the flowchart shown in FIG. 8. For example, if, in step S801 of FIG. 8, a superimposed region exists among the text line candidates detected by each of the three or more character detection schemes and the superimposition degree is equal to or greater than the threshold value, then in step S802 the text line candidate detected by the character detection scheme with the highest priority is selected as the text line.


In step S803, the text line candidate with the smallest region among the text line candidates detected by the plurality of character detection schemes is considered the minimum text line candidate. If the size of the superimposed region relative to the entire size of the minimum text line candidate is equal to or greater than the threshold value, an inclusive relationship is determined to exist. In step S804, the text line candidate with the largest region among the text line candidates detected by the plurality of character detection schemes is considered the maximum text line candidate and is selected as the text line.


In step S805, each of the text line candidates detected by the plurality of character detection schemes should be selected as the text line.


According to the present embodiment described above, the priority of each character detection scheme is determined according to the feature value of the image, text line candidates are detected from the image using a plurality of character detection schemes, and the text line candidates are selected according to the priority corresponding to the feature value of the image and integrated, thereby improving the precision and recall of text line detection for any kind of image.


The flowcharts of the embodiments illustrate methods and systems according to the embodiments. It will be understood that each block of the flowchart illustrations, and combinations of blocks in the flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be loaded onto a computer or other programmable apparatus to produce a machine, such that the instructions which execute on the computer or other programmable apparatus create means for implementing the functions specified in the flowchart block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process which provides steps for implementing the functions specified in the flowchart block or blocks.


While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions, and changes in the form of the methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims
  • 1. A character detection apparatus, comprising: a feature extractor that extracts a feature value of an image including one or more character strings; a determiner that determines each priority of a plurality of different character detection schemes in accordance with character detection accuracy with respect to an image region having a feature corresponding to the feature value; and an integrator that integrates text line candidates of the character detection schemes, and selects, as a text line, one of the text line candidates detected by the character detection scheme with the highest priority if a superimposition degree indicating a ratio of a superimposed region among the text line candidates is no less than a first threshold value, the text line candidates being obtained as a result of detecting the character string using the plurality of character detection schemes and being a candidate of a region including the character string.
  • 2. The apparatus according to claim 1, wherein if the superimposition degree is less than the first threshold value, and if a ratio of the superimposed region occupying a minimum text line candidate having a smallest region among superimposed text line candidates is not less than a second threshold value, the integrator selects a maximum text line candidate having a largest region among the superimposed text line candidates as the text line, and if the ratio of the superimposed region occupying the minimum text line candidate is less than the second threshold value, the integrator selects each of the superimposed text line candidates as the text line.
  • 3. The apparatus according to claim 1, further comprising: a first detector that detects the character string using a first character detection scheme and obtains a first text line candidate indicating a candidate of a region including the character string; and a second detector that detects the character string using a second character detection scheme and obtains a second text line candidate indicating a candidate of a region including the character string, wherein the determiner determines the priority with respect to each of the first character detection scheme and the second character detection scheme, and the integrator integrates the first text line candidate and the second text line candidate, and selects, as the text line, a text line candidate detected by one of the first character detection scheme or the second character detection scheme with high priority if the superimposition degree relating to the first text line candidate and the second text line candidate is no less than the first threshold value.
  • 4. The apparatus according to claim 3, wherein the first detector comprises: an extractor that connects pixels with similar features between adjacent pixels in the image to obtain a plurality of connected components; and a first generator that generates the first text line candidate by combining the connected components in accordance with a similarity of the connected components and a positional relationship between the connected components, and the second detector comprises: a region detector that selects one or more character candidate regions indicating a character candidate from the image; and a second generator that generates the second text line candidate by combining the one or more character candidate regions.
  • 5. The apparatus according to claim 4, wherein the first generator generates the first text line candidate by a line detection using the connected components, and the second generator generates the second text line candidate by a line detection using the character candidate regions.
  • 6. The apparatus according to claim 1, wherein the feature value is a luminance or a length of each text line candidate.
  • 7. A character detection method, comprising: extracting a feature value of an image including one or more character strings; determining each priority of a plurality of different character detection schemes in accordance with character detection accuracy with respect to an image region having a feature corresponding to the feature value; integrating text line candidates of the character detection schemes; and selecting, as a text line, one of the text line candidates detected by the character detection scheme with the highest priority if a superimposition degree indicating a ratio of a superimposed region among the text line candidates is no less than a first threshold value, the text line candidates being obtained as a result of detecting the character string using the plurality of character detection schemes and being a candidate of a region including the character string.
  • 8. The method according to claim 7, wherein if the superimposition degree is less than the first threshold value, and if a ratio of the superimposed region occupying a minimum text line candidate having a smallest region among superimposed text line candidates is not less than a second threshold value, the selecting of the text line selects a maximum text line candidate having a largest region among the superimposed text line candidates as the text line, and if the ratio of the superimposed region occupying the minimum text line candidate is less than the second threshold value, the selecting of the text line selects each of the superimposed text line candidates as the text line.
  • 9. The method according to claim 7, further comprising: first detecting the character string using a first character detection scheme and obtaining a first text line candidate indicating a candidate of a region including the character string; and second detecting the character string using a second character detection scheme and obtaining a second text line candidate indicating a candidate of a region including the character string, wherein the determining of each priority determines the priority with respect to each of the first character detection scheme and the second character detection scheme, and the integrating of the text line candidates integrates the first text line candidate and the second text line candidate, and the selecting of the text line selects, as the text line, a text line candidate detected by one of the first character detection scheme or the second character detection scheme with high priority if the superimposition degree relating to the first text line candidate and the second text line candidate is no less than the first threshold value.
  • 10. The method according to claim 9, wherein the first detecting comprises: connecting pixels with similar features between adjacent pixels in the image to obtain a plurality of connected components; and first generating the first text line candidate by combining the connected components in accordance with a similarity of the connected components and a positional relationship between the connected components, and the second detecting comprises: selecting one or more character candidate regions indicating a character candidate from the image; and second generating the second text line candidate by combining the one or more character candidate regions.
  • 11. The method according to claim 10, wherein the first generating generates the first text line candidate by a line detection using the connected components, and the second generating generates the second text line candidate by a line detection using the character candidate regions.
  • 12. The method according to claim 7, wherein the feature value is a luminance or a length of each text line candidate.
  • 13. A non-transitory computer readable medium including computer executable instructions, wherein the instructions, when executed by a processor, cause the processor to perform a method comprising: extracting a feature value of an image including one or more character strings; determining each priority of a plurality of different character detection schemes in accordance with character detection accuracy with respect to an image region having a feature corresponding to the feature value; integrating text line candidates of the character detection schemes; and selecting, as a text line, one of the text line candidates detected by the character detection scheme with the highest priority if a superimposition degree indicating a ratio of a superimposed region among the text line candidates is no less than a first threshold value, the text line candidates being obtained as a result of detecting the character string using the plurality of character detection schemes and being a candidate of a region including the character string.
  • 14. The medium according to claim 13, wherein if the superimposition degree is less than the first threshold value, and if a ratio of the superimposed region occupying a minimum text line candidate having a smallest region among superimposed text line candidates is not less than a second threshold value, the selecting of the text line selects a maximum text line candidate having a largest region among the superimposed text line candidates as the text line, and if the ratio of the superimposed region occupying the minimum text line candidate is less than the second threshold value, the selecting of the text line selects each of the superimposed text line candidates as the text line.
  • 15. The medium according to claim 13, further comprising: first detecting the character string using a first character detection scheme and obtaining a first text line candidate indicating a candidate of a region including the character string; and second detecting the character string using a second character detection scheme and obtaining a second text line candidate indicating a candidate of a region including the character string, wherein the determining of each priority determines the priority with respect to each of the first character detection scheme and the second character detection scheme, and the integrating of the text line candidates integrates the first text line candidate and the second text line candidate, and the selecting of the text line selects, as the text line, a text line candidate detected by one of the first character detection scheme or the second character detection scheme with high priority if the superimposition degree relating to the first text line candidate and the second text line candidate is no less than the first threshold value.
  • 16. The medium according to claim 15, wherein the first detecting comprises: connecting pixels with similar features between adjacent pixels in the image to obtain a plurality of connected components; and first generating the first text line candidate by combining the connected components in accordance with a similarity of the connected components and a positional relationship between the connected components, and the second detecting comprises: selecting one or more character candidate regions indicating a character candidate from the image; and second generating the second text line candidate by combining the one or more character candidate regions.
  • 17. The medium according to claim 16, wherein the first generating generates the first text line candidate by a line detection using the connected components, and the second generating generates the second text line candidate by a line detection using the character candidate regions.
  • 18. The medium according to claim 13, wherein the feature value is a luminance or a length of each text line candidate.
Priority Claims (1)
Number: 2014-126576 | Date: Jun 2014 | Country: JP | Kind: national