This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2015-140489, filed Jul. 14, 2015, the entire contents of which are incorporated herein by reference.
Embodiments described herein relate generally to an information processing apparatus and an information processing method.
In recent years, information processing apparatuses which detect characters written on a signboard, an indicator, a paper sheet, etc., in an image captured by a camera, and which perform character recognition processing or translation processing on the detected characters, have come to be widely used. When using such an information processing apparatus, the user must perform an operation called framing, in which the user checks, through a preview screen on a display, which area the camera is currently imaging, and moves the information processing apparatus toward a character string as an imaging target so that the character string falls within the imaging range of the camera.
In other words, it may be assumed that, during framing, the entire character string as a target of detection, recognition, translation, etc. is not yet contained in a captured image (in particular, in substantially the center of the image), and that the entire character string as the target is finally contained in the captured image (in particular, in substantially the center of the image) upon the completion of the framing. However, the conventional information processing apparatus always applies, under predetermined strict conditions, a reject setting (such as setting of a threshold for detection) that supposes a case where a captured image contains no characters, that is, a reject setting under which excessive detection does not easily occur. As a result, even if a character string exists in an image obtained after framing, it may not be detected because of the too strict conditions.
In general, according to one embodiment, an information processing apparatus includes an image processor, a hardware processor and a controller. The image processor acquires an image. The hardware processor detects a first region in the image that includes a character, and detects a second region in the image that includes a text-line comprising at least a particular number of first regions. The second region is detected based at least in part on the detection of the first region. The hardware processor detects a variation in position and attitude of a camera at a time when the image is shot. The controller causes the hardware processor to detect the second region in the image when the variation is less than or equal to a threshold. The controller changes a setting of the hardware processor associated with the detection of at least one of the first region and the second region, and causes the hardware processor to detect the second region in the image when the second region is not detected by the hardware processor.
The controller 100 executes control for organically operating each component of the information processing apparatus 10 (the image acquisition module 101, the stationary-state detector 102, the image-analysis/setting module 103, the character detection dictionary storage unit 104, the text-line detector 105, the application module 106 and the output module 107). In other words, each component of the information processing apparatus 10 operates under control of the controller 100.
The image acquisition module 101 acquires an image shot by an imaging module, such as a camera, installed in the information processing apparatus 10 (image acquisition processing step S1 of
The stationary-state detector 102 acquires a position/attitude variation (a variation in position and/or attitude) of the information processing apparatus 10, assumed when an image is shot by the imaging module, from an acceleration sensor or an angular velocity sensor built in the apparatus 10. If the acquired variation is less than or equal to a threshold, the stationary-state detector 102 outputs a trigger for executing initial setting processing step S3, described later (Yes in stationary-state detection processing step S2). The position/attitude variation indicates how fast the information processing apparatus 10 (more specifically, the imaging module installed in the information processing apparatus 10) was translating and/or rotating during image capture.
During a period in which the position/attitude variation is more than a predetermined value, it is supposed that framing is being performed. In contrast, if the position/attitude variation becomes less than the predetermined value (this state is called a substantially stationary state), the framing is estimated to be complete. For instance, when the acceleration sensor is used, the magnitude of a velocity vector obtained by time-integration of an acceleration vector that excludes a gravity component can be set as the position/attitude variation. Alternatively, the rotational velocity obtained by, for example, the angular velocity sensor can be regarded as an approximate position/attitude variation that indicates, in particular, a variation in attitude. The motion of framing is considered to have, as its main component, a rotational movement that greatly changes the orientation of the imaging module at the position where the imaging module is held. Therefore, it is considered that the state of framing can be estimated only from the approximate position/attitude variation that indicates the attitude variation. The angular velocity sensor also exhibits a quick response, and the position/attitude variation can be acquired from it by a small number of calculations.
The stationary-state detector 102 compares the acquired position/attitude variation with a predetermined threshold, and outputs the above-mentioned trigger only when the position/attitude variation is less than or equal to the threshold. When the position/attitude variation is more than the predetermined threshold, the stationary-state detector 102 supplies the output module 107 with a command to cause the same to execute preview display processing, described later (No in stationary-state detection processing step S2).
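As an illustrative sketch only (not part of the embodiment), the accelerometer-based variation and the trigger decision described above could be computed as follows; the sampling interval, the fixed gravity vector in the device frame, and the threshold value are assumptions.

```python
import numpy as np

GRAVITY = 9.80665  # m/s^2, assumed aligned with the device z-axis for simplicity

def position_attitude_variation(accel_samples, dt):
    """Magnitude of the velocity vector obtained by time-integrating
    acceleration samples (shape (N, 3)) with the gravity component removed."""
    accel = np.asarray(accel_samples, dtype=float)
    linear_accel = accel - np.array([0.0, 0.0, GRAVITY])  # remove gravity component
    velocity = linear_accel.sum(axis=0) * dt               # simple time integration
    return float(np.linalg.norm(velocity))

def should_output_trigger(variation, threshold=0.05):
    """The trigger is output only when the variation is at or below the threshold,
    i.e., when framing is estimated to be complete (substantially stationary state)."""
    return variation <= threshold
```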
The embodiment is directed to a case where the stationary-state detector 102 uses a position/attitude variation measured by a sensor module, such as an acceleration sensor. Alternatively, using the fact that an image blurred by a large position/attitude variation of the imaging module has a low contrast value (obtained as the difference between the maximum luminance and the minimum luminance of the image), the contrast value of an image acquired by the image acquisition module 101 may be calculated, and a value obtained by subtracting the calculated contrast value from a predetermined constant may be used as the position/attitude variation. Alternatively, the magnitude of a motion vector in the image may be directly calculated as in an optical flow, and, for example, a maximum value over the entire image may be used as the position/attitude variation. In these cases, even an information processing apparatus without, for example, an acceleration sensor can directly calculate a position/attitude variation from an image acquired by the image acquisition module 101, thereby executing the above-mentioned processing.
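For the image-based alternative above, a minimal sketch is shown below; it assumes an 8-bit grayscale image, so the predetermined constant is taken as 255.

```python
import numpy as np

def contrast_based_variation(gray_image, constant=255.0):
    """Use low contrast as a proxy for motion blur: the variation is the
    predetermined constant minus (maximum luminance - minimum luminance)."""
    contrast = float(np.max(gray_image)) - float(np.min(gray_image))
    return constant - contrast
```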
Furthermore, in the embodiment, it is presupposed that the trigger is output when the position/attitude variation is less than or equal to a predetermined threshold. However, even when the position/attitude variation is less than or equal to the predetermined threshold, a blurred image will be obtained if the imaging module is out of focus, which adversely affects the character candidate detection processing described later. For this reason, the trigger may be output on condition that the position/attitude variation is less than or equal to the predetermined threshold and that the imaging module is in focus. Whether the imaging module is in focus may be determined by analyzing an image, or by using status information (including, for example, the driving status of a motor that moves the lens of the imaging module) acquired from the imaging module.
Upon receiving the trigger from the stationary-state detector 102, the image-analysis/setting module 103 analyzes an image acquired by the image acquisition module 101, and determines and outputs an initial parameter value for subsequent character-candidate/text-line detection processing step S4 (initial setting processing step S3 of
At this time, the image-analysis/setting module 103 calculates the degrees of complexity in a plurality of measurement windows of different positions (denoted by reference number 1013 in
If a character is written on a relatively simple background, such as a signboard (denoted by reference number 1011 in
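The excerpt above does not fix a particular complexity measure; the following sketch assumes, purely for illustration, that complexity is measured as the density of strong luminance transitions in each measurement window, and that the result selects between hypothetical "simple-background" and "complex-background" dictionaries and initial character candidate detection thresholds.

```python
import numpy as np

def initial_setting(gray_image, windows, edge_threshold=30, complex_ratio=0.2):
    """Estimate background complexity per measurement window (x, y, w, h) and
    return an initial parameter set for the character candidate detection."""
    complexities = []
    for (x, y, w, h) in windows:
        patch = gray_image[y:y + h, x:x + w].astype(float)
        gx = np.abs(np.diff(patch, axis=1))   # horizontal luminance transitions
        gy = np.abs(np.diff(patch, axis=0))   # vertical luminance transitions
        density = (np.mean(gx > edge_threshold) + np.mean(gy > edge_threshold)) / 2
        complexities.append(density)
    is_complex = float(np.mean(complexities)) > complex_ratio
    return {
        "background": "complex" if is_complex else "simple",
        "dictionary": "dict_complex_bg" if is_complex else "dict_simple_bg",  # placeholder names
        "char_candidate_threshold": 0.7 if is_complex else 0.5,               # assumed values
    }
```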
The character detection dictionary storage unit 104 is a storage device that stores character detection dictionaries used by the text-line detector 105.
Upon receiving parameter values from the image-analysis/setting module 103, the text-line detector 105 executes, using the parameter values, character candidate detection processing of detecting, in an image acquired by the image acquisition module 101, an image region that seems to be a character region as a character candidate region (i.e., a region where a character seems to be written), and executes text-line detection processing of detecting a text-line in the detected character candidate region (character-candidate/text-line detection processing step S4 of
Referring now to
The text-line detector 105 reads a corresponding character detection dictionary from the character detection dictionary storage unit 104, in accordance with the determination, output from the image-analysis/setting module 103, as to whether the background is simple or complex.
Subsequently, the text-line detector 105 performs reduction processing on the image (input image) acquired by the image acquisition module 101, generates a so-called resolution pyramid image, and performs character candidate detection processing of searching and detecting a character on the resolution pyramid image. More specifically, as shown in
Since on the resized images 202 and 203 obtained by multiplying the input image by the constant reduction ratio r, the region covered by the detection window 205 of the same size is relatively larger than in the input image, the size of a detected character is relatively greater on the resized images. The text-line detector 105 generates a resized image until the size of a character to be detected exceeds the maximum size associated with the specifications. Thus, after generating one or more resized images, the text-line detector 105 generates the resolution pyramid image 204 that comprises the input image 201 and the resized images 202 and 203, as is shown in
After generating the resolution pyramid image 204, the text-line detector 105 generates a plurality of partial images by extracting images within the detection window 205 of the predetermined size in respective positions, while scanning, using the detection window 205, the respective images 201 to 203 included in the generated resolution pyramid image 204. Further, the text-line detector 105 detects character candidates based on the generated partial images and the above-mentioned read character detection dictionary. More specifically, the text-line detector 105 compares each of the above-mentioned partial images with the character detection dictionary, thereby calculating, for the respective partial images, scores indicating degrees of likeness to a character, and determining whether each score exceeds a character candidate detection threshold output from the image-analysis/setting module 103. As a result, it can be determined (estimated) whether each partial image contains a character.
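As a concrete illustration of the pyramid generation and window scanning described above, the following is a minimal sketch; the reduction ratio, window size, scan step, stopping condition and the stand-in scoring function are assumptions, and the actual dictionary comparison is left by the embodiment to a known pattern identification method.

```python
import numpy as np

def build_resolution_pyramid(image, r=0.8, min_side=48):
    """Generate resized images by repeatedly applying the constant reduction
    ratio r; stopping when the image gets too small stands in for the condition
    that the detectable character size exceeds the maximum size."""
    pyramid = [image]
    current = image
    while min(current.shape[0], current.shape[1]) * r >= min_side:
        h, w = int(current.shape[0] * r), int(current.shape[1] * r)
        ys = (np.arange(h) / r).astype(int)
        xs = (np.arange(w) / r).astype(int)
        current = current[ys][:, xs]          # nearest-neighbour resize for brevity
        pyramid.append(current)
    return pyramid

def score_character(patch, dictionary):
    """Stand-in scorer: normalized correlation against a mean character template
    (assumed dictionary layout); the embodiment uses, e.g., a partial space
    method or a support vector machine instead."""
    template = dictionary["mean_template"]
    p = (patch - patch.mean()) / (patch.std() + 1e-6)
    t = (template - template.mean()) / (template.std() + 1e-6)
    return float(np.mean(p * t))

def detect_character_candidates(pyramid, dictionary, window=24, step=8, threshold=0.5):
    """Scan every pyramid level with the detection window, score each partial
    image, and keep (as character candidates, i.e., 'first code' regions) the
    windows whose score exceeds the character candidate detection threshold."""
    candidates = []
    for level, img in enumerate(pyramid):
        for y in range(0, img.shape[0] - window + 1, step):
            for x in range(0, img.shape[1] - window + 1, step):
                patch = img[y:y + window, x:x + window]
                score = score_character(patch, dictionary)
                if score > threshold:
                    candidates.append((level, x, y, score))
    return candidates
```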
In accordance with the determination result, the text-line detector 105 imparts a first code, indicating a character, to a partial image determined to be a character, and imparts a second code, indicating a non-character, to a partial image determined to be an image including no character (in other words, an image including a non-character). Thus, the text-line detector 105 can detect, as a region including a character, a region where a partial image with the first code exists (in other words, a region where the detection window 205 clipping the partial image with the first code is positioned).
If the number of partial images with the first code is not less than a predetermined threshold after the above-mentioned character candidate detection processing is executed, the text-line detector 105 generates first detection-result information indicating a region on the input image 201 where a character exists. The first detection-result information is information that indicates a region on the input image 201 where a character series is marked by a rectangular frame, as is shown in, for example,
If the number of the partial images with the first code is less than a predetermined threshold, the text-line detector 105 determines that the above processing has failed in detection of sufficient character candidates, and generates a first command for causing the image-analysis/setting module 103 to execute setting-change processing, described later (No in success determination processing step S5).
Since a score calculation method for estimating the degree of likeness, to a character, of a partial image in the detection window 205 can be realized by a known pattern identification method, such as a partial space method or a support vector machine, no detailed description will be given thereof.
When the first detection-result information is generated, the text-line detector 105 performs text-line detection processing of detecting a row of characters written in the image acquired by the image acquisition module 101, based on the first detection-result information. The text-line detection processing is processing for detecting a linear arrangement of character candidates, using linear Hough transform.
Referring first to
Before describing the principle of linear Hough transform, a Hough curve will be described. As shown in
Linear Hough transform means transform of a straight line, which can pass through (x, y) coordinates, into a Hough curve drawn by (θ, ρ) uniquely determined as described above. Suppose here that θ assumes a positive value if the straight line that can pass through (x, y) is inclined leftward, assumes 0 if it is perpendicular, and assumes a negative value if it is inclined rightward. Suppose also that the domain of definition does not depart from −π<θ≤π.
Hough curves can be obtained for respective points on the xy coordinates independently of each other. As shown in, for example,
When detecting a straight line from a group of points, an engineering technique called Hough voting is used. In this technique, combinations of θ and ρ through which each Hough curve passes are voted in a two-dimensional Hough voting space formed of coordinate axes of θ and ρ, thereby suggesting existence of combinations of θ and ρ through which a large number of Hough curves pass, i.e., the existence of a straight line passing through a large number of points, in a position in the Hough voting space, where a large number of votes are obtained. In general, first, a two-dimensional arrangement (Hough voting space) having a size corresponding to a necessary search range of θ and ρ is prepared, and the number of votes is initialized to 0. Subsequently, a Hough curve corresponding to a point is obtained by the above-described Hough transform, and the value of an arrangement through which this Hough curve passes is incremented by one.
This processing is generally called a Hough vote. If the above-mentioned Hough voting is executed on all points, it can be understood that in a position where the number of votes is 0 (i.e., no Hough curve passes), no straight line exists, that in a position where only one vote is obtained (i.e., one Hough curve passes), a straight line passing through one point exists, that in a position where two votes are obtained (i.e., two Hough curves pass), a straight line passing through two points exists, and that in a position where n votes are obtained (i.e., n Hough curves pass), a straight line passing through n points exists. That is, a straight line which passes through two or more points on the xy coordinates appears as a place where two or more votes are obtained in the Hough voting space.
If the resolution of the Hough voting space could be made infinite, only the exact point through which a number of loci pass would obtain votes corresponding to the number of the loci. However, since the actual Hough voting space is quantized at a certain resolution associated with θ and ρ, positions around a position where a plurality of loci intersect will also have a high voting distribution. In light of this, the position where a plurality of loci intersect is detected by detecting the position of a local maximum value in the voting distribution of the Hough voting space.
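The Hough voting described above can be sketched as follows; the quantization of θ and ρ and the local-maximum test are illustrative choices.

```python
import numpy as np

def hough_vote(points, rho_max, n_theta=180, n_rho=200):
    """Vote the Hough curve of each point (x, y) into a quantized (theta, rho)
    accumulator; a cell holding n votes corresponds to a straight line passing
    through n of the points."""
    thetas = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)
    accumulator = np.zeros((n_theta, n_rho), dtype=int)
    for (x, y) in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)        # rho = x cos(t) + y sin(t)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        accumulator[np.arange(n_theta), idx] += 1
    return accumulator, thetas

def local_maxima(accumulator, min_votes=2):
    """Return accumulator cells that are local maxima with at least min_votes
    votes, i.e., candidate straight lines through min_votes or more points."""
    peaks = []
    padded = np.pad(accumulator, 1)
    for t in range(accumulator.shape[0]):
        for r in range(accumulator.shape[1]):
            v = accumulator[t, r]
            if v >= min_votes and v == padded[t:t + 3, r:r + 3].max():
                peaks.append((t, r, int(v)))
    return peaks
```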
Referring then to
When the coordinates of the center of the character candidate 502 are (x, y), an infinite number of straight lines pass through the center. These straight lines always satisfy the above-mentioned linear Hough transform formula (ρ=x·cos θ+y·sin θ). As described above, ρ and θ represent the length of a normal dropped to each straight line from the origin O, and the inclination of the normal with respect to the x-axis, respectively. That is, the values of (θ, ρ) that satisfy the straight lines passing through the point (x, y) provide a Hough curve in the θρ coordinate system. A straight line passing through two different points can be expressed by the combination of (θ, ρ) at which the Hough curves associated with the two points intersect. The text-line detector 105 obtains Hough curves associated with the centers of a plurality of character candidates it has detected, and detects a combination of (θ, ρ) where Hough curves intersect. This means that the text-line detector 105 detects a straight line passing through a large number of character candidates, namely, the existence of a text-line.
In order to detect a combination of (θ, ρ) where a large number of Hough curves intersect, the text-line detector 105 votes, in the Hough voting space, a Hough curve calculated from the center coordinates of each character candidate. As shown in
In addition, when detecting, in association with one Hough curve, a plurality of straight lines defined by a local maximum position (θ, ρ) where the number of votes is not less than the text-line detection threshold, the text-line detector 105 detects, as the text-line, a set of character candidates associated with a straight line with a largest number of votes. For example, if a text-line detection threshold is 2, in the Hough voting space 503 of
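Building on the previous sketch, the text-line detection step could look as follows; the candidate format (size, cx, cy), the distance tolerance and the per-size grouping are assumptions for illustration.

```python
import collections
import numpy as np

def detect_text_lines(candidates, rho_max, text_line_threshold=2, dist_tol=5.0):
    """For each character size s, vote the candidate centers into a Hough voting
    space, keep local maxima whose votes reach the text-line detection threshold,
    take the line with the largest number of votes, and collect the candidates
    lying near that line as one text-line (reuses hough_vote / local_maxima)."""
    by_size = collections.defaultdict(list)
    for size, cx, cy in candidates:
        by_size[size].append((cx, cy))

    text_lines = []
    for size, centers in by_size.items():
        accumulator, thetas = hough_vote(centers, rho_max)
        peaks = sorted(local_maxima(accumulator, text_line_threshold),
                       key=lambda p: p[2], reverse=True)
        if not peaks:
            continue                              # no text-line detected for this size
        t, r, votes = peaks[0]                    # line with the largest number of votes
        theta = thetas[t]
        rho = r / (accumulator.shape[1] - 1) * 2 * rho_max - rho_max
        members = [(cx, cy) for cx, cy in centers
                   if abs(cx * np.cos(theta) + cy * np.sin(theta) - rho) <= dist_tol]
        text_lines.append({"size": size, "theta": float(theta), "rho": float(rho),
                           "candidates": members})
    return text_lines
```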
If local maximum positions detected in different Hough voting spaces of sizes s close to each other are adjacent to each other within a predetermined distance, the text-line detector 105 determines that the same text-line has been detected in different ways, thereby detecting one text-line from sets of character candidates associated with the two local maximum positions.
Returning to
The image-analysis/setting module 103 will be described again. Upon receipt, from the text-line detector 105, of the first or second command that commands execution of setting-change processing, the image-analysis/setting module 103 determines whether a parameter change is possible (changeability determination processing step S8). If the change is possible, the image-analysis/setting module 103 changes the parameter value and outputs it (setting-change processing step S9 of
Reception of the first command by the image-analysis/setting module 103 means that the text-line detector 105 could not detect a sufficient number of character candidates. In this case, it is highly possible that the above-mentioned character candidate detection threshold is too high. Therefore, the image-analysis/setting module 103 determines whether processing can be repeated with the current character candidate detection threshold lowered (changeability determination processing step S8). This determination is made according to two conditions. The first condition is whether the current character candidate detection threshold has reached a predetermined lower limit. The second condition is whether the number of setting changes executed on an acquired image has reached a predetermined upper limit.
If at least one of the conditions is satisfied (No in changeability determination processing step S8), the image-analysis/setting module 103 stops further repetition of character-candidate/text-line detection processing step S4, and supplies the output module 107 with a command for causing the output module 107 to execute a preview display of an acquired image superimposed with information that requests the user to perform re-framing. In contrast, if neither of the conditions is satisfied (Yes in changeability determination processing step S8), the image-analysis/setting module 103 determines a new threshold by subtracting a predetermined value from the current character candidate detection threshold, and outputs the determined value as an updated character candidate detection threshold.
Further, reception of the second command by the image-analysis/setting module 103 means that the text-line detector 105 could not detect a text-line. In this case, it is highly possible that the above-mentioned text-line detection threshold is too high. Therefore, the image-analysis/setting module 103 determines whether processing can be repeated with the current text-line detection threshold lowered (changeability determination processing step S8). This determination is made according to two conditions. The first condition is whether the current text-line detection threshold has reached a predetermined lower limit. The second condition is whether the number of setting changes executed on an acquired image has reached a predetermined upper limit.
If at least one of the conditions is satisfied (No in changeability determination processing step S8), the image-analysis/setting module 103 stops further repetition of character-candidate/text-line detection processing step S4, and supplies the output module 107 with a command for causing the output module 107 to execute a preview display of an acquired image superimposed with data that requests the user to perform re-framing. In contrast, if neither of the conditions is satisfied (Yes in changeability determination processing step S8), the image-analysis/setting module 103 determines a new threshold by subtracting a predetermined value from the current text-line detection threshold, and outputs the determined value as an updated text-line detection threshold.
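The changeability determination and setting-change steps above amount to the small routine sketched below; the step size, lower limit and upper limit on the number of changes are assumed values.

```python
def try_lower_threshold(current, lower_limit, n_changes, max_changes, step=0.05):
    """Changeability determination (step S8) and setting change (step S9):
    return a lowered threshold, or None when either the lower limit or the
    maximum number of setting changes for this image has been reached
    (in which case re-framing is requested instead)."""
    if current <= lower_limit or n_changes >= max_changes:
        return None
    return max(current - step, lower_limit)

# The first command lowers the character candidate detection threshold,
# the second command lowers the text-line detection threshold (values assumed).
new_char_threshold = try_lower_threshold(current=0.7, lower_limit=0.4,
                                         n_changes=1, max_changes=3)
```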
Although in the embodiment, both the character candidate detection threshold and the text-line detection threshold are set adaptively changeable, only one of the thresholds may be set adaptively changeable.
Moreover, since in the information processing apparatus 10 of the embodiment both the character candidate detection threshold and the text-line detection threshold can be adaptively changed as described above, initial-setting processing step S3 by the image-analysis/setting module 103 may be omitted. In this case, the text-line detector 105 may execute character-candidate/text-line detection processing step S4 immediately when a trigger is output from the stationary-state detector 102, using, for example, an initial parameter set that selects a versatile character detection dictionary.
Upon receipt of the second detection-result information from the text-line detector 105, the application module 106 executes processing (application processing step S6 of
If characters in an image are recognized by, for example, OCR, the application module 106 can also retrieve information associated with the recognized character code sequence. More specifically, information indicating a price or specifications of an article may be retrieved based on the name of the article, map information may be retrieved based on the name of a place or a beauty spot, or a certain language may be translated into another. Processing result information indicating the result of the processing executed by the application module 106 is output to the output module 107.
The output module 107 superimposes the processing result from the application module 106 on the image acquired from the image acquisition module 101, and executes preview-display processing for displaying the resultant information on the display of the information processing apparatus 10. Furthermore, upon receipt of a command to execute the preview-display processing from a component different from the application module 106, the output module 107 executes preview display processing of at least directly displaying an input image on the display, in accordance with the command.
Referring then to
The framing phase is a period ranging from the time when the user starts to move the information processing apparatus 10 (imaging module) toward a character string as an image capture target, to the time when an image from which the user can obtain the desired character recognition result or translation result (i.e., the purpose of the framing) is acquired and, for example, displayed. The framing phase can be roughly divided into three stages. In the first stage, the information processing apparatus 10 is moved by a large amount toward a character string as an image capture target (hereinafter, referred to as the coarse adjustment phase), as is shown in diagram (a) of
In the coarse adjustment phase, since blurring occurs in an image because of the large movement of the information processing apparatus 10, no character candidate is detected as shown in diagram (a) of
Referring then to
Diagram (a) of
Diagram (b) of
Although
Further, not only can stages (1) to (3) described above be indicated to the user, but the above-mentioned position/attitude variation can also be indicated to the user, using a graph superimposed on the preview display output from the output module 107. Furthermore, the positions of the character candidates or the text-line detected by the text-line detector 105 can further be indicated to the user, using, for example, a frame. Referring then to
When the user cannot obtain a good result of detection, recognition and/or translation of a text-line, they can more accurately estimate whether its cause is that the apparatus is still in the coarse adjustment phase, or that character candidate detection has failed because a target character is too distant or skewed, if the position/attitude variation is indicated to them as shown in
Furthermore, if the display of the information processing apparatus 10 is a touchscreen display including a touch-panel, the apparatus may be modified to receive a touch operation that horizontally moves, on the touchscreen display, object 703 in icon 701 displayed on the above-mentioned graph display area, thereby enabling the user to arbitrarily change the threshold set in the stationary-state detector 102, as is shown in
In the above description, the stationary-state detector 102 does not execute image acquisition processing after processing (S3 to S9 in
Accordingly, the above processing may be modified such that when the position/attitude variation exceeds the threshold, the output of the trigger is immediately stopped to interrupt the processing (S3 to S9 in
At this time, instead of executing the initial setting processing immediately after receiving a trigger from the stationary-state detector 102, the image-analysis/setting module 103 may execute the initial setting processing only if it is still receiving the trigger after a predetermined period (for example, about 0.5 seconds) has elapsed. By virtue of this structure, when an action that revokes the trigger (for example, a large movement of the information processing apparatus 10) is performed immediately after the trigger is output from the stationary-state detector 102, unnecessary initial setting or text-line detection processing is advantageously prevented from being executed.
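A minimal sketch of this delayed execution of the initial setting processing is shown below; `trigger_active` is a hypothetical polling callback, and the 0.5-second period is taken from the example above.

```python
import time

def wait_for_stable_trigger(trigger_active, hold_time=0.5, poll=0.05):
    """Run the initial setting processing only if the trigger from the
    stationary-state detector is still active after hold_time seconds."""
    start = time.monotonic()
    while time.monotonic() - start < hold_time:
        if not trigger_active():
            return False    # trigger revoked, e.g., by a large movement; skip
        time.sleep(poll)
    return True             # still triggered: proceed to initial setting
```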
Referring next to
The CPU 801 is a processor for controlling the components of the information processing apparatus 10. The CPU 801 executes a text-line detection program loaded to the RAM 802 from the HDD 804. By executing the text-line detection program, the CPU 801 can function as a processing module configured to execute the above-described information processing. The CPU 801 can also load a text-line detection program from the external storage device 809 (such as a flash drive) to the RAM 802, thereby executing the program. Not only the text-line detection program, but also images used during information processing, can be loaded from the external device 809.
The input device 806 is, for example, a keyboard, a mouse, a touch-panel, or one of other various types of input devices. The display 807 is a device capable of displaying results of various types of processing executed by the information processing apparatus 10. The camera 810 corresponds to the above-described imaging module, and can capture images serving as targets of information processing. As described above, the camera 810 may be a basic unit secured to the information processing apparatus 10, or may be an optional external unit detachably attached to the information processing apparatus 10. The acceleration sensor 811 is a device capable of acquiring the above-described position/attitude variation.
In the above-described embodiment, only when framing is completed and the possibility of existence of a character is determined to be high, are the initial setting processing and the character-candidate/text-line detection processing executed. Further, if no text-line is detected, the character-candidate detection threshold or the text-line detection threshold is adaptively changed. This enables a character string to be reliably detected in an acquired image, without always applying, under strict conditions, a reject setting in which excessive detection hardly occurs. In addition, since the character-candidate detection threshold or the text-line detection threshold is adaptively changed as described above, initial setting processing including image analysis (for, for example, selecting a character detection dictionary) can be omitted.
Incidentally, when a character as an image capture target is positioned too far away, this may be regarded as a factor that prevents acquisition of a good result of text-line detection, recognition and/or translation. For instance, when a capture target area is wide, if an image including the entire area is captured from a distance large enough for the whole area to fall within the image capture range, the characters as image capture targets are far away, and hence it is highly possible that a good result of detection, recognition and/or translation of a text-line cannot be obtained. Therefore, the user has to execute a number of framing operations in order to divide one image capture target area 901 into a plurality of image capture ranges 902A, 902B and 902C, as is shown in diagram (a) of
In light of the above circumstances, when image capture range 902D is moving at a constant velocity as shown in, for example, diagram (b) of
The constant-velocity movement of image capture range 902D can be detected, assuming that the information processing apparatus 10 is in a constant-velocity motion state. Therefore, the stationary-state detector 102 outputs the second trigger (which can be discriminated from the aforementioned trigger), based on the position/attitude variation acquired as described above, more specifically, when the direction and length of a velocity vector calculated from the value of the acceleration sensor are substantially constant.
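As an illustrative sketch of this second-trigger condition, the check below assumes a short history of velocity vectors computed from the acceleration sensor and treats "substantially constant" as staying within the given direction and magnitude tolerances.

```python
import numpy as np

def constant_velocity_trigger(velocity_history, angle_tol_deg=10.0, mag_tol=0.2):
    """Output the second trigger (True) when the direction and length of the
    velocity vector stay substantially constant over the recent samples."""
    v = np.asarray(velocity_history, dtype=float)   # shape (N, 3)
    norms = np.linalg.norm(v, axis=1)
    if np.any(norms < 1e-6):
        return False                                # nearly stationary, not moving
    directions = v / norms[:, None]
    mean_dir = directions.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    angles = np.degrees(np.arccos(np.clip(directions @ mean_dir, -1.0, 1.0)))
    spread = (norms.max() - norms.min()) / norms.mean()
    return bool(angles.max() <= angle_tol_deg and spread <= mag_tol)
```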
When the second trigger is output, the controller 100, for example, makes the image acquisition module 101 (or imaging module) continuously acquire images at intervals shorter than usual.
Moreover, when the second trigger is output, the controller 100 sets a character detection dictionary dedicated to blurred characters (i.e., a character detection dictionary having learned blurred characters) as a character detection dictionary used by the text-line detector 105. The text-line detector 105 performs the above-mentioned character-candidate detection processing and text-line detection processing on the images continuously acquired using the character detection dictionary dedicated to detection of blurred characters.
Yet further, the information processing apparatus may be modified such that it has an image processing function for correcting blurred images, such as blind de-convolution, and when the second trigger is output, the controller 100, for example, executes the image processing function on a character candidate, a text-line or a partial image including them, detected by the text-line detector 105.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Number | Date | Country | Kind
---|---|---|---
2015-140489 | Jul 2015 | JP | national