The present exemplary embodiments broadly relate to text detection in electronic images. However, it is to be appreciated that the present exemplary embodiments are also amenable to other like applications.
The personalization and customization of images as a way to add value to documents has been gaining interest in recent times. This is especially true in transactional and promotional marketing applications, but is also gaining traction in more image-intensive markets such as photo finishing, whereby personalized calendars, photobooks, greeting cards, and the like are created. One approach to personalizing an image is to incorporate a personalized text message into the image, with the effect that the text appears to be a natural part of the image. Several technologies currently exist to personalize images in this fashion, offered by vendors such as XMPie, DirectSmile, and AlphaPictures, for example. In such applications, a photorealistic result is needed to convey the intended effect. At the same time, these approaches are cumbersome and complicated, requiring sophisticated design tools and designer input with image processing experience. For this reason, designers are often hired to create libraries of stock personalization templates for customers to use, which limits the images that the customer can draw from for personalization.
A natural choice for incorporating personalized text into an image is a location where text already exists, such as a street sign, store sign, or banner. The automatic detection of text in images is an interesting and broadly studied problem. The problem can be further grouped into two subcategories: detecting and recognizing text in documents, and finding text in natural scenes. Document text detection has been tackled extensively by researchers, and is a precursor to optical character recognition (OCR) and other document recognition technologies. However, text detection techniques applicable to documents work at best poorly, and often not at all, on text found in real image scenes, as the text can bear different perspectives and can vary significantly in many respects, such as size, location, shading, font, etc. Furthermore, the detection algorithm may be confused by other details and structures in the image. State-of-the-art techniques generally make simplifying assumptions and thus constrain themselves to a subset of the general problem. For example, in license plate recognition, the license plate images are usually obtained in a controlled environment with little variation in perspective, location, angle, distance, etc. Furthermore, many of these algorithms are computationally costly, which renders them ill-suited for real-time or interactive applications.
What are therefore needed are convenient and automated systems and methods that facilitate automatically detecting text regions in natural scenes in electronic images for use in image personalization and other applications.
In one aspect, a computer-implemented method for automatically detecting text in electronic images of natural scenes comprises receiving an electronic image for analysis, performing an edge-detection algorithm on the electronic image, identifying closed contours in the electronic image as a function of detected edges, and establishing links between closed contours. The method further comprises identifying candidate text lines as a function of the identified closed contours, classifying candidate text lines as being text or non-text regions, and outputting, via a graphical user interface (GUI), verified text regions in the electronic image to a user.
According to another aspect, a computerized system that facilitates automatically detecting text in electronic images of natural scenes comprises a memory that stores computer-executable instructions, and a processor configured to execute the instructions, the instructions comprising receiving an electronic image for analysis, performing an edge-detection algorithm on the electronic image, and identifying closed contours in the electronic image as a function of detected edges. The processor is further configured to execute stored instructions for establishing links between closed contours, identifying candidate text lines as a function of the identified closed contours, and classifying candidate text lines as being text or non-text regions. The system further includes a graphical user interface (GUI) on which verified text regions in the electronic image are displayed to a user.
The file of this patent contains at least one drawing executed in color. Copies of this patent with color drawing(s) will be provided by the United States Patent and Trademark Office upon request and payment of the necessary fee.
The systems and methods described herein provide an efficient approach for finding text in natural scenes such as photographic images, digital and/or electronic images, and the like. The described approach exploits edge information (e.g., edges of structures or objects in the images) that can be obtained from known edge detection techniques or algorithms. The approach relies on the observation that edges from text characters form closed contours even in the presence of reasonable levels of noise. Closed contour linking and candidate text line forming are two additional features of the described approach that will be presented in additional detail herein. Finally, a candidate text line classifier is applied to further screen out false-positive text identifications.
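By way of a non-limiting illustration, the edge-detection and closed-contour stage might be sketched in Python as follows. The use of OpenCV, the Canny thresholds, and the minimum-area filter are assumptions of this sketch rather than values prescribed by the embodiments.

```python
import cv2

def find_closed_contours(image_path, low=100, high=200, min_area=20):
    """Detect edges and keep contours large enough to be character candidates."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Any suitable edge detector can be used; Canny is one common choice.
    edges = cv2.Canny(gray, low, high)
    # Text characters tend to produce closed contours in the edge map,
    # even in the presence of moderate noise (OpenCV 4.x signature).
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```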
The herein described systems and methods find potential application in several domains. One example is image personalization, wherein a personalized text message is incorporated into the image as a natural effect. This invention can be used to identify for the user those regions in the image containing existing text that can potentially be replaced with a personalized message. The invention can also be used as a smart preprocessing step towards assessing the “suitability for personalization” (SFP) metric of an image, which is described in greater detail in U.S. patent application Ser. No. 13/349,751, entitled “Methods and system for analyzing and rating images for personalization”, filed on Jan. 13, 2012, the entirety of which is hereby incorporated by reference herein. Briefly summarized, in determining if an image is suited for text-based personalization, locations containing existing text (e.g., signage, banners, and the like) typically provide an important indication, as they suggest natural image regions for placing personalized text messages. Thus, the ability to accurately and efficiently find text embedded in natural scenes is useful for determining an effective SFP and for serving as a design aid in image personalization. Many other applications are envisioned, e.g., image understanding and recognition, security, surveillance, license plate recognition, etc. The treatment of detected text regions depends on the application in which the described approach is used. For example, in image personalization, the identified text regions can be visibly (to a user) marked up or highlighted and presented to the user via a graphical user interface (GUI) on or associated with a computer 50.
The computer 50 can be employed as one possible hardware configuration to support the systems and methods described herein. It is to be appreciated that although a standalone architecture is illustrated, any suitable computing environment can be employed in accordance with the present embodiments. For example, computing architectures including, but not limited to, standalone, multiprocessor, distributed, client/server, minicomputer, mainframe, supercomputer, digital, and analog can be employed in accordance with the present embodiments.
The computer 50 includes a processing unit (not shown) that executes, and a system memory (not shown) that stores, one or more sets of computer-executable instructions (e.g., modules, programs, routines, algorithms, etc.) for performing the various functions, procedures, methods, protocols, techniques, etc., described herein. The computer can further include a system bus (not shown) that couples various system components including the system memory to the processing unit. The processing unit can be any of various commercially available processors. Dual microprocessors and other multi-processor architectures also can be used as the processing unit.
As used herein, “algorithm” or “module” refers to a set of computer-executable instructions persistently stored on a computer-readable medium (e.g., a memory, hard drive, disk, flash drive, or any other suitable storage medium). Moreover, the steps of the methods described herein are executed by a computer and/or processor unless otherwise specified as being performed by a user.
The computer 50 typically includes at least some form of computer readable media. Computer readable media can be any available media that can be accessed by the computer. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above can also be included within the scope of computer readable media.
A user may enter commands and information into the computer through a keyboard (not shown), a pointing device (not shown), a mouse, thumb pad, voice input, stylus, touchscreen, etc. The computer 50 can operate in a networked environment using logical and/or physical connections to one or more remote computers, such as a remote computer(s). The logical connections depicted include a local area network (LAN) and a wide area network (WAN). Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
At 276, after both closed contours from the initially selected link have been extended, the total number of contours in the sequence is checked. If a predetermined number (e.g., 4) or more contours are present, the sequence is identified as a candidate text line. All closed contours in the sequence, and all links associated with those closed contours, are then removed. At 278, the method (e.g., steps 272, 274, and 276) is iterated to consider another link, until all links have been traversed. At 279, candidate text lines are extended, if desired.
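A minimal sketch of this line-forming loop, under the assumption that the links surviving the pairwise criteria are represented as pairs of contour indices, might read as follows. The data structures and names are illustrative only, and branching or cyclic link configurations are not handled in this simplified version.

```python
def form_candidate_text_lines(links, min_contours=4):
    """Chain linked contours into sequences; keep sufficiently long ones."""
    links = set(links)
    lines = []
    while links:
        a, b = links.pop()               # select an initial link (272)
        sequence = [a, b]
        extended = True                  # extend both ends of the sequence (274)
        while extended:
            extended = False
            for link in list(links):
                if sequence[-1] in link:
                    nxt = link[0] if link[1] == sequence[-1] else link[1]
                    sequence.append(nxt)
                    links.remove(link)
                    extended = True
                elif sequence[0] in link:
                    prv = link[0] if link[1] == sequence[0] else link[1]
                    sequence.insert(0, prv)
                    links.remove(link)
                    extended = True
        # Keep the sequence only if enough contours are chained together (276).
        if len(sequence) >= min_contours:
            lines.append(sequence)
    return lines
```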
First, the spatial distance between the two closed contours must fall within a threshold T, computed from the widths w1, w2 and heights h1, h2 of the two contours as:

T = ½(w1+w2) + ½(h1+h2)·m′,

where m′ is a positive multiplicative coefficient (e.g., 0.5 or some other predetermined coefficient).
Second, a threshold is applied on the ratio of the heights of the two characters (e.g., 0.6 or some other predetermined threshold value). The following inequality is evaluated for the second criterion:

min(h1,h2)/max(h1,h2) ≥ Th,

where Th is the predetermined threshold. The second criterion states that the heights of the two characters need to be comparable.
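A sketch of these two geometric tests, assuming each closed contour is summarized by its axis-aligned bounding box and that distance is measured between bounding-box centers (an assumption of the sketch), might read:

```python
def passes_geometric_tests(box1, box2, m_prime=0.5, height_ratio=0.6):
    """box = (x, y, w, h); thresholds follow the example values in the text."""
    (x1, y1, w1, h1), (x2, y2, w2, h2) = box1, box2
    # Criterion 1: centers must lie within the size-dependent threshold T.
    T = 0.5 * (w1 + w2) + 0.5 * (h1 + h2) * m_prime
    cx1, cy1 = x1 + w1 / 2.0, y1 + h1 / 2.0
    cx2, cy2 = x2 + w2 / 2.0, y2 + h2 / 2.0
    dist = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2) ** 0.5
    if dist > T:
        return False
    # Criterion 2: the heights of the two characters must be comparable.
    return min(h1, h2) / max(h1, h2) >= height_ratio
```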
Finally, a constraint on color is implemented based on the assumption that the background pixels of neighboring closed contours or characters will be similar in a statistical sense, and likewise for text pixels. For each closed contour, the edge pixels are first dilated. Then a Gaussian mixture distribution with two modes is estimated on the chrominance channels of a luminance-chrominance color space of all pixels covered by the dilated contour. For example, the a* and b* channels of CIELAB space can be used. Next, an average of the Kullback-Leibler divergence between background modes and between text modes for the two characters is computed as below:
Dcolor(C1, C2) = ½(KL(G1,1, G2,1) + KL(G1,2, G2,2)),
where C1 and C2 represent any two closed contours/characters, while G1,1, G1,2 and G2,1, G2,2 are the background and text modes estimated for the two characters, respectively. The linkage between the two characters is retained if the distance Dcolor is within a threshold Tc, chosen to be 2 based on trial and error.
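One possible rendering of this color test in Python, using scikit-learn's Gaussian mixture estimation and the closed-form KL divergence between Gaussians, is sketched below. Putting the two modes of each mixture into correspondence by mixture weight (background pixels typically outnumber text pixels) is an assumption of this sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def kl_gauss(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence between two multivariate Gaussians."""
    k = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def color_distance(ab_pixels_1, ab_pixels_2):
    """ab_pixels_*: (N, 2) arrays of a*, b* values under each dilated contour."""
    g1 = GaussianMixture(n_components=2, random_state=0).fit(ab_pixels_1)
    g2 = GaussianMixture(n_components=2, random_state=0).fit(ab_pixels_2)
    # Order modes consistently (here: by mixture weight) so that background
    # is compared with background and text with text.
    o1, o2 = np.argsort(g1.weights_), np.argsort(g2.weights_)
    d = sum(kl_gauss(g1.means_[i], g1.covariances_[i],
                     g2.means_[j], g2.covariances_[j])
            for i, j in zip(o1, o2))
    return d / 2.0  # retain the link if this falls within the threshold Tc
```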
The heuristic for the aspect ratio feature determined at 350 is illustrated in the accompanying figures.
The features determined at 350, 352, and 354 are fed to a classifier at 356 that classifies the candidate text lines as text or non-text. Any suitable classifier can be utilized in this regard. In one example, a logistic regression classifier (also used in the region classification in U.S. application Ser. No. 13/349,751) is used. In another example, an adaptive logistic regression classifier is used, which can be viewed as a “boosted” version of logistic regression.
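As a non-limiting sketch of this classification stage, each candidate text line might be summarized by a small feature vector (the aspect ratio from 350 together with the other features from 352 and 354) and fed to a logistic regression classifier; the feature assembly and scikit-learn usage below are assumptions for illustration, not the patented implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aspect_ratio(box):
    """Aspect ratio feature for a candidate line's bounding box (x, y, w, h)."""
    x, y, w, h = box
    return w / float(h)

def train_line_classifier(X, y):
    """X: (N, d) feature vectors for labeled lines; y: 1 = text, 0 = non-text."""
    clf = LogisticRegression()
    clf.fit(X, y)
    return clf

# Each new candidate line is then classified from its features, e.g.:
#   is_text = clf.predict(features.reshape(1, -1))[0] == 1
```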
It will be understood that the foregoing methods, techniques, procedures, etc., are executable by a computer, a processor, or the like, such as the computer 50 described herein and/or the processor (not shown) comprised thereby and described with regard thereto.
The exemplary embodiments have been described with reference to the preferred embodiments. Obviously, modifications and alterations will occur to others upon reading and understanding the preceding detailed description. It is intended that the exemplary embodiments be construed as including all such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Number | Name | Date | Kind |
---|---|---|---|
4791675 | Deering et al. | Dec 1988 | A |
5350303 | Fox et al. | Sep 1994 | A |
5583949 | Smith et al. | Dec 1996 | A |
5757957 | Tachikawa | May 1998 | A |
6233364 | Krainiouk et al. | May 2001 | B1 |
8098891 | Lv et al. | Jan 2012 | B2 |
8233668 | Jing et al. | Jul 2012 | B2 |
20020102022 | Ma et al. | Aug 2002 | A1 |
20030095113 | Ma et al. | May 2003 | A1 |
20030198386 | Luo | Oct 2003 | A1 |
20060015571 | Fukuda et al. | Jan 2006 | A1 |
20070110322 | Yuille et al. | May 2007 | A1 |
20080002893 | Vincent et al. | Jan 2008 | A1 |
20090067709 | Gross et al. | Mar 2009 | A1 |
20090169105 | Song et al. | Jul 2009 | A1 |
20090285482 | Epshtein et al. | Nov 2009 | A1 |
20100157340 | Chen et al. | Jun 2010 | A1 |
20100246951 | Chen et al. | Sep 2010 | A1 |
20110098029 | Rhoads et al. | Apr 2011 | A1 |
20120045132 | Wong et al. | Feb 2012 | A1 |
20130127824 | Cohen et al. | May 2013 | A1 |
Entry |
---|
Kumar et al., “Text detection using multilayer separation in real scene images,” 10th IEEE Int'l Conf. Computer and Information Technology, 2010, pp. 1413-1417. |
Chen et al., “Robust Text Detection in Natural Images with Edge-enhanced Maximally Stable Extremal Regions,” IEEE International Conference on Image Processing (ICIP), Sep. 2011. |
Yi et al., “Text String Detection From Natural Scenes by Structure-Based Partition and Grouping,” IEEE Trans. Image Processing, vol. 20, No. 9, pp. 2594-2605, Sep. 2011. |
Russell et al., “Using multiple segmentations to discover objects and their extent in image collections,” IEEE CVPR, 2006. |
Gao et al., “An adaptive algorithm for text detection from natural scenes,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Dec. 2001, 6 pages. |
K. Jung et al., “Text Information Extraction in Images and Video: A Survey,” Pattern Recognition, vol. 37, No. 5, pp. 977-997, May 2004. |
C. C. Anagnostopoulos et al., “License Plate Recognition from Still Images and Video Sequences: A Survey,” IEEE Trans. Intell. Transport. Sys., vol. 9, No. 3, pp. 377-391, Sep. 2008. |
J. Friedman et al., “Additive Logistic Regression: A Statistical View of Boosting,” The Annals of Statistics, vol. 28, No. 2, pp. 337-407, 2000. |
Number | Date | Country | |
---|---|---|---|
20130330004 A1 | Dec 2013 | US |