Iterative recognition-guided thresholding and data extraction

Information

  • Patent Grant
  • Patent Number
    10,242,285
  • Date Filed
    Tuesday, July 19, 2016
  • Date Issued
    Tuesday, March 26, 2019
Abstract
Techniques for improved binarization and extraction of information from digital image data are disclosed in accordance with various embodiments. The inventive concepts include independently binarizing portions of the image data on the basis of individual features, e.g. per connected component, and using multiple different binarization thresholds to obtain the best possible binarization result for each portion of the image data independently binarized. Determining the quality of each binarization result may be based on attempted recognition and/or extraction of information therefrom. Independently binarized portions may be assembled into a contiguous result. In one embodiment, a method includes: identifying a region of interest within a digital image; generating a plurality of binarized images based on the region of interest using different binarization thresholds; and extracting data from some or all of the plurality of binarized images. Corresponding systems and computer program products are also disclosed.
Description
RELATED APPLICATIONS

This application is related to U.S. Provisional Patent Application No. 62/194,783, filed Jul. 20, 2015; U.S. Pat. No. 9,058,515, filed Mar. 19, 2014; U.S. Pat. No. 8,885,229, filed May 2, 2014; U.S. Pat. No. 8,855,375, filed Jan. 11, 2013; U.S. Pat. No. 8,345,981, filed Feb. 10, 2009; U.S. Pat. No. 9,355,312, filed Mar. 13, 2013; and U.S. Pat. No. 9,311,531, filed Mar. 13, 2014; each of which is herein incorporated by reference in its entirety.


FIELD OF INVENTION

The present invention relates to image capture and image processing. In particular, the present invention relates to capturing and processing digital images using a mobile device, and extracting data from the processed digital image using a recognition-guided thresholding and extraction process.


BACKGROUND OF THE INVENTION

Digital images having depicted therein an object inclusive of documents such as a letter, a check, a bill, an invoice, etc. have conventionally been captured and processed using a scanner or multifunction peripheral (MFP) coupled to a computer workstation such as a laptop or desktop computer. Methods and systems capable of performing such capture and processing are well known in the art and well adapted to the tasks for which they are employed.


More recently, the conventional scanner-based and MFP-based image capture and processing applications have shifted toward mobile platforms, e.g. as described in the related patent applications noted above with respect to capturing and processing images using mobile devices (U.S. Pat. No. 8,855,375), classifying objects depicted in images captured using mobile devices (U.S. Pat. No. 9,355,312, e.g. at column 9, line 9—column 15, line 28), and extracting data from images captured using mobile devices (U.S. Pat. No. 9,311,531, e.g. at column 18, line 25—column 27, line 16).


While these capture, processing, classification and extraction engines and methods are capable of reliably extracting information from certain objects or images, it is not possible to dynamically extract information from other objects, particularly objects characterized by a relatively complex background, and/or overlapping regions of foreground (e.g. text) and background. In practice, while it may be possible to reliably extract information from a simple document having a plain white background with dark foreground text and/or images imposed thereon, it is far more difficult, and often impossible, to do so from a document depicting one or more graphics (such as pictures, logos, etc.) as the background with foreground text and/or images imposed thereon, especially if the foreground and background overlap.


This problem arises primarily because it becomes significantly more difficult to distinguish the foreground from the background, especially in view of the fact that digital images are conventionally converted to bitonal (black/white) or grayscale color depth prior to attempting extraction. As a result, tonal differences between background and foreground are suppressed in converting the color channel information into grayscale intensity information or bitonal information.


This is an undesirable limitation that restricts users from using powerful extraction technology on an increasingly diverse array of documents encountered in the modern world and which are useful or necessary to complete various mobile device-mediated transactions or business processes.


For example, it is common for financial documents such as checks, credit cards, etc. to include graphics, photographs, or other imagery and/or color schemes as background upon which important financial information is displayed. The font and color of the foreground financial information may also vary from “standard” business fonts and/or colors, creating additional likelihood that discriminating between the foreground and background will be difficult or impossible.


Similarly, identifying documents such as driver's licenses, passports, employee identification, etc. frequently depict watermarks, holograms, logos, seals, pictures, etc. over which important identifying information may be superimposed in the foreground. To the extent these background and foreground elements overlap, difficulties are introduced into the discrimination process, frustrating or defeating the ability to extract that important information.


Therefore, it would be highly beneficial to provide new method, system and/or computer program product technology for extracting information from complex digital image data depicting highly similar foreground and background elements, and/or overlapping background and foreground elements.


SUMMARY

According to one embodiment, a computer-implemented method includes: identifying a region of interest within a digital image; generating a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extracting data from some or all of the plurality of binarized images.


In accordance with another embodiment, a system such as a mobile device includes a processor and logic integrated with and/or executable by the processor. The logic is configured, upon execution thereof, to cause the processor to: identify a region of interest within a digital image; generate a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extract data from some or all of the plurality of binarized images.


According to yet another embodiment, a computer program product includes a computer readable medium having embodied therewith computer readable program instructions configured to cause a processor, upon execution of the instructions, to: identify, using the processor, a region of interest within a digital image; generate, using the processor, a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extract, using the processor, data from some or all of the plurality of binarized images.


Other aspects and embodiments of the present invention will become apparent from the following detailed description, which, when taken in conjunction with the drawings, illustrate by way of example the principles of the invention.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates a network architecture, in accordance with one embodiment.



FIG. 2 shows a representative hardware environment that may be associated with the servers and/or clients of FIG. 1, in accordance with one embodiment.



FIG. 3 depicts a portion of a driver's license in a color rendition, according to one embodiment.



FIG. 4 depicts the same portion of the driver's license, in a grayscale rendition generated from the color image shown in FIG. 3, according to one embodiment.



FIG. 5 depicts a plurality of binary images generated by applying a plurality of different binarization thresholds to the grayscale image shown in FIG. 4, according to one embodiment.



FIGS. 6A and 6B depict a composite image generated by extracting and assembling high-confidence components from the plurality of thresholded images shown in FIG. 5.



FIG. 7 is a flowchart of a method, according to one embodiment.





DETAILED DESCRIPTION

The following description is made for the purpose of illustrating the general principles of the present invention and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations.


Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc.


It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless otherwise specified.


The present application refers to image processing of images (e.g. pictures, figures, graphical schematics, single frames of movies, videos, films, clips, etc.) captured by cameras, especially cameras of mobile devices. In particular, the presently disclosed inventive concepts concern determining optimum binarization parameters for recognizing and/or extracting features of an image, especially text. Determining optimum binarization parameters involves an iterative process whereby various binarization thresholds are applied to an image, and data are extracted from the binarized images to determine whether and to what degree the extraction result matches an expected result.
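By way of illustration only, this loop can be sketched as follows in Python. This is a minimal sketch, not the claimed method: the recognition engine and the expectation-scoring function are assumed to be supplied by the caller (hypothetical callables `recognize` and `score_against_expected`), and the binarization follows the simple intensity-cutoff rule discussed with reference to FIG. 5 below.

```python
import numpy as np

def recognition_guided_threshold(gray_region, thresholds, recognize, score_against_expected):
    """Try several binarization thresholds and keep the best-scoring extraction result.

    gray_region: 2-D numpy array of grayscale intensities (0-255).
    thresholds: iterable of intensity cutoffs to try.
    recognize: callable(binary_image) -> extracted text (e.g. an OCR engine).
    score_against_expected: callable(text) -> confidence in [0, 1] versus the a priori expectation.
    """
    best = (None, None, -1.0)  # (threshold, text, confidence)
    for t in thresholds:
        # Pixels darker than the threshold become black (0); all others become white (255).
        binary = np.where(gray_region < t, 0, 255).astype(np.uint8)
        text = recognize(binary)
        confidence = score_against_expected(text)
        if confidence > best[2]:
            best = (t, text, confidence)
    return best
```

In practice, the scoring callback would compare the recognition result against the a priori expectations developed during the training phase described below.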


According to one embodiment, a computer-implemented method includes: identifying a region of interest within a digital image; generating a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extracting data from some or all of the plurality of binarized images.


In accordance with another embodiment, a system such as a mobile device includes a processor and logic integrated with and/or executable by the processor. The logic is configured, upon execution thereof, to cause the processor to: identify a region of interest within a digital image; generate a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extract data from some or all of the plurality of binarized images.


According to yet another embodiment, a computer program product includes a computer readable medium having embodied therewith computer readable program instructions configured to cause a processor, upon execution of the instructions, to: identify, using the processor, a region of interest within a digital image; generate, using the processor, a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extract, using the processor, data from some or all of the plurality of binarized images.


As understood herein, a mobile device is any device capable of receiving data without having power supplied via a physical connection (e.g. wire, cord, cable, etc.) and capable of receiving data without a physical data connection (e.g. wire, cord, cable, etc.). Mobile devices within the scope of the present disclosures include exemplary devices such as a mobile telephone, smartphone, tablet, personal digital assistant, iPod®, iPad®, BLACKBERRY® device, etc.


However, as it will become apparent from the descriptions of various functionalities, the presently disclosed mobile image processing algorithms can be applied, sometimes with certain modifications, to images coming from scanners and multifunction peripherals (MFPs). Similarly, images processed using the presently disclosed processing algorithms may be further processed using conventional scanner processing algorithms, in some approaches.


Of course, the various embodiments set forth herein may be implemented utilizing hardware, software, or any desired combination thereof. For that matter, any type of logic may be utilized which is capable of implementing the various functionality set forth herein.


As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as “logic,” “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.


Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, processor, or device.


A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband, as part of a carrier wave, an electrical connection having one or more wires, an optical fiber, etc. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.


Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.


Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).


Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.



FIG. 1 illustrates an architecture 100, in accordance with one embodiment. As shown in FIG. 1, a plurality of remote networks 102 are provided including a first remote network 104 and a second remote network 106. A gateway 101 may be coupled between the remote networks 102 and a proximate network 108. In the context of the present architecture 100, the networks 104, 106 may each take any form including, but not limited to a LAN, a WAN such as the Internet, public switched telephone network (PSTN), internal telephone network, etc.


In use, the gateway 101 serves as an entrance point from the remote networks 102 to the proximate network 108. As such, the gateway 101 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 101, and a switch, which furnishes the actual path in and out of the gateway 101 for a given packet.


Further included is at least one data server 114 coupled to the proximate network 108, and which is accessible from the remote networks 102 via the gateway 101. It should be noted that the data server(s) 114 may include any type of computing device/groupware. Coupled to each data server 114 is a plurality of user devices 116. Such user devices 116 may include a desktop computer, lap-top computer, hand-held computer, printer or any other type of logic. It should be noted that a user device 111 may also be directly coupled to any of the networks, in one embodiment.


A peripheral 120 or series of peripherals 120, e.g., facsimile machines, printers, networked and/or local storage units or systems, etc., may be coupled to one or more of the networks 104, 106, 108. It should be noted that databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 104, 106, 108. In the context of the present description, a network element may refer to any component of a network.


According to some approaches, methods and systems described herein may be implemented with and/or on virtual systems and/or systems which emulate one or more other systems, such as a UNIX system which emulates an IBM z/OS environment, a UNIX system which virtually hosts a MICROSOFT WINDOWS environment, a MICROSOFT WINDOWS system which emulates an IBM z/OS environment, etc. This virtualization and/or emulation may be enhanced through the use of VMWARE software, in some embodiments.


In more approaches, one or more networks 104, 106, 108, may represent a cluster of systems commonly referred to as a “cloud.” In cloud computing, shared resources, such as processing power, peripherals, software, data, servers, etc., are provided to any system in the cloud in an on-demand relationship, thereby allowing access and distribution of services across many computing systems. Cloud computing typically involves an Internet connection between the systems operating in the cloud, but other techniques of connecting the systems may also be used.



FIG. 2 shows a representative hardware environment associated with a user device 116 and/or server 114 of FIG. 1, in accordance with one embodiment. Such figure illustrates a typical hardware configuration of a workstation having a central processing unit 210, such as a microprocessor, and a number of other units interconnected via a system bus 212.


The workstation shown in FIG. 2 includes a Random Access Memory (RAM) 214, Read Only Memory (ROM) 216, an I/O adapter 218 for connecting peripheral devices such as disk storage units 220 to the bus 212, a user interface adapter 222 for connecting a keyboard 224, a mouse 226, a speaker 228, a microphone 232, and/or other user interface devices such as a touch screen and a digital camera (not shown) to the bus 212, communication adapter 234 for connecting the workstation to a communication network 235 (e.g., a data processing network) and a display adapter 236 for connecting the bus 212 to a display device 238.


The workstation may have resident thereon an operating system such as the Microsoft Windows® Operating System (OS), a MAC OS, a UNIX OS, etc. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned. A preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology. Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.


An application may be installed on the mobile device, e.g., stored in a nonvolatile memory of the device. In one approach, the application includes instructions to perform processing of an image on the mobile device. In another approach, the application includes instructions to send the image to a remote server such as a network server. In yet another approach, the application may include instructions to decide whether to perform some or all processing on the mobile device and/or send the image to the remote site.


In various embodiments, the presently disclosed methods, systems and/or computer program products may utilize and/or include any of the functionalities disclosed in related U.S. Patents, Patent Publications, and/or Patent Applications incorporated herein by reference. For example, digital images suitable for processing according to the presently disclosed algorithms may be subjected to image processing operations, such as page detection, rectangularization, detection of uneven illumination, illumination normalization, resolution estimation, blur detection, classification, data extraction, etc.


In more approaches, the presently disclosed methods, systems, and/or computer program products may be utilized with, implemented in, and/or include one or more user interfaces configured to facilitate performing any functionality disclosed herein and/or in the aforementioned related patent applications, publications, and/or patents, such as an image processing mobile application, a case management application, and/or a classification application, in multiple embodiments.


In still more approaches, the presently disclosed systems, methods and/or computer program products may be advantageously applied to one or more of the use methodologies and/or scenarios disclosed in the aforementioned related patent applications, publications, and/or patents, among others that would be appreciated by one having ordinary skill in the art upon reading these descriptions.


It will further be appreciated that embodiments presented herein may be provided in the form of a service deployed on behalf of a customer to offer service on demand.


Intelligent, Iterative Recognition-Guided Thresholding


In general, the presently disclosed inventive concepts encompass the notion of performing a recognition-guided thresholding and extraction process on individual regions of interest of a digital image to maximize the quality of the processed image (preferably a binarized image, since a great number of OCR engines rely on binary images as input) for subsequent extraction of information therefrom. The process is iterative in that individual regions of interest are identified, and subjected to a plurality of thresholding and extraction iterations, in an attempt to identify the best quality image for extraction. The process is intelligent in that a training phase is employed from which a priori expectations may be developed regarding the nature (e.g. identity, location, size, shape, color, etc.) of information depicted in images of objects belonging to a common classification, e.g. driver's licenses issued by a particular state. These a priori expectations may be leveraged in subsequent operations directed to extracting information from other objects belonging to the same classification; for example, by matching an expected region of interest identity with an expected region of interest location, it is possible to acquire confidence in the extraction result. For instance, and as will be described in further detail below, by matching a region of interest location with an expected region of interest identity, the result of extraction from various image “frames” subjected to different threshold levels may be evaluated to determine whether the extraction at one particular threshold is “correct.”


In the training phase, image features (such as the bounding box locations and OCR results from various regions of interest) are determined for a plurality of images depicting representative exemplars of a class of object, such as a document or person. The features are determined using a learn-by-example classification technique. Features are analyzed to determine characteristic features of the subject of the image. For example, characteristic features include any suitable feature upon which a person or item may be identified, such as the dynamic location range for the region (i.e. a subset of pixels within the image in which a field or object is statistically likely to be located, which may preferably be determined based on observing location of many exemplars in the training phase); median height, width, or other dimension(s) of each region; appropriate character set for each region; text or image formatting for each region; text color for each region; background color for each region; text alignment for each region; etc. as would be understood by a person having ordinary skill in the art upon reading the present descriptions.
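By way of example only, such characteristic features might be collected into a per-region profile as sketched below. The structure and field names are illustrative assumptions, not part of the present disclosure; the sketch assumes training has produced one observed bounding box per exemplar for each region of interest.

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class RegionProfile:
    """A priori expectations for one region of interest, learned from training exemplars."""
    name: str                      # e.g. "expiration_date" (illustrative)
    dynamic_location_range: tuple  # (x_min, y_min, x_max, y_max) covering all observed boxes
    median_height: float
    median_width: float
    character_set: str             # e.g. "0123456789-"
    trouble_region: bool = False   # flagged when classical extraction proved unreliable

def build_profile(name, observed_boxes, character_set, trouble=False):
    """observed_boxes: list of (x0, y0, x1, y1) tuples, one per training exemplar."""
    xs0, ys0, xs1, ys1 = zip(*observed_boxes)
    return RegionProfile(
        name=name,
        dynamic_location_range=(min(xs0), min(ys0), max(xs1), max(ys1)),
        median_height=median(y1 - y0 for (_, y0, _, y1) in observed_boxes),
        median_width=median(x1 - x0 for (x0, _, x1, _) in observed_boxes),
        character_set=character_set,
        trouble_region=trouble,
    )
```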


A set of characteristic features is preferably defined as corresponding to objects belonging to a particular class of object based on this training. In this manner, it is possible to subsequently facilitate identification of characteristic features based on object class, and vice-versa, in various embodiments. For example, an image may be labeled as depicting a particular class of object, and features of the individual object belonging to that particular class may be determined based in whole or in part on the class definition including the characteristic object features. Conversely, an object may be determined to belong to the particular class based on determining an image of the object depicts some or all of the characteristic features defined in the class definition.


A trained system, algorithm, technique, etc. as referenced above is provided a test or sample image, e.g. an image depicting a document belonging to a particular class of objects for which the system, algorithm, technique, etc. was trained. Using the test image, the presently disclosed inventive concepts perform an initial classification and extraction operation such as described in U.S. Pat. No. 9,355,312 and/or U.S. Pat. No. 9,311,531, and attempt to extract as much information as possible from the image based on the object class and corresponding extraction model.


However, for various reasons including background/foreground overlap, complex background, etc., at least some of the information cannot be reliably extracted. For example, in one embodiment an image depicts a driver's license wherein the name, date of birth, expiration date, etc. partially overlap with a state seal depicted in the background of the driver's license and a hologram overlaying the text (e.g. embedded in a laminate layer overlaying the foreground text and the background state seal). Worse still, the name, date of birth, expiration date, etc. are depicted in a font color substantially similar to the color of the state seal, but significantly contrasting with other portions of the driver's license background.


In preferred embodiments, training therefore may also encompass the initial attempt to extract information, such that particular elements within the image which are consistently difficult or impossible to extract accurately may be identified. This “trouble region” information may be included as part of the characteristic features of the object, such that computational cost of performing iterative, recognition-guided thresholding as described further below is minimized.


As will be appreciated by skilled artisans, it is incredibly difficult if not impossible to define appropriate parameters for extracting underlying information such as text from an image that depicts text or other foreground regions having both substantial similarity and substantial contrast with the background region(s) they respectively overlay/overlap. This is in part because extracting underlying information relies in some form on reducing the color depth of the received image, e.g. from RGB to grayscale or bi-tonal, before performing recognition, e.g. OCR, intelligent character recognition (ICR), etc. as would be understood by a person having ordinary skill in the art upon reading the present descriptions. As a result, where a region depicts both significantly similar and significantly contrasting foreground and background elements, it is not possible to define color suppression (e.g. binarization) parameters which generate a legible result for both the significantly similar foreground/background elements and the significantly contrasting foreground/background elements.


Instead, color suppression parameters may be configured to boost the contrast between the significantly similar foreground/background elements, but this generally renders the significantly contrasting foreground/background elements illegible. In the opposite scenario, e.g. without contrast boosting, the significantly contrasting foreground/background elements are legible, but the significantly similar foreground/background elements are not. In rare circumstances, it may be possible to achieve an intermediately contrasting result by boosting contrast only slightly, but in practice this approach does not adequately facilitate extraction of all elements within the region of interest.


In order to accomplish accurate and reliable extraction of both significantly similar and significantly contrasting foreground/background elements within a single image or region of interest of an image, the presently disclosed inventive concepts propose an iterative, intelligent, recognition-guided thresholding and extraction process. In essence, and with reference to a string of text characters as the exemplary embodiment, the thresholding process may be performed in a manner that renders a legible result on a per-character basis, and upon achieving a legible result, extraction is performed on the legible result, and the process proceeds to obtain a legible result for other characters in the string. Upon accurately extracting all individual characters, the string may be reconstructed from the aggregate extraction results, including the extracted portion(s) of the image, as well as the result of extracting the region of interest (e.g. OCR result). As described herein, this basic procedure is referred to as recognition-guided thresholding.


Of course, it should be understood that recognition-guided thresholding as generally described herein may be performed on the basis of any suitable confidence criterion, and need not evaluate textual information as a means of deriving such confidence information. For example, in various approaches image features may serve as the basis for deriving confidence.


In one implementation, a recognition-guided thresholding process may identify a region of interest depicting one or more image features. Characteristics of the image features (e.g. size, location, shape, color profile, etc.) may be known based on a training operation such as a learn-by-example classification training operation. For example, a class of documents includes an image feature comprising an embedded security mark that overlaps with or is otherwise partially obscured by background textures appearing in the document. In order to authenticate the document, it is necessary to extract and verify the security mark. So as to overcome the apparent obscurity or overlap, it may be advantageous to apply an iterative thresholding process as described herein, and evaluate confidence of a result under each threshold on the basis of image features in the thresholded region matching corresponding image features in thresholded training images.


Of course, any other equivalent means of determining confidence as to whether a particular image feature matches an expected image feature may be employed without departing from the scope of the present disclosures.


Recognition-guided thresholding and extraction may also preferably include color normalization as an aspect of improving extraction robustness and accuracy. As discussed herein, color normalization should be understood as normalizing intensity values across the various color channels (e.g. R, B and G) to “stretch” each channel onto a single normalized scale. Most preferably, color normalization is performed prior to thresholding, and may be performed for each region of interest independently or on the aggregate of all regions of interest. This is particularly advantageous where foreground and background are similar and relatively dark, and assists in discriminating between foreground and background by “stretching” the values across the entire intensity range.


For instance, an image region is characterized by pixels having an RGB color profile. No pixel in the region has an intensity value greater than 100 in any color channel. Each color channel permits intensity values in a range from 0-255. In this scenario, color normalization may effectively set the maximum intensity value of 100 as corresponding to the maximum value in the range, and “stretch” the intervening values across the entire color space, such that each difference of 1 intensity unit in the original image becomes effectively a difference of 2.55 intensity units.
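A minimal numpy sketch of this per-channel stretch is shown below; it anchors both the minimum and maximum of each channel, one common way to realize the "stretch" described, and for a channel whose values span 0-100 it maps each original unit of intensity onto 2.55 units, as in the example above.

```python
import numpy as np

def normalize_channels(rgb):
    """Stretch each channel of an H x W x 3 uint8 image onto the full 0-255 scale."""
    rgb = rgb.astype(np.float32)
    out = np.empty_like(rgb)
    for c in range(rgb.shape[2]):
        channel = rgb[..., c]
        lo, hi = channel.min(), channel.max()
        if hi > lo:
            out[..., c] = (channel - lo) * (255.0 / (hi - lo))
        else:
            out[..., c] = channel  # flat channel; nothing to stretch
    return out.astype(np.uint8)
```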


Of course, it should be understood that the iterative thresholding and extraction process described above is equally applicable to extraction of non-textual information, such as lines or other document structures, graphical elements, etc., as long as there is a quality criterion (as akin to OCR confidence for characters, e.g. a classification-based or other feature-matching confidence measure) evaluating the result. For example, consider a graphical element depicting a gradient of color, which progresses from contrasting with the background to substantially representing the background color the graphical element overlays. In such circumstances, it is similarly possible to progress along the gradient (or other pattern or progression) using an iterative thresholding process to extract a legible or clear version of the graphic.


In practice, and according to another exemplary approach based on connected components, images of a particular class of object such as a document may depict a plurality of regions of interest each corresponding to one or more of photograph(s), document structure, graphical elements, text fields, etc. A plurality of such images are used in a training phase as described above, and subsequent to training an image depicting a plurality of regions of interest is analyzed.


As referred-to herein, it should be understood that the term “connected component” refers to any structure within a bitonal image that is formed from a contiguous set of adjacent black pixels. For example connected components may include lines (e.g. part of a document's structure such as field boundaries in a form), graphical elements (e.g. photographs, logos, illustrations, unique markings, etc.), text (e.g. characters, symbols, handwriting, etc.) or any other feature depicted in a bitonal image. Accordingly, in one embodiment a connected component may be defined within a bitonal image according to the location of the various pixels from which the component is formed.
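For reference only, connected components of a bitonal image can be enumerated with standard tooling; the sketch below uses scipy.ndimage on a binary array in which black (foreground) pixels are encoded as 1, an encoding assumption made here for illustration rather than a requirement of the present disclosure.

```python
from scipy import ndimage

def connected_components(binary):
    """binary: 2-D array with foreground (black) pixels as 1 and background as 0.

    Returns one (row_slice, column_slice) bounding box per connected component.
    """
    labeled, _ = ndimage.label(binary)  # 4-connectivity by default
    return ndimage.find_objects(labeled)
```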


The term “image feature” is to be understood as inclusive of connected components, but also includes such components as may be defined within color spaces other than a bitonal image. Thus, an image feature includes any structure of an image that is formed from a contiguous set of adjacent pixels. The image feature may be defined according to the location of constituent pixels as noted above for connected components, but may also include other information such as intensity information (e.g. in one or more color channels).


Based on the training phase, each region of interest expected to appear is known a priori, preferably both in terms of the statistically-likely location of the region, as well as an expected identity of one or more image features and/or connected components located within the region (including an expected set of possible identities, such as a subset of alphanumeric characters, pixel color channel values, feature shape, size, etc. or other identifying characteristics of one or more connected components located within the region of interest.)


This information is utilized to perform conventional classification and extraction, by which a plurality of expected regions of interest are successfully extracted, while others are either not found or imperfectly extracted.


One or more particular regions of interest, e.g. depicting a field partially or wholly overlaying a seal, logo, or other similar background texture, may be known to be among the “trouble regions” defined in the classification, and/or may be determined “trouble regions” based on achieving imperfect/incomplete extraction results from the conventional approach. In response to determining a trouble region exists in the digital image, in some approaches a determination may be made that recognition-guided thresholding should be applied to the particular trouble regions, and/or optionally on all regions of interest in the digital image.


Each of the particular regions of interest is subjected to a color normalization process to stretch the intensity values in each color channel, thereby enhancing the ability to distinguish between foreground and background elements.


In one exemplary approach, where the confidence measure is OCR confidence and the primary but nonexclusive objective is to threshold textual information, each particular region is matched to a corresponding region of interest known from the training set, e.g. based on its location, and is rendered (e.g. in grayscale) using channel weights derived from the analysis of foreground and background colors so that the foreground in the rendered image appears dark against a lighter background. If the foreground is known or determined to be brighter than the background, the rendered image is inverted.
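A sketch of this rendering step is given below, under the assumption that the training analysis supplies three per-channel weights chosen to darken the expected foreground color relative to the background; the weights themselves are parameters of the sketch, not values taken from the present disclosure.

```python
import numpy as np

def render_region(rgb, channel_weights, foreground_brighter=False):
    """Render an RGB region to grayscale with weights chosen to darken the foreground.

    rgb: H x W x 3 uint8 array; channel_weights: three weights, e.g. summing to roughly 1.
    """
    weights = np.asarray(channel_weights, dtype=np.float32)
    gray = (rgb.astype(np.float32) @ weights).clip(0, 255)
    if foreground_brighter:
        gray = 255.0 - gray  # invert so the foreground is always the darker element
    return gray.astype(np.uint8)
```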


For each region of interest, a plurality of thresholds are applied to the rendered image (preferably a grayscale image) of the rectangular region encompassing the region of interest. Each threshold represents a different intensity value along a range of intensity values (e.g. grayscale intensity), and generates a different binary image with a number of connected components. Each component is subjected to a recognition process such as optical character recognition in an attempt to extract information therefrom, e.g. character identity. As will be understood by those having ordinary skill in the art, the OCR may achieve varied results across the set of connected components. However, it is extremely likely that in at least one such binary image the component will be legible and the extraction will match expected extraction results based on the training set. For example, the extracted character may match an expected character or match one of a set of possible expected characters with high confidence, and is deemed a candidate on this basis.


While the above example contemplates performing a plurality of thresholding operations on a particular region, it is also within the scope of the present disclosures to perform thresholding on a per-component or a per-feature basis. For example, in one approach a particular region may depict text having a known character spacing, or depict one or more image features according to a known pattern. It may be advantageous in some approaches to perform thresholding on individual features rather than the region as a whole. For example, the region may be divided according to the known character spacing or pattern, and each subregion defined therein may be separately subjected to thresholding, which may utilize different parameters than a thresholding process applied to the region as a whole.


In this manner, it is possible to tailor the thresholding to the individual feature or component desired for extraction, as well as for an immediately surrounding background region, without needing to consider the differences between the foreground and background of the region as a whole.


For instance, in one approach a credit card may depict a credit card number comprising a plurality of characters arranged in a line and having equal spacing between each character. The credit card number as a whole may be encompassed within a region of interest, which may be matched as described above. In addition or in the alternative to performing region-based thresholding as above, thresholding may include subdividing the region into a plurality (e.g. 16) subregions of interest, and performing thresholding on each individual region. This may be particularly useful in situations where, e.g., the credit card depicts a complex background whereby some but not all of the characters in the credit card number are in “trouble spots” and overlap or are obscured by unique background elements, such that no single threshold applied to the region as a whole can identify character(s) overlapping one or more of the unique background elements. By isolating those characters, thresholding may be specifically performed on the “trouble spot” to maximize the likelihood of achieving a candidate result with sufficient confidence for extraction.
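As a sketch of this subdivision (assuming a horizontally oriented field with equally spaced characters, as in the credit card example), the region can be cut into fixed-width subregions and each handed independently to the per-threshold loop shown earlier:

```python
import numpy as np

def split_by_character_spacing(gray_region, char_count):
    """Divide a horizontally oriented field into char_count equal-width subregions."""
    _, width = gray_region.shape
    edges = np.linspace(0, width, char_count + 1).astype(int)
    return [gray_region[:, edges[i]:edges[i + 1]] for i in range(char_count)]

# Each subregion (e.g. one of 16 character cells of a card number) can then be thresholded
# on its own, so a "trouble spot" character receives its own binarization parameters.
```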


In any event, as the threshold value diminishes the amount of black in the binary image is reduced and connected components become thinner and break into smaller components. Performing OCR on the sequence of progressively thinning components associated with diminishing threshold levels with significant overlap of their bounding boxes generates a sequence of candidates, and as the components break up a formerly single candidate with a wider bounding box may be replaced by a more confident pair or triplet of components associated with a lower threshold level. The candidates with highest confidences form the final string.
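One plausible way to reduce such a candidate pool to a final string is sketched below. It is a simplification of the overlap tracking described above: each candidate is assumed to carry a horizontal bounding box, a recognized character, a confidence, and the threshold that produced it, and candidates whose extents overlap substantially are treated as competing readings of the same character position.

```python
def assemble_string(candidates, overlap=0.5):
    """candidates: list of dicts with keys 'box' (x0, x1), 'char', 'confidence', 'threshold'."""
    def overlaps(a, b):
        left, right = max(a[0], b[0]), min(a[1], b[1])
        return (right - left) > overlap * min(a[1] - a[0], b[1] - b[0])

    groups = []
    for cand in sorted(candidates, key=lambda c: c['box'][0]):
        for group in groups:
            if overlaps(group[0]['box'], cand['box']):
                group.append(cand)   # competing reading of an existing character position
                break
        else:
            groups.append([cand])    # a new character position

    winners = [max(group, key=lambda c: c['confidence']) for group in groups]
    return ''.join(w['char'] for w in sorted(winners, key=lambda w: w['box'][0]))
```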


In some approaches, since the highest confidence candidates for a particular character/feature/component, etc. may include several (potentially consecutive) binarization threshold levels, it may be advantageous to choose from among the several highest confidence candidates. For instance, in situations where intensity values are minimized across multiple extraction results to assemble a contiguous extracted result, it may be useful to select one of the highest confidence candidates having an intensity value closest to a mean, median, etc. intensity of other frames to be used in assembling the final extraction result. Accordingly, in one embodiment the presently disclosed inventive concepts include techniques for determining from which thresholded image(s) to select a corresponding bounding box into the final binary rendition of the original region of interest.


Upon identifying the threshold range for each candidate in the region of interest, the various bounding boxes (and/or extraction results obtained therefrom) may be assembled into a cohesive result. As noted in further detail herein, in some embodiments where the various portions of the image corresponding to each component are to be assembled, it is advantageous to select a legible bounding box (but not necessarily the one with the highest confidence character) for some or all of the components in order to generate a more consistent visual result.


As another advantage, the presently disclosed inventive, recognition-guided thresholding process provides superior accuracy and reliability even outside the context of foreground elements that overlap with similar background elements. For instance, and as known in the art, extraction may be frustrated or rendered impossible due to poor image quality, e.g. arising from insufficient illumination in the capture environment, presence of artifacts such as shadows, etc.


To address these common problems, conventional image processing algorithms seek to improve the quality of the image as a whole, which yields moderate improvements to extraction capability, e.g. by correcting uniformly insufficient illumination and permitting improved distinction between foreground and background elements. However, these conventional solutions approach the rectification process from the perspective of the image, rather than individual elements of the image (e.g. connected components), and thus are limited in applicability and efficacy because adjustments that may be appropriate for one portion of an image are not appropriate or are less appropriate for other portions of the image.


By contrast, the presently disclosed inventive concepts can provide extraction that is robustly capable of extracting only the information from the image that most closely matches expected information based on training, both in terms of information content (e.g. text character identity) as well as location (e.g. center pixel, dynamic region, etc.). In particularly preferred approaches, extracted information matches the expected information in terms of information content, location, and size.


For instance, and as will be appreciated by persons having ordinary skill in the art upon reading the present descriptions, insufficient contrast between foreground and background in a digital image can have the effect of making foreground elements appear larger, due to “blobifying” of the foreground element (see, e.g. images 502-520 of FIG. 5, where the “0” and “6” characters are connected as a single “blob” that is not resolved until image 522). As a result, in an image having insufficient contrast, an expected element may be identifiable, but its precise extent may be unreliable due to obscured boundaries between foreground and background, or its identity may be in question because the element is not fully contained within the dynamic region where the element is expected based on training.


Similarly, when contrast is excessive, a single element in an image may appear “broken” into several constituent elements (e.g. connected components) which may be unrecognizable or problematically represent an incorrect element (e.g. a capital letter “H” appearing as two adjacent “l” or “1” characters when its cross-bar is broken or missing). By leveraging the expected identity, location, and size, the presently disclosed concepts may more accurately and robustly determine, e.g. based on the width of spacing between the two “l” or “1” characters, the location within the image, and/or the identity of the components extracted from a corresponding location in training, that the component is actually a capital H instead of adjacent “l” or “1” characters.


In addition and/or alternatively, the presently disclosed inventive concepts may include determining a more appropriate image intensity to utilize prior to extracting the “H” character based on an iterative thresholding process as described herein. Accordingly, not only may overall extraction be improved with respect to compliance with expected results, the quality of information extracted may be bolstered by selectively thresholding the region from which the component is to be extracted.


Thus, while conventional image processing techniques are limited to determining the best possible extraction based on the overall image, the presently disclosed techniques can evaluate each element or grouping of elements (such as connected components) individually at varying levels of image intensity, and thus provide a more accurate extraction result (e.g. by selecting a frame where the component most closely matches the size, shape, and location expected by training from among a plurality of frames, where each frame depicts the component at a different level of image intensity).


Accordingly, in several embodiments evaluating each element or grouping of elements may include generating a sequence of candidate extraction results for each element or grouping of elements. Preferably, each sequence of candidate extraction results includes and/or is based on extracting data from a plurality of images each generated using a different binarization threshold but depicting the same element or grouping of elements. Thus each sequence of candidate extraction results preferably represents data extracted from a plurality of images spanning a spectrum or range of binarization thresholds, and more preferably represents data extracted from a plurality of images depicting the same content, or at least the same element or grouping of elements.


While in preferred approaches each sequence of candidate extraction results includes candidates corresponding to different binarization thresholds, various individual candidate extraction results from different sequences of candidate extraction results may correspond to the same binarization threshold. The candidate extraction results from different sequences may include images and/or data extracted therefrom corresponding to some of the same elements or groupings of elements as elements or groupings of elements to which other sequences correspond. For example, in one embodiment a windowed approach may attempt to extract data from adjacent pairs, triplets, etc. of connected components within a region of interest. However, in preferred embodiments each sequence of candidate extraction results includes or is based on extracting data from images depicting at least one non-overlapping element or groupings of elements.


In addition, the overall extraction process is more robust since the evaluation can be performed individually for each component, rather than on the image as a whole, increasing the likelihood of extracting a similarly accurate result from even drastically different renditions of the same image, or from different portions of a single image (e.g. illuminated region versus shadowed region, regions having different color profiles and/or color depths, etc.).


Those having ordinary skill in the art will also appreciate that this recognition-guided thresholding and extraction technique may generate resulting extracted versions of portions of a component or element which exhibit perhaps drastically different appearance, to the point of potentially looking like a “mosaic” or “ransom note” stitched together from multiple images. For example, adjacent characters, one of which overlays a dark background but the other of which overlays only a bright background, may be extracted based on very different image intensity levels and thus appear very different upon recreating or synthesizing a composite of the extracted components.


To alleviate this artifact, it is advantageous to select from among plural exemplary frames of a component so as to minimize the overall range of frame intensity across a particular set of components. For instance, assume a two-component element is represented by a plurality of frames for each component, each of the plurality of frames being characterized by a different intensity level. While it may be the case that the most legible frame for the first component is characterized by an intensity of 100, and the most legible frame for the second component is characterized by an intensity of 20, if each component has a frame that is legible (even if not most legible) and characterized by a value closer to the midpoint between the two values (i.e. 60), it is preferable in some approaches to choose the frames that more closely match in intensity to generate a single, consistently intense result.
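That selection rule might be sketched as follows, assuming each component is accompanied by a mapping from intensity (or threshold) level to a frame that is legible at that level; the pooled median used as the common target is an illustrative choice, not a limitation of the disclosure.

```python
from statistics import median

def choose_consistent_frames(legible_frames_per_component):
    """legible_frames_per_component: one dict per component, mapping intensity level -> legible frame.

    Returns one frame per component, chosen so the selected levels cluster around a common value,
    trading a little per-component legibility for a visually consistent composite.
    """
    all_levels = [level for frames in legible_frames_per_component for level in frames]
    target = median(all_levels)
    return [frames[min(frames, key=lambda level: abs(level - target))]
            for frames in legible_frames_per_component]
```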


In practical application, the presently disclosed inventive techniques have been applied to images depicting driver licenses. Training involved providing a plurality of exemplar driver licenses from a particular state, identifying characteristic features thereof, defining a classification based on the characteristic features, and attempting classical extraction.


Based on this training, several “trouble regions” were identified, and intelligent, iterative thresholding was applied to these regions when processing subsequent test images.


From experimentation, it was determined that iterative, intelligent thresholding as described herein ideally employs approximately twenty thresholds with which to investigate the image, determine ideal extraction parameters, and perform extraction therewith.


The various threshold levels may be evenly distributed across a particular range, e.g. grayscale intensity ranging from 0-255, or may be staggered throughout a particular range, e.g. according to predetermined intensity levels known to generate desirable extraction results. Again, according to experimental results, it is apparent that distributing the threshold levels across a grayscale intensity ranging from 1 to 120 (i.e. each threshold corresponding to a 6-point intensity increment) is advantageous for extracting text from documents or images featuring complex backgrounds and/or illumination variations, e.g. from shadows, glare, etc.
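Concretely, an even 6-unit spacing over that band yields the roughly twenty threshold levels noted above, matching the 115-down-to-1 sequence illustrated in FIG. 5:

```python
thresholds = list(range(1, 116, 6))  # 1, 7, 13, ..., 109, 115 -> 20 evenly spaced levels
assert len(thresholds) == 20
```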


As will be appreciated by skilled artisans, different threshold values, distributions, or ranges may be appropriate depending on the nature of the image data to be processed. The aforementioned experimentally determined values were established as optimal for processing complex documents having primarily a white or light colored background, with a plurality of dark background and foreground elements depicted thereon.


The images depicted in FIGS. 3-5 represent experimental results obtained from a Massachusetts driver's license when attempting to extract an expiration date that overlaps a complex background texture, in this case the state seal, forming a “trouble region” where thresholding and extraction using conventional (e.g. OCR) approaches cannot obtain the entire date. In all images depicted in FIGS. 3-5, the expiration date is Jun. 23, 2013 (represented as “06-23-2013”). The images have been enlarged to emphasize differences.


First, FIG. 3 shows a rendition of the image in color, where many different colored background textures underlay the month, date and the majority of the year. Although the electronic record of the present application will reflect FIG. 3 in grayscale, skilled artisans in the field of image processing and object recognition will appreciate, e.g. by way of comparison to FIG. 4, that the complexity of color images such as FIG. 3 is greater than that of grayscale or bitonal images.


As will be further appreciated by those having ordinary skill in the art, and as described in further detail elsewhere herein, presence of complex backgrounds is a common source of error in attempting to extract information from an image, particularly where the information to be extracted overlaps in whole or in part with the complex background.



FIG. 4 depicts the same portion of the driver's license, appearing in a grayscale rendition of the color image shown in FIG. 3. As can be seen from FIG. 4, conventional techniques for reducing color depth across an entirety of a particular image are often incapable of removing or rectifying the source of extraction error, e.g. a complex background or illumination problem/variance. As shown in FIG. 4, the complexity of the background is reduced relative to the color rendition shown in FIG. 3, but the image retains sufficient variation in background texture that applying a single binarization threshold to the grayscale rendition shown in FIG. 4 will not enable accurate extraction of all text depicted in the region of interest (expiration date field).


For example, according to one embodiment none of the plurality of images shown in FIG. 5, each of which was generated by applying a different binarization threshold to the image shown in FIG. 4, is suitable for extracting all characters depicted in the region of interest with sufficient confidence. Each of the plurality of images may be suitable for extracting one or more of the characters with sufficient confidence, but in each image at least one character is sufficiently degraded (e.g. by white pixels for low binarization thresholds such as the bottom 25% to 33% of the range of binarization thresholds) or obscured (e.g. by black pixels for higher binarization thresholds such as the top 50% to 25% of the range of binarization thresholds) such that the obscured/degraded character(s) cannot be extracted with sufficient confidence.



FIG. 5 depicts a plurality of binary images 502-538 generated using a plurality of different binarization thresholds as described herein. The plurality of images depicted in FIG. 5 may be understood as forming a sequence of candidate extraction results, or alternatively a plurality of images upon which a sequence of candidate extraction results is based, in several embodiments. Each image is generated using a different binarization threshold, and is characterized by a difference in the binarization threshold of 6 units with respect to vertically adjacent counterparts. Thus, in accordance with FIG. 5 the first image 502 corresponds to a threshold value of 115, while the last image 538 corresponds to a threshold value of 1 (each on a scale from 0-255), and images 504-536 correspond to threshold values between 1 and 115, each separated by 6 units of intensity. As will be appreciated by skilled artisans, according to the binarization applied to FIG. 4 in order to generate the binary images shown in FIG. 5, pixels from the image shown in FIG. 4 having an intensity value less than the binarization threshold used to generate the corresponding image of FIG. 5 are converted to black, while pixels having an intensity value greater than or equal to the binarization threshold are converted to white. Thus, low binarization thresholds generally produce more white pixels and high binarization thresholds generate more black pixels. Although the embodiment shown and described with reference to FIGS. 3-5 involves thresholding based on grayscale pixel intensity values, it should be understood that other embodiments may additionally and/or alternatively utilize other image characteristics or values, such as intensity values in a particular color channel or combination of color channels, hue values, etc., as would be appreciated by skilled artisans upon reading the present disclosures.
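
By way of illustration only, the thresholding rule just described (pixels below the threshold become black, pixels at or above it become white), applied over the sequence of thresholds discussed above, might be sketched as follows using numpy; the array encoding (0 for black, 255 for white) is an assumption for illustration.

```python
import numpy as np

def binarize(gray, threshold):
    """Return a bitonal image: pixels with intensity below `threshold`
    become black (0); pixels at or above it become white (255)."""
    gray = np.asarray(gray, dtype=np.uint8)
    return np.where(gray < threshold, 0, 255).astype(np.uint8)

def binarization_sequence(gray, thresholds=range(1, 121, 6)):
    """Generate the sequence of candidate binarized images for a region of
    interest, one per threshold (cf. images 502-538 of FIG. 5)."""
    return {t: binarize(gray, t) for t in thresholds}

# Example: a synthetic 4x4 grayscale patch.
patch = np.array([[10, 60, 120, 200]] * 4, dtype=np.uint8)
seq = binarization_sequence(patch)
# Low thresholds leave more pixels white; high thresholds turn more pixels black.
print(seq[1][0], seq[115][0])
```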


Of course, in various embodiments sequences of candidate extraction results may be generated for each connected component (e.g. each character as shown in FIGS. 3-5), for different groupings of connected components (e.g. each pair or triplet of adjacent characters, each series of characters not separated by whitespace, etc.), or for the region of interest as a whole.



FIGS. 6A (enlarged) and 6B (native size) depict a composite image generated by extracting data from high-confidence candidates (e.g. candidate extraction results having confidence above a predetermined threshold) from the plurality of thresholded images shown in FIG. 5, and assembling the extracted high-confidence candidates into a single image. For instance, in one approach the composite image corresponding to FIGS. 6A and 6B may be generated by assembling an extraction result from images 522 and/or 524 for the “0” character of the month field, from images 526 and/or 528 for the “6” character of the month field, from image 530 for the hyphen separating the month and day fields as well as for the numerals forming the day and year fields, and from image 536 for the hyphen separating the day and year fields. In another embodiment extraction may be performed on the region as a whole based on image 530. Of course, in various approaches any combination of images and/or extraction results may be used to generate the composite image shown in FIGS. 6A and 6B.
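
By way of illustration only, assembling a composite from high-confidence candidates might proceed along the lines of the following sketch, which assumes a hypothetical recognize(patch) helper returning an identity and a confidence score, and bounding boxes for each connected component; none of these helper names are part of the present disclosure.

```python
import numpy as np

def assemble_composite(seq, components, recognize, min_conf=0.8):
    """Build a composite bitonal image by taking each connected component from
    whichever binarized image yields the highest recognition confidence.

    `seq`        -- {threshold: bitonal image}, as sketched above
    `components` -- list of (row_slice, col_slice) bounding boxes of components
    `recognize`  -- hypothetical callable(image_patch) -> (text, confidence)
    """
    h, w = next(iter(seq.values())).shape
    composite = np.full((h, w), 255, dtype=np.uint8)   # start all white
    extracted = []
    for rows, cols in components:
        best = None
        for t, image in seq.items():
            text, conf = recognize(image[rows, cols])
            if best is None or conf > best[0]:
                best = (conf, text, image[rows, cols])
        conf, text, patch = best
        if conf >= min_conf:                 # keep only high-confidence candidates
            composite[rows, cols] = patch
            extracted.append(text)
    return composite, "".join(extracted)
```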


In certain embodiments, it may be advantageous to essentially invert the assumptions, operation of thresholds (e.g. pixels with intensity greater than the binarization threshold convert to black, and pixels with intensity less than or equal to the binarization threshold convert to white), and/or the image data, e.g. when attempting to detect a light foreground element on a light background as opposed to a dark foreground element depicted on a dark background. This inversion may be particularly advantageous when one particular component overlays multiple different background textures, or when a particular component depicts multiple colors or textures itself.
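
By way of illustration only, the inverted thresholding rule described above might be sketched as follows; as before, the 0/255 encoding is an assumption for illustration.

```python
import numpy as np

def binarize_inverted(gray, threshold):
    """Inverted rule for light foreground elements: pixels with intensity
    greater than `threshold` become black (0); all other pixels become white (255)."""
    gray = np.asarray(gray, dtype=np.uint8)
    return np.where(gray > threshold, 0, 255).astype(np.uint8)
```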


The presently disclosed inventive concepts also encompass performing binarization (which in various embodiments involves a thresholding process, but which does not necessarily employ the iterative, recognition-guided approach set forth herein) based on classification, e.g. as described in related U.S. Pat. No. 9,355,312. For instance, determining particular binarization parameters based on a classification of an object such as a connected component or group of connected components may include techniques and features as described in column 16, line 33—column 18, line 6 of U.S. Pat. No. 9,355,312.


Validation


In additional embodiments, classification and/or extraction results may be presented to a user for validation, e.g. for confirmation, negation, modification of the assigned class, etc. For example, upon classifying an object using semi- or fully-automated processes in conjunction with distinguishing criteria such as defined herein, the classification and the digital image to which the classification relates may be displayed to a user (e.g. on a mobile device display) so that the user may confirm or negate the classification. Upon negating the classification, a user may manually define the “proper” classification of the object depicted in the digital image. This user input may be utilized to provide ongoing “training” to the classifier(s), in preferred approaches. Of course, user input may be provided in relation to any number of operations described herein without departing from the scope of the instant disclosures.


In even more preferred embodiments, the aforementioned validation may be performed without requiring user input. For instance, it is possible to mitigate the need for a user to review and/or to correct extraction results by performing automatic validation of extraction results. In general, this technique involves referencing an external system or database in order to confirm whether the extracted values are known to be correct. For example, if name and address are extracted, in some instances it is possible to validate that the individual in question in fact resides at the given address.
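
By way of illustration only, such automatic validation against an external reference might be sketched as follows; the field names and the reference_lookup helper are hypothetical stand-ins for an external system or database.

```python
def validate_extraction(extracted, reference_lookup):
    """Automatically validate extracted fields against an external data source.

    `extracted`        -- e.g. {"name": "...", "address": "..."}
    `reference_lookup` -- hypothetical callable(name) -> known address, or None
    Returns True if the extracted values agree with the reference system.
    """
    known_address = reference_lookup(extracted.get("name", ""))
    return known_address is not None and known_address == extracted.get("address")

# Toy reference source standing in for an external system or database.
records = {"JANE SAMPLE": "123 MAIN ST, BOSTON MA"}
print(validate_extraction(
    {"name": "JANE SAMPLE", "address": "123 MAIN ST, BOSTON MA"},
    records.get))  # True
```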


This validation principle extends to classification, in even more embodiments. For example, if the extraction is correct, in some approaches it is appropriate to infer that the classification is also correct. This inference relies on the assumption that the only manner in which to achieve the “correct” extraction result (e.g. a value that matches an expected value in a reference data source, matches an expected format for the value in question, is associated with an expected symbol or other value, etc., as would be understood by one having ordinary skill in the art upon reading the present descriptions) is to have correctly classified the object in the first instance.


Now referring to FIG. 7, a flowchart of a method 700 is shown according to one embodiment. The method 700 may be performed in accordance with the present invention in any of the environments depicted in FIGS. 1-2, among others, in various embodiments. Of course, more or fewer operations than those specifically described in FIG. 7 may be included in method 700, as would be understood by one of skill in the art upon reading the present descriptions.


Each of the steps of the method 700 may be performed by any suitable component of the operating environment. For example, in various embodiments, the method 700 may be partially or entirely performed by a processor of a mobile device, a processor of a workstation or server environment, some other device having one or more processors therein, or any combination thereof.


The processor, e.g., processing circuit(s), chip(s), and/or module(s) implemented in hardware and/or software, and preferably having at least one hardware component, may be utilized in any device to perform one or more steps of the method 700. Illustrative processors include, but are not limited to, a central processing unit (CPU), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), etc., combinations thereof, or any other suitable computing device known in the art.


As shown in FIG. 7, method 700 may initiate with operation 702, where a region of interest within a digital image is identified. The region of interest preferably includes content such as text, a photograph, a symbol, etc. upon which extraction is to be performed, each of which may generally be represented via one or more connected components in the digital image (and/or representations of the digital image such as a grayscale or bitonal rendition of the digital image). Regions of interest may be identified in various embodiments based on a priori expectations such as a learned location of a particular field, photograph, symbol, etc. within a document, and/or image characteristics representing e.g. the identity, color, shape, size, etc. of object(s) depicted in particular location(s) of the document. As described above, in preferred approaches such a priori expectations may be developed in a training phase.


Method 700 also includes operation 704, in which a plurality of binarized images are generated based on the region of interest. The plurality of binarized images are preferably generated using a plurality of different binarization thresholds, though some of the binarized images may be generated using the same threshold(s) in some approaches. In various embodiments the plurality of binarized images may be arranged in one or more sequences, each sequence corresponding to a unique single connected component or a unique grouping of connected components from the region of interest. Where sequence(s) of binarized images are employed, preferably each image within each sequence is generated using a different binarization threshold, but images from different sequences may be generated using the same binarization threshold, in some embodiments.


With continuing reference to FIG. 7, method 700 also includes extracting data from some or all of the plurality of binarized images in operation 706. Data extraction may include any suitable form of extraction as disclosed herein and/or in the related patent documents referenced herein, in various embodiments. In preferred approaches, extraction includes recognizing text and/or objects within some or all of the plurality of binarized images, e.g. using techniques such as optical character recognition or equivalents thereof, and/or image classification and/or data extraction.
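
By way of illustration only, the overall flow of operations 702-706 might be sketched as follows, reusing the binarize helper sketched earlier; locate_roi and recognize are hypothetical stand-ins for the region-detection and recognition engines discussed herein, not disclosed implementations.

```python
def method_700(digital_image, locate_roi, recognize, thresholds=range(1, 121, 6)):
    """Generic flow of operations 702-706: identify a region of interest,
    generate a plurality of binarized images using different thresholds,
    and extract data from some or all of them."""
    roi = locate_roi(digital_image)                          # operation 702
    binarized = {t: binarize(roi, t) for t in thresholds}    # operation 704 (binarize: see sketch above)
    extracted = {t: recognize(image) for t, image in binarized.items()}  # operation 706
    return extracted
```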


In multiple varying but combinable embodiments, method 700 may include any number of additional and/or alternative features, operations, etc. as described herein, and should be viewed as a generic embodiment of iterative, recognition-guided thresholding as contemplated by the inventors. Various species falling within the generic embodiment are described in accordance with embodiments of the invention that may be utilized in different scenarios to achieve desired binarization results. Accordingly, skilled artisans reading the present disclosure will appreciate that the embodiments described herein may be combined in any suitable manner without departing from the scope of these inventive concepts.


For instance, and in accordance with several such exemplary species embodiments, method 700 may include any one or more of the following features, functions, operations, inputs, etc.


As mentioned briefly above, in one embodiment the region of interest encompasses a plurality of connected components; and each of the plurality of binarized images corresponds to a different combination of: one of the plurality of connected components; and one of the plurality of binarization thresholds. As such, each binarized image may represent a unique combination of connected components rendered according to a unique binarization threshold with respect to that particular combination of connected components. In more embodiments, of course, there may be overlap between the combinations of connected components (e.g. a windowed approach) and/or binarization thresholds applied thereto.


With continuing reference to connected components encompassed within the region(s) of interest, in several embodiments extracting the data is performed on a per-component basis for at least some of the plurality of connected components. As such, extraction may be performed on a per-component resolution to address extreme variations in image characteristics across a particular region of interest, enabling robust extraction even when desired information overlaps with complex background texture(s) and/or variations in illumination.


In various embodiments, extracting the data generally includes estimating an identity of some or all of the plurality of connected components within one or more of the plurality of binarized images. The identity estimation may be based on a recognition engine, classification technique, etc. as discussed above, in preferred approaches. In particularly preferred approaches, estimating the identity of the connected components includes determining a confidence of the estimated identity of some or all of the plurality of connected components. Confidence may be determined in any suitable manner and measured according to any suitable standard, such as OCR confidence, classification confidence, etc. in various approaches.


In a preferred embodiment, determining the confidence of the estimated identity of connected components includes comparing the estimated identity of various connected components with an expected identity of the respective connected components. Such expectation-based identity comparisons may be based on a priori information derived from training, and/or based on extraction results obtained from other of the plurality of binarized images.


In some approaches, expectation-based confidence may be determined based on whether a particular component matches an expected component type and/or location, and/or whether the particular component matches one of a plurality of possible expected component types and/or locations. Accordingly, determining the confidence of the estimated identity of some or all of the connected components may include comparing the estimated location of each respective one of the plurality of connected components for which the identity was estimated with an expected location of the respective one of the plurality of connected components.
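
By way of illustration only, one possible location-based confidence measure is the overlap between the estimated and expected bounding boxes of a component, e.g. computed as intersection-over-union; the metric and box format below are illustrative assumptions.

```python
def location_confidence(estimated_box, expected_box):
    """Confidence contribution from comparing an estimated component location
    with its expected (a priori) location, as intersection-over-union.

    Boxes are (left, top, right, bottom) tuples."""
    l = max(estimated_box[0], expected_box[0])
    t = max(estimated_box[1], expected_box[1])
    r = min(estimated_box[2], expected_box[2])
    b = min(estimated_box[3], expected_box[3])
    if r <= l or b <= t:
        return 0.0
    inter = (r - l) * (b - t)
    area = lambda bx: (bx[2] - bx[0]) * (bx[3] - bx[1])
    return inter / (area(estimated_box) + area(expected_box) - inter)

print(location_confidence((10, 10, 30, 20), (12, 10, 32, 20)))  # ~0.82
```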


In circumstances where some or all of the plurality of connected components comprise non-textual information, determining the confidence of the estimated identity of some or all of the plurality of connected components may include classifying some or all of the connected components for which the identity was estimated. The classification is preferably based on image features such as component color, size, location, shape, aspect ratio, etc.


Where confidence measures are available, extracting data may include choosing from among a plurality of candidate component identities (e.g. “3” versus “8”), in which case the choice may be made based in whole or in part on determining whether the confidence of the estimated identity of one of the plurality of connected components is less than a predetermined confidence threshold. In cases where the confidence of the estimated identity is less than the predetermined confidence threshold, the candidate component identity may be discarded, and/or an alternate candidate component identity (preferably having a higher confidence measure, even if below the confidence threshold) may be chosen as the component identity.


In more embodiments, where the confidence of the estimated identity is less than the predetermined confidence threshold, method 700 may include estimating the identity of the corresponding connected component(s) based on a different binarized image, optionally but preferably a different member of a sequence of binarized images corresponding to the same connected component(s) but generated using a different binarization threshold.
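
By way of illustration only, the confidence-thresholded selection and fallback behavior described in the two preceding paragraphs might be sketched as follows; recognize is again a hypothetical recognition helper returning an identity and a confidence score.

```python
def estimate_component_identity(candidate_images, recognize, min_conf=0.8):
    """Estimate a component's identity, falling back to other binarized images
    in the sequence when confidence falls below the predetermined threshold.

    `candidate_images` -- binarized renditions of the same component,
                          one per binarization threshold
    `recognize`        -- hypothetical callable(image) -> (identity, confidence)
    """
    best_identity, best_conf = None, 0.0
    for image in candidate_images:
        identity, conf = recognize(image)
        if conf >= min_conf:
            return identity, conf            # confident enough; stop searching
        if conf > best_conf:                 # otherwise keep the best alternate seen
            best_identity, best_conf = identity, conf
    return best_identity, best_conf          # may still be below the threshold
```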


In still more embodiments of method 700, extracting data from binarized images may therefore include: generating at least one sequence of candidate extraction results for each grouping of one or more connected components depicted within the region of interest; determining an optimal extraction result within each sequence of candidate extraction results; and assembling all of the optimal extraction results into a single string of the one or more connected components.


Preferably, each sequence of candidate extraction results includes a plurality of candidate extraction results, and each candidate extraction result within a given sequence corresponds to a same connected component or grouping of connected components depicted within the region of interest. Furthermore, each candidate extraction result within the given sequence preferably corresponds to a different one of the plurality of binarization thresholds. Accordingly, each sequence may represent a spectrum of binarization results generated using different binarization thresholds to render the same connected component(s) into a binarized form.


Of course, in various embodiments candidate extraction results in different sequences may correspond to the same binarization threshold, and in one embodiment at least one candidate result from each of at least two of the sequences corresponds to a same binarization threshold.


Determining the optimal extraction result within each sequence of candidate extraction results, as mentioned above, may include selecting one extraction result within each sequence of candidate extraction results so as to minimize intensity differences between the optimal extraction results assembled into the single string. This approach facilitates avoiding the appearance of a “ransom note” in the assembled result, and may include selecting candidates that do not correspond to the highest identity confidence level in order to minimize intensity differences across the assembled result.


In one embodiment, at least two of the plurality of connected components encompassed by the region of interest are preferably extracted from different ones of the plurality of binarized images. As noted above, performing extraction on a per-component basis may enable extraction of components that could not otherwise be extracted using conventional binarization techniques.


The method 700 in one embodiment also includes normalizing color within the digital image and/or the region of interest specifically. Advantageously, region-based color normalization allows more precise extraction of data since the normalization process is not influenced by other portions of the document/digital image that may have very different color profiles and thus would “stretch” the color channels in a manner not appropriate (or less appropriate) for the particular region of interest.
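
By way of illustration only, region-based color normalization might be sketched as follows, assuming the region of interest is an H x W x 3 numpy array of R, G and B channels; stretching each channel to the full 0-255 scale is one illustrative normalization, applied to the region alone rather than the whole document image.

```python
import numpy as np

def normalize_region_color(region):
    """Stretch each color channel of the region of interest to span the full
    0-255 scale, independently of the rest of the document image."""
    region = region.astype(np.float32)
    out = np.empty_like(region)
    for c in range(region.shape[2]):                # e.g. R, G, B channels
        lo, hi = region[..., c].min(), region[..., c].max()
        span = max(hi - lo, 1.0)                    # avoid division by zero
        out[..., c] = (region[..., c] - lo) * 255.0 / span
    return out.astype(np.uint8)
```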


As described in further detail above, method 700 may also include validating extracted data. Preferably, in such embodiments validation includes inferring a classification of an object depicted in the digital image based on validating the extracted data. For example, upon validating a name and address correspond to a same individual, a digital image or object depicted therein may be classified as an appropriate type of document, e.g. a utility bill, identification document, etc. optionally based in part on a location within the digital image/document from which the name and address are extracted. Of course, in various embodiments other combinations of criteria may be used to validate extracted information and infer therefrom a classification of a particular object.


While the present descriptions of data extraction within the scope of the instant disclosure have been made with primary reference to methods, one having ordinary skill in the art will appreciate that the inventive concepts described herein may be equally implemented in or as a system and/or computer program product.


For example, a system within the scope of the present descriptions may include a processor and logic in and/or executable by the processor to cause the processor to perform steps of a method as described herein.


Similarly, a computer program product within the scope of the present descriptions may include a computer readable storage medium having program code embodied therewith, the program code readable/executable by a processor to cause the processor to perform steps of a method as described herein.


The inventive concepts disclosed herein have been presented by way of example to illustrate the myriad features thereof in a plurality of illustrative scenarios, embodiments, and/or implementations. It should be appreciated that the concepts generally disclosed are to be considered as modular, and may be implemented in any combination, permutation, or synthesis thereof. In addition, any modification, alteration, or equivalent of the presently disclosed features, functions, and concepts that would be appreciated by a person having ordinary skill in the art upon reading the instant descriptions should also be considered within the scope of this disclosure.


Accordingly, one embodiment of the present invention includes all of the features disclosed herein, including those shown and described in conjunction with any of the FIGS. Other embodiments include subsets of the features disclosed herein and/or shown and described in conjunction with any of the FIGS. Such features, or subsets thereof, may be combined in any way using known techniques that would become apparent to one skilled in the art after reading the present description.


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of an embodiment of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims
  • 1. A computer-implemented method, comprising: identifying a region of interest within a digital image; generating a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extracting data from some or all of the plurality of binarized images; wherein extracting the data from some or all of the plurality of binarized images comprises: generating at least one sequence of candidate extraction results for each grouping of one or more connected components depicted within the region of interest; determining an optimal extraction result within each sequence of candidate extraction results; assembling all of the optimal extraction results into a single string of the one or more connected components; and wherein determining the optimal extraction result within each sequence of candidate extraction results comprises selecting one extraction result within each sequence of candidate extraction results so as to minimize intensity differences between the optimal extraction results assembled into the single string; and wherein at least some of the connected components are text characters.
  • 2. The computer-implemented method as recited in claim 1, wherein the region of interest comprises a plurality of connected components; and wherein each of the plurality of binarized images corresponds to a different combination of: one of the plurality of connected components; and one of the plurality of binarization thresholds.
  • 3. The computer-implemented method as recited in claim 1, wherein the region of interest comprises a plurality of connected components; and wherein extracting the data is performed on a per-component basis for at least some of the plurality of connected components.
  • 4. The computer-implemented method as recited in claim 1, wherein the region of interest comprises a plurality of connected components; wherein extracting the data comprises estimating an identity of some or all of the plurality of connected components within one or more of the plurality of binarized images; wherein the identity of some or all of the plurality of connected components within one or more of the plurality of binarized images comprises the character, location, size, shape or color; and the method further comprising determining a confidence of the estimated identity of some or all of the plurality of connected components.
  • 5. The computer-implemented method as recited in claim 4, wherein determining the confidence of the estimated identity of some or all of the plurality of connected components comprises comparing the estimated identity of each respective one of the plurality of connected components for which the identity was estimated with an expected identity of the respective one of the plurality of connected components.
  • 6. The computer-implemented method as recited in claim 4, wherein determining the confidence of the estimated identity of some or all of the plurality of connected components comprises comparing an estimated location of each respective one of the plurality of connected components for which the identity was estimated with an expected location of the respective one of the plurality of connected components.
  • 7. The computer-implemented method as recited in claim 4, wherein some or all of the plurality of connected components comprise non-textual information; and wherein determining the confidence of the estimated identity of some or all of the plurality of connected components comprises classifying some or all of the connected components for which the identity was estimated based on image features.
  • 8. The computer-implemented method as recited in claim 4, comprising determining whether the confidence of the estimated identity of one of the plurality of connected components is less than a predetermined confidence threshold.
  • 9. The computer-implemented method as recited in claim 8, comprising, in response to determining the confidence of the estimated identity of the one of the plurality of connected components is less than the predetermined confidence threshold, estimating the identity of the one of the plurality of connected components based on a different one of the plurality of binarized images than the one of the plurality of binarized images for which the confidence of the estimated identity of one of the plurality of connected components was determined to be less than the predetermined confidence threshold.
  • 10. The computer-implemented method as recited in claim 1, wherein each sequence of candidate extraction results comprises a plurality of candidate extraction results each corresponding to the same grouping of one or more of the connected components depicted within the region of interest; and wherein each of the plurality of candidate extraction results in each respective sequence of candidate extraction results corresponds to a different one of the plurality of binarization thresholds.
  • 11. The computer-implemented method as recited in claim 10, wherein at least one of the plurality of candidate results from each of at least two of the sequences of candidate extraction results correspond to a same one of the plurality of binarization thresholds.
  • 12. The computer-implemented method as recited in claim 1, wherein the region of interest comprises a plurality of connected components; and wherein at least two of the plurality of connected components are extracted from different ones of the plurality of binarized images.
  • 13. The computer-implemented method as recited in claim 1, comprising normalizing color within the digital image or the region of interest prior to thresholding; wherein normalizing color includes normalizing intensity values across one or more color channels to stretch the channel along a single normalized scale; and the one or more color channels being selected from a group consisting of: R, G and B.
  • 14. The computer-implemented method as recited in claim 1, comprising: validating the extracted data; and inferring a classification of an object depicted in the digital image based on validating the extracted data.
  • 15. A system, comprising: a processor; and logic integrated with and/or executable by the processor to cause the processor to: identify a region of interest within a digital image, wherein the region of interest comprises a plurality of connected components; generate a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and extract data from some or all of the plurality of binarized images, wherein the data comprise a potential character identity of one or more of the plurality of connected components; wherein the region of interest is characterized by a complex background overlapped by the plurality of connected components; wherein one or more of the connected components overlap or are obscured by one or more unique background elements such that no single binarization threshold applied to a region encompassing the one or more of the plurality of connected components can identify the one or more of the connected components that overlap or are obscured by the one or more unique background elements.
  • 16. A computer program product, comprising a non-transitory computer readable medium having embodied therewith computer readable program instructions configured to cause a processor, upon execution thereof, to: identify, using the processor, a region of interest within a digital image; generate, using the processor, a plurality of binarized images based on the region of interest, wherein some or all of the binarized images are generated using a different one of a plurality of binarization thresholds; and subjecting the region of interest within a digital image to a plurality of thresholding and extraction iterations; extract, using the processor, data from some or all of the plurality of binarized images; wherein the extracted data comprises one or more connected components represented in the plurality of binarized images; and wherein one or more of the connected components overlap or are obscured by one or more unique background elements such that no single binarization threshold applied to a region encompassing the one or more connected components can identify the one or more of the connected components that overlap or are obscured by the one or more unique background elements.
  • 17. The computer-implemented method of claim 5, wherein the expected identity of the respective one of the plurality of connected components is based on either: a priori information derived from a training set of digital images; or extraction results obtained from other of the plurality of images.
  • 18. The computer-implemented method as recited in claim 1, wherein the region of interest is characterized by a complex background forming a trouble region with respect to extracting the data.
  • 19. The computer-implemented method as recited in claim 1, wherein one or more of the connected components overlap or are obscured by one or more unique background elements such that no single binarization threshold applied to a region encompassing the one or more connected components can identify the one or more of the connected components that overlap or are obscured by the one or more unique background elements.
  • 20. The computer-implemented method as recited in claim 1, wherein the region of interest comprises a plurality of connected components; and wherein each of the plurality of binarized images depicts at least one of the plurality of connected components at a different level of image intensity.
US Referenced Citations (794)
Number Name Date Kind
16601102 Appelt et al. Feb 1928
3069654 Hough Dec 1962 A
3696599 Palmer et al. Oct 1972 A
4558461 Schlang Dec 1985 A
4651287 Tsao Mar 1987 A
4656665 Pennebaker Apr 1987 A
4836026 P'an et al. Jun 1989 A
4903312 Sato Feb 1990 A
4992863 Moriya Feb 1991 A
5020112 Chou May 1991 A
5063604 Weiman Nov 1991 A
5101448 Kawachiya et al. Mar 1992 A
5124810 Seto Jun 1992 A
5151260 Contursi et al. Sep 1992 A
5159667 Borrey et al. Oct 1992 A
5181260 Kurosu et al. Jan 1993 A
5202934 Miyakawa et al. Apr 1993 A
5220621 Saitoh Jun 1993 A
5268967 Jang et al. Dec 1993 A
5282055 Suzuki Jan 1994 A
5293429 Pizano et al. Mar 1994 A
5313527 Guberman et al. May 1994 A
5317646 Sang, Jr. et al. May 1994 A
5321770 Huttenlocher et al. Jun 1994 A
5344132 LeBrun et al. Sep 1994 A
5353673 Lynch Oct 1994 A
5355547 Fitjer Oct 1994 A
5375197 Kang Dec 1994 A
5430810 Saeki Jul 1995 A
5467407 Guberman et al. Nov 1995 A
5473742 Polyakov et al. Dec 1995 A
5546474 Zuniga Aug 1996 A
5563723 Beaulieu et al. Oct 1996 A
5563966 Ise et al. Oct 1996 A
5586199 Kanda et al. Dec 1996 A
5594815 Fast et al. Jan 1997 A
5596655 Lopez Jan 1997 A
5602964 Barrett Feb 1997 A
5629989 Osada May 1997 A
5652663 Zelten Jul 1997 A
5668890 Winkelman Sep 1997 A
5680525 Sakai et al. Oct 1997 A
5696611 Nishimura et al. Dec 1997 A
5696805 Gaborski et al. Dec 1997 A
5699244 Clark, Jr. et al. Dec 1997 A
5717794 Koga et al. Feb 1998 A
5721940 Luther et al. Feb 1998 A
5757963 Ozaki et al. May 1998 A
5760912 Itoh Jun 1998 A
5781665 Cullen et al. Jul 1998 A
5818978 Al-Hussein Oct 1998 A
5822454 Rangarajan Oct 1998 A
5825915 Michimoto et al. Oct 1998 A
5832138 Nakanishi et al. Nov 1998 A
5839019 Ito Nov 1998 A
5848184 Taylor et al. Dec 1998 A
5857029 Patel Jan 1999 A
5867264 Hinnrichs Feb 1999 A
5899978 Irwin May 1999 A
5923763 Walker et al. Jul 1999 A
5937084 Crabtree et al. Aug 1999 A
5953388 Walnut et al. Sep 1999 A
5956468 Ancin Sep 1999 A
5987172 Michael Nov 1999 A
3005958 Farmer et al. Dec 1999 A
6002489 Murai et al. Dec 1999 A
6005968 Granger Dec 1999 A
6009191 Julier Dec 1999 A
6009196 Mahoney Dec 1999 A
6011595 Henderson et al. Jan 2000 A
6016361 Hongu et al. Jan 2000 A
6038348 Carley Mar 2000 A
6052124 Stein et al. Apr 2000 A
6055968 Sasaki et al. May 2000 A
6067385 Cullen et al. May 2000 A
6072916 Suzuki Jun 2000 A
6073148 Rowe et al. Jun 2000 A
6094198 Shashua Jul 2000 A
6098065 Skillen et al. Aug 2000 A
6104830 Schistad Aug 2000 A
6104840 Ejiri et al. Aug 2000 A
6118544 Rao Sep 2000 A
6118552 Suzuki et al. Sep 2000 A
6154217 Aldrich Nov 2000 A
6192360 Dumais et al. Feb 2001 B1
6215469 Mori et al. Apr 2001 B1
6219158 Dawe Apr 2001 B1
6219773 Garibay, Jr. et al. Apr 2001 B1
6223223 Kumpf et al. Apr 2001 B1
6229625 Nakatsuka May 2001 B1
6233059 Kodaira et al. May 2001 B1
6263122 Simske et al. Jul 2001 B1
6292168 Venable et al. Sep 2001 B1
6327581 Platt Dec 2001 B1
6337925 Cohen et al. Jan 2002 B1
6347152 Shinagawa et al. Feb 2002 B1
6347162 Suzuki Feb 2002 B1
6356647 Bober et al. Mar 2002 B1
6370277 Borrey et al. Apr 2002 B1
6385346 Gillihan et al. May 2002 B1
6393147 Danneels et al. May 2002 B2
6396599 Patton et al. May 2002 B1
6408094 Mirzaoff et al. Jun 2002 B1
6408105 Maruo Jun 2002 B1
6424742 Yamamoto et al. Jul 2002 B2
6426806 Melen Jul 2002 B2
6433896 Ueda et al. Aug 2002 B1
6456738 Tsukasa Sep 2002 B1
6463430 Brady et al. Oct 2002 B1
6469801 Telle Oct 2002 B1
6473198 Matama Oct 2002 B1
6473535 Takaoka Oct 2002 B1
6480304 Os et al. Nov 2002 B1
6480624 Horie et al. Nov 2002 B1
6501855 Zelinski Dec 2002 B1
6512848 Wang et al. Jan 2003 B2
6522791 Nagarajan Feb 2003 B2
6525840 Haraguchi et al. Feb 2003 B1
6563531 Matama May 2003 B1
6571008 Bandyopadhyay et al. May 2003 B1
6601026 Appelt et al. Jul 2003 B2
6614930 Agnihotri et al. Sep 2003 B1
6621595 Fan et al. Sep 2003 B1
6628416 Hsu et al. Sep 2003 B1
6628808 Bach et al. Sep 2003 B1
6633857 Tipping Oct 2003 B1
6643413 Shum et al. Nov 2003 B1
6646765 Barker et al. Nov 2003 B1
6658147 Gorbatov et al. Dec 2003 B2
6665425 Sampath et al. Dec 2003 B1
6667774 Berman et al. Dec 2003 B2
6675159 Lin et al. Jan 2004 B1
6701009 Makoto et al. Mar 2004 B1
6704441 Inagaki et al. Mar 2004 B1
6724916 Shyu Apr 2004 B1
6729733 Raskar et al. May 2004 B1
6732046 Joshi May 2004 B1
6748109 Yamaguchi Jun 2004 B1
6751349 Matama Jun 2004 B2
6757081 Fan et al. Jun 2004 B1
6757427 Hongu Jun 2004 B1
6763515 Vazquez et al. Jul 2004 B1
6765685 Yu Jul 2004 B1
6778684 Bollman Aug 2004 B1
6781375 Miyazaki et al. Aug 2004 B2
6788830 Morikawa Sep 2004 B1
6789069 Barnhill et al. Sep 2004 B1
6801658 Morita et al. Oct 2004 B2
6816187 Iwai et al. Nov 2004 B1
6826311 Wilt Nov 2004 B2
6831755 Narushima et al. Dec 2004 B1
6839466 Venable Jan 2005 B2
6850653 Abe Feb 2005 B2
6873721 Beyerer et al. Mar 2005 B1
6882983 Furphy et al. Apr 2005 B2
6898601 Amado et al. May 2005 B2
6901170 Terada et al. May 2005 B1
6917438 Yoda et al. Jul 2005 B1
6917709 Zelinski Jul 2005 B2
6921220 Aiyama Jul 2005 B2
6950555 Filatov et al. Sep 2005 B2
6987534 Seta Jan 2006 B1
6989914 Iwaki Jan 2006 B2
6999625 Nelson Feb 2006 B1
7006707 Peterson Feb 2006 B2
7016549 Utagawa Mar 2006 B1
7017108 Wan Mar 2006 B1
7020320 Filatov Mar 2006 B2
7023447 Luo et al. Apr 2006 B2
7027181 Takamori Apr 2006 B2
7038713 Matama May 2006 B1
7042603 Masao et al. May 2006 B2
7043080 Dolan May 2006 B1
7054036 Hirayama May 2006 B2
7081975 Yoda et al. Jul 2006 B2
7082426 Musgrove et al. Jul 2006 B2
7107285 von Kaenel et al. Sep 2006 B2
7123292 Seeger et al. Oct 2006 B1
7123387 Cheng et al. Oct 2006 B2
7130471 Bossut et al. Oct 2006 B2
7145699 Dolan Dec 2006 B2
7149347 Wnek Dec 2006 B1
7167281 Fujimoto Jan 2007 B1
7168614 Kotovich et al. Jan 2007 B2
7173732 Matama Feb 2007 B2
7174043 Lossev et al. Feb 2007 B2
7177049 Karidi Feb 2007 B2
7181082 Feng Feb 2007 B2
7184929 Goodman Feb 2007 B2
7194471 Nagatsuka et al. Mar 2007 B1
7197158 Camara et al. Mar 2007 B2
7201323 Kotovich et al. Apr 2007 B2
7209599 Simske et al. Apr 2007 B2
7228314 Kawamoto et al. Jun 2007 B2
7249717 Kotovich et al. Jul 2007 B2
7251777 Valtchev et al. Jul 2007 B1
7253836 Suzuki et al. Aug 2007 B1
7263221 Moriwaki Aug 2007 B1
7266768 Ferlitsch et al. Sep 2007 B2
7286177 Cooper Oct 2007 B2
7298897 Dominguez et al. Nov 2007 B1
7317828 Suzuki et al. Jan 2008 B2
7337389 Woolf et al. Feb 2008 B1
7339585 Verstraelen et al. Mar 2008 B2
7340376 Goodman Mar 2008 B2
7349888 Heidenreich et al. Mar 2008 B1
7365881 Burns et al. Apr 2008 B2
7366705 Zeng et al. Apr 2008 B2
7382921 Lossev et al. Jun 2008 B2
7386527 Harris et al. Jun 2008 B2
7392426 Wolfe et al. Jun 2008 B2
7403008 Blank et al. Jul 2008 B2
7403313 Kuo Jul 2008 B2
7406183 Emerson et al. Jul 2008 B2
7409092 Srinivasa Aug 2008 B2
7409633 Lerner et al. Aug 2008 B2
7416131 Fortune et al. Aug 2008 B2
7426293 Chien et al. Sep 2008 B2
7430059 Rodrigues et al. Sep 2008 B2
7430066 Hsu et al. Sep 2008 B2
7430310 Kotovich et al. Sep 2008 B2
7447377 Takahira Nov 2008 B2
7464066 Zelinski et al. Dec 2008 B2
7478332 Buttner et al. Jan 2009 B2
7487438 Withers Feb 2009 B1
7492478 Une Feb 2009 B2
7492943 Li et al. Feb 2009 B2
7515313 Cheng Apr 2009 B2
7515772 Li et al. Apr 2009 B2
7528883 Hsu May 2009 B2
7542931 Black et al. Jun 2009 B2
7545529 Borrey et al. Jun 2009 B2
7553095 Kimura Jun 2009 B2
7562060 Sindhwani et al. Jul 2009 B2
7580557 Zavadsky et al. Aug 2009 B2
7636479 Luo et al. Dec 2009 B2
7639387 Hull et al. Dec 2009 B2
7643665 Zavadsky et al. Jan 2010 B2
7651286 Tischler Jan 2010 B2
7655685 McElroy et al. Feb 2010 B2
7657091 Postnikov et al. Feb 2010 B2
7665061 Kothari et al. Feb 2010 B2
7673799 Hart et al. Mar 2010 B2
7702162 Cheong et al. Apr 2010 B2
7735721 Ma et al. Jun 2010 B1
7738730 Hawley Jun 2010 B2
7739127 Hall Jun 2010 B1
7761391 Schmidtler et al. Jul 2010 B2
7778457 Nepomniachtchi et al. Aug 2010 B2
7782384 Kelly Aug 2010 B2
7787695 Nepomniachtchi et al. Aug 2010 B2
7937345 Schmidtler et al. May 2011 B2
7941744 Oppenlander et al. May 2011 B2
7949167 Krishnan et al. May 2011 B2
7949176 Nepomniachtchi May 2011 B2
7949660 Green et al. May 2011 B2
7953268 Nepomniachtchi May 2011 B2
7958067 Schmidtler et al. Jun 2011 B2
7978900 Nepomniachtchi et al. Jul 2011 B2
7999961 Wanda Aug 2011 B2
8000514 Nepomniachtchi et al. Aug 2011 B2
8035641 O'Donnell Oct 2011 B1
8059888 Chen et al. Nov 2011 B2
8064710 Mizoguchi Nov 2011 B2
8073263 Hull et al. Dec 2011 B2
8078958 Cottrille et al. Dec 2011 B2
8081227 Kim et al. Dec 2011 B1
8094976 Berard et al. Jan 2012 B2
8126924 Herin Feb 2012 B1
8135656 Evanitsky Mar 2012 B2
8136114 Gailloux et al. Mar 2012 B1
8184156 Mino et al. May 2012 B2
8194965 Lossev et al. Jun 2012 B2
8213687 Fan Jul 2012 B2
8238880 Jin et al. Aug 2012 B2
8239335 Schmidtler et al. Aug 2012 B2
8244031 Cho et al. Aug 2012 B2
8265422 Jin Sep 2012 B1
8279465 Couchman Oct 2012 B2
8295599 Katougi et al. Oct 2012 B2
8311296 Filatov et al. Nov 2012 B2
8326015 Nepomniachtchi Dec 2012 B2
8345981 Schmidtler et al. Jan 2013 B2
8354981 Kawasaki et al. Jan 2013 B2
8374977 Schmidtler et al. Feb 2013 B2
8379914 Nepomniachtchi et al. Feb 2013 B2
8385647 Hawley et al. Feb 2013 B2
8406480 Grigsby et al. Mar 2013 B2
8433775 Buchhop et al. Apr 2013 B2
8441548 Nechyba et al. May 2013 B1
8443286 Cameron May 2013 B2
8452098 Nepomniachtchi et al. May 2013 B2
8478052 Yee et al. Jul 2013 B1
8483473 Roach et al. Jul 2013 B2
8503769 Baker et al. Aug 2013 B2
8503797 Turkelson et al. Aug 2013 B2
8515163 Cho et al. Aug 2013 B2
8515208 Minerich Aug 2013 B2
8526739 Schmidtler et al. Sep 2013 B2
8532374 Chen et al. Sep 2013 B2
8532419 Coleman Sep 2013 B2
8553984 Slotine et al. Oct 2013 B2
8559766 Tilt et al. Oct 2013 B2
8577118 Nepomniachtchi et al. Nov 2013 B2
8582862 Nepomniachtchi et al. Nov 2013 B2
8587818 Imaizumi et al. Nov 2013 B2
8620058 Nepomniachtchi et al. Dec 2013 B2
8620078 Chapleau Dec 2013 B1
8639621 Ellis et al. Jan 2014 B1
8675953 Elwell et al. Mar 2014 B1
8676165 Cheng et al. Mar 2014 B2
8677249 Buttner et al. Mar 2014 B2
8681150 Kim et al. Mar 2014 B2
8693043 Schmidtler et al. Apr 2014 B2
8705836 Gorski et al. Apr 2014 B2
8719197 Schmidtler et al. May 2014 B2
8724907 Sampson et al. May 2014 B1
8745488 Wong Jun 2014 B1
8749839 Borrey et al. Jun 2014 B2
8774516 Amtrup et al. Jul 2014 B2
8805125 Kumar et al. Aug 2014 B1
8811751 Ma Aug 2014 B1
8813111 Guerin et al. Aug 2014 B2
8823991 Borrey et al. Sep 2014 B2
8855375 Macciola et al. Oct 2014 B2
8855425 Schmidtler et al. Oct 2014 B2
8879120 Thrasher et al. Nov 2014 B2
8879783 Wang et al. Nov 2014 B1
8879846 Amtrup et al. Nov 2014 B2
8885229 Amtrup et al. Nov 2014 B1
8908977 King Dec 2014 B2
8918357 Minocha et al. Dec 2014 B2
8955743 Block et al. Feb 2015 B1
8971587 Macciola et al. Mar 2015 B2
8989515 Shustorovich et al. Mar 2015 B2
8995012 Heit et al. Mar 2015 B2
8995769 Carr Mar 2015 B2
9020432 Matsushita et al. Apr 2015 B2
9058327 Lehrman et al. Jun 2015 B1
9058515 Amtrup et al. Jun 2015 B1
9058580 Amtrup et al. Jun 2015 B1
9064316 Eid et al. Jun 2015 B2
9117117 Macciola et al. Aug 2015 B2
9129210 Borrey et al. Sep 2015 B2
9135277 Petrou Sep 2015 B2
9137417 Macciola et al. Sep 2015 B2
9141926 Kilby et al. Sep 2015 B2
9158967 Shustorovich et al. Oct 2015 B2
9165187 Macciola et al. Oct 2015 B2
9165188 Thrasher et al. Oct 2015 B2
9183224 Petrou et al. Nov 2015 B2
9208536 Macciola et al. Dec 2015 B2
9239713 Lakshman et al. Jan 2016 B1
9251614 Tian Feb 2016 B1
9253349 Amtrup et al. Feb 2016 B2
9275281 Macciola Mar 2016 B2
9277022 Lee et al. Mar 2016 B2
9292815 Vibhor et al. Mar 2016 B2
9298979 Nepomniachtchi et al. Mar 2016 B2
9311531 Amtrup et al. Apr 2016 B2
9342741 Amtrup et al. May 2016 B2
9342742 Amtrup et al. May 2016 B2
9355312 Amtrup et al. May 2016 B2
9367899 Fang Jun 2016 B1
9373057 Erhan et al. Jun 2016 B1
9386235 Ma et al. Jul 2016 B2
9405772 Petrou et al. Aug 2016 B2
9436921 Whitmore Sep 2016 B2
9483794 Amtrup et al. Nov 2016 B2
9514357 Macciola et al. Dec 2016 B2
9576272 Macciola et al. Feb 2017 B2
9584729 Amtrup et al. Feb 2017 B2
9648297 Ettinger et al. May 2017 B1
9747504 Ma et al. Aug 2017 B2
9754164 Macciola et al. Sep 2017 B2
9760788 Shustorovich et al. Sep 2017 B2
9767354 Thompson et al. Sep 2017 B2
9767379 Macciola et al. Sep 2017 B2
9769354 Thrasher et al. Sep 2017 B2
9779296 Ma et al. Oct 2017 B1
9819825 Amtrup et al. Nov 2017 B2
9934433 Thompson et al. Apr 2018 B2
9946954 Macciola et al. Apr 2018 B2
9978024 Ryan et al. May 2018 B2
9996741 Amtrup et al. Jun 2018 B2
10108860 Ma et al. Oct 2018 B2
10127441 Amtrup et al. Nov 2018 B2
10127636 Ma et al. Nov 2018 B2
20010027420 Boublik et al. Oct 2001 A1
20020030831 Kinjo Mar 2002 A1
20020054693 Elmenhurst May 2002 A1
20020069218 Sull et al. Jun 2002 A1
20020113801 Reavy et al. Aug 2002 A1
20020122071 Camara et al. Sep 2002 A1
20020126313 Namizuka Sep 2002 A1
20020165717 Solmer et al. Nov 2002 A1
20030002068 Constantin et al. Jan 2003 A1
20030007683 Wang et al. Jan 2003 A1
20030026479 Thomas et al. Feb 2003 A1
20030030638 Astrom et al. Feb 2003 A1
20030044012 Eden Mar 2003 A1
20030046445 Witt et al. Mar 2003 A1
20030053696 Schmidt et al. Mar 2003 A1
20030063213 Poplin Apr 2003 A1
20030086615 Dance et al. May 2003 A1
20030095709 Zhou May 2003 A1
20030101161 Ferguson et al. May 2003 A1
20030117511 Belz et al. Jun 2003 A1
20030120653 Brady et al. Jun 2003 A1
20030142328 McDaniel et al. Jul 2003 A1
20030151674 Lin Aug 2003 A1
20030156201 Zhang Aug 2003 A1
20030197063 Longacre Oct 2003 A1
20030210428 Bevlin et al. Nov 2003 A1
20030223615 Keaton et al. Dec 2003 A1
20040019274 Galloway et al. Jan 2004 A1
20040021909 Kikuoka Feb 2004 A1
20040022437 Beardsley Feb 2004 A1
20040022439 Beardsley Feb 2004 A1
20040049401 Carr et al. Mar 2004 A1
20040083119 Schunder et al. Apr 2004 A1
20040090458 Yu et al. May 2004 A1
20040093119 Gunnarsson et al. May 2004 A1
20040102989 Jang et al. May 2004 A1
20040111453 Harris et al. Jun 2004 A1
20040143547 Mersky Jul 2004 A1
20040143796 Lerner et al. Jul 2004 A1
20040169873 Nagarajan Sep 2004 A1
20040169889 Sawada Sep 2004 A1
20040175033 Matama Sep 2004 A1
20040181482 Yap Sep 2004 A1
20040190019 Li et al. Sep 2004 A1
20040223640 Bovyrin Nov 2004 A1
20040245334 Sikorski Dec 2004 A1
20040252190 Antonis Dec 2004 A1
20040261084 Rosenbloom et al. Dec 2004 A1
20040263639 Sadovsky et al. Dec 2004 A1
20050021360 Miller et al. Jan 2005 A1
20050030602 Gregson et al. Feb 2005 A1
20050046887 Shibata et al. Mar 2005 A1
20050050060 Damm et al. Mar 2005 A1
20050054342 Otsuka Mar 2005 A1
20050060162 Mohit et al. Mar 2005 A1
20050063585 Matsuura Mar 2005 A1
20050065903 Zhang et al. Mar 2005 A1
20050080844 Dathathraya et al. Apr 2005 A1
20050100209 Lewis et al. May 2005 A1
20050131780 Princen Jun 2005 A1
20050134935 Schmidtler et al. Jun 2005 A1
20050141777 Kuwata Jun 2005 A1
20050151990 Ishikawa et al. Jul 2005 A1
20050160065 Seeman Jul 2005 A1
20050163343 Kakinami et al. Jul 2005 A1
20050180628 Curry et al. Aug 2005 A1
20050180632 Aradhye et al. Aug 2005 A1
20050193325 Epstein Sep 2005 A1
20050204058 Philbrick et al. Sep 2005 A1
20050206753 Sakurai et al. Sep 2005 A1
20050212925 Lefebure et al. Sep 2005 A1
20050216426 Weston et al. Sep 2005 A1
20050216564 Myers Sep 2005 A1
20050226505 Wilson Oct 2005 A1
20050228591 Hur et al. Oct 2005 A1
20050234955 Zeng et al. Oct 2005 A1
20050246262 Aggarwal et al. Nov 2005 A1
20050265618 Jebara Dec 2005 A1
20050271265 Wang et al. Dec 2005 A1
20050273453 Holloran Dec 2005 A1
20060013463 Ramsay et al. Jan 2006 A1
20060017810 Kurzweil et al. Jan 2006 A1
20060023271 Boay et al. Feb 2006 A1
20060031344 Mishima et al. Feb 2006 A1
20060033615 Nou Feb 2006 A1
20060047704 Gopalakrishnan Mar 2006 A1
20060048046 Joshi et al. Mar 2006 A1
20060074821 Cristianini Apr 2006 A1
20060082595 Liu et al. Apr 2006 A1
20060089907 Kohlmaier et al. Apr 2006 A1
20060093208 Li et al. May 2006 A1
20060095373 Venkatasubramanian et al. May 2006 A1
20060095374 Lo et al. May 2006 A1
20060095830 Krishna et al. May 2006 A1
20060098899 King et al. May 2006 A1
20060112340 Mohr et al. May 2006 A1
20060114488 Motamed Jun 2006 A1
20060115153 Bhattacharjya Jun 2006 A1
20060120609 Ivanov et al. Jun 2006 A1
20060126918 Oohashi et al. Jun 2006 A1
20060147113 Han Jul 2006 A1
20060159364 Poon et al. Jul 2006 A1
20060161646 Chene et al. Jul 2006 A1
20060164682 Lev Jul 2006 A1
20060195491 Nieland et al. Aug 2006 A1
20060203107 Steinberg et al. Sep 2006 A1
20060206628 Erez Sep 2006 A1
20060212413 Rujan et al. Sep 2006 A1
20060215231 Borrey et al. Sep 2006 A1
20060219773 Richardson Oct 2006 A1
20060222239 Bargeron et al. Oct 2006 A1
20060235732 Miller et al. Oct 2006 A1
20060235812 Rifkin et al. Oct 2006 A1
20060236304 Luo et al. Oct 2006 A1
20060239539 Kochi et al. Oct 2006 A1
20060242180 Graf et al. Oct 2006 A1
20060256371 King et al. Nov 2006 A1
20060256392 Van Hoof et al. Nov 2006 A1
20060257048 Lin et al. Nov 2006 A1
20060262962 Hull et al. Nov 2006 A1
20060263134 Beppu Nov 2006 A1
20060265640 Albornoz et al. Nov 2006 A1
20060268352 Tanigawa et al. Nov 2006 A1
20060268356 Shih et al. Nov 2006 A1
20060268369 Kuo Nov 2006 A1
20060279798 Rudolph et al. Dec 2006 A1
20060282442 Lennon et al. Dec 2006 A1
20060282463 Kudolph et al. Dec 2006 A1
20060282762 Diamond et al. Dec 2006 A1
20060288015 Schirripa et al. Dec 2006 A1
20060294154 Shimizu Dec 2006 A1
20070002348 Hagiwara Jan 2007 A1
20070002375 Ng Jan 2007 A1
20070003155 Miller et al. Jan 2007 A1
20070003165 Sibiryakov et al. Jan 2007 A1
20070005341 Burges et al. Jan 2007 A1
20070011334 Higgins et al. Jan 2007 A1
20070016848 Rosenoff et al. Jan 2007 A1
20070030540 Cheng et al. Feb 2007 A1
20070031028 Vetter et al. Feb 2007 A1
20070035780 Kanno Feb 2007 A1
20070036432 Xu Feb 2007 A1
20070046957 Jacobs et al. Mar 2007 A1
20070046982 Hull et al. Mar 2007 A1
20070047782 Hull et al. Mar 2007 A1
20070065033 Hemandez et al. Mar 2007 A1
20070086667 Dai et al. Apr 2007 A1
20070109590 Hagiwara May 2007 A1
20070118794 Hollander et al. May 2007 A1
20070128899 Mayer Jun 2007 A1
20070133862 Gold et al. Jun 2007 A1
20070165801 Devolites et al. Jul 2007 A1
20070172151 Gennetten et al. Jul 2007 A1
20070177818 Teshima et al. Aug 2007 A1
20070204162 Rodriguez Aug 2007 A1
20070206877 Wu et al. Sep 2007 A1
20070239642 Sindhwani et al. Oct 2007 A1
20070250416 Beach et al. Oct 2007 A1
20070252907 Hsu Nov 2007 A1
20070255653 Tumminaro et al. Nov 2007 A1
20070260588 Biazetti et al. Nov 2007 A1
20080004073 John et al. Jan 2008 A1
20080005678 Buttner et al. Jan 2008 A1
20080068452 Nakao et al. Mar 2008 A1
20080082352 Schmidtler et al. Apr 2008 A1
20080086432 Schmidtler et al. Apr 2008 A1
20080086433 Schmidtler et al. Apr 2008 A1
20080095467 Olszak et al. Apr 2008 A1
20080097936 Schmidtler et al. Apr 2008 A1
20080130992 Fujii Jun 2008 A1
20080133388 Alekseev et al. Jun 2008 A1
20080137971 King et al. Jun 2008 A1
20080144881 Fortune et al. Jun 2008 A1
20080147561 Euchner et al. Jun 2008 A1
20080147790 Malaney et al. Jun 2008 A1
20080166025 Thorne Jul 2008 A1
20080175476 Ohk et al. Jul 2008 A1
20080177612 Starink et al. Jul 2008 A1
20080177643 Matthews et al. Jul 2008 A1
20080183576 Kim et al. Jul 2008 A1
20080199081 Kimura et al. Aug 2008 A1
20080211809 Kim et al. Sep 2008 A1
20080212115 Konishi Sep 2008 A1
20080215489 Lawson et al. Sep 2008 A1
20080219543 Csulits et al. Sep 2008 A1
20080225127 Ming Sep 2008 A1
20080232715 Miyakawa et al. Sep 2008 A1
20080235766 Wallos et al. Sep 2008 A1
20080253647 Cho et al. Oct 2008 A1
20080292144 Kim Nov 2008 A1
20080294737 Kim Nov 2008 A1
20080298718 Liu et al. Dec 2008 A1
20090015687 Shinkai et al. Jan 2009 A1
20090073266 Abdellaziz Trimeche et al. Mar 2009 A1
20090089078 Bursey Apr 2009 A1
20090103808 Dey et al. Apr 2009 A1
20090110267 Zakhor et al. Apr 2009 A1
20090132468 Putivsky et al. May 2009 A1
20090132504 Vegnaduzzo et al. May 2009 A1
20090141985 Sheinin et al. Jun 2009 A1
20090154778 Lei et al. Jun 2009 A1
20090159509 Wojdyla et al. Jun 2009 A1
20090164889 Piersol et al. Jun 2009 A1
20090175537 Tribelhorn et al. Jul 2009 A1
20090185241 Nepomniachtchi Jul 2009 A1
20090214112 Borrey et al. Aug 2009 A1
20090225180 Maruyama et al. Sep 2009 A1
20090228499 Schmidtler et al. Sep 2009 A1
20090254487 Dhar et al. Oct 2009 A1
20090285445 Vasa Nov 2009 A1
20090324025 Camp, Jr. et al. Dec 2009 A1
20090324062 Lim et al. Dec 2009 A1
20090327250 Green et al. Dec 2009 A1
20100007751 Icho et al. Jan 2010 A1
20100014769 Lundgren Jan 2010 A1
20100045701 Scott Feb 2010 A1
20100049035 Hu Feb 2010 A1
20100060910 Fechter Mar 2010 A1
20100060915 Suzuki et al. Mar 2010 A1
20100062491 Lehmbeck Mar 2010 A1
20100082491 Rosenblatt et al. Apr 2010 A1
20100142820 Malik Jun 2010 A1
20100150424 Nepomniachtchi et al. Jun 2010 A1
20100166318 Ben-Horesh et al. Jul 2010 A1
20100169250 Schmidtler et al. Jul 2010 A1
20100174974 Brisebois et al. Jul 2010 A1
20100202698 Schmidtler et al. Aug 2010 A1
20100202701 Basri et al. Aug 2010 A1
20100209006 Grigsby et al. Aug 2010 A1
20100214291 Muller et al. Aug 2010 A1
20100214584 Takahashi Aug 2010 A1
20100232706 Forutanpour Sep 2010 A1
20100280859 Frederick, II Nov 2010 A1
20100331043 Chapman et al. Dec 2010 A1
20110004547 Giordano et al. Jan 2011 A1
20110013039 Aisaka et al. Jan 2011 A1
20110025825 McNamer et al. Feb 2011 A1
20110025842 King et al. Feb 2011 A1
20110025860 Katougi et al. Feb 2011 A1
20110032570 Imaizumi et al. Feb 2011 A1
20110035284 Moshfeghi Feb 2011 A1
20110055033 Chen et al. Mar 2011 A1
20110090337 Klomp et al. Apr 2011 A1
20110091092 Nepomniachtchi et al. Apr 2011 A1
20110116716 Kwon et al. May 2011 A1
20110129153 Petrou et al. Jun 2011 A1
20110137898 Gordo et al. Jun 2011 A1
20110145178 Schmidtler et al. Jun 2011 A1
20110181589 Quan et al. Jul 2011 A1
20110182500 Esposito et al. Jul 2011 A1
20110194127 Nagakoshi et al. Aug 2011 A1
20110196870 Schmidtler et al. Aug 2011 A1
20110200107 Ryu Aug 2011 A1
20110246076 Su et al. Oct 2011 A1
20110249905 Singh et al. Oct 2011 A1
20110279456 Hiranuma et al. Nov 2011 A1
20110280450 Nepomniachtchi et al. Nov 2011 A1
20110285873 Showering Nov 2011 A1
20110285874 Showering et al. Nov 2011 A1
20110313966 Schmidt et al. Dec 2011 A1
20120008856 Hewes et al. Jan 2012 A1
20120008858 Sedky Jan 2012 A1
20120019614 Murray et al. Jan 2012 A1
20120038549 Mandella et al. Feb 2012 A1
20120057756 Yoon et al. Mar 2012 A1
20120069131 Abelow Mar 2012 A1
20120075442 Vujic Mar 2012 A1
20120077476 Paraskevakos et al. Mar 2012 A1
20120092329 Koo et al. Apr 2012 A1
20120105662 Staudacher et al. May 2012 A1
20120113489 Heit et al. May 2012 A1
20120114249 Conwell May 2012 A1
20120116957 Zanzot et al. May 2012 A1
20120131139 Siripurapu et al. May 2012 A1
20120134576 Sharma May 2012 A1
20120162527 Baker Jun 2012 A1
20120194692 Mers et al. Aug 2012 A1
20120195466 Teng et al. Aug 2012 A1
20120215578 Swierz, III et al. Aug 2012 A1
20120230577 Calman et al. Sep 2012 A1
20120230606 Sugiyama et al. Sep 2012 A1
20120236019 Oh et al. Sep 2012 A1
20120269398 Fan Oct 2012 A1
20120272192 Grossman et al. Oct 2012 A1
20120284122 Brandis Nov 2012 A1
20120284185 Mettler et al. Nov 2012 A1
20120290421 Qawami et al. Nov 2012 A1
20120293607 Bhogal et al. Nov 2012 A1
20120294524 Zyuzin et al. Nov 2012 A1
20120300020 Arth et al. Nov 2012 A1
20120301011 Grzechnik Nov 2012 A1
20120301024 Yuan Nov 2012 A1
20120308139 Dhir Dec 2012 A1
20130004076 Koo et al. Jan 2013 A1
20130022231 Nepomniachtchi et al. Jan 2013 A1
20130027757 Lee et al. Jan 2013 A1
20130057703 Vu et al. Mar 2013 A1
20130060596 Gu et al. Mar 2013 A1
20130066798 Morin et al. Mar 2013 A1
20130073459 Zacarias et al. Mar 2013 A1
20130078983 Doshi et al. Mar 2013 A1
20130080347 Paul et al. Mar 2013 A1
20130088757 Schmidtler et al. Apr 2013 A1
20130090969 Rivere Apr 2013 A1
20130097157 Ng et al. Apr 2013 A1
20130117175 Hanson May 2013 A1
20130121610 Chen et al. May 2013 A1
20130124414 Roach et al. May 2013 A1
20130142402 Myers et al. Jun 2013 A1
20130152176 Courtney et al. Jun 2013 A1
20130182002 Macciola et al. Jul 2013 A1
20130182105 Fahn et al. Jul 2013 A1
20130182128 Amtrup et al. Jul 2013 A1
20130182292 Thrasher et al. Jul 2013 A1
20130182951 Shustorovich et al. Jul 2013 A1
20130182959 Thrasher et al. Jul 2013 A1
20130182970 Shustorovich et al. Jul 2013 A1
20130182973 Macciola et al. Jul 2013 A1
20130185618 Macciola et al. Jul 2013 A1
20130188865 Saha et al. Jul 2013 A1
20130198192 Hu et al. Aug 2013 A1
20130198358 Taylor Aug 2013 A1
20130223762 Nagamasa Aug 2013 A1
20130230246 Nuggehalli Sep 2013 A1
20130251280 Borrey et al. Sep 2013 A1
20130268378 Yovin Oct 2013 A1
20130268430 Lopez et al. Oct 2013 A1
20130271579 Wang Oct 2013 A1
20130287265 Nepomniachtchi et al. Oct 2013 A1
20130287284 Nepomniachtchi et al. Oct 2013 A1
20130290036 Strange Oct 2013 A1
20130297353 Strange et al. Nov 2013 A1
20130308832 Schmidtler et al. Nov 2013 A1
20130329023 Suplee, III et al. Dec 2013 A1
20140003721 Saund Jan 2014 A1
20140006129 Heath Jan 2014 A1
20140006198 Daly et al. Jan 2014 A1
20140012754 Hanson et al. Jan 2014 A1
20140032406 Roach et al. Jan 2014 A1
20140047367 Nielsen Feb 2014 A1
20140055812 DeRoller Feb 2014 A1
20140055826 Hinski Feb 2014 A1
20140079294 Amtrup et al. Mar 2014 A1
20140108456 Ramachandrula et al. Apr 2014 A1
20140149308 Ming May 2014 A1
20140153787 Schmidtler et al. Jun 2014 A1
20140153830 Amtrup et al. Jun 2014 A1
20140164914 Schmidtler et al. Jun 2014 A1
20140172687 Chirehdast Jun 2014 A1
20140181691 Poornachandran et al. Jun 2014 A1
20140201612 Buttner et al. Jul 2014 A1
20140207717 Schmidtler et al. Jul 2014 A1
20140211991 Stoppa Jul 2014 A1
20140233068 Borrey et al. Aug 2014 A1
20140254887 Amtrup et al. Sep 2014 A1
20140270349 Amtrup et al. Sep 2014 A1
20140270439 Chen Sep 2014 A1
20140270536 Amtrup et al. Sep 2014 A1
20140316841 Kilby et al. Oct 2014 A1
20140317595 Kilby et al. Oct 2014 A1
20140327940 Amtrup et al. Nov 2014 A1
20140328520 Macciola et al. Nov 2014 A1
20140333971 Macciola et al. Nov 2014 A1
20140368890 Amtrup et al. Dec 2014 A1
20140376060 Bocharov et al. Dec 2014 A1
20150040001 Kannan et al. Feb 2015 A1
20150040002 Kannan et al. Feb 2015 A1
20150086080 Stein Mar 2015 A1
20150093033 Kwon Apr 2015 A1
20150098628 Macciola et al. Apr 2015 A1
20150120564 Smith et al. Apr 2015 A1
20150161765 Kota et al. Jun 2015 A1
20150170085 Amtrup et al. Jun 2015 A1
20150248391 Watanabe Sep 2015 A1
20150254469 Butler Sep 2015 A1
20150317529 Zhou et al. Nov 2015 A1
20150324640 Macciola et al. Nov 2015 A1
20150339526 Macciola et al. Nov 2015 A1
20150347861 Doepke et al. Dec 2015 A1
20150355889 Kilby et al. Dec 2015 A1
20160019530 Wang et al. Jan 2016 A1
20160028921 Thrasher Jan 2016 A1
20160034775 Meadow et al. Feb 2016 A1
20160055395 Macciola et al. Feb 2016 A1
20160063358 Mehrseresht Mar 2016 A1
20160125613 Shustorovich et al. May 2016 A1
20160147891 Chhichhia et al. May 2016 A1
20160171603 Amtrup et al. Jun 2016 A1
20160217319 Bhanu et al. Jul 2016 A1
20160320466 Berker Nov 2016 A1
20160350592 Ma et al. Dec 2016 A1
20170046788 Macciola et al. Feb 2017 A1
20170103281 Amtrup et al. Apr 2017 A1
20170104885 Amtrup et al. Apr 2017 A1
20170109576 Shustorovich et al. Apr 2017 A1
20170109588 Ma et al. Apr 2017 A1
20170109606 Macciola et al. Apr 2017 A1
20170109610 Macciola et al. Apr 2017 A1
20170109818 Amtrup et al. Apr 2017 A1
20170109819 Amtrup et al. Apr 2017 A1
20170109830 Macciola et al. Apr 2017 A1
20170111532 Amtrup et al. Apr 2017 A1
20170147572 Kilby et al. May 2017 A1
20170286764 Ma et al. Oct 2017 A1
20170351915 Thompson et al. Dec 2017 A1
20170357869 Shustorovich et al. Dec 2017 A1
Foreign Referenced Citations (97)
Number Date Country
101052991 Oct 2007 CN
101295305 Oct 2008 CN
101329731 Dec 2008 CN
101339566 Jan 2009 CN
101493830 Jul 2009 CN
101673402 Mar 2010 CN
101894262 Nov 2010 CN
0549329 Jun 1993 EP
0723247 Jul 1996 EP
0767578 Apr 1997 EP
0809219 Nov 1997 EP
0843277 May 1998 EP
0936804 Aug 1999 EP
1054331 Nov 2000 EP
1128659 Aug 2001 EP
1229485 Aug 2002 EP
1317133 Jun 2003 EP
1319133 Jun 2003 EP
1422520 May 2004 EP
1422920 May 2004 EP
1956518 Aug 2008 EP
1959363 Aug 2008 EP
1976259 Oct 2008 EP
2107480 Oct 2009 EP
2472372 Jul 2012 EP
5462286 Apr 2014 JP
H04034671 Feb 1992 JP
H05060616 Mar 1993 JP
H07260701 Oct 1995 JP
H0962826 Mar 1997 JP
H09091341 Apr 1997 JP
H09116720 May 1997 JP
H11118444 Apr 1999 JP
2000067065 Mar 2000 JP
2000103628 Apr 2000 JP
2000298702 Oct 2000 JP
2000354144 Dec 2000 JP
2001297303 Oct 2001 JP
2001309128 Nov 2001 JP
2002024258 Jan 2002 JP
2002109242 Apr 2002 JP
2002519766 Jul 2002 JP
2002312385 Oct 2002 JP
2003091521 Mar 2003 JP
2003196357 Jul 2003 JP
2003234888 Aug 2003 JP
2003303315 Oct 2003 JP
2004005624 Jan 2004 JP
2004523022 Jul 2004 JP
2004363786 Dec 2004 JP
2005018678 Jan 2005 JP
2005071262 Mar 2005 JP
2005085173 Mar 2005 JP
2005173730 Jun 2005 JP
2005208861 Aug 2005 JP
2006031379 Feb 2006 JP
2006054519 Feb 2006 JP
2006126941 May 2006 JP
2006185367 Jul 2006 JP
2006209588 Aug 2006 JP
2006330863 Dec 2006 JP
201052670 Mar 2007 JP
2007251518 Sep 2007 JP
2008134683 Jun 2008 JP
2009015396 Jan 2009 JP
2009211431 Sep 2009 JP
2009541896 Nov 2009 JP
2010062722 Mar 2010 JP
2011034387 Feb 2011 JP
2011055467 Mar 2011 JP
2011118513 Jun 2011 JP
2011118600 Jun 2011 JP
2012008791 Jan 2012 JP
2012009033 Jan 2012 JP
2012058904 Mar 2012 JP
2012156644 Aug 2012 JP
2012517637 Aug 2012 JP
2012194736 Oct 2012 JP
2012217159 Nov 2012 JP
2013196357 Sep 2013 JP
401553 Aug 2000 TW
9604749 Feb 1996 WO
97006522 Feb 1997 WO
9847098 Oct 1998 WO
9967731 Dec 1999 WO
0263812 Aug 2002 WO
02063812 Aug 2002 WO
2004053630 Jun 2004 WO
2004056360 Jul 2004 WO
2006104627 Oct 2006 WO
2007081147 Jul 2007 WO
2007082534 Jul 2007 WO
2008008142 Jan 2008 WO
2010030056 Mar 2010 WO
2010056368 May 2010 WO
2010096192 Aug 2010 WO
2013059599 Apr 2013 WO
Non-Patent Literature Citations (229)
Thompson et al., U.S. Appl. No. 15/686,017, filed Aug. 24, 2017.
Corrected Notice of Allowance from U.S. Appl. No. 15/389,342, dated Aug. 30, 2017.
Shustorovich et al., U.S. Appl. No. 15/672,200, filed Aug. 8, 2017.
Notice of Allowance from U.S. Appl. No. 15/390,321, dated Oct. 4, 2017.
Final Office Action from U.S. Appl. No. 14/932,902, dated Oct. 20, 2017.
Non-Final Office Action from U.S. Appl. No. 15/686,017, dated Oct. 18, 2017.
Corrected Notice of Allowance from U.S. Appl. No. 15/390,321, dated Oct. 20, 2017.
Supplementary European Search Report from European Application No. 15764687.8, dated Oct. 17, 2017.
Examination Report from European Application No. 14775259.6, dated Oct. 25, 2017.
Office Action from Chinese Patent Application No. 201480014229.9, dated Oct. 10, 2017.
Examination Report from European Application No. 13738301.4, dated Oct. 26, 2017.
Final Office Action from U.S. Appl. No. 15/424,756, dated Dec. 22, 2017.
Non-Final Office Action from U.S. Appl. No. 15/157,325, dated Jan. 8, 2018.
Advisory Action from U.S. Appl. No. 14/932,902, dated Jan. 23, 2018.
Non-Final Office Action from U.S. Appl. No. 15/390,321, dated Jan. 23, 2018.
Non-Final Office Action from U.S. Appl. No. 14/829,474, dated Jan. 25, 2018.
KOFAX Inc., “Module 2—Kofax Capture Overview,” Jun. 2011, pp. 1-22.
KOFAX Inc., “Kofax Capture 10.0 Developer's Guide,” Aug. 1, 2011, 138 pages.
Notice of Allowance from U.S. Appl. No. 15/686,017, dated Feb. 14, 2018.
Office Action from Japanese Patent Application No. 2016-512078, dated Feb. 13, 2018.
Notice of Allowance from U.S. Appl. No. 14/932,902, dated Feb. 16, 2018.
Corrected Notice of Allowance from U.S. Appl. No. 14/932,902, dated Mar. 2, 2018.
Office Action from Japanese Patent Application No. 2016-502192, dated Feb. 13, 2018.
Hirose et al., “Media Conversion for Document Images Based on Layout Analysis and Character Recognition,” IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, vol. 99, No. 648, Feb. 21, 2000, pp. 39-46.
Oe et al., “Segmentation Method of Texture Image Using Two-Dimensional AR Model and Pyramid Linking,” The Transactions of The Institute of Electronics, Information and Communication Engineers, vol. J75-D-II, No. 7, Jul. 25, 1992, pp. 1132-1142.
Non-Final Office Action from U.S. Appl. No. 14/804,281, dated Mar. 16, 2018.
Office Action from Chinese Patent Application No. 201580014141.1, dated Feb. 6, 2018.
Notice of Allowance from U.S. Appl. No. 15/157,325, dated Mar. 26, 2018.
Corrected Notice of Allowance from U.S. Appl. No. 15/157,325, dated Apr. 5, 2018.
Non-Final Office Action from U.S. Appl. No. 15/385,707, dated Apr. 4, 2018.
Final Office Action from U.S. Appl. No. 15/234,993, dated Apr. 9, 2018.
Wang et al., “Object Recognition Using Multi-View Imaging,” ICSP2008 Proceedings, IEEE, 2008, pp. 810-813.
Examination Report from European Application No. 14773721.7, dated Mar. 27, 2018.
Office Action from Taiwanese Application No. 103114611, dated Feb. 8, 2018.
Office Action from Chinese Patent Application No. 201380004057.2, dated Feb. 27, 2017.
Notice of Allowance from U.S. Appl. No. 14/814,455, dated Mar. 30, 2017.
Non-Final Office Action from U.S. Appl. No. 14/932,902, dated Apr. 11, 2017.
Non-Final Office Action from U.S. Appl. No. 15/390,321, dated Mar. 17, 2017.
Notice of Allowance from U.S. Appl. No. 15/146,848, dated Apr. 13, 2017.
Corrected Notice of Allowance from U.S. Appl. No. 14/927,359, dated Aug. 2, 2017.
Corrected Notice of Allowance from U.S. Appl. No. 14/927,359, dated Aug. 9, 2017.
Corrected Notice of Allowance from U.S. Appl. No. 15/191,442, dated Aug. 2, 2017.
Notice of Allowance from U.S. Appl. No. 15/146,848, dated Aug. 4, 2017.
Notice of Allowance from U.S. Appl. No. 15/389,342, dated Aug. 14, 2017.
Notice of Grounds of Rejection from Japanese Application No. 2015-229466, dated Jul. 18, 2017, with English Translation.
Non-Final Office Action from U.S. Appl. No. 14/829,474, dated Aug. 17, 2017.
Extended European Search Report from European Application No. 14847922.3, dated Jun. 22, 2017.
Tsoi et al., “Geometric and Shading Correction for Images of Printed Materials: A Unified Approach Using Boundary,” Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'04), 2004, pp. 1-7.
Tian et al., “Rectification and 3D Reconstruction of Curved Document Images,” 2011 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2011, pp. 377-384.
Notice of Allowance from U.S. Appl. No. 14/927,359, dated Jul. 20, 2017.
Notice of Allowance from U.S. Appl. No. 15/191,442, dated Apr. 24, 2017.
Final Office Action from U.S. Appl. No. 14/927,359, dated Apr. 28, 2017.
Notice of Allowance from U.S. Appl. No. 15/234,969, dated May 8, 2017.
Non-Final Office Action from U.S. Appl. No. 15/234,993, dated Dec. 14, 2017.
Office Action from Japanese Patent Application No. 2016-502178, dated Apr. 10, 2018.
Office Action from Japanese Patent Application No. 2016-568791, dated Mar. 28, 2018.
Kawakatsu et al., “Development and Evaluation of Task Driven Device Orchestration System for User Work Support,” Forum on Information Technology 10th Conference Proceedings, Institute of Electronics, Information and Communication Engineers (IEICE), Aug. 22, 2011, pp. 309-310.
Statement of Relevance of Non-Translated Foreign Document NPL: Kawakatsu et al., “Development and Evaluation of Task Driven Device Orchestration System for User Work Support,” Forum on Information Technology 10th Conference Proceedings, Institute of Electronics, Information and Communication Engineers (IEICE), Aug. 22, 2011, pp. 309-310.
Office Action from Chinese Patent Application No. 201480013621.1, dated Apr. 28, 2018.
Examination Report from European Application No. 14847922.3, dated Jun. 22, 2018.
Lenz et al., “Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, No. 5, Sep. 1988, pp. 713-720.
Wang et al., “Single view metrology from scene constraints,” Image and Vision Computing, vol. 23, 2005, pp. 831-840.
Criminisi et al., “A plane measuring device,” Image and Vision Computing, vol. 17, 1999, pp. 625-634.
Notice of Allowance from U.S. Appl. No. 15/234,993, dated Jul. 5, 2018.
Final Office Action from U.S. Appl. No. 14/829,474, dated Jul. 10, 2018.
Notice of Allowance from U.S. Appl. No. 15/396,322, dated Jul. 18, 2018.
Notice of Allowance from U.S. Appl. No. 14/804,281, dated Jul. 23, 2018.
Notice of Allowance from U.S. Appl. No. 15/390,321, dated Aug. 6, 2018.
Corrected Notice of Allowance from U.S. Appl. No. 15/396,322, dated Aug. 8, 2018.
Corrected Notice of Allowance from U.S. Appl. No. 15/234,993, dated Aug. 1, 2018.
Notice of Allowance from U.S. Appl. No. 14/814,455, dated May 26, 2017.
Corrected Notice of Allowance from U.S. Appl. No. 15/191,442, dated May 26, 2017.
Extended European Search Report from European Application No. 14881675.4, dated Jun. 7, 2017.
Notice of Allowance from U.S. Appl. No. 15/394,719, dated Jun. 20, 2017.
International Search Report and Written Opinion from International Application No. PCT/US2017/025553, dated May 24, 2017.
Office Action from Chinese Patent Application No. 201580014141.1, dated May 31, 2017.
Non-Final Office Action from U.S. Appl. No. 15/424,756, dated Jun. 27, 2017.
Corrected Notice of Allowance from U.S. Appl. No. 15/191,442, dated Jun. 29, 2017.
Notice of Allowance from U.S. Appl. No. 14/818,196, dated Jul. 3, 2017.
Office Action from Japanese Patent Application No. 2016-512078, dated Aug. 8, 2017.
Non-Final Office Action from U.S. Appl. No. 13/898,407, dated Aug. 1, 2013.
Final Office Action from U.S. Appl. No. 13/898,407, dated Jan. 13, 2014.
Notice of Allowance from U.S. Appl. No. 13/898,407, dated Apr. 23, 2014.
Non-Final Office Action from U.S. Appl. No. 14/340,460, dated Jan. 16, 2015.
Notice of Allowance from U.S. Appl. No. 14/340,460, dated Apr. 28, 2015.
Office Action from Japanese Patent Application No. 2014-552356, dated Jun. 2, 2015.
Office Action from Taiwan Application No. 102101177, dated Dec. 17, 2014.
Notice of Allowance from U.S. Appl. No. 14/220,023, dated Jan. 30, 2015.
Notice of Allowance from U.S. Appl. No. 14/220,029, dated Feb. 11, 2015.
International Search Report and Written Opinion from International Application No. PCT/US2013/021336, dated May 23, 2013.
Non-Final Office Action from U.S. Appl. No. 13/740,127, dated Oct. 27, 2014.
Notice of Allowance from U.S. Appl. No. 13/740,131, dated Oct. 27, 2014.
Final Office Action from U.S. Appl. No. 13/740,134, dated Mar. 3, 2015.
Non-Final Office Action from U.S. Appl. No. 13/740,134, dated Oct. 10, 2014.
Non-Final Office Action from U.S. Appl. No. 13/740,138, dated Dec. 1, 2014.
Notice of Allowance from U.S. Appl. No. 13/740,139, dated Aug. 29, 2014.
Notice of Allowance from U.S. Appl. No. 13/740,145, dated Mar. 30, 2015.
Non-Final Office Action from U.S. Appl. No. 13/740,145, dated Sep. 29, 2014.
Notice of Allowance from Taiwan Patent Application No. 102101177, dated Apr. 24, 2015.
Notice of Allowance from U.S. Appl. No. 13/740,138, dated Jun. 5, 2015.
Notice of Allowance from U.S. Appl. No. 13/740,127, dated Jun. 8, 2015.
Notice of Allowance from U.S. Appl. No. 14/569,375, dated Apr. 15, 2015.
Notice of Allowance from U.S. Appl. No. 13/740,134, dated May 29, 2015.
Notice of Allowability from U.S. Appl. No. 13/740,145, dated May 26, 2015.
Corrected Notice of Allowability from U.S. Appl. No. 13/740,138, dated Jul. 8, 2018.
Non-Final Office Action from U.S. Appl. No. 13/740,127, dated Feb. 23, 2015.
Final Office Action from U.S. Appl. No. 13/740,134, dated Mar. 3, 2015.
Notice of Allowance from U.S. Appl. No. 14/804,276, dated Oct. 21, 2015.
Extended European Search Report from European Application No. 13738301.4, dated Nov. 17, 2015.
Notice of Allowance from U.S. Appl. No. 13/740,145, dated Jan. 15, 2016.
Office Action from Taiwan Patent Application No. 102101177, dated Dec. 17, 2014.
Non-Final Office Action from U.S. Appl. No. 13/740,141, dated Oct. 16, 2015.
Notice of Allowance from U.S. Appl. No. 13/740,145, dated Sep. 8, 2015.
Notice of Allowance from U.S. Appl. No. 14/334,558, dated Sep. 10, 2014.
Notice of Allowance from U.S. Appl. No. 13/740,123, dated Jul. 10, 2014.
Intsig Information Co., Ltd., “CamScanner,” www.intsig.com/en/camscanner.html, retrieved Oct. 25, 2012.
Intsig Information Co., Ltd., “Product Descriptions,” www.intsig.com/en/product.html, retrieved Oct. 25, 2012.
Extended European Search Report from European Application No. 14775259.6, dated Jun. 1, 2016.
Non-Final Office Action from U.S. Appl. No. 14/814,455, dated Jun. 17, 2016.
Final Office Action from U.S. Appl. No. 13/740,141, dated May 5, 2016.
Notice of Allowance from U.S. Appl. No. 13/740,141, dated Jul. 29, 2016.
Non-Final Office Action from U.S. Appl. No. 14/818,196, dated Aug. 19, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2016/043207, dated Oct. 21, 2016.
Non-Final Office Action from U.S. Appl. No. 14/927,359, dated Nov. 21, 2016.
Final Office Action from U.S. Appl. No. 14/814,455, dated Dec. 16, 2016.
International Search Report and Written Opinion from International Application No. PCT/US14/26569, dated Aug. 12, 2014.
Gllavata et al., “Finding Text in Images Via Local Thresholding,” International Symposium on Signal Processing and Information Technology, Dec. 2003, pp. 539-542.
Zunino et al., “Vector Quantization for License-Plate Location and Image Coding,” IEEE Transactions on Industrial Electronics, vol. 47, Issue 1, Feb. 2000, pp. 159-167.
International Search Report and Written Opinion from International Application No. PCT/US2014/057065, dated Dec. 30, 2014.
Non-Final Office Action from U.S. Appl. No. 14/932,902, dated Sep. 28, 2016.
Su et al., “Stereo rectification of calibrated image pairs based on geometric transformation,” I.J. Modern Education and Computer Science, vol. 4, 2011, pp. 17-24.
Malis et al., “Deeper understanding of the homography decomposition for vision-based control,” [Research Report] RR-6303, INRIA, Sep. 2007, pp. 1-90.
Notice of Allowance from U.S. Appl. No. 14/491,901, dated Aug. 4, 2015.
Final Office Action from U.S. Appl. No. 14/491,901, dated Apr. 30, 2015.
Non-Final Office Action from U.S. Appl. No. 14/491,901, dated Nov. 19, 2014.
Non-Final Office Action from U.S. Appl. No. 15/234,969, dated Nov. 18, 2016.
Amtrup, J. W. et al., U.S. Appl. No. 14/220,029, filed Mar. 19, 2014.
International Search Report and Written Opinion from PCT Application No. PCT/US15/26022, dated Jul. 22, 2015.
Non-Final Office Action from U.S. Appl. No. 14/588,147, dated Jun. 3, 2015.
Notice of Allowance from Japanese Patent Application No. 2014-005616, dated Jun. 12, 2015.
Office Action from Japanese Patent Application No. 2014-005616, dated Oct. 7, 2014.
Final Office Action from U.S. Appl. No. 14/588,147, dated Nov. 4, 2015.
Non-Final Office Action from U.S. Appl. No. 14/283,156, dated Dec. 1, 2015.
Notice of Allowance from U.S. Appl. No. 14/588,147, dated Jan. 14, 2016.
Non-Final Office Action from U.S. Appl. No. 14/804,278, dated Mar. 10, 2016.
Notice of Allowance from U.S. Appl. No. 14/283,156, dated Mar. 16, 2016.
Summons to Attend Oral Proceedings from European Application No. 10741580.4, dated Jun. 7, 2016.
Notice of Allowance from U.S. Appl. No. 14/078,402, dated Feb. 26, 2014.
Non-Final Office Action from U.S. Appl. No. 14/078,402, dated Jan. 30, 2014.
Notice of Allowance from U.S. Appl. No. 14/175,999, dated Aug. 8, 2014.
Non-Final Office Action from U.S. Appl. No. 14/175,999, dated Apr. 3, 2014.
Notice of Allowance from U.S. Appl. No. 13/802,226, dated Jan. 29, 2016.
Non-Final Office Action from U.S. Appl. No. 13/802,226, dated Sep. 30, 2015.
Final Office Action from U.S. Appl. No. 13/802,226, dated May 20, 2015.
Non-Final Office Action from U.S. Appl. No. 13/802,226, dated Jan. 8, 2015.
Non-Final Office Action from U.S. Appl. No. 14/209,825, dated Apr. 14, 2015.
Final Office Action from U.S. Appl. No. 14/209,825, dated Aug. 13, 2015.
Notice of Allowance from U.S. Appl. No. 14/209,825, dated Oct. 28, 2015.
International Search Report and Written Opinion from International Application No. PCT/US2014/026569, dated Aug. 12, 2014.
Bruns, E. et al., “Mobile Phone-Enabled Museum Guidance with Adaptive Classification,” Computer Graphics and Applications, IEEE, vol. 28, No. 4, Jul.-Aug. 2008, pp. 98-102.
Tzotsos, A. et al., “Support vector machine classification for object-based image analysis,” Object-Based Image Analysis, Springer Berlin Heidelberg, 2008, pp. 663-677.
Vailaya, A. et al., “On Image Classification: City Images vs. Landscapes,” Pattern Recognition, vol. 31, No. 12, Dec. 1998, pp. 1921-1935.
Extended European Search Report from European Application No. 14773721.7, dated May 17, 2016.
Gonzalez, R. C. et al., “Image Interpolation,” Digital Image Processing, Third Edition, 2008, Chapter 2, pp. 65-68.
Kim, D. et al., “Location-based large-scale landmark image recognition scheme for mobile devices,” 2012 Third FTRA International Conference on Mobile, Ubiquitous, and Intelligent Computing, IEEE, 2012, pp. 47-52.
Sauvola, J. et al., “Adaptive document image binarization,” Pattern Recognition, vol. 33, 2000, pp. 225-236.
Tsai, C., “Effects of 2-D Preprocessing on Feature Extraction: Accentuating Features by Decimation, Contrast Enhancement, Filtering,” EE 262: 2D Imaging Project Report, 2008, pp. 1-9.
Final Office Action from U.S. Appl. No. 14/804,278, dated Jun. 28, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2014/065831, dated Feb. 26, 2015.
U.S. Appl. No. 61/780,747, filed Mar. 13, 2013.
U.S. Appl. No. 61/819,463, filed May 3, 2013.
Notice of Allowance from U.S. Appl. No. 14/268,876, dated Aug. 29, 2014.
Non-Final Office Action from U.S. Appl. No. 14/268,876, dated Jul. 24, 2014.
Non-Final Office Action from U.S. Appl. No. 14/473,950, dated Jan. 21, 2015.
Non-Final Office Action from U.S. Appl. No. 14/473,950, dated Feb. 6, 2015.
Final Office Action from U.S. Appl. No. 14/473,950, dated Jun. 26, 2015.
Notice of Allowance from U.S. Appl. No. 14/473,950, dated Sep. 16, 2015.
Non-Final Office Action from U.S. Appl. No. 14/981,759, dated Jun. 7, 2016.
Extended European Search Report from European Application No. 14861942.2, dated Nov. 2, 2016.
Non-Final Office Action from U.S. Appl. No. 15/191,442, dated Oct. 12, 2016.
Partial Supplementary European Search Report from European Application No. 14792188.6, dated Sep. 12, 2016.
Notice of Allowance from U.S. Appl. No. 14/981,759, dated Nov. 16, 2016.
International Search Report and Written Opinion from International Application No. PCT/US2015/021597, dated Jun. 22, 2015.
U.S. Appl. No. 14/340,460, filed Jul. 24, 2014.
Requirement for Restriction from U.S. Appl. No. 14/177,136, dated Aug. 15, 2014.
International Search Report and Written Opinion from PCT Application No. PCT/US2014/036673, dated Aug. 28, 2014.
U.S. Appl. No. 14/473,950, filed Aug. 29, 2014.
Final Office Action from U.S. Appl. No. 14/176,006, dated Sep. 3, 2014.
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, p. 27.
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 77-85.
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 230-247.
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 295-300.
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 343-345.
Final Office Action from U.S. Appl. No. 14/220,023, dated Sep. 18, 2014.
International Search Report and Written Opinion from PCT Application No. PCT/US14/26597, dated Sep. 19, 2014.
U.S. Appl. No. 14/491,901, filed Sep. 19, 2014.
Final Office Action from U.S. Appl. No. 14/220,029, dated Sep. 26, 2014.
International Search Report and Written Opinion from PCT Application No. PCT/US14/36851, dated Sep. 25, 2014.
Notice of Allowance from U.S. Appl. No. 14/176,006, dated Oct. 1, 2014.
Non-Final Office Action from U.S. Appl. No. 11/752,691, dated Oct. 10, 2014.
Non-Final Office Action from U.S. Appl. No. 15/146,848, dated Dec. 6, 2016.
U.S. Appl. No. 15/389,342, filed Dec. 22, 2016.
U.S. Appl. No. 15/390,321, filed Dec. 23, 2016.
Final Office Action from U.S. Appl. No. 14/177,136, dated Nov. 4, 2016.
Non-Final Office Action from U.S. Appl. No. 14/177,136, dated Apr. 13, 2016.
Non-Final Office Action from U.S. Appl. No. 14/177,136, dated Dec. 29, 2014.
“Location and Camera with Cell Phones,” Wikipedia, Mar. 30, 2016, pp. 1-19.
Non-Final Office Action from U.S. Appl. No. 14/176,006, dated Apr. 7, 2014.
Non-Final Office Action from U.S. Appl. No. 14/220,023, dated May 5, 2014.
Non-Final Office Action from U.S. Appl. No. 14/220,029, dated May 14, 2014.
International Search Report and Written Opinion from International Application No. PCT/US2016/043204, dated Oct. 6, 2016.
Final Office Action from U.S. Appl. No. 14/818,196, dated Jan. 9, 2017.
Decision to Refuse from European Application No. 10 741 580.4, dated Jan. 20, 2017.
Rainardi, V., “Building a Data Warehouse: With Examples in SQL Server,” Apress, Dec. 27, 2007, pp. 471-473.
Office Action from Japanese Patent Application No. 2015-229466, dated Nov. 29, 2016.
Extended European Search Report from European Application No. 14792188.6, dated Jan. 25, 2017.
Non-Final Office Action from U.S. Appl. No. 15/394,719, dated Feb. 21, 2017.
Non-Final Office Action from U.S. Appl. No. 15/389,342, dated Mar. 10, 2017.
Notice of Allowance from U.S. Appl. No. 14/818,196, dated Mar. 16, 2017.
Notice of Allowance from U.S. Appl. No. 15/385,707, dated Aug. 15, 2018.
Macciola et al., U.S. Appl. No. 16/052,495, filed Aug. 1, 2018.
Corrected Notice of Allowance from U.S. Appl. No. 15/390,321, dated Sep. 19, 2018.
Notice of Allowance from U.S. Appl. No. 14/829,474, dated Oct. 1, 2018.
Abiteboul et al., “Collaborative Data-Driven Workflows: Think Global, Act Local,” ACM, PODS, Jun. 2013, pp. 91-102.
Chen et al., “A Model Driven Visualization Platform for Workflow,” ACM, VINCI, Sep. 2010, 6 pages.
Corrected Notice of Allowance from U.S. Appl. No. 15/396,322, dated Oct. 16, 2018.
Corrected Notice of Allowance from U.S. Appl. No. 15/234,993, dated Oct. 11, 2018.
Corrected Notice of Allowance from U.S. Appl. No. 15/385,707, dated Oct. 16, 2018.
Ma et al., U.S. Appl. No. 16/151,090, filed Oct. 3, 2018.
Related Publications (1)
Number Date Country
20170024629 A1 Jan 2017 US
Provisional Applications (1)
Number Date Country
62194783 Jul 2015 US