The present invention relates to mobile image capture and image processing, and more particularly to capturing and processing digital images using a mobile device, and classifying objects detected in such digital images.
Digital images having depicted therein an object inclusive of documents such as a letter, a check, a bill, an invoice, etc. have conventionally been captured and processed using a scanner or multifunction peripheral coupled to a computer workstation such as a laptop or desktop computer. Methods and systems capable of performing such capture and processing are well known in the art and well adapted to the tasks for which they are employed.
However, in an era where day-to-day activities, computing, and business are increasingly performed using mobile devices, it would be greatly beneficial to provide analogous document capture and processing systems and methods for deployment and use on mobile platforms, such as smart phones, digital cameras, tablet computers, etc.
A major challenge in transitioning conventional document capture and processing techniques is the limited processing power and image resolution achievable using hardware currently available in mobile devices. These limitations present a significant challenge because images captured by mobile devices are typically of much lower resolution than those achievable by a conventional scanner, making it impossible or impractical to process them using conventional, scanner-oriented techniques. As a result, conventional scanner-based processing algorithms typically perform poorly on digital images captured using a mobile device.
In addition, the limited processing power and memory available on mobile devices make conventional image processing algorithms employed for scanners prohibitively expensive in terms of computational cost. Running a conventional scanner-based image processing algorithm on a mobile device simply takes far too much time to be practical on modern mobile platforms.
A still further challenge is presented by the nature of mobile capture components (e.g. cameras on mobile phones, tablets, etc.). Where conventional scanners are capable of faithfully representing the physical document in a digital image, critically maintaining aspect ratio, dimensions, and shape of the physical document in the digital image, mobile capture components are frequently incapable of producing such results.
Specifically, images of documents captured by a camera present a new line of processing issues not encountered when dealing with images captured by a scanner. This is in part due to the inherent differences in the way the document image is acquired, as well as the way the devices are constructed. The way that some scanners work is to use a transport mechanism that creates a relative movement between paper and a linear array of sensors. These sensors create pixel values of the document as it moves by, and the sequence of these captured pixel values forms an image. Accordingly, there is generally a horizontal or vertical consistency up to the noise in the sensor itself, and it is the same sensor that provides all the pixels in the line.
In contrast, cameras have many more sensors in a nonlinear array, e.g., typically arranged in a rectangle. Thus, all of these individual sensors are independent, and render image data that typically lacks horizontal or vertical consistency. In addition, cameras introduce a projective effect that is a function of the angle at which the picture is taken. For example, with a linear array like in a scanner, even if the transport of the paper is not perfectly orthogonal to the alignment of sensors and some skew is introduced, there is no projective effect like in a camera. Additionally, with camera capture, nonlinear distortions may be introduced because of the camera optics.
Conventional image processing algorithms designed to detect documents in images captured using traditional flat-bed and/or paper feed scanners may also utilize information derived from page detection to attempt to classify detected documents as members of a particular document class. However, due to the unique challenges introduced by virtue of capturing digital images using cameras of mobile devices, these conventional classification algorithms perform inadequately and are incapable of robustly classifying documents in such digital images.
Moreover, even when documents can be properly classified, the hardware limitations of current mobile devices make performing classification using the mobile device prohibitively expensive from a computational efficiency standpoint.
In view of the challenges presented above, it would be beneficial to provide an image capture and processing algorithm and applications thereof that compensate for and/or correct problems associated with image capture, processing and classification using a mobile device, while maintaining a low computational cost via efficient processing methods.
Moreover, it would be a further improvement in the field to provide object classification systems, methods and computer program products capable of robustly assigning objects to a particular class of objects and utilize information known about members of the class to further address and overcome unique challenges inherent to processing images captured using a camera of a mobile device.
In one embodiment, a system includes: a processor; and logic in and/or executable by the processor to cause the processor to: generate a first feature vector based on a digital image captured by a mobile device; compare the first feature vector to a plurality of reference feature matrices; classify an object depicted in the digital image as a member of a particular object class based at least in part on the comparison; determine one or more object features of the object based at least in part on the particular object class; and detect one or more additional objects belonging to the particular object class based on the determined object feature(s). The one or more additional objects are depicted either in the digital image or another digital image received by the mobile device.
In another embodiment, a computer program product includes: a computer readable storage medium having program code embodied therewith. The program code is readable/executable by a processor to: generate a first feature vector based on a digital image captured by a mobile device; compare the first feature vector to a plurality of reference feature matrices; classify an object depicted in the digital image as a member of a particular object class based at least in part on the comparison; determine one or more object features of the object based at least in part on the particular object class; and detect one or more additional objects belonging to the particular object class based on the determined object feature(s). The one or more additional objects are depicted either in the digital image or another digital image received by the mobile device.
Other aspects and embodiments of the inventive concepts presented above will become clear from carefully reviewing the following detailed descriptions in light of the drawings.
The following description is made for the purpose of illustrating the general principles of the present invention and is not meant to limit the inventive concepts claimed herein. Further, particular features described herein can be used in combination with other described features in each of the various possible combinations and permutations.
Unless otherwise specifically defined herein, all terms are to be given their broadest possible interpretation including meanings implied from the specification as well as meanings understood by those skilled in the art and/or as defined in dictionaries, treatises, etc.
It must also be noted that, as used in the specification and the appended claims, the singular forms “a,” “an” and “the” include plural referents unless otherwise specified.
The present application refers to image processing of images (e.g. pictures, figures, graphical schematics, single frames of movies, videos, films, clips, etc.) captured by cameras, especially cameras of mobile devices. As understood herein, a mobile device is any device capable of receiving data without having power supplied via a physical connection (e.g. wire, cord, cable, etc.) and capable of receiving data without a physical data connection (e.g. wire, cord, cable, etc.). Mobile devices within the scope of the present disclosures include exemplary devices such as a mobile telephone, smartphone, tablet, personal digital assistant, iPod®, iPad®, BLACKBERRY® device, etc.
However, as it will become apparent from the descriptions of various functionalities, the presently disclosed mobile image processing algorithms can be applied, sometimes with certain modifications, to images coming from scanners and multifunction peripherals (MFPs). Similarly, images processed using the presently disclosed processing algorithms may be further processed using conventional scanner processing algorithms, in some approaches.
Of course, the various embodiments set forth herein may be implemented utilizing hardware, software, or any desired combination thereof. For that matter, any type of logic may be utilized which is capable of implementing the various functionality set forth herein.
One benefit of using a mobile device is that with a data plan, image processing and information processing based on captured images can be done in a much more convenient, streamlined and integrated way than previous methods that relied on the presence of a scanner. However, the use of mobile devices as document capture and/or processing devices has heretofore been considered unfeasible for a variety of reasons.
In one approach, an image may be captured by a camera of a mobile device. The term “camera” should be broadly interpreted to include any type of device capable of capturing an image of a physical object external to the device, such as a piece of paper. The term “camera” does not encompass a peripheral scanner or multifunction device. Any type of camera may be used. Preferred embodiments may use cameras having a higher resolution, e.g. 8 MP or more, ideally 12 MP or more. The image may be captured in color, grayscale, black and white, or with any other known optical effect. The term “image” as referred to herein is meant to encompass any type of data corresponding to the output of the camera, including raw data, processed data, etc.
General Embodiments
In one general embodiment a method includes: receiving a digital image captured by a mobile device; and using a processor of the mobile device: generating a first representation of the digital image, the first representation being characterized by a reduced resolution; generating a first feature vector based on the first representation; comparing the first feature vector to a plurality of reference feature matrices; and classifying an object depicted in the digital image as a member of a particular object class based at least in part on the comparing.
In another general embodiment, a method includes: generating a first feature vector based on a digital image captured by a mobile device; comparing the first feature vector to a plurality of reference feature matrices; classifying an object depicted in the digital image as a member of a particular object class based at least in part on the comparing; and determining one or more object features of the object based at least in part on the particular object class; and performing at least one processing operation using a processor of a mobile device, the at least one processing operation selected from a group consisting of: detecting the object depicted in the digital image based at least in part on the one or more object features; rectangularizing the object depicted in the digital image based at least in part on the one or more object features; cropping the digital image based at least in part on the one or more object features; and binarizing the digital image based at least in part on the one or more object features.
In still another general embodiment, a system includes a processor; and logic in and/or executable by the processor to cause the processor to: generate a first representation of a digital image captured by a mobile device; generate a first feature vector based on the first representation; compare the first feature vector to a plurality of reference feature matrices; and classify an object depicted in the digital image as a member of a particular object class based at least in part on the comparison.
In still yet another general embodiment, a computer program product includes a computer readable storage medium having program code embodied therewith, the program code readable/executable by a processor to: generate a first representation of a digital image captured by a mobile device; generate a first feature vector based on the first representation; compare the first feature vector to a plurality of reference feature matrices; and classify an object depicted in the digital image as a member of a particular object class based at least in part on the comparison.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as “logic,” “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, processor, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave, an electrical connection having one or more wires, an optical fiber, etc. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In use, the gateway 101 serves as an entrance point from the remote networks 102 to the proximate network 108. As such, the gateway 101 may function as a router, which is capable of directing a given packet of data that arrives at the gateway 101, and a switch, which furnishes the actual path in and out of the gateway 101 for a given packet.
Further included is at least one data server 114 coupled to the proximate network 108, and which is accessible from the remote networks 102 via the gateway 101. It should be noted that the data server(s) 114 may include any type of computing device/groupware. Coupled to each data server 114 is a plurality of user devices 116. Such user devices 116 may include a desktop computer, lap-top computer, hand-held computer, printer or any other type of logic. It should be noted that a user device 111 may also be directly coupled to any of the networks, in one embodiment.
A peripheral 120 or series of peripherals 120, e.g., facsimile machines, printers, networked and/or local storage units or systems, etc., may be coupled to one or more of the networks 104, 106, 108. It should be noted that databases and/or additional components may be utilized with, or integrated into, any type of network element coupled to the networks 104, 106, 108. In the context of the present description, a network element may refer to any component of a network.
According to some approaches, methods and systems described herein may be implemented with and/or on virtual systems and/or systems which emulate one or more other systems, such as a UNIX system which emulates an IBM z/OS environment, a UNIX system which virtually hosts a MICROSOFT WINDOWS environment, a MICROSOFT WINDOWS system which emulates an IBM z/OS environment, etc. This virtualization and/or emulation may be enhanced through the use of VMWARE software, in some embodiments.
In more approaches, one or more networks 104, 106, 108, may represent a cluster of systems commonly referred to as a “cloud.” In cloud computing, shared resources, such as processing power, peripherals, software, data, servers, etc., are provided to any system in the cloud in an on-demand relationship, thereby allowing access and distribution of services across many computing systems. Cloud computing typically involves an Internet connection between the systems operating in the cloud, but other techniques of connecting the systems may also be used.
The workstation shown in
The workstation may have resident thereon an operating system such as the Microsoft Windows® Operating System (OS), a MAC OS, a UNIX OS, etc. It will be appreciated that a preferred embodiment may also be implemented on platforms and operating systems other than those mentioned. A preferred embodiment may be written using JAVA, XML, C, and/or C++ language, or other programming languages, along with an object oriented programming methodology. Object oriented programming (OOP), which has become increasingly used to develop complex applications, may be used.
An application may be installed on the mobile device, e.g., stored in a nonvolatile memory of the device. In one approach, the application includes instructions to perform processing of an image on the mobile device. In another approach, the application includes instructions to send the image to a remote server such as a network server. In yet another approach, the application may include instructions to decide whether to perform some or all processing on the mobile device and/or send the image to the remote site.
In various embodiments, the presently disclosed methods, systems and/or computer program products may utilize and/or include any of the functionalities disclosed in related U.S. patent application Ser. No. 13/740,123, filed Jan. 11, 2013. For example, digital images suitable for processing according to the presently disclosed algorithms may be subjected to any image processing operations disclosed in the aforementioned patent application, such as page detection, rectangularization, detection of uneven illumination, illumination normalization, resolution estimation, blur detection, etc.
In more approaches, the presently disclosed methods, systems, and/or computer program products may be utilized with, implemented in, and/or include one or more user interfaces configured to facilitate performing any functionality disclosed herein and/or in the aforementioned related patent application, such as an image processing mobile application, a case management application, and/or a classification application, in multiple embodiments.
In still more approaches, the presently disclosed systems, methods and/or computer program products may be advantageously applied to one or more of the use methodologies and/or scenarios disclosed in the aforementioned related patent application, among others that would be appreciated by one having ordinary skill in the art upon reading these descriptions.
It will further be appreciated that embodiments presented herein may be provided in the form of a service deployed on behalf of a customer to offer service on demand.
Document Classification
In accordance with one inventive embodiment commensurate in scope with the present disclosures, as shown in
In operation 502, a digital image captured by a mobile device is received.
In one embodiment the digital image may be characterized by a native resolution. As understood herein, a “native resolution” may be an original, native resolution of the image as originally captured, but also may be a resolution of the digital image after performing some pre-classification processing such as any of the image processing operations described above and in copending U.S. patent application Ser. No. 13/740,123, filed Jan. 11, 2013, a virtual re-scan (VRS) processing as disclosed in related U.S. Pat. No. 6,370,277, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions. In one embodiment, the native resolution is approximately 500 pixels by 600 pixels (i.e. a 500×600 digital image) for a digital image of a driver license subjected to processing by VRS before performing classification. Moreover, the digital image may be characterized as a color image in some approaches, and in still more approaches may be a cropped-color image, i.e. a color image depicting substantially only the object to be classified, and not depicting image background.
In operation 504, a first representation of the digital image is generated using a processor of the mobile device. The first representation may be characterized by a reduced resolution, in one approach. As understood herein, a “reduced resolution” may be any resolution less than the native resolution of the digital image, and more particularly any resolution suitable for subsequent analysis of the first representation according to the principles set forth herein.
In preferred embodiments, the reduced resolution is sufficiently low to minimize processing overhead and maximize computational efficiency and robustness of performing the algorithm on the respective mobile device, host device and/or server platform. For example, in one approach the first representation is characterized by a resolution of about 25 pixels by 25 pixels, which has been experimentally determined to be a particularly efficient and robust reduced resolution for processing of relatively small documents, such as business cards, driver licenses, receipts, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
Of course, in other embodiments, different resolutions may be employed without departing from the scope of the present disclosure. For example, classification of larger documents or objects may benefit from utilizing a higher resolution such as 50 pixels by 50 pixels, 100 pixels by 100 pixels, etc. to better represent the larger document or object for robust classification and maximum computational efficiency. The resolution utilized may or may not have the same number of pixels in each dimension. Moreover, the most desirable resolution for classifying various objects within a broad range of object classes may be determined experimentally according to a user's preferred balance between computational efficiency and classification robustness. In still more embodiments, any resolution may be employed, and preferably the resolution is characterized by comprising between 1 pixel and about 1000 pixels in a first dimension, and between 1 and about 1000 pixels in a second dimension.
One exemplary embodiment of inputs, outputs and/or results of a process flow for generating the first representation will now be presented with particular reference to
As shown in
In one general embodiment, a first representation may be generated by dividing a digital image R (having a resolution of x_R pixels by y_R pixels) into S_x horizontal sections and S_y vertical sections, and thus may be characterized by a reduced resolution r of S_x pixels by S_y pixels. Thus, generating the first representation essentially includes generating a less-granular representation of the digital image.
For example, in one approach the digital image 300 is divided into S sections, each section 304 corresponding to one portion of an s-by-s grid 302. Generating the first representation involves generating an s-pixel-by-s-pixel first representation 310, where each pixel 312 in the first representation 310 corresponds to one of the S sections 304 of the digital image, and wherein each pixel 312 is located in a position of the first representation 310 corresponding to the location of the corresponding section 304 in the digital image, i.e. the upper-leftmost pixel 312 in the first representation corresponds to the upper-leftmost section 304 in the digital image, etc.
Of course, other reduced resolutions may be employed for the first representation, ideally but not necessarily according to limitations and/or features of a mobile device, host device, and or server platform being utilized to carry out the processing, the characteristics of the digital image (resolution, illumination, presence of blur, etc.) and/or characteristics of the object which is to be detected and/or classified (contrast with background, presence of text or other symbols, closeness of fit to a general template, etc.) as would be understood by those having ordinary skill in the art upon reading the present descriptions.
In some approaches, generating the first representation may include one or more alternative and/or additional suboperations, such as dividing the digital image into a plurality of sections. The digital image may be divided into a plurality of sections in any suitable manner, and in one embodiment the digital image is divided into a plurality of rectangular sections. Of course, sections may be characterized by any shape, and in alternative approaches the plurality of sections may or may not represent the entire digital image, may represent an oversampling of some regions of the image, or may represent a single sampling of each pixel depicted in the digital image. In a preferred embodiment, as discussed above regarding
In further approaches, generating the first representation may also include determining, for each section of the digital image, at least one characteristic value, where each characteristic value corresponds to one or more features descriptive of the section. Within the scope of the present disclosures, any feature that may be expressed as a numerical value is suitable for use in generating the first representation, e.g. an average brightness or intensity (0-255) across each pixel in the section, an average value (0-255) of each color channel of each pixel in the section, such as an average red-channel value, an average green-channel value, and an average blue-channel value for a red-green-blue (RGB) image, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
With continuing reference to
Of course, the pixels 312 comprising the first representation 310 may be represented using any characteristic value or combination of characteristic values without departing from the scope of the presently disclosed classification methods. Further, characteristic values may be computed and/or determined using any suitable means, such as by random selection of a characteristic value from a distribution of values, by a statistical means or measure, such as an average value, a spread of values, a minimum value, a maximum value, a standard deviation of values, a variance of values, or by any other means that would be known to a skilled artisan upon reading the instant descriptions.
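By way of illustration only, the following Python sketch shows one possible way to compute such a first representation by dividing a digital image into an s-by-s grid and using the per-channel average of each section as its characteristic value; the function name, the use of NumPy, and the choice of a simple mean are assumptions made here for clarity rather than features of any particular embodiment.

```python
import numpy as np

def generate_first_representation(image: np.ndarray, s: int = 25) -> np.ndarray:
    """Divide an H x W x C image into an s-by-s grid of sections and represent
    each section by its average value per color channel (one possible
    characteristic value; a minimum, maximum, spread, etc. could also be used)."""
    h, w = image.shape[:2]
    row_groups = np.array_split(np.arange(h), s)   # horizontal bands of rows
    col_groups = np.array_split(np.arange(w), s)   # vertical bands of columns
    representation = np.zeros((s, s, image.shape[2]), dtype=np.float32)
    for i, rows in enumerate(row_groups):
        for j, cols in enumerate(col_groups):
            section = image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
            representation[i, j] = section.mean(axis=(0, 1))  # per-channel average
    return representation
```

Flattening such a 25-by-25 representation (e.g. via representation.reshape(-1)) yields the kind of feature vector discussed in operation 506 below, and with a single characteristic value per section corresponds to the 625-dimensional comparison described later in this section.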
In operation 506, a first feature vector is generated based on the first representation.
The first feature vector and/or reference feature matrices may include a plurality of feature vectors, where each feature vector corresponds to a characteristic of a corresponding object class, e.g. a characteristic minimum, maximum, average, etc. brightness in one or more color channels at a particular location (pixel or section), presence of a particular symbol or other reference object at a particular location, dimensions, aspect ratio, pixel density (especially black pixel density, but also pixel density of any other color channel), etc.
As would be understood by one having ordinary skill in the art upon reading the present descriptions, feature vectors suitable for inclusion in first feature vector and/or reference feature matrices comprise any type, number and/or length of feature vectors, such as described in U.S. patent application Ser. No. 12/042,774, filed Mar. 5, 2008; and Ser. No. 12/368,685, filed Feb. 10, 2009 and/or U.S. Pat. No. 7,761,391, granted Jul. 20, 2010 (U.S. patent application Ser. No. 11/752,364, filed May 13, 2007).
In operation 508, the first feature vector is compared to a plurality of reference feature matrices.
The comparing operation 508 may be performed according to any suitable vector matrix comparison, such as described in U.S. patent application Ser. Nos. 12/042,774, filed Mar. 5, 2008; and Ser. No. 12/368,685, filed Feb. 10, 2009 and U.S. Pat. No. 7,761,391, granted Jul. 20, 2010 (U.S. patent application Ser. No. 11/752,364, filed May 13, 2007).
Thus, in such approaches the comparing may include an N-dimensional feature space comparison. In at least one approach, N is greater than 50, but of course, N may be any value sufficiently large to ensure robust classification of objects into a single, correct object class, which those having ordinary skill in the art reading the present descriptions will appreciate to vary according to many factors, such as the complexity of the object, the similarity or distinctness between object classes, the number of object classes, etc.
As understood herein, “objects” include any tangible thing represented in an image and which may be described according to at least one unique characteristic such as color, size, dimensions, shape, texture, or representative feature(s) as would be understood by one having ordinary skill in the art upon reading the present descriptions. Additionally, objects may be described and/or classified according to at least one unique combination of such characteristics. For example, in various embodiments objects may include but are in no way limited to persons, animals, vehicles, buildings, landmarks, documents, furniture, plants, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
For example, in one embodiment where attempting to classify an object depicted in a digital image as one of only a small number of object classes (e.g. 3-5 object classes), each object class being characterized by a significant number of starkly distinguishing features or feature vectors (e.g. each object class corresponding to an object or object(s) characterized by very different size, shape, color profile and/or color scheme and easily distinguishable reference symbols positioned in unique locations on each object class, etc.), a relatively low value of N may be sufficiently large to ensure robust classification.
On the other hand, where attempting to classify an object depicted in a digital image as one of a large number of object classes (e.g. 30 or more object classes), and each object class is characterized by a significant number of similar features or feature vectors, and only a few distinguishing features or feature vectors, a relatively high value of N may be preferable to ensure robust classification. Similarly, the value of N is preferably chosen or determined such that the classification is not only robust, but also computationally efficient; i.e. the classification process(es) introduce only minimal processing overhead to the device(s) or system(s) utilized to perform the classification algorithm.
The value of N that achieves the desired balance between classification robustness and processing overhead will depend on many factors such as described above and others that would be appreciated by one having ordinary skill in the art upon reading the present descriptions. Moreover, determining the appropriate value of N to achieve the desired balance may be accomplished using any known method or equivalent thereof as understood by a skilled artisan upon reading the instant disclosures.
In a concrete implementation, directed to classifying driver licenses according to state and distinguishing driver licenses from myriad other document types, it was determined that a 625-dimensional comparison (N=625) provided a preferably robust classification without introducing unsatisfactorily high overhead to processing performed using a variety of current-generation mobile devices.
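The feature-space comparison itself is performed according to the techniques of the patents and applications incorporated by reference above; purely as a hedged, minimal sketch, a nearest-reference-vector comparison in the N-dimensional feature space could look like the following, where the dictionary layout of the reference feature matrices and the Euclidean distance are assumptions made for illustration.

```python
import numpy as np

def compare_feature_vector(feature_vector: np.ndarray, reference_matrices: dict) -> dict:
    """Compare an N-dimensional feature vector to reference feature matrices.

    `reference_matrices` maps each object class name to a matrix whose rows are
    reference feature vectors for that class. Returns a per-class score where
    higher values indicate greater similarity."""
    scores = {}
    for object_class, matrix in reference_matrices.items():
        distances = np.linalg.norm(matrix - feature_vector, axis=1)
        scores[object_class] = -float(distances.min())  # closest reference vector wins
    return scores

def classify_object(feature_vector: np.ndarray, reference_matrices: dict) -> str:
    """Assign the object to the class whose reference feature matrix is most similar."""
    scores = compare_feature_vector(feature_vector, reference_matrices)
    return max(scores, key=scores.get)
```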
In operation 510, an object depicted in the digital image is classified as a member of a particular object class based at least in part on the comparing operation 508. More specifically, the comparing operation 508 may involve evaluating each feature vector of each reference feature matrix, or alternatively evaluating a plurality of feature matrices for objects belonging to a particular object class, and identifying a hyper-plane in the N-dimensional feature space that separates the feature vectors of one reference feature matrix from the feature vectors of other reference feature matrices. In this manner, the classification algorithm defines concrete hyper-plane boundaries between object classes, and may assign an unknown object to a particular object class based on similarity of feature vectors to the particular object class and/or dissimilarity to other reference feature matrix profiles.
In the simplest example of such feature-space discrimination, imagine a two-dimensional feature space with one feature plotted along the ordinate axis and another feature plotted along the abscissa. Objects belonging to one particular class may be characterized by feature vectors having a distribution of values clustered in the lower-right portion of the feature space, while another class of objects may be characterized by feature vectors exhibiting a distribution of values clustered in the upper-left portion of the feature space, and the classification algorithm may distinguish between the two by identifying a line between the clusters that separates the feature space into two classes: "upper-left" and "lower-right." Of course, as the number of dimensions considered in the feature space increases, the complexity of the classification grows rapidly, but the additional dimensions also provide significant improvements to classification robustness, as will be appreciated by one having ordinary skill in the art upon reading the present descriptions.
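To make the two-dimensional example concrete, the sketch below trains a linear classifier that learns exactly such a separating line (a hyper-plane in higher dimensions) between a cluster in the lower-right of the feature space and a cluster in the upper-left; the use of scikit-learn's LinearSVC and the synthetic clusters are illustrative assumptions, not the discriminator required by the present descriptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# one class clustered in the lower-right of the feature space, the other in the upper-left
lower_right = rng.normal(loc=[8.0, 2.0], scale=0.5, size=(50, 2))
upper_left = rng.normal(loc=[2.0, 8.0], scale=0.5, size=(50, 2))
X = np.vstack([lower_right, upper_left])
y = np.array([0] * 50 + [1] * 50)

classifier = LinearSVC().fit(X, y)               # learns a separating line (hyper-plane)
print(classifier.coef_, classifier.intercept_)   # parameters of the decision boundary
print(classifier.predict([[7.5, 1.5]]))          # lands on the "lower-right" side, i.e. class 0
```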
Additional Processing
In some approaches, classification according to embodiments of the presently disclosed methods may include one or more additional and/or alternative features and/or operations, such as described below.
In one embodiment, classification such as described above may additionally and/or alternatively include assigning a confidence value to a plurality of putative object classes based on the comparing operation (e.g. as performed in operation 508 of method 500). Moreover, the presently disclosed classification methods, systems and/or computer program products may additionally and/or alternatively determine a location of the mobile device, receive location information indicating the location of the mobile device, etc., and based on the determined location, adjust a confidence value of a classification result corresponding to a particular location. For example, if a mobile device is determined to be located in a particular state (e.g. Maryland) based on a GPS signal, then during classification, a confidence value may be adjusted for any object class corresponding to the particular state (e.g. Maryland Driver License, Maryland Department of Motor Vehicle Title/Registration Form, Maryland Traffic Violation Ticket, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions).
Confidence values may be adjusted in any suitable manner, such as increasing a confidence value for any object class corresponding to a particular location, decreasing a confidence value for any object class not corresponding to a particular location, normalizing confidence value(s) based on correspondence/non-correspondence to a particular location, etc. as would be understood by the skilled artisan reading the present disclosures.
The mobile device location may be determined using any known method, and employing hardware components of the mobile device or any other number of devices in communication with the mobile device, such as one or more satellites, wireless communication networks, servers, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
For example, the mobile device location may be determined based in whole or in part on one or more of a global-positioning system (GPS) signal, a connection to a wireless communication network, a database of known locations (e.g. a contact database, a database associated with a navigational tool such as Google Maps, etc.), a social media tool (e.g. a “check-in” feature such as provided via Facebook, Google Plus, Yelp, etc.), an IP address, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
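A minimal sketch of such location-based confidence adjustment follows, assuming the classification confidences are held in a dictionary and that a mapping from object classes to associated states is available; both structures, and the simple multiplicative boost with renormalization, are assumptions for illustration.

```python
def adjust_confidences_by_location(confidences: dict, class_states: dict,
                                   device_state: str, boost: float = 1.2) -> dict:
    """Increase the confidence of object classes associated with the state in which
    the mobile device is located (e.g. per a GPS fix), then renormalize."""
    adjusted = {
        object_class: conf * boost if class_states.get(object_class) == device_state else conf
        for object_class, conf in confidences.items()
    }
    total = sum(adjusted.values())
    return {object_class: conf / total for object_class, conf in adjusted.items()}

confidences = {"Maryland Driver License": 0.40, "Virginia Driver License": 0.38, "Invoice": 0.22}
class_states = {"Maryland Driver License": "Maryland", "Virginia Driver License": "Virginia"}
print(adjust_confidences_by_location(confidences, class_states, "Maryland"))
```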
In more embodiments, classification additionally and/or alternatively includes outputting an indication of the particular object class to a display of the mobile device; and receiving user input via the display of the mobile device in response to outputting the indication. While the user input may be of any known type and relate to any of the herein described features and/or operations, preferably user input relates to confirming, negating or modifying the particular object class to which the object was assigned by the classification algorithm.
The indication may be output to the display in any suitable manner, such as via a push notification, text message, display window on the display of the mobile device, email, etc. as would be understood by one having ordinary skill in the art. Moreover, the user input may take any form and be received in any known manner, such as by detecting a user tapping or pressing on a portion of the mobile device display (e.g. by detecting changes in resistance or capacitance on a touch-screen device), by detecting user interaction with one or more buttons or switches of the mobile device, etc.
In one embodiment, classification further includes determining one or more object features of a classified object based at least in part on the particular object class. Thus, classification may include determining such object features using any suitable mechanism or approach, such as receiving an object class identification code and using the object class identification code as a query and/or to perform a lookup in a database of object features organized according to object class and keyed, hashed, indexed, etc. to the object class identification code.
Object features within the scope of the present disclosures may include any feature capable of being recognized in a digital image, and preferably any feature capable of being expressed in a numerical format (whether scalar, vector, or otherwise), e.g. the location of a subregion containing reference object(s) (especially in one or more object orientation states, such as landscape, portrait, etc.), an object color profile or color scheme, an object subregion color profile or color scheme, the location of text, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
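One possible realization of such a lookup is sketched below, using an in-memory dictionary keyed by a hypothetical object class identification code; a deployed system could equally perform the same lookup against a database, and every code, field name, and value shown here is invented solely for illustration.

```python
# Hypothetical per-class object features keyed by an object class identification code.
OBJECT_FEATURES_BY_CLASS = {
    "US_DL_MD": {
        "dimensions_inches": (3.375, 2.125),            # known physical width x height
        "aspect_ratio": 3.375 / 2.125,
        "photo_region": (0.03, 0.25, 0.30, 0.95),       # normalized (x0, y0, x1, y1)
        "text_regions": [(0.35, 0.20, 0.98, 0.60)],
        "color_scheme": "light background, dark text",
    },
}

def lookup_object_features(class_code: str) -> dict:
    """Return the object features known for the class to which an object was assigned."""
    return OBJECT_FEATURES_BY_CLASS.get(class_code, {})
```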
In accordance with another inventive embodiment commensurate in scope with the present disclosures, as shown in
In operation 602, a first feature vector is generated based on a digital image captured by a mobile device.
In operation 604, the first feature vector is compared to a plurality of reference feature matrices.
In operation 606, an object depicted in the digital image is classified as a member of a particular object class based at least in part on the comparing (e.g. the comparing performed in operation 604).
In operation 608, one or more object features of the object are determined based at least in part on the particular object class.
In operation 610, a processing operation is performed. The processing operation includes performing one or more of the following subprocesses: detecting the object depicted in the digital image based at least in part on the one or more object features; rectangularizing the object depicted in the digital image based at least in part on the one or more object features; cropping the digital image based at least in part on the one or more object features; and binarizing the digital image based at least in part on the one or more object features.
As will be further appreciated by one having ordinary skill in the art upon reading the above descriptions of document classification, in various embodiments it may be advantageous to perform one or more additional processing operations, such as the subprocesses described above with reference to operation 610, on a digital image based at least in part on object features determined via document classification.
For example, after classifying an object depicted in a digital image, such as a document, it may be possible to refine other processing parameters, functions, etc. and/or utilize information known to be true for the class of objects to which the classified object belongs, such as object shape, size, dimensions, location of regions of interest on and/or in the object, such as regions depicting one or more symbols, patterns, text, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions.
Regarding performing page detection based on classification, it may be advantageous in some approaches to utilize information known about an object belonging to a particular object class in order to improve object detection capabilities. For example, and as would be appreciated by one having ordinary skill in the art, it may be less computationally expensive, and/or may result in a higher-confidence or higher-quality result to narrow a set of characteristics that may potentially identify an object in a digital image to one or a few discrete, known characteristics, and simply search for those characteristic(s).
Exemplary characteristics that may be utilized to improve object detection may include characteristics such as object dimensions, object shape, object color, one or more reference features of the object class (such as reference symbols positioned in a known location of a document).
In another approach, object detection may be improved based on the one or more known characteristics by facilitating an object detection algorithm distinguishing regions of the digital image depicting the object from regions of the digital image depicting other objects, image background, artifacts, etc. as would be understood by one having ordinary skill in the art upon reading the present descriptions. For example, if objects belonging to a particular object class are known to exhibit a particular color profile or scheme, it may be simpler and/or more reliable to attempt detecting the particular color profile or scheme within the digital image rather than detecting a transition from one color profile or scheme (e.g. a background color profile or scheme) to another color profile or scheme (e.g. the object color profile or scheme), especially if the two color profiles or schemes are not characterized by sharply contrasting features.
Regarding performing rectangularization based on classification, it may be advantageous in some approaches to utilize information known about an object belonging to a particular object class in order to improve object rectangularization capabilities. For example, and as would be appreciated by one having ordinary skill in the art, it may be less computationally expensive, and/or may result in a higher-confidence or higher-quality result to transform a digital representation of an object from a native appearance to a true configuration based on a set of known object characteristics that definitively represent the true object configuration, rather than attempting to estimate the true object configuration from the native appearance and project the native appearance onto an estimated object configuration.
In one approach, the classification may identify known dimensions of the object, and based on these known dimensions the digital image may be rectangularized to transform a distorted representation of the object in the digital image into an undistorted representation (e.g. by removing projective effects introduced in the process of capturing the image using a camera of a mobile device rather than a traditional flat-bed scanner, paper-feed scanner or other similar multifunction peripheral (MFP)).
Regarding performing cropping based on classification, and similar to the principles discussed above regarding rectangularization, it may be advantageous in some approaches to utilize information known about an object belonging to a particular object class to improve cropping of digital images depicting the object such that all or significantly all of the cropped image depicts the object and not image background (or other objects, artifacts, etc. depicted in the image).
As a simple example, it may be advantageous to determine an object's known size, dimensions, configuration, etc. according to the object classification and utilize this information to identify a region of the image depicting the object from regions of the image not depicting the object, and define crop lines surrounding the object to remove the regions of the image not depicting the object.
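As a hedged sketch of this idea, assuming the object's bounding box has already been located and that the object class supplies a known aspect ratio, crop lines could be defined as follows; the NumPy-based helper and its margin handling are illustrative assumptions only.

```python
import numpy as np

def crop_to_object(image: np.ndarray, bbox, class_aspect_ratio: float = None,
                   margin: float = 0.01) -> np.ndarray:
    """Crop `image` to the detected object bounding box (x0, y0, x1, y1 in pixels),
    optionally expanding the box to match the aspect ratio known for the object class."""
    x0, y0, x1, y1 = bbox
    if class_aspect_ratio:
        width, height = x1 - x0, y1 - y0
        if width / height < class_aspect_ratio:
            width = height * class_aspect_ratio      # widen to the known aspect ratio
        else:
            height = width / class_aspect_ratio      # or heighten instead
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        x0, x1 = cx - width / 2.0, cx + width / 2.0
        y0, y1 = cy - height / 2.0, cy + height / 2.0
    h, w = image.shape[:2]
    pad_x, pad_y = margin * w, margin * h
    x0, x1 = max(0, int(x0 - pad_x)), min(w, int(x1 + pad_x))
    y0, y1 = max(0, int(y0 - pad_y)), min(h, int(y1 + pad_y))
    return image[y0:y1, x0:x1]
```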
Regarding performing binarization based on classification, the presently disclosed classification algorithms provide several useful improvements to mobile image processing. Several exemplary embodiments of such improvements will now be described with reference to
For example, binarization algorithms generally transform a multi-tonal digital image (e.g. grayscale, color, or any other image such as image 400 exhibiting more than two tones) into a bitonal image, i.e. an image exhibiting only two tones (typically white and black). Those having ordinary skill in the art will appreciate that attempting to binarize a digital image depicting an object with regions exhibiting two or more distinct color profiles and/or color schemes (e.g. a region depicting a color photograph 402 as compared to a region depicting a black/white text region 404, a color-text region 406, a symbol 408 such as a reference object, watermark, etc., an object background region 410, etc.) may produce an unsuccessful or unsatisfactory result.
As one explanation, these difficulties may be at least partially due to the differences between the color profiles, schemes, etc., which counter-influence a single binarization transform. Thus, providing an ability to distinguish each of these regions having disparate color schemes or profiles and define separate binarization parameters for each may greatly improve the quality of the resulting bitonal image as a whole and with particular respect to the quality of the transformation in each respective region.
According to one exemplary embodiment shown in
Binarization parameters may include any parameter of any suitable binarization process as would be appreciated by those having ordinary skill in the art reading the present descriptions, and binarization parameters may be adjusted according to any suitable methodology. For example, with respect to adjusting binarization parameters based on an object class color profile and/or color scheme, binarization parameters may be adjusted to over- and/or under-emphasize a contribution of one or more color channels, intensities, etc. in accordance with the object class color profile/scheme (such as under-emphasizing the red channel for an object class color profile/scheme relatively saturated by red hue(s), etc.).
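The notion of separate binarization parameters per region can be sketched as shown below, where each region of interest (taken from the class's known layout) carries its own threshold and simple global thresholding stands in for whatever binarization transform is actually employed; the function and its inputs are assumptions for illustration.

```python
import numpy as np

def binarize_by_region(gray: np.ndarray, regions) -> np.ndarray:
    """Binarize a grayscale image using a separate threshold for each region.

    `regions` is a list of ((y0, y1, x0, x1), threshold) pairs derived from the
    object class layout; pixels outside all listed regions use a global threshold."""
    bitonal = (gray > gray.mean()).astype(np.uint8) * 255       # global fallback
    for (y0, y1, x0, x1), threshold in regions:
        section = gray[y0:y1, x0:x1]
        bitonal[y0:y1, x0:x1] = (section > threshold).astype(np.uint8) * 255
    return bitonal
```

Excluding a region from binarization altogether, as discussed further below, could be handled in the same structure by simply copying that region through unchanged.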
Similarly, in other embodiments such as particularly shown in
For example, as shown in
Extending the principle shown in
Similarly, it may be advantageous to simply exclude only a portion of an image from binarization, whether or not adjusting any parameters. For example, with reference to
In still more embodiments, it may be advantageous to perform optical character recognition (OCR) based at least in part on the classification and/or result of classification. Specifically, it may be advantageous to determine information about the location, format, and/or content of text depicted in objects belonging to a particular class, and modify predictions estimated by traditional OCR methods based on an expected text location, format and/or content. For example, in one embodiment where an OCR prediction estimates that text in a region corresponding to a “date” field of a document reads “Jan, 14, 201l”, the presently disclosed algorithms may determine that the expected format for this text follows a format such as “[Abbreviated Month][.] [##][,] [####]”, and may correct the erroneous OCR prediction, e.g. by converting the comma after “Jan” into a period and/or converting the letter “l” at the end of “201l” into a numerical one character. Similarly, the presently disclosed algorithms may determine that the expected format for the same text is instead “[##]/[##]/[####]”, and convert “Jan” to “01” and each comma-space character pair “, ” into a slash “/” to correct the erroneous OCR predictions.
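A minimal sketch of this kind of format-driven correction, assuming the first expected format above and simple regular-expression substitutions (real OCR post-processing would of course be more involved), might look like the following.

```python
import re

def correct_date_field(ocr_text: str) -> str:
    """Apply expected-format corrections to an OCR'd date such as "Jan, 14, 201l"."""
    corrected = ocr_text
    # the abbreviated month should be followed by a period, not a comma
    corrected = re.sub(r"^([A-Z][a-z]{2}),", r"\1.", corrected)
    # letters commonly mis-read in place of digits: 'l'/'I' -> '1', 'O' -> '0'
    corrected = re.sub(r"(?<=\d)[lI]", "1", corrected)
    corrected = re.sub(r"(?<=\d)O", "0", corrected)
    return corrected

print(correct_date_field("Jan, 14, 201l"))   # -> "Jan. 14, 2011"
```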
Of course, other methods of improving upon and/or correcting OCR predictions that would be appreciated by the skilled artisan upon reading these descriptions are also fully within the scope of the present disclosure.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of an embodiment of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
This application is related to copending U.S. patent application Ser. No. 13/740,123, filed Jan. 11, 2013; Ser. No. 12/042,774, filed Mar. 5, 2008; and Ser. No. 12/368,685, filed Feb. 10, 2009, each of which is herein incorporated by reference in its entirety. This application is also related to U.S. Pat. No. 7,761,391, granted Jul. 20, 2010 (U.S. patent application Ser. No. 11/752,364, filed May 13, 2007) and U.S. Pat. No. 6,370,277, granted Apr. 9, 2002 (U.S. patent application Ser. No. 09/206,753, filed Dec. 7, 1998), each of which is also herein incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
1660102 | Appelt et al. | Feb 1928 | A |
3069654 | Hough | Dec 1962 | A |
3696599 | Palmer et al. | Oct 1972 | A |
4558461 | Schlang | Dec 1985 | A |
4651287 | Tsao | Mar 1987 | A |
4656665 | Pennebaker | Apr 1987 | A |
4836026 | P'an et al. | Jun 1989 | A |
4903312 | Sato | Feb 1990 | A |
4992863 | Moriya | Feb 1991 | A |
5020112 | Chou | May 1991 | A |
5063604 | Weiman | Nov 1991 | A |
5101448 | Kawachiya et al. | Mar 1992 | A |
5124810 | Seto | Jun 1992 | A |
5151260 | Contursi et al. | Sep 1992 | A |
5159667 | Borrey et al. | Oct 1992 | A |
5181260 | Kurosu et al. | Jan 1993 | A |
5202934 | Miyakawa et al. | Apr 1993 | A |
5220621 | Saitoh | Jun 1993 | A |
5268967 | Jang et al. | Dec 1993 | A |
5282055 | Suzuki | Jan 1994 | A |
5293429 | Pizano et al. | Mar 1994 | A |
5313527 | Guberman et al. | May 1994 | A |
5317646 | Sang, Jr. et al. | May 1994 | A |
5321770 | Huttenlocher et al. | Jun 1994 | A |
5344132 | LeBrun et al. | Sep 1994 | A |
5353673 | Lynch | Oct 1994 | A |
5355547 | Fitjer | Oct 1994 | A |
5375197 | Kang | Dec 1994 | A |
5430810 | Saeki | Jul 1995 | A |
5467407 | Guberman et al. | Nov 1995 | A |
5473742 | Polyakov et al. | Dec 1995 | A |
5546474 | Zuniga | Aug 1996 | A |
5563723 | Beaulieu et al. | Oct 1996 | A |
5563966 | Ise et al. | Oct 1996 | A |
5586199 | Kanda et al. | Dec 1996 | A |
5594815 | Fast et al. | Jan 1997 | A |
5596655 | Lopez | Jan 1997 | A |
5602964 | Barrett | Feb 1997 | A |
5629989 | Osada | May 1997 | A |
5652663 | Zelten | Jul 1997 | A |
5668890 | Winkelman | Sep 1997 | A |
5680525 | Sakai et al. | Oct 1997 | A |
5696611 | Nishimura et al. | Dec 1997 | A |
5696805 | Gaborski et al. | Dec 1997 | A |
5699244 | Clark, Jr. et al. | Dec 1997 | A |
5717794 | Koga et al. | Feb 1998 | A |
5721940 | Luther et al. | Feb 1998 | A |
5757963 | Ozaki et al. | May 1998 | A |
5760912 | Itoh | Jun 1998 | A |
5781665 | Cullen et al. | Jul 1998 | A |
5818978 | Al-Hussein | Oct 1998 | A |
5822454 | Rangarajan | Oct 1998 | A |
5825915 | Michimoto et al. | Oct 1998 | A |
5832138 | Nakanishi et al. | Nov 1998 | A |
5839019 | Ito | Nov 1998 | A |
5848184 | Taylor et al. | Dec 1998 | A |
5857029 | Patel | Jan 1999 | A |
5867264 | Hinnrichs | Feb 1999 | A |
5923763 | Walker et al. | Jul 1999 | A |
5937084 | Crabtree et al. | Aug 1999 | A |
5953388 | Walnut et al. | Sep 1999 | A |
5956468 | Ancin | Sep 1999 | A |
5987172 | Michael | Nov 1999 | A |
6002489 | Murai et al. | Dec 1999 | A |
6005958 | Farmer et al. | Dec 1999 | A |
6005968 | Granger | Dec 1999 | A |
6009191 | Julier | Dec 1999 | A |
6009196 | Mahoney | Dec 1999 | A |
6011595 | Henderson et al. | Jan 2000 | A |
6016361 | Hongu et al. | Jan 2000 | A |
6038348 | Carley | Mar 2000 | A |
6055968 | Sasaki et al. | May 2000 | A |
6067385 | Cullen et al. | May 2000 | A |
6072916 | Suzuki | Jun 2000 | A |
6073148 | Rowe et al. | Jun 2000 | A |
6098065 | Skillen et al. | Aug 2000 | A |
6104830 | Schistad | Aug 2000 | A |
6104840 | Ejiri et al. | Aug 2000 | A |
6118544 | Rao | Sep 2000 | A |
6118552 | Suzuki et al. | Sep 2000 | A |
6154217 | Aldrich | Nov 2000 | A |
6192360 | Dumais et al. | Feb 2001 | B1 |
6215469 | Mori et al. | Apr 2001 | B1 |
6219158 | Dawe | Apr 2001 | B1 |
6219773 | Garibay, Jr. et al. | Apr 2001 | B1 |
6223223 | Kumpf et al. | Apr 2001 | B1 |
6229625 | Nakatsuka | May 2001 | B1 |
6233059 | Kodaira et al. | May 2001 | B1 |
6263122 | Simske et al. | Jul 2001 | B1 |
6292168 | Venable et al. | Sep 2001 | B1 |
6327581 | Platt | Dec 2001 | B1 |
6337925 | Cohen et al. | Jan 2002 | B1 |
6347152 | Shinagawa et al. | Feb 2002 | B1 |
6347162 | Suzuki | Feb 2002 | B1 |
6356647 | Bober et al. | Mar 2002 | B1 |
6370277 | Borrey et al. | Apr 2002 | B1 |
6385346 | Gillihan et al. | May 2002 | B1 |
6393147 | Danneels et al. | May 2002 | B2 |
6396599 | Patton et al. | May 2002 | B1 |
6408094 | Mirzaoff et al. | Jun 2002 | B1 |
6408105 | Maruo | Jun 2002 | B1 |
6424742 | Yamamoto et al. | Jul 2002 | B2 |
6426806 | Melen | Jul 2002 | B2 |
6433896 | Ueda et al. | Aug 2002 | B1 |
6456738 | Tsukasa | Sep 2002 | B1 |
6463430 | Brady et al. | Oct 2002 | B1 |
6469801 | Telle | Oct 2002 | B1 |
6473198 | Matama | Oct 2002 | B1 |
6473535 | Takaoka | Oct 2002 | B1 |
6480304 | Os et al. | Nov 2002 | B1 |
6480624 | Horie et al. | Nov 2002 | B1 |
6501855 | Zelinski | Dec 2002 | B1 |
6512848 | Wang et al. | Jan 2003 | B2 |
6522791 | Nagarajan | Feb 2003 | B2 |
6525840 | Haraguchi et al. | Feb 2003 | B1 |
6563531 | Matama | May 2003 | B1 |
6601026 | Appelt et al. | Jul 2003 | B2 |
6614930 | Agnihotri et al. | Sep 2003 | B1 |
6621595 | Fan et al. | Sep 2003 | B1 |
6628416 | Hsu et al. | Sep 2003 | B1 |
6628808 | Bach et al. | Sep 2003 | B1 |
6633857 | Tipping | Oct 2003 | B1 |
6643413 | Shum et al. | Nov 2003 | B1 |
6646765 | Barker et al. | Nov 2003 | B1 |
6658147 | Gorbatov et al. | Dec 2003 | B2 |
6665425 | Sampath et al. | Dec 2003 | B1 |
6667774 | Berman et al. | Dec 2003 | B2 |
6675159 | Lin et al. | Jan 2004 | B1 |
6701009 | Makoto et al. | Mar 2004 | B1 |
6704441 | Inagaki et al. | Mar 2004 | B1 |
6724916 | Shyu | Apr 2004 | B1 |
6729733 | Raskar et al. | May 2004 | B1 |
6732046 | Joshi | May 2004 | B1 |
6748109 | Yamaguchi | Jun 2004 | B1 |
6751349 | Matama | Jun 2004 | B2 |
6757081 | Fan et al. | Jun 2004 | B1 |
6757427 | Hongu | Jun 2004 | B1 |
6763515 | Vazquez et al. | Jul 2004 | B1 |
6765685 | Yu | Jul 2004 | B1 |
6778684 | Bollman | Aug 2004 | B1 |
6781375 | Miyazaki et al. | Aug 2004 | B2 |
6788830 | Morikawa | Sep 2004 | B1 |
6789069 | Barnhill et al. | Sep 2004 | B1 |
6801658 | Morita et al. | Oct 2004 | B2 |
6816187 | Iwai et al. | Nov 2004 | B1 |
6826311 | Wilt | Nov 2004 | B2 |
6831755 | Narushima et al. | Dec 2004 | B1 |
6839466 | Venable | Jan 2005 | B2 |
6850653 | Abe | Feb 2005 | B2 |
6873721 | Beyerer et al. | Mar 2005 | B1 |
6882983 | Furphy et al. | Apr 2005 | B2 |
6898601 | Amado et al. | May 2005 | B2 |
6901170 | Terada et al. | May 2005 | B1 |
6917438 | Yoda et al. | Jul 2005 | B1 |
6917709 | Zelinski | Jul 2005 | B2 |
6921220 | Aiyama | Jul 2005 | B2 |
6950555 | Filatov et al. | Sep 2005 | B2 |
6987534 | Seta | Jan 2006 | B1 |
6989914 | Iwaki | Jan 2006 | B2 |
6999625 | Nelson | Feb 2006 | B1 |
7006707 | Peterson | Feb 2006 | B2 |
7016549 | Utagawa | Mar 2006 | B1 |
7017108 | Wan | Mar 2006 | B1 |
7020320 | Filatov | Mar 2006 | B2 |
7023447 | Luo et al. | Apr 2006 | B2 |
7027181 | Takamori | Apr 2006 | B2 |
7038713 | Matama | May 2006 | B1 |
7042603 | Masao et al. | May 2006 | B2 |
7043080 | Dolan | May 2006 | B1 |
7054036 | Hirayama | May 2006 | B2 |
7081975 | Yoda et al. | Jul 2006 | B2 |
7082426 | Musgrove et al. | Jul 2006 | B2 |
7107285 | von Kaenel et al. | Sep 2006 | B2 |
7123292 | Seeger et al. | Oct 2006 | B1 |
7123387 | Cheng et al. | Oct 2006 | B2 |
7130471 | Bossut et al. | Oct 2006 | B2 |
7145699 | Dolan | Dec 2006 | B2 |
7167281 | Fujimoto et al. | Jan 2007 | B1 |
7168614 | Kotovich et al. | Jan 2007 | B2 |
7173732 | Matama | Feb 2007 | B2 |
7174043 | Lossev et al. | Feb 2007 | B2 |
7177049 | Karidi | Feb 2007 | B2 |
7181082 | Feng | Feb 2007 | B2 |
7184929 | Goodman | Feb 2007 | B2 |
7194471 | Nagatsuka et al. | Mar 2007 | B1 |
7197158 | Camara et al. | Mar 2007 | B2 |
7201323 | Kotovich et al. | Apr 2007 | B2 |
7209599 | Simske et al. | Apr 2007 | B2 |
7228314 | Kawamoto et al. | Jun 2007 | B2 |
7249717 | Kotovich et al. | Jul 2007 | B2 |
7251777 | Valtchev et al. | Jul 2007 | B1 |
7253836 | Suzuki et al. | Aug 2007 | B1 |
7263221 | Moriwaki | Aug 2007 | B1 |
7266768 | Ferlitsch et al. | Sep 2007 | B2 |
7286177 | Cooper | Oct 2007 | B2 |
7298897 | Dominguez et al. | Nov 2007 | B1 |
7317828 | Suzuki et al. | Jan 2008 | B2 |
7337389 | Woolf et al. | Feb 2008 | B1 |
7339585 | Verstraelen et al. | Mar 2008 | B2 |
7340376 | Goodman | Mar 2008 | B2 |
7349888 | Heidenreich et al. | Mar 2008 | B1 |
7365881 | Burns et al. | Apr 2008 | B2 |
7366705 | Zeng et al. | Apr 2008 | B2 |
7382921 | Lossev et al. | Jun 2008 | B2 |
7386527 | Harris et al. | Jun 2008 | B2 |
7392426 | Wolfe et al. | Jun 2008 | B2 |
7403008 | Blank et al. | Jul 2008 | B2 |
7403313 | Kuo | Jul 2008 | B2 |
7406183 | Emerson et al. | Jul 2008 | B2 |
7409092 | Srinivasa | Aug 2008 | B2 |
7409633 | Lerner et al. | Aug 2008 | B2 |
7416131 | Fortune et al. | Aug 2008 | B2 |
7426293 | Chien et al. | Sep 2008 | B2 |
7430059 | Rodrigues et al. | Sep 2008 | B2 |
7430066 | Hsu et al. | Sep 2008 | B2 |
7430310 | Kotovich et al. | Sep 2008 | B2 |
7447377 | Takahira | Nov 2008 | B2 |
7464066 | Zelinski et al. | Dec 2008 | B2 |
7478332 | Buttner et al. | Jan 2009 | B2 |
7487438 | Withers | Feb 2009 | B1 |
7492478 | Une | Feb 2009 | B2 |
7492943 | Li et al. | Feb 2009 | B2 |
7515313 | Cheng | Apr 2009 | B2 |
7515772 | Li et al. | Apr 2009 | B2 |
7528883 | Hsu | May 2009 | B2 |
7542931 | Black et al. | Jun 2009 | B2 |
7545529 | Borrey et al. | Jun 2009 | B2 |
7553095 | Kimura | Jun 2009 | B2 |
7562060 | Sindhwani et al. | Jul 2009 | B2 |
7580557 | Zavadsky et al. | Aug 2009 | B2 |
7636479 | Luo et al. | Dec 2009 | B2 |
7639387 | Hull et al. | Dec 2009 | B2 |
7643665 | Zavadsky et al. | Jan 2010 | B2 |
7651286 | Tischler | Jan 2010 | B2 |
7655685 | McElroy et al. | Feb 2010 | B2 |
7657091 | Postnikov et al. | Feb 2010 | B2 |
7665061 | Kothari et al. | Feb 2010 | B2 |
7673799 | Hart et al. | Mar 2010 | B2 |
7702162 | Cheong et al. | Apr 2010 | B2 |
7735721 | Ma et al. | Jun 2010 | B1 |
7738730 | Hawley | Jun 2010 | B2 |
7739127 | Hall | Jun 2010 | B1 |
7761391 | Schmidtler | Jul 2010 | B2 |
7778457 | Nepomniachtchi et al. | Aug 2010 | B2 |
7782384 | Kelly | Aug 2010 | B2 |
7787695 | Nepomniachtchi et al. | Aug 2010 | B2 |
7937345 | Schmidtler et al. | May 2011 | B2 |
7941744 | Oppenlander et al. | May 2011 | B2 |
7949167 | Krishnan et al. | May 2011 | B2 |
7949176 | Nepomniachtchi | May 2011 | B2 |
7949660 | Green et al. | May 2011 | B2 |
7953268 | Nepomniachtchi | May 2011 | B2 |
7958067 | Schmidtler et al. | Jun 2011 | B2 |
7978900 | Nepomniachtchi et al. | Jul 2011 | B2 |
7999961 | Wanda | Aug 2011 | B2 |
8000514 | Nepomniachtchi et al. | Aug 2011 | B2 |
8035641 | O'Donnell | Oct 2011 | B1 |
8064710 | Mizoguchi | Nov 2011 | B2 |
8073263 | Hull et al. | Dec 2011 | B2 |
8078958 | Cottrille et al. | Dec 2011 | B2 |
8081227 | Kim et al. | Dec 2011 | B1 |
8094976 | Berard et al. | Jan 2012 | B2 |
8135656 | Evanitsky | Mar 2012 | B2 |
8136114 | Gailloux et al. | Mar 2012 | B1 |
8184156 | Mino et al. | May 2012 | B2 |
8194965 | Lossev et al. | Jun 2012 | B2 |
8213687 | Fan | Jul 2012 | B2 |
8238880 | Jin et al. | Aug 2012 | B2 |
8239335 | Schmidtler et al. | Aug 2012 | B2 |
8244031 | Cho et al. | Aug 2012 | B2 |
8265422 | Jin | Sep 2012 | B1 |
8279465 | Couchman | Oct 2012 | B2 |
8295599 | Katougi et al. | Oct 2012 | B2 |
8311296 | Filatov et al. | Nov 2012 | B2 |
8326015 | Nepomniachtchi | Dec 2012 | B2 |
8345981 | Schmidtler et al. | Jan 2013 | B2 |
8354981 | Kawasaki et al. | Jan 2013 | B2 |
8374977 | Schmidtler et al. | Feb 2013 | B2 |
8379914 | Nepomniachtchi et al. | Feb 2013 | B2 |
8385647 | Hawley et al. | Feb 2013 | B2 |
8406480 | Grigsby et al. | Mar 2013 | B2 |
8433775 | Buchhop et al. | Apr 2013 | B2 |
8441548 | Nechyba | May 2013 | B1 |
8443286 | Cameron | May 2013 | B2 |
8452098 | Nepomniachtchi et al. | May 2013 | B2 |
8478052 | Yee | Jul 2013 | B1 |
8483473 | Roach et al. | Jul 2013 | B2 |
8503769 | Baker et al. | Aug 2013 | B2 |
8503797 | Turkelson et al. | Aug 2013 | B2 |
8515163 | Cho et al. | Aug 2013 | B2 |
8515208 | Minerich | Aug 2013 | B2 |
8526739 | Schmidtler et al. | Sep 2013 | B2 |
8532419 | Coleman | Sep 2013 | B2 |
8559766 | Tilt et al. | Oct 2013 | B2 |
8577118 | Nepomniachtchi et al. | Nov 2013 | B2 |
8582862 | Nepomniachtchi et al. | Nov 2013 | B2 |
8587818 | Imaizumi et al. | Nov 2013 | B2 |
8620058 | Nepomniachtchi et al. | Dec 2013 | B2 |
8639621 | Ellis et al. | Jan 2014 | B1 |
8675953 | Elwell et al. | Mar 2014 | B1 |
8676165 | Cheng et al. | Mar 2014 | B2 |
8677249 | Buttner et al. | Mar 2014 | B2 |
8693043 | Schmidtler et al. | Apr 2014 | B2 |
8705836 | Gorski et al. | Apr 2014 | B2 |
8719197 | Schmidtler et al. | May 2014 | B2 |
8724907 | Sampson et al. | May 2014 | B1 |
8745488 | Wong | Jun 2014 | B1 |
8749839 | Borrey et al. | Jun 2014 | B2 |
8774516 | Amtrup et al. | Jul 2014 | B2 |
8805125 | Kumar et al. | Aug 2014 | B1 |
8813111 | Guerin | Aug 2014 | B2 |
8823991 | Borrey et al. | Sep 2014 | B2 |
8855375 | Macciola et al. | Oct 2014 | B2 |
8855425 | Schmidtler et al. | Oct 2014 | B2 |
8879120 | Thrasher et al. | Nov 2014 | B2 |
8879783 | Wang et al. | Nov 2014 | B1 |
8879846 | Amtrup et al. | Nov 2014 | B2 |
8885229 | Amtrup et al. | Nov 2014 | B1 |
8908977 | King | Dec 2014 | B2 |
8955743 | Block et al. | Feb 2015 | B1 |
8971587 | Macciola et al. | Mar 2015 | B2 |
8989515 | Shustorovich et al. | Mar 2015 | B2 |
8995012 | Heit et al. | Mar 2015 | B2 |
8995769 | Carr | Mar 2015 | B2 |
9058327 | Lehrman et al. | Jun 2015 | B1 |
9058515 | Amtrup et al. | Jun 2015 | B1 |
9058580 | Amtrup et al. | Jun 2015 | B1 |
9064316 | Eid et al. | Jun 2015 | B2 |
9117117 | Macciola et al. | Aug 2015 | B2 |
9129210 | Borrey et al. | Sep 2015 | B2 |
9137417 | Macciola et al. | Sep 2015 | B2 |
9141926 | Kilby et al. | Sep 2015 | B2 |
9158967 | Shustorovich et al. | Oct 2015 | B2 |
9165187 | Macciola et al. | Oct 2015 | B2 |
9165188 | Thrasher et al. | Oct 2015 | B2 |
9208536 | Macciola et al. | Dec 2015 | B2 |
9253349 | Amtrup et al. | Feb 2016 | B2 |
9275281 | Macciola | Mar 2016 | B2 |
9298979 | Nepomniachtchi et al. | Mar 2016 | B2 |
9311531 | Amtrup et al. | Apr 2016 | B2 |
9342741 | Amtrup et al. | May 2016 | B2 |
9342742 | Amtrup et al. | May 2016 | B2 |
9355312 | Amtrup et al. | May 2016 | B2 |
9514357 | Macciola et al. | Dec 2016 | B2 |
9576272 | Macciola et al. | Feb 2017 | B2 |
9584729 | Amtrup et al. | Feb 2017 | B2 |
9946954 | Macciola et al. | Apr 2018 | B2 |
20010027420 | Boublik et al. | Oct 2001 | A1 |
20020030831 | Kinjo | Mar 2002 | A1 |
20020054693 | Elmenhurst | May 2002 | A1 |
20020069218 | Sull et al. | Jun 2002 | A1 |
20020113801 | Reavy et al. | Aug 2002 | A1 |
20020122071 | Camara et al. | Sep 2002 | A1 |
20020126313 | Namizuka | Sep 2002 | A1 |
20020165717 | Solmer et al. | Nov 2002 | A1 |
20030002068 | Constantin et al. | Jan 2003 | A1 |
20030007683 | Wang et al. | Jan 2003 | A1 |
20030026479 | Thomas et al. | Feb 2003 | A1 |
20030030638 | Astrom et al. | Feb 2003 | A1 |
20030044012 | Eden | Mar 2003 | A1 |
20030046445 | Witt et al. | Mar 2003 | A1 |
20030053696 | Schmidt et al. | Mar 2003 | A1 |
20030063213 | Poplin | Apr 2003 | A1 |
20030086615 | Dance et al. | May 2003 | A1 |
20030095709 | Zhou | May 2003 | A1 |
20030101161 | Ferguson et al. | May 2003 | A1 |
20030117511 | Belz et al. | Jun 2003 | A1 |
20030120653 | Brady et al. | Jun 2003 | A1 |
20030142328 | McDaniel et al. | Jul 2003 | A1 |
20030151674 | Lin | Aug 2003 | A1 |
20030156201 | Zhang | Aug 2003 | A1 |
20030197063 | Longacre | Oct 2003 | A1 |
20030210428 | Bevlin et al. | Nov 2003 | A1 |
20030223615 | Keaton et al. | Dec 2003 | A1 |
20040019274 | Galloway et al. | Jan 2004 | A1 |
20040021909 | Kikuoka | Feb 2004 | A1 |
20040022437 | Beardsley | Feb 2004 | A1 |
20040049401 | Carr et al. | Mar 2004 | A1 |
20040083119 | Schunder et al. | Apr 2004 | A1 |
20040090458 | Yu et al. | May 2004 | A1 |
20040093119 | Gunnarsson et al. | May 2004 | A1 |
20040102989 | Jang et al. | May 2004 | A1 |
20040111453 | Harris et al. | Jun 2004 | A1 |
20040143796 | Lerner et al. | Jul 2004 | A1 |
20040169873 | Nagarajan | Sep 2004 | A1 |
20040169889 | Sawada | Sep 2004 | A1 |
20040175033 | Matama | Sep 2004 | A1 |
20040181482 | Yap | Sep 2004 | A1 |
20040190019 | Li et al. | Sep 2004 | A1 |
20040245334 | Sikorski | Dec 2004 | A1 |
20040261084 | Rosenbloom et al. | Dec 2004 | A1 |
20040263639 | Sadovsky et al. | Dec 2004 | A1 |
20050021360 | Miller et al. | Jan 2005 | A1 |
20050030602 | Gregson et al. | Feb 2005 | A1 |
20050046887 | Shibata et al. | Mar 2005 | A1 |
20050050060 | Damm et al. | Mar 2005 | A1 |
20050054342 | Otsuka | Mar 2005 | A1 |
20050060162 | Mohit et al. | Mar 2005 | A1 |
20050063585 | Matsuura | Mar 2005 | A1 |
20050065903 | Zhang et al. | Mar 2005 | A1 |
20050080844 | Dathathraya et al. | Apr 2005 | A1 |
20050100209 | Lewis et al. | May 2005 | A1 |
20050131780 | Princen | Jun 2005 | A1 |
20050134935 | Schmidtler et al. | Jun 2005 | A1 |
20050141777 | Kuwata | Jun 2005 | A1 |
20050151990 | Ishikawa et al. | Jul 2005 | A1 |
20050160065 | Seeman | Jul 2005 | A1 |
20050163343 | Kakinami et al. | Jul 2005 | A1 |
20050180628 | Curry et al. | Aug 2005 | A1 |
20050180632 | Aradhye et al. | Aug 2005 | A1 |
20050193325 | Epstein | Sep 2005 | A1 |
20050204058 | Philbrick et al. | Sep 2005 | A1 |
20050206753 | Sakurai et al. | Sep 2005 | A1 |
20050212925 | Lefebure et al. | Sep 2005 | A1 |
20050216426 | Weston et al. | Sep 2005 | A1 |
20050216564 | Myers et al. | Sep 2005 | A1 |
20050228591 | Hur et al. | Oct 2005 | A1 |
20050234955 | Zeng et al. | Oct 2005 | A1 |
20050246262 | Aggarwal et al. | Nov 2005 | A1 |
20050265618 | Jebara | Dec 2005 | A1 |
20050271265 | Wang et al. | Dec 2005 | A1 |
20050273453 | Holloran | Dec 2005 | A1 |
20060013463 | Ramsay et al. | Jan 2006 | A1 |
20060017810 | Kurzweil et al. | Jan 2006 | A1 |
20060023271 | Boay et al. | Feb 2006 | A1 |
20060031344 | Mishima et al. | Feb 2006 | A1 |
20060033615 | Nou | Feb 2006 | A1 |
20060047704 | Gopalakrishnan | Mar 2006 | A1 |
20060048046 | Joshi et al. | Mar 2006 | A1 |
20060074821 | Cristianini | Apr 2006 | A1 |
20060089907 | Kohlmaier et al. | Apr 2006 | A1 |
20060093208 | Li et al. | May 2006 | A1 |
20060095373 | Venkatasubramanian et al. | May 2006 | A1 |
20060095374 | Lo et al. | May 2006 | A1 |
20060098899 | King et al. | May 2006 | A1 |
20060112340 | Mohr et al. | May 2006 | A1 |
20060114488 | Motamed | Jun 2006 | A1 |
20060115153 | Bhattacharjya | Jun 2006 | A1 |
20060120609 | Ivanov et al. | Jun 2006 | A1 |
20060126918 | Oohashi et al. | Jun 2006 | A1 |
20060147113 | Han | Jul 2006 | A1 |
20060159364 | Poon et al. | Jul 2006 | A1 |
20060161646 | Chene et al. | Jul 2006 | A1 |
20060164682 | Lev | Jul 2006 | A1 |
20060195491 | Nieland et al. | Aug 2006 | A1 |
20060203107 | Steinberg et al. | Sep 2006 | A1 |
20060206628 | Erez | Sep 2006 | A1 |
20060212413 | Rujan et al. | Sep 2006 | A1 |
20060215231 | Borrey et al. | Sep 2006 | A1 |
20060219773 | Richardson | Oct 2006 | A1 |
20060222239 | Bargeron et al. | Oct 2006 | A1 |
20060235732 | Miller et al. | Oct 2006 | A1 |
20060235812 | Rifkin et al. | Oct 2006 | A1 |
20060236304 | Luo et al. | Oct 2006 | A1 |
20060242180 | Graf et al. | Oct 2006 | A1 |
20060256392 | Van Hoof et al. | Nov 2006 | A1 |
20060257048 | Lin et al. | Nov 2006 | A1 |
20060263134 | Beppu | Nov 2006 | A1 |
20060265640 | Albornoz et al. | Nov 2006 | A1 |
20060268352 | Tanigawa et al. | Nov 2006 | A1 |
20060268356 | Shih et al. | Nov 2006 | A1 |
20060268369 | Kuo | Nov 2006 | A1 |
20060279798 | Rudolph et al. | Dec 2006 | A1 |
20060282442 | Lennon et al. | Dec 2006 | A1 |
20060282463 | Rudolph et al. | Dec 2006 | A1 |
20060282762 | Diamond et al. | Dec 2006 | A1 |
20060288015 | Schirripa et al. | Dec 2006 | A1 |
20060294154 | Shimizu | Dec 2006 | A1 |
20070002348 | Hagiwara | Jan 2007 | A1 |
20070002375 | Ng | Jan 2007 | A1 |
20070003155 | Miller et al. | Jan 2007 | A1 |
20070003165 | Sibiryakov et al. | Jan 2007 | A1 |
20070005341 | Burges et al. | Jan 2007 | A1 |
20070016848 | Rosenoff et al. | Jan 2007 | A1 |
20070030540 | Cheng et al. | Feb 2007 | A1 |
20070035780 | Kanno | Feb 2007 | A1 |
20070036432 | Xu et al. | Feb 2007 | A1 |
20070046957 | Jacobs et al. | Mar 2007 | A1 |
20070046982 | Hull et al. | Mar 2007 | A1 |
20070047782 | Hull et al. | Mar 2007 | A1 |
20070065033 | Hernandez et al. | Mar 2007 | A1 |
20070086667 | Dai et al. | Apr 2007 | A1 |
20070109590 | Hagiwara | May 2007 | A1 |
20070118794 | Hollander et al. | May 2007 | A1 |
20070128899 | Mayer | Jun 2007 | A1 |
20070133862 | Gold et al. | Jun 2007 | A1 |
20070165801 | Devolites et al. | Jul 2007 | A1 |
20070172151 | Gennetten et al. | Jul 2007 | A1 |
20070177818 | Teshima et al. | Aug 2007 | A1 |
20070204162 | Rodriguez | Aug 2007 | A1 |
20070239642 | Sindhwani et al. | Oct 2007 | A1 |
20070250416 | Beach et al. | Oct 2007 | A1 |
20070252907 | Hsu | Nov 2007 | A1 |
20070260588 | Biazetti et al. | Nov 2007 | A1 |
20080004073 | John et al. | Jan 2008 | A1 |
20080005678 | Buttner et al. | Jan 2008 | A1 |
20080068452 | Nakao et al. | Mar 2008 | A1 |
20080082352 | Schmidtler et al. | Apr 2008 | A1 |
20080086432 | Schmidtler et al. | Apr 2008 | A1 |
20080086433 | Schmidtler et al. | Apr 2008 | A1 |
20080095467 | Olszak et al. | Apr 2008 | A1 |
20080097936 | Schmidtler et al. | Apr 2008 | A1 |
20080130992 | Fujii | Jun 2008 | A1 |
20080133388 | Alekseev et al. | Jun 2008 | A1 |
20080137971 | King et al. | Jun 2008 | A1 |
20080144881 | Fortune et al. | Jun 2008 | A1 |
20080147561 | Euchner et al. | Jun 2008 | A1 |
20080147790 | Malaney et al. | Jun 2008 | A1 |
20080166025 | Thorne | Jul 2008 | A1 |
20080175476 | Ohk et al. | Jul 2008 | A1 |
20080177643 | Matthews et al. | Jul 2008 | A1 |
20080183576 | Kim et al. | Jul 2008 | A1 |
20080199081 | Kimura et al. | Aug 2008 | A1 |
20080212115 | Konishi | Sep 2008 | A1 |
20080215489 | Lawson et al. | Sep 2008 | A1 |
20080219543 | Csulits et al. | Sep 2008 | A1 |
20080225127 | Ming | Sep 2008 | A1 |
20080232715 | Miyakawa et al. | Sep 2008 | A1 |
20080235766 | Wallos et al. | Sep 2008 | A1 |
20080253647 | Cho et al. | Oct 2008 | A1 |
20080294737 | Kim | Nov 2008 | A1 |
20080298718 | Liu et al. | Dec 2008 | A1 |
20090015687 | Shinkai et al. | Jan 2009 | A1 |
20090073266 | Abdellaziz Trimeche et al. | Mar 2009 | A1 |
20090089078 | Bursey | Apr 2009 | A1 |
20090103808 | Dey et al. | Apr 2009 | A1 |
20090132468 | Putivsky et al. | May 2009 | A1 |
20090132504 | Vegnaduzzo et al. | May 2009 | A1 |
20090141985 | Sheinin et al. | Jun 2009 | A1 |
20090154778 | Lei et al. | Jun 2009 | A1 |
20090159509 | Wojdyla et al. | Jun 2009 | A1 |
20090175537 | Tribelhorn et al. | Jul 2009 | A1 |
20090185241 | Nepomniachtchi | Jul 2009 | A1 |
20090214112 | Borrey et al. | Aug 2009 | A1 |
20090225180 | Maruyama et al. | Sep 2009 | A1 |
20090228499 | Schmidtler et al. | Sep 2009 | A1 |
20090285445 | Vasa | Nov 2009 | A1 |
20090324025 | Camp, Jr. et al. | Dec 2009 | A1 |
20090324062 | Lim et al. | Dec 2009 | A1 |
20100007751 | Icho et al. | Jan 2010 | A1 |
20100045701 | Scott et al. | Feb 2010 | A1 |
20100060910 | Fechter | Mar 2010 | A1 |
20100060915 | Suzuki et al. | Mar 2010 | A1 |
20100062491 | Lehmbeck | Mar 2010 | A1 |
20100169250 | Schmidtler et al. | Jul 2010 | A1 |
20100202698 | Schmidtler et al. | Aug 2010 | A1 |
20100202701 | Basri et al. | Aug 2010 | A1 |
20100214584 | Takahashi | Aug 2010 | A1 |
20100232706 | Forutanpour | Sep 2010 | A1 |
20100280859 | Frederick, II | Nov 2010 | A1 |
20110013039 | Aisaka et al. | Jan 2011 | A1 |
20110025842 | King et al. | Feb 2011 | A1 |
20110025860 | Katougi et al. | Feb 2011 | A1 |
20110032570 | Imaizumi et al. | Feb 2011 | A1 |
20110055033 | Chen et al. | Mar 2011 | A1 |
20110090337 | Klomp et al. | Apr 2011 | A1 |
20110091092 | Nepomniachtchi et al. | Apr 2011 | A1 |
20110116716 | Kwon et al. | May 2011 | A1 |
20110129153 | Petrou et al. | Jun 2011 | A1 |
20110137898 | Gordo et al. | Jun 2011 | A1 |
20110145178 | Schmidtler et al. | Jun 2011 | A1 |
20110182500 | Esposito et al. | Jul 2011 | A1 |
20110196870 | Schmidtler et al. | Aug 2011 | A1 |
20110200107 | Ryu | Aug 2011 | A1 |
20110246076 | Su et al. | Oct 2011 | A1 |
20110249905 | Singh et al. | Oct 2011 | A1 |
20110279456 | Hiranuma et al. | Nov 2011 | A1 |
20110280450 | Nepomniachtchi et al. | Nov 2011 | A1 |
20110285873 | Showering | Nov 2011 | A1 |
20110285874 | Showering et al. | Nov 2011 | A1 |
20120008858 | Sedky et al. | Jan 2012 | A1 |
20120019614 | Murray et al. | Jan 2012 | A1 |
20120038549 | Mandella et al. | Feb 2012 | A1 |
20120069131 | Abelow | Mar 2012 | A1 |
20120077476 | Paraskevakos et al. | Mar 2012 | A1 |
20120092329 | Koo et al. | Apr 2012 | A1 |
20120105662 | Staudacher et al. | May 2012 | A1 |
20120113489 | Heit et al. | May 2012 | A1 |
20120114249 | Conwell | May 2012 | A1 |
20120116957 | Zanzot et al. | May 2012 | A1 |
20120131139 | Siripurapu et al. | May 2012 | A1 |
20120134576 | Sharma | May 2012 | A1 |
20120162527 | Baker | Jun 2012 | A1 |
20120194692 | Mers et al. | Aug 2012 | A1 |
20120215578 | Swierz, III et al. | Aug 2012 | A1 |
20120230606 | Sugiyama et al. | Sep 2012 | A1 |
20120236019 | Oh et al. | Sep 2012 | A1 |
20120272192 | Grossman et al. | Oct 2012 | A1 |
20120284122 | Brandis | Nov 2012 | A1 |
20120290421 | Qawami et al. | Nov 2012 | A1 |
20120293607 | Bhogal et al. | Nov 2012 | A1 |
20120294524 | Zyuzin et al. | Nov 2012 | A1 |
20120300020 | Arth et al. | Nov 2012 | A1 |
20120301011 | Grzechnik | Nov 2012 | A1 |
20120308139 | Dhir | Dec 2012 | A1 |
20130004076 | Koo | Jan 2013 | A1 |
20130022231 | Nepomniachtchi et al. | Jan 2013 | A1 |
20130027757 | Lee et al. | Jan 2013 | A1 |
20130060596 | Gu et al. | Mar 2013 | A1 |
20130073459 | Zacarias et al. | Mar 2013 | A1 |
20130088757 | Schmidtler et al. | Apr 2013 | A1 |
20130090969 | Rivere | Apr 2013 | A1 |
20130097157 | Ng et al. | Apr 2013 | A1 |
20130117175 | Hanson | May 2013 | A1 |
20130121610 | Chen et al. | May 2013 | A1 |
20130124414 | Roach et al. | May 2013 | A1 |
20130142402 | Myers | Jun 2013 | A1 |
20130152176 | Courtney et al. | Jun 2013 | A1 |
20130182002 | Macciola et al. | Jul 2013 | A1 |
20130182105 | Fahn et al. | Jul 2013 | A1 |
20130182128 | Amtrup et al. | Jul 2013 | A1 |
20130182292 | Thrasher et al. | Jul 2013 | A1 |
20130182951 | Shustorovich et al. | Jul 2013 | A1 |
20130182959 | Thrasher et al. | Jul 2013 | A1 |
20130182970 | Shustorovich et al. | Jul 2013 | A1 |
20130182973 | Macciola et al. | Jul 2013 | A1 |
20130185618 | Macciola et al. | Jul 2013 | A1 |
20130188865 | Saha et al. | Jul 2013 | A1 |
20130198192 | Hu et al. | Aug 2013 | A1 |
20130198358 | Taylor | Aug 2013 | A1 |
20130223762 | Nagamasa | Aug 2013 | A1 |
20130230246 | Nuggehalli | Sep 2013 | A1 |
20130251280 | Borrey et al. | Sep 2013 | A1 |
20130268378 | Yovin | Oct 2013 | A1 |
20130268430 | Lopez et al. | Oct 2013 | A1 |
20130287265 | Nepomniachtchi et al. | Oct 2013 | A1 |
20130287284 | Nepomniachtchi et al. | Oct 2013 | A1 |
20130297353 | Strange et al. | Nov 2013 | A1 |
20130308832 | Schmidtler et al. | Nov 2013 | A1 |
20130329023 | Suplee, III et al. | Dec 2013 | A1 |
20140003721 | Saund | Jan 2014 | A1 |
20140006129 | Heath | Jan 2014 | A1 |
20140006198 | Daly et al. | Jan 2014 | A1 |
20140012754 | Hanson et al. | Jan 2014 | A1 |
20140047367 | Nielsen | Feb 2014 | A1 |
20140055826 | Hinski | Feb 2014 | A1 |
20140079294 | Amtrup et al. | Mar 2014 | A1 |
20140108456 | Ramachandrula et al. | Apr 2014 | A1 |
20140153787 | Schmidtler et al. | Jun 2014 | A1 |
20140153830 | Amtrup et al. | Jun 2014 | A1 |
20140164914 | Schmidtler et al. | Jun 2014 | A1 |
20140172687 | Chirehdast | Jun 2014 | A1 |
20140181691 | Poornachandran et al. | Jun 2014 | A1 |
20140201612 | Buttner et al. | Jul 2014 | A1 |
20140207717 | Schmidtler et al. | Jul 2014 | A1 |
20140233068 | Borrey et al. | Aug 2014 | A1 |
20140254887 | Amtrup et al. | Sep 2014 | A1 |
20140270349 | Amtrup et al. | Sep 2014 | A1 |
20140270439 | Chen | Sep 2014 | A1 |
20140270536 | Amtrup et al. | Sep 2014 | A1 |
20140316841 | Kilby et al. | Oct 2014 | A1 |
20140317595 | Kilby et al. | Oct 2014 | A1 |
20140327940 | Amtrup et al. | Nov 2014 | A1 |
20140328520 | Macciola et al. | Nov 2014 | A1 |
20140333971 | Macciola et al. | Nov 2014 | A1 |
20140368890 | Amtrup et al. | Dec 2014 | A1 |
20150040001 | Kannan et al. | Feb 2015 | A1 |
20150040002 | Kannan et al. | Feb 2015 | A1 |
20150086080 | Stein et al. | Mar 2015 | A1 |
20150098628 | Macciola et al. | Apr 2015 | A1 |
20150170085 | Amtrup et al. | Jun 2015 | A1 |
20150254469 | Butler | Sep 2015 | A1 |
20150317529 | Zhou et al. | Nov 2015 | A1 |
20150324640 | Macciola et al. | Nov 2015 | A1 |
20150339526 | Macciola et al. | Nov 2015 | A1 |
20150347861 | Doepke et al. | Dec 2015 | A1 |
20150355889 | Kilby et al. | Dec 2015 | A1 |
20160019530 | Wang et al. | Jan 2016 | A1 |
20160028921 | Thrasher et al. | Jan 2016 | A1 |
20160034775 | Meadow et al. | Feb 2016 | A1 |
20160055395 | Macciola et al. | Feb 2016 | A1 |
20160063358 | Mehrseresht | Mar 2016 | A1 |
20160112645 | Amtrup et al. | Apr 2016 | A1 |
20160125613 | Shustorovich et al. | May 2016 | A1 |
20160147891 | Chhichhia et al. | May 2016 | A1 |
20160171603 | Amtrup et al. | Jun 2016 | A1 |
20160320466 | Berker et al. | Nov 2016 | A1 |
20160350592 | Ma et al. | Dec 2016 | A1 |
20170024629 | Thrasher et al. | Jan 2017 | A1 |
Number | Date | Country |
---|---|---|
101295305 | Oct 2008 | CN |
101329731 | Dec 2008 | CN |
101493830 | Jul 2009 | CN |
0549329 | Jun 1993 | EP |
0723247 | Jul 1996 | EP |
0767578 | Apr 1997 | EP |
0809219 | Nov 1997 | EP |
0843277 | May 1998 | EP |
0936804 | Aug 1999 | EP |
1128659 | Aug 2001 | EP |
1317133 | Jun 2003 | EP |
1319133 | Jun 2003 | EP |
1422520 | May 2004 | EP |
1422920 | May 2004 | EP |
1956518 | Aug 2008 | EP |
1959363 | Aug 2008 | EP |
1976259 | Oct 2008 | EP |
2472372 | Jul 2012 | EP |
H07260701 | Oct 1995 | JP |
H0962826 | Mar 1997 | JP |
H09091341 | Apr 1997 | JP |
H09116720 | May 1997 | JP |
H11118444 | Apr 1999 | JP |
2000067065 | Mar 2000 | JP |
2000103628 | Apr 2000 | JP |
2000298702 | Oct 2000 | JP |
2000354144 | Dec 2000 | JP |
2001309128 | Nov 2001 | JP |
2002024258 | Jan 2002 | JP |
2002519766 | Jul 2002 | JP |
2002312385 | Oct 2002 | JP |
2003091521 | Mar 2003 | JP |
2003196357 | Jul 2003 | JP |
2003234888 | Aug 2003 | JP |
2003303315 | Oct 2003 | JP |
2004005624 | Jan 2004 | JP |
2004523022 | Jul 2004 | JP |
2005018678 | Jan 2005 | JP |
2005085173 | Mar 2005 | JP |
2005173730 | Jun 2005 | JP |
2006031379 | Feb 2006 | JP |
2006185367 | Jul 2006 | JP |
2006209588 | Aug 2006 | JP |
2006330863 | Dec 2006 | JP |
200752670 | Mar 2007 | JP |
2008134683 | Jun 2008 | JP |
2009015396 | Jan 2009 | JP |
2009211431 | Sep 2009 | JP |
2011034387 | Feb 2011 | JP |
2011055467 | Mar 2011 | JP |
2011118513 | Jun 2011 | JP |
2011118600 | Jun 2011 | JP |
2012517637 | Aug 2012 | JP |
2012194736 | Oct 2012 | JP |
2013196357 | Sep 2013 | JP |
5462286 | Apr 2014 | JP |
401553 | Aug 2000 | TW |
9604749 | Feb 1996 | WO |
97006522 | Feb 1997 | WO |
9847098 | Oct 1998 | WO |
9967731 | Dec 1999 | WO |
0263812 | Aug 2002 | WO |
02063812 | Aug 2002 | WO |
2004053630 | Jun 2004 | WO |
2004056360 | Jul 2004 | WO |
2006104627 | Oct 2006 | WO |
2007081147 | Jul 2007 | WO |
2007082534 | Jul 2007 | WO |
2008008142 | Jan 2008 | WO |
2010030056 | Mar 2010 | WO |
2010056368 | May 2010 | WO |
Entry |
---|
Final Office Action from U.S. Appl. No. 14/804,278, dated Jun. 28, 2016. |
International Search Report and Written Opinion from International Application No. PCT/US2014/065831, dated Feb. 26, 2015. |
U.S. Appl. No. 61/780,747, filed Mar. 13, 2013. |
U.S. Appl. No. 61/819,463, filed May 3, 2013. |
Notice of Allowance from U.S. Appl. No. 14/268,876, dated Aug. 29, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/268,876, dated Jul. 24, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/473,950, dated Jan. 21, 2015. |
Non-Final Office Action from U.S. Appl. No. 14/473,950, dated Feb. 6, 2015. |
Final Office Action from U.S. Appl. No. 14/473,950, dated Jun. 26, 2015. |
Notice of Allowance from U.S. Appl. No. 14/473,950, dated Sep. 16, 2015. |
Non-Final Office Action from U.S. Appl. No. 14/981,759, dated Jun. 7, 2016. |
Extended European Search Report from European Application No. 14861942.2, dated Nov. 2, 2016. |
Non-Final Office Action from U.S. Appl. No. 15/191,442, dated Oct. 12, 2016. |
Partial Supplementary European Search Report from European Application No. 14792188.6, dated Sep. 12, 2016. |
Notice of Allowance from U.S. Appl. No. 14/981,759, dated Nov. 16, 2016. |
Non-Final Office Action from U.S. Appl. No. 13/740,127, dated Feb. 23, 2015. |
International Search Report and Written Opinion from International Application No. PCT/US2015/021597, dated Jun. 22, 2015. |
U.S. Appl. No. 14/340,460, filed Jul. 24, 2014. |
Requirement for Restriction from U.S. Appl. No. 14/177,136, dated Aug. 15, 2014. |
International Search Report and Written Opinion from PCT Application No. PCT/US2014/036673, dated Aug. 28, 2014. |
U.S. Appl. No. 14/473,950, filed Aug. 29, 2014. |
Final Office Action from U.S. Appl. No. 14/176,006, dated Sep. 3, 2014. |
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, p. 27. |
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 77-85. |
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 230-247. |
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 295-300. |
Bishop, C.M., “Neural Networks for Pattern Recognition,” Oxford University Press, Inc., 1995, pp. 343-345. |
Final Office Action from U.S. Appl. No. 14/220,023, dated Sep. 18, 2014. |
International Search Report and Written Opinion from PCT Application No. PCT/US14/26597, dated Sep. 19, 2014. |
U.S. Appl. No. 14/491,901, filed Sep. 19, 2014. |
Final Office Action from U.S. Appl. No. 14/220,029, dated Sep. 26, 2014. |
International Search Report and Written Opinion from PCT Application No. PCT/US14/36851, dated Sep. 25, 2014. |
Notice of Allowance from U.S. Appl. No. 14/176,006, dated Oct. 1, 2014. |
Non-Final Office Action from U.S. Appl. No. 11/752,691, dated Oct. 10, 2014. |
Non-Final Office Action from U.S. Appl. No. 15/146,848, dated Dec. 6, 2016. |
U.S. Appl. No. 15/389,342, filed Dec. 22, 2016. |
U.S. Appl. No. 15/390,321, filed Dec. 23, 2016. |
Final Office Action from U.S. Appl. No. 14/177,136, dated Nov. 4, 2016. |
Non-Final Office Action from U.S. Appl. No. 14/177,136, dated Apr. 13, 2016. |
Non-Final Office Action from U.S. Appl. No. 14/177,136, dated Dec. 29, 2014. |
“Location and Camera with Cell Phones,” Wikipedia, Mar. 30, 2016, pp. 1-19. |
Non-Final Office Action from U.S. Appl. No. 14/176,006, dated Apr. 7, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/220,023, dated May 5, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/220,029, dated May 14, 2014. |
International Search Report and Written Opinion from International Application No. PCT/US2016/043204, dated Oct. 6, 2016. |
Final Office Action from U.S. Appl. No. 14/818,196, dated Jan. 9, 2017. |
Decision to Refuse from European Application No. 10 741 580.4, dated Jan. 20, 2017. |
Rainardi, V., “Building a Data Warehouse: With Examples in SQL Server,” Apress, Dec. 27, 2007, pp. 471-473. |
Office Action from Japanese Patent Application No. 2015-229466, dated Nov. 29, 2016. |
Extended European Search Report from European Application No. 14792188.6, dated Jan. 25, 2017. |
Extended European Search Report from European Application No. 14773721.7, dated May 17, 2016. |
Extended European Search Report from European Application No. 14775259.6, dated Jun. 1, 2016. |
Clemons, J. et al., “MEVBench: A Mobile Computer Vision Benchmarking Suite,” IEEE, 2011, pp. 91-102. |
Gonzalez, R. C. et al., “Image Interpolation”, Digital Image Processing, Third Edition, 2008, Chapter 2, pp. 65-68. |
Kim, D. et al., “Location-based large-scale landmark image recognition scheme for mobile devices,” 2012 Third FTRA International Conference on Mobile, Ubiquitous, and Intelligent Computing, IEEE, 2012, pp. 47-52. |
Sauvola, J. et al., “Adaptive document image binarization,” Pattern Recognition, vol. 33, 2000, pp. 225-236. |
Tsai, C., “Effects of 2-D Preprocessing on Feature Extraction: Accentuating Features by Decimation, Contrast Enhancement, Filtering,” EE 262: 2D Imaging Project Report, 2008, pp. 1-9. |
Amtrup, J. W. et al., U.S. Appl. No. 14/220,029, filed Mar. 19, 2014. |
International Search Report and Written Opinion from PCT Application No. PCT/US15/26022, dated Jul. 22, 2015. |
Non-Final Office Action from U.S. Appl. No. 14/588,147, dated Jun. 3, 2015. |
Notice of Allowance from Japanese Patent Application No. 2014-005616, dated Jun. 12, 2015. |
Office Action from Japanese Patent Application No. 2014-005616, dated Oct. 7, 2014. |
Final Office Action from U.S. Appl. No. 14/588,147, dated Nov. 4, 2015. |
Non-Final Office Action from U.S. Appl. No. 14/283,156, dated Dec. 1, 2015. |
Notice of Allowance from U.S. Appl. No. 14/588,147, dated Jan. 14, 2016. |
Non-Final Office Action from U.S. Appl. No. 14/804,278, dated Mar. 10, 2016. |
Notice of Allowance from U.S. Appl. No. 14/283,156, dated Mar. 16, 2016. |
Summons to Attend Oral Proceedings from European Application No. 10741580.4, dated Jun. 7, 2016. |
Notice of Allowance from U.S. Appl. No. 14/078,402, dated Feb. 26, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/078,402, dated Jan. 30, 2014. |
Notice of Allowance from U.S. Appl. No. 14/175,999, dated Aug. 8, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/175,999, dated Apr. 3, 2014. |
Notice of Allowance from U.S. Appl. No. 13/802,226, dated Jan. 29, 2016. |
Non-Final Office Action from U.S. Appl. No. 13/802,226, dated Sep. 30, 2015. |
Final Office Action from U.S. Appl. No. 13/802,226, dated May 20, 2015. |
Non-Final Office Action from U.S. Appl. No. 13/802,226, dated Jan. 8, 2015. |
Non-Final Office Action from U.S. Appl. No. 14/209,825, dated Apr. 14, 2015. |
Final Office Action from U.S. Appl. No. 14/209,825, dated Aug. 13, 2015. |
Notice of Allowance from U.S. Appl. No. 14/209,825, dated Oct. 28, 2015. |
International Search Report and Written Opinion from International Application No. PCT/US2014/026569, dated Aug. 12, 2014. |
Gllavata, et al., “Finding Text in Images Via Local Thresholding,” International Symposium on Signal Processing and Information Technology, Dec. 2003, pp. 539-542. |
Zunino, et al., “Vector Quantization for License-Plate Location and Image Coding,” IEEE Transactions on Industrial Electronics, vol. 47, Issue 1, Feb. 2000, pp. 159-167. |
Bruns, E. et al., “Mobile Phone-Enabled Museum Guidance with Adaptive Classification,” Computer Graphics and Applications, IEEE, vol. 28, No. 4, Jul.-Aug. 2008, pp. 98,102. |
Tzotsos, A. et al., “Support vector machine classification for object-based image analysis,” Object-Based Image Analysis, Springer Berlin Heidelberg, 2008, pp. 663-677. |
Vailaya, A. et al., “On Image Classification: City Images vs. Landscapes,” Pattern Recognition, vol. 31, No. 12, Dec. 1998, pp. 1921-1935. |
International Search Report and Written Opinion from International Application No. PCT/US2016/043207, dated Oct. 21, 2016. |
Non-Final Office Action from U.S. Appl. No. 14/927,359, dated Nov. 21, 2016. |
Final Office Action from U.S. Appl. No. 14/814,455, dated Dec. 16, 2016. |
Non-Final Office Action from U.S. Appl. No. 14/814,455, dated Jun. 17, 2016. |
Non-Final Office Action from U.S. Appl. No. 14/818,196, dated Aug. 19, 2016. |
International Search Report and Written Opinion from International Application No. PCT/US14/26569, dated Aug. 12, 2014. |
Non-Final Office Action from U.S. Appl. No. 13/898,407, dated Aug. 1, 2013. |
Final Office Action from U.S. Appl. No. 13/898,407, dated Jan. 13, 2014. |
Notice of Allowance from U.S. Appl. No. 13/898,407, dated Apr. 23, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/340,460, dated Jan. 16, 2015. |
Notice of Allowance from U.S. Appl. No. 14/340,460, dated Apr. 28, 2015. |
Office Action from Japanese Patent Application No. 2014-552356, dated Jun. 2, 2015. |
Office Action from Taiwan Application No. 102101177, dated Dec. 17, 2014. |
Notice of Allowance from U.S. Appl. No. 14/220,023, dated Jan. 30, 2015. |
Notice of Allowance from U.S. Appl. No. 14/220,029, dated Feb. 11, 2015. |
International Search Report and Written Opinion from International Application No. PCT/US2013/021336, dated May 23, 2013. |
Non-Final Office Action from U.S. Appl. No. 13/740,127, dated Oct. 27, 2014. |
Notice of Allowance from U.S. Appl. No. 13/740,131, dated Oct. 27, 2014. |
Final Office Action from U.S. Appl. No. 13/740,134, dated Mar. 3, 2015. |
Non-Final Office Action from U.S. Appl. No. 13/740,134, dated Oct. 10, 2014. |
Non-Final Office Action from U.S. Appl. No. 13/740,138, dated Dec. 1, 2014. |
Notice of Allowance from U.S. Appl. No. 13/740,139, dated Aug. 29, 2014. |
Notice of Allowance from U.S. Appl. No. 13/740,145, dated Mar. 30, 2015. |
Non-Final Office Action from U.S. Appl. No. 13/740,145, dated Sep. 29, 2014. |
Notice of Allowance from Taiwan Patent Application No. 102101177, dated Apr. 24, 2015. |
Notice of Allowance from U.S. Appl. No. 13/740,138, dated Jun. 5, 2015. |
Notice of Allowance from U.S. Appl. No. 13/740,127, dated Jun. 8, 2015. |
Notice of Allowance from U.S. Appl. No. 14/569,375, dated Apr. 15, 2015. |
Notice of Allowance from U.S. Appl. No. 13/740,134, dated May 29, 2015. |
Notice of Allowability from U.S. Appl. No. 13/740,145, dated May 26, 2015. |
Corrected Notice of Allowability from U.S. Appl. No. 13/740,138, dated Jul. 8, 2018. |
Notice of Allowance from U.S. Appl. No. 14/804,276, dated Oct. 21, 2015. |
Extended European Search Report from European Application No. 13738301.4, dated Nov. 17, 2015. |
Notice of Allowance from U.S. Appl. No. 13/740,145, dated Jan. 15, 2016. |
Office Action from Taiwan Patent Application No. 102101177, dated Dec. 17, 2014. |
Non-Final Office Action from U.S. Appl. No. 13/740,141, dated Oct. 16, 2015. |
Notice of Allowance from U.S. Appl. No. 13/740,145, dated Sep. 8, 2015. |
Notice of Allowance from U.S. Appl. No. 14/334,558, dated Sep. 10, 2014. |
Notice of Allowance from U.S. Appl. No. 13/740,123, dated Jul. 10, 2014. |
Intsig Information Co., Ltd., “CamScanner,” www.intsig.com/en/camscanner.html, retrieved Oct. 25, 2012. |
Intsig Information Co., Ltd., “Product Descriptions,” www.intsig.com/en/product.html, retrieved Oct. 25, 2012. |
Final Office Action from U.S. Appl. No. 13/740,141, dated May 5, 2016. |
Thrasher, C. W. et al., U.S. Appl. No. 15/214,351, filed Jul. 19, 2016. |
Notice of Allowance from U.S. Appl. No. 13/740,141, dated Jul. 29, 2016. |
International Search Report and Written Opinion from International Application No. PCT/US2014/057065, dated Dec. 30, 2014. |
Non-Final Office Action from U.S. Appl. No. 14/932,902, dated Sep. 28, 2016. |
Su et al., “Stereo rectification of calibrated image pairs based on geometric transformation,” I.J.Modern Education and Computer Science, vol. 4, 2011, pp. 17-24. |
Malis et al., “Deeper understanding of the homography decomposition for vision-based control,” [Research Report] RR-6303, INRIA, Sep. 2007, pp. 1-90. |
Notice of Allowance from U.S. Appl. No. 14/491,901, dated Aug. 4, 2015. |
Final Office Action from U.S. Appl. No. 14/491,901, dated Apr. 30, 2015. |
Non-Final Office Action from U.S. Appl. No. 14/491,901, dated Nov. 19, 2014. |
Non-Final Office Action from U.S. Appl. No. 15/234,969, dated Nov. 18, 2016. |
Office Action from Chinese Patent Application No. 201480013621.1, dated Apr. 28, 2018. |
Examination Report from European Application No. 14847922.3 dated Jun. 22, 2018. |
Lenz et al., “Techniques for Calibration of the Scale Factor and Image Center for High Accuracy 3-D Machine Vision Metrology,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, No. 5, Sep. 1988, pp. 713-720. |
Wang et al., “Single view metrology from scene constraints,” Image and Vision Computing, vol. 23, 2005, pp. 831-840. |
Criminisi et al., “A plane measuring device,” Image and Vision Computing, vol. 17, 1999, pp. 625-634. |
Notice of Allowance from U.S. Appl. No. 15/234,993, dated Jul. 5, 2018. |
Final Office Action from U.S. Appl. No. 14/829,474, dated Jul. 10, 2018. |
Notice of Allowance from U.S. Appl. No. 15/396,322 , dated Jul. 18, 2018. |
Notice of Allowance from U.S. Appl. No. 14/804,281, dated Jul. 23, 2018. |
Office Action from Chinese Patent Application No. 201580014141.1, dated Feb. 6, 2018. |
Final Office Action from U.S. Appl. No. 15/234,993, dated Apr. 9, 2018. |
Wang et al., “Object Recognition Using Multi-View Imaging,” ICSP2008 Proceedings, IEEE, 2008, pp. 810-813. |
Examination Report from European Application No. 14773721.7, dated Mar. 27, 2018. |
Office Action from Taiwanese Application No. 103114611, dated Feb. 8, 2018. |
Office Action from Japanese Patent Application No. 2016-502178, dated Apr. 10, 2018. |
Office Action from Japanese Patent Application No. 2016-568791, dated Mar. 28, 2018. |
Kawakatsu et al., “Development and Evaluation of Task Driven Device Orchestration System for User Work Support,” Forum on Information Technology 10th Conference Proceedings, Institute of Electronics, Information and Communication Engineers (IEICE), Aug. 22, 2011, pp. 309-310. |
Statement of Relevance of Non-Translated Foreign Document NPL: Kawakatsu et al., “Development and Evaluation of Task Driven Device Orchestration System for User Work Support,” Forum on Information Technology 10th Conference Proceedings, Institute of Electronics, Information and Communication Engineers (IEICE), Aug. 22, 2011, pp. 309-310. |
Non-Final Office Action from U.S. Appl. No. 15/214,351, dated May 22, 2018. |
Notice of Allowance from U.S. Appl. No. 15/390,321, dated Aug. 6, 2018. |
Corrected Notice of Allowance from U.S. Appl. No. 15/396,322, dated Aug. 8, 2018. |
Corrected Notice of Allowance from U.S. Appl. No. 15/234,993, dated Aug. 1, 2018. |
Corrected Notice of Allowance from U.S. Appl. No. 15/390,321, dated Sep. 19, 2018. |
Number | Date | Country | |
---|---|---|
20170103281 A1 | Apr 2017 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 15157325 | May 2016 | US |
Child | 15385707 | | US |
Parent | 13802226 | Mar 2013 | US |
Child | 15157325 | | US |