The present invention relates generally to processing of digital information, and more particularly to techniques for identifying one or more objects from digital media content and comparing them to one or more objects specified by a machine readable identifier.
Techniques exist for automatically recognizing objects from a document, image, etc. However, conventional object recognition techniques are very computation- and time-intensive and, as a result, not reliable. Accordingly, improved object recognition techniques are desired.
Embodiments of the present invention provide techniques for identifying one or more objects from digital media content and comparing them to one or more objects specified by a machine readable identifier.
In one embodiment, techniques are provided for automatically comparing one or more objects determined from digital media content (e.g., an image, audio information, video information) to one or more objects specified by a machine readable identifier to determine if an object determined from the media content matches an object specified by the machine readable identifier. One or more actions may be initiated upon determining that an object determined from the media content matches an object specified by the machine readable identifier. Information identifying the action to be initiated may also be encapsulated by the machine readable identifier.
According to an embodiment of the present invention, techniques (e.g., methods, systems, and code) are provided for processing digital media content. A first object descriptor is determined from a machine readable identifier, the first object descriptor specifying one or more features of an object. A set of one or more objects is determined from the digital media content. An object descriptor is generated for each object in the set of objects. At least one object descriptor is identified from the object descriptors determined for the set of objects that matches the first object descriptor determined from the machine readable identifier.
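The overall flow described above can be pictured with a small sketch. The sketch below is illustrative only; the names (ObjectDescriptor, match_descriptors, the caller-supplied distance function and threshold) are hypothetical and not drawn from the specification.

```python
# Illustrative sketch of the overall flow: descriptors decoded from a machine
# readable identifier are matched against descriptors generated from the media
# content. All names and structures are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class ObjectDescriptor:
    object_type: str                    # e.g. "face", "document_fragment"
    features: Dict[str, float]          # feature name -> value
    metadata: Dict[str, str] = field(default_factory=dict)  # optional, e.g. an action or URL


def match_descriptors(decoded: List[ObjectDescriptor],
                      generated: List[ObjectDescriptor],
                      distance: Callable[[ObjectDescriptor, ObjectDescriptor], float],
                      threshold: float) -> List[Tuple[ObjectDescriptor, ObjectDescriptor]]:
    """Return (decoded, generated) pairs whose distance falls below the threshold."""
    return [(d1, d2) for d1 in decoded for d2 in generated if distance(d1, d2) < threshold]
```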
According to an embodiment of the present invention, an action may be performed in response to identifying the at least one object descriptor as matching the first object descriptor. In one embodiment, metadata information associated with the first object descriptor may be determined from the machine readable identifier and the action to be performed may be determined based upon the metadata information. The metadata information associated with the first object descriptor may identify the action. The action may be performed using a portion of the metadata information. The action may comprise annotating the digital media content.
In one embodiment, an identifier associated with the first object descriptor is determined from the machine readable identifier. In this embodiment, the first object descriptor determined from the machine readable identifier may specify features of a person. The set of objects determined from the digital media content may comprise one or more persons determined from the digital media content. A spatial location may be determined for a person corresponding to the at least one object descriptor. Performing the action may comprise adding, to the digital media content, information indicative of a spatial location of a person corresponding to the at least one object descriptor in the digital media content, and adding the identifier associated with the first object descriptor to the digital media content.
In one embodiment, metadata information associated with the first object descriptor is determined from the machine readable identifier. In this embodiment, the first object descriptor may specify features of a document fragment and the set of objects determined from the digital media content may comprise one or more document fragments. A spatial location in the digital media content of an object corresponding to the at least one object descriptor is determined. The action performed may comprise annotating the digital media content such that a portion of the metadata information is placed proximal to or overlapping with the spatial location of the object corresponding to the at least one object descriptor.
The digital media content may comprise an image, textual information, audio information, video information, or the like, and combinations thereof.
The first object descriptor may be determined from the machine readable identifier by decoding the machine readable identifier. In one embodiment, the machine readable identifier may be a barcode. The barcode may be read using a barcode reader and the first object descriptor may be determined from the barcode. In another embodiment, the machine readable identifier may be stored on a radio frequency identifier (RFID) tag. An RFID reader may be used to read information from the RFID tag and the first object descriptor may be determined from the information read from the RFID tag.
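As an illustration of the barcode case, the following sketch reads a barcode from an image and splits its payload into descriptor records. It assumes the open-source pyzbar and Pillow libraries and a hypothetical one-descriptor-per-line payload convention; the actual encoding used by a given embodiment may differ.

```python
# Hedged sketch: decode a barcode with pyzbar and treat each payload line as
# one serialized object descriptor (a hypothetical convention).
from PIL import Image
from pyzbar.pyzbar import decode


def read_descriptor_payload(image_path: str) -> list:
    """Decode the first barcode found in the image and split its payload into lines."""
    results = decode(Image.open(image_path))
    if not results:
        return []
    payload = results[0].data.decode("utf-8", errors="replace")
    return [line for line in payload.splitlines() if line.strip()]
```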
In one embodiment, generating an object descriptor for an object in the set of objects comprises extracting one or more features for the object and generating an object descriptor for the object based upon the extracted one or more features. The one or more features to be extracted for an object from the digital media content may be determined based upon the one or more features specified by the first object descriptor determined from the machine readable identifier. The first object descriptor may be represented in MPEG-7 format. The first object descriptor may specify one or more features of a person, a document fragment, an image, a slide, a motion, or a speech pattern. Determining the set of objects from the digital media content comprises analyzing the digital media content for the one or more features specified by the first object descriptor.
The foregoing, together with other features, embodiments, and advantages of the present invention, will become more apparent when referring to the following specification, claims, and accompanying drawings.
In the following description, for the purposes of explanation, specific details are set forth in order to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details.
Embodiments of the present invention provide techniques for automatically comparing one or more objects determined from digital media content (e.g., an image, audio information, video information) to one or more objects specified by a machine readable identifier to determine if an object determined from the media content matches an object specified by the machine readable identifier. One or more actions may be initiated upon determining that an object determined from the media content matches an object specified by the machine readable identifier. Information identifying the action to be initiated may also be encapsulated by the machine readable identifier.
Memory subsystem 104 may be configured to store the basic programming and data constructs that provide the functionality of system 100. For example, software code modules or instructions 112 that provide the functionality of system 100 may be stored in memory 104. These software modules or instructions may be executed by processor 102. Memory 104 may also provide a repository for storing various types of data used in accordance with the present invention. Memory subsystem 104 may include a number of memories including a main random access memory (RAM) for storage of instructions and data during program execution and a read only memory (ROM) in which fixed instructions are stored. Memory subsystem 104 may also include removable media such as an optical disk, a memory card, a memory cartridge, and other storage media.
User interface subsystem 110 enables a user to interact with system 100. For example, a user may use user interface subsystem 110 to input information to system 100. The information may be input via tools such as a mouse, a pointer, a keyboard, a touchscreen, a stylus, or other like input devices. System 100 may also output information using user interface subsystem 110. Output devices may include a screen or monitor, audio information output devices, or other devices capable of outputting information.
System 100 is configured to receive one or more machine readable identifiers 114. A machine readable identifier 114 encapsulates information related to a set of one or more objects. Machine readable identifiers may be embodied in different forms such as barcodes, information stored in radio frequency identifier (RFID) tags, and the like. A machine readable identifier may be provided to system 100, for example, by a user of system 100. Alternatively, system 100 may read a machine readable identifier using machine readable identifier capture subsystem 106. Machine readable identifier capture subsystem 106 may be configured to capture (e.g., read, detect) a machine readable identifier or access it from a memory location accessible to system 100. The components of machine readable identifier capture subsystem 106 and the functions performed by the subsystem may depend upon the types of machine readable identifiers used. For example, for capturing machine readable identifiers that are in the form of barcodes, machine readable identifier capture subsystem 106 may comprise a barcode reader for reading the barcodes. In embodiments where the machine readable identifier information is stored in RFID tags, machine readable identifier capture subsystem 106 may comprise a receiver for reading the information from the RFID tags.
A machine readable identifier received by system 100 may specify one or more objects. System 100 may be configured to decode the machine readable identifier to extract information encapsulated by the machine readable identifier. The extracted information may comprise information specifying one or more objects. Processing for decoding and extracting the information from a machine readable identifier may be performed by processor 102.
The decoded machine readable identifier information 116 may be stored in memory 104. According to an embodiment of the present invention, the information decoded from a machine readable identifier comprises information identifying one or more object descriptors. An object descriptor identifies an object by specifying one or more features (or characteristics) of the object. Various different types of objects may be specified, possibly for different media content types. Examples of objects include but are not limited to a document fragment (for digital document content), an image, a slide, a person (for a photograph), a speech pattern (for audio content), a motion (for video content), etc.
The type of features specified by an object descriptor may depend on the type of object being described. In general, the features may describe aspects of the contents of the object or other characteristics of the object being specified. For example, if the object is a face of a person, then the object descriptor may describe visual appearance characteristics of the face. As another example, if the object being specified is a document fragment, then the object descriptor may specify features identifying characteristics of the document fragment such as the number of n-grams (e.g., words), the color distribution within the document fragment, the white space distribution within the document fragment, a color histogram, or other characteristics of the document fragment. As yet another example, if the object being specified is a speech pattern (for audio content), then the object descriptor may specify pre-recorded phonemes characterizing the speech pattern. If the object is a motion object (for video content), then the object descriptor may specify a video sequence or trajectory characterizing the motion (e.g., how a person walks, sign language). Various different standards may be used to specify and process features, such as ISO/IEC 15938-3 (MPEG-7: Visual, 2002), ISO/IEC 15938-4 (MPEG-7: Audio, 2002), and ISO/IEC 15938-8 (MPEG-7: Extraction and Use of MPEG-7 Description, 2002), the entire contents of which are herein incorporated by reference for all purposes.
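Because the relevant features differ per object type, an implementation might keep a simple mapping from object type to feature names. The sketch below is only an example; the feature names are illustrative and not drawn from any standard.

```python
# Example mapping from object type to the features an extractor would compute.
# Feature names are illustrative only.
FEATURES_BY_OBJECT_TYPE = {
    "face":              ["appearance_histogram", "face_outline"],
    "document_fragment": ["word_lengths", "white_space_distribution", "color_histogram"],
    "speech_pattern":    ["phoneme_sequence"],
    "motion":            ["trajectory"],
}


def features_to_extract(object_type: str) -> list:
    """Return the features to compute for the given object type."""
    return FEATURES_BY_OBJECT_TYPE.get(object_type, [])
```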
Information 116 that is decoded from a machine readable identifier may also comprise metadata information. The metadata information may specify one or more actions to be performed for one or more object descriptors. The metadata information may also comprise information to be used in the performance of the action. The metadata may also comprise other types of information that may be used for a variety of different purposes.
In one embodiment, the metadata information may also be associated with individual object descriptors decoded from the machine readable identifier. The metadata associated with an object descriptor may include additional information associated with the object descriptor. For example, the metadata associated with an individual object descriptor may identify an action to be performed when an object descriptor matching the individual object descriptor has been identified. Examples of metadata information and the types of actions that may be performed are described below.
As previously described, the information decoded from a machine readable identifier may comprise metadata information that may identify an action to be performed for an object descriptor. Various other techniques may also be used to determine an action to be performed for an object descriptor. For example, in some embodiments, the metadata information may not indicate an action(s) but the action(s) to be performed may be inherent or automatically determined based upon the metadata information. In other embodiments, the action to be performed for an object descriptor may also be inherently determined based upon the object descriptor itself and information specified by the object descriptor. In yet other embodiments, a combination of the object descriptor and the metadata information may be used to determine an action to be performed. In yet other embodiments, the action(s) to be performed may be preconfigured for different types of objects. It should be apparent that performance of an action is not necessary for the present invention as recited in the claims.
As depicted in
As depicted in
System 100 is configured to analyze the media content and determine a set of one or more objects from the digital media content. In one embodiment, information identifying the type of objects to be determined from the media content may be specified by a user of system 100. The type of objects to be determined may also be determined from the type of media content itself. For example, if the media content is a photograph of people, then system 100 may be configured to automatically extract individual people (or some other object) from the photograph. As another example, if the media content is a document, then system 100 may be configured to automatically determine document fragments (or some other objects such as images, etc.) from the document.
The type of objects to be determined from the digital media content may also be determined from the object descriptors that have been decoded from one or more machine readable identifiers. For example, if the object descriptors specify faces of people, then faces may automatically be determined from the media content. If the object descriptors specify audio phonemes, then voice patterns may be determined from the media content. In some embodiments, the application context in which processing is performed may also be used to identify the type of objects to be determined from the media content.
Other information related to the objects determined from the media content may also be determined. For example, where appropriate, spatial information for an object determined from the media content may be determined. The spatial information for an object may identify the position of the object in the media content. For example, as previously described, if the media content is a document, then document fragments may be determined from the document and spatial coordinates of the fragments within the document may also be determined. As another example, as previously described, if the media content is a photograph, then people or their faces present in the photograph may be determined from the photograph. For each person or face, spatial information indicating the location of the person or face within the photograph may also be determined. In one embodiment, spatial information for an object may be determined at the time the object is determined from the media content. Alternatively, spatial information for an object may be determined at some later time such as when an action is to be performed for an object.
System 100 is then configured to determine object descriptors for the objects determined from the media content. As part of the processing for generating object descriptors, for each determined object, system 100 may extract features from the object and then generate an object descriptor for the object based upon the extracted features. The features to be extracted from an object may depend upon the type of the object. For example, if the object is a person's face, then facial features may be extracted from the face object and then used to generate an object descriptor for the object.
The features to be extracted from the objects may also be guided by the features described by the object descriptors that have been decoded from a machine readable identifier. For example, assume that a set of object descriptors have been extracted from a machine readable identifier with each object descriptor identifying a document fragment. An object descriptor may identify a document fragment by specifying word lengths occurring in the document fragment. In this embodiment, for a document fragment determined from the media content, system 100 may only extract word lengths from the document fragment. In this manner, for an object determined from the media content, the number of features that are extracted from the object is reduced and limited to the features identified by the object descriptors determined from the machine readable identifier. The object descriptors determined from a machine readable identifier thus reduce the features that have to be extracted from the media content. This simplifies the feature extraction process thereby reducing the memory and computational resources required for the feature extraction. Reduction of the features search space may also increase the accuracy of the feature extraction process.
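The following sketch shows one way the decoded descriptors can restrict feature extraction: only the features they name are computed. The extractor registry and the word-length extractor are hypothetical stand-ins.

```python
# Sketch of feature extraction guided by the decoded descriptors: only the
# requested features are computed. The extractors shown are placeholders.
from typing import Dict, List


def extract_word_lengths(text: str) -> List[int]:
    # word lengths, as in the document fragment example above
    return [len(word) for word in text.split()]


EXTRACTORS = {
    "word_lengths": extract_word_lengths,
    # further feature extractors would be registered here
}


def extract_guided_features(text: str, wanted: List[str]) -> Dict[str, object]:
    """Compute only the features requested by the decoded object descriptors."""
    return {name: EXTRACTORS[name](text) for name in wanted if name in EXTRACTORS}
```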
After extracting features for the objects determined from the media content, system 100 generates an object descriptor for each object based upon the features extracted from that object. The object descriptors generated for the objects determined from the media content are then compared to the object descriptors decoded from the machine readable identifier(s) to find matching object descriptors, if any.
Different techniques may be used to compare the two sets of object descriptors to find matching object descriptors. According to one technique, for each object descriptor in the set of object descriptors generated for objects determined from the media content, a distance metric is calculated for that object descriptor and each object descriptor in the set of object descriptors decoded from a machine readable identifier, where the distance metric between two object descriptors provides a measure of the similarity or matching between the two object descriptors. For any two object descriptors, the distance metric calculated for the pair may then be compared to a preset threshold to determine if the two object descriptors are to be considered as matching. For example, in an embodiment where a lower value for the distance metric identifies a better match, two object descriptors may be considered as matching if the distance metric calculated for the pair is less than the threshold value. Accordingly, system 100 may identify two object descriptors as matching even though they are not exactly similar. A user may thus control the desired amount of similarity required for a match by setting the threshold to an appropriate value.
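A concrete (hypothetical) choice of distance metric is Euclidean distance over the features two descriptors share, compared against the user-settable threshold; the sketch below follows the convention above that a lower distance indicates a better match.

```python
# Hypothetical distance metric and threshold test for descriptor matching.
import math
from typing import Dict


def descriptor_distance(a: Dict[str, float], b: Dict[str, float]) -> float:
    shared = set(a) & set(b)
    if not shared:
        return float("inf")           # nothing comparable, treat as no match
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in shared))


def is_match(a: Dict[str, float], b: Dict[str, float], threshold: float) -> bool:
    """Lower distance means a better match; match if the distance is below the threshold."""
    return descriptor_distance(a, b) < threshold
```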
If an object descriptor from the set of object descriptors generated for the objects determined from the media content matches an object descriptor decoded from the machine readable identifier, it implies that an object determined from the media content and corresponding to the matching object descriptor satisfies or matches the features specified by an object descriptor decoded from the machine readable identifier. In other words, a match indicates that an object described by an object descriptor decoded from the machine readable identifier is found among the objects determined from the media content, i.e., that an object determined from the media content has features matching the features specified by an object descriptor decoded from the machine readable identifier.
Various actions may be initiated for an object descriptor in the set of object descriptors generated for the objects that is identified as matching an object descriptor decoded from the machine readable identifier. As previously described, various techniques may be used for determining the action to be performed. For example, if a specific object descriptor generated for an object determined from the media content is found to match a first object descriptor decoded from a machine readable identifier, then the action to be initiated may be specified by metadata associated with the first object descriptor. Alternatively, the action to be performed may be inherent or automatically determined from the contents of the metadata information associated with the first object descriptor. The action to be initiated may also be determined from the nature of the first object descriptor or a combination of the different techniques. In yet other embodiments, a preconfigured action may be initiated or performed. In some embodiments, actions may also be performed if an object descriptor generated for an object determined from the media content does not match an object descriptor decoded from the machine readable identifier.
Various different types of actions may be initiated or performed. Examples of actions include annotating the media content, performing an action using the media content, updating a database, sending a message, invoking a URL, or other like actions. Metadata information, if any, associated with the matching object descriptor in information 116 decoded from machine readable identifier 114 may be used as part of the action.
For example, assume that a first object descriptor generated for an object determined from document media content is found to match a second object descriptor decoded from a machine readable identifier. Further assume that the metadata associated with the second object descriptor identifies a URI. In this scenario, the action to be performed may be automatically determined and may comprise annotating the document such that the URI associated with the second object descriptor is superimposed, partially overlapped, or placed proximal to the object corresponding to the first object descriptor in the document.
As another example, annotating the media content may comprise identifying the spatial location of an object in an image corresponding to a matching object descriptor and adding that information to the header of the image (e.g., may be added to the JPEG header for the image). In this manner various different actions may be initiated.
In the embodiment depicted in
In alternative embodiments, the various processing described above may be performed by system 100 in association with one or more other data processing systems.
For the embodiment depicted in
In another embodiment, the processing of machine readable identifiers, processing of media content, and comparison of object descriptors may be performed by system 100 while the action may be performed by server 130. In yet other embodiments, the actions to be performed may be determined by the server and the information provided to system 100. Various other combinations of the processing steps may be performed in alternative embodiments. Various other data processing systems in addition to system 100 and server 130 may be involved in the processing in other embodiments. In one embodiment, the machine readable identifier information and the digital media content may be received via a network connection via wired or wireless links.
As depicted in
The machine readable identifier received in 202 is decoded to extract object descriptors (D1) and possibly metadata information (step 204). Each decoded object descriptor may identify an object by specifying one or more features (or characteristics) of the object. The metadata information, if any, decoded from the machine readable identifier may be associated with one or more of the decoded object descriptors. The system is then ready for analyzing digital media content.
As shown in
One or more objects are then determined from the media content received in 206 (step 208). As previously described, various different types of objects may be determined from the media content. Various criteria may control the type of objects to be determined, including information provided by a user, the type of media content, the context of use, and/or the information decoded in 204.
A set of object descriptors (D2) are then generated for the one or more objects determined from the digital media content in 208 (step 210). As part of the processing in step 210, features may be extracted from each object determined in 208. An object descriptor may then be generated for an object based upon the features extracted from the object. As previously described, the features to be extracted from an object may depend upon the type of the object. Further, the features to be extracted from the objects may be guided by the features described by the object descriptors decoded from a machine readable identifier in step 204. The features that are extracted for an object may be limited to the features identified by the object descriptors (D1) determined from the machine readable identifier. The object descriptors (D1) thus reduce the features that have to be extracted from the media content. This simplifies the feature extraction process thereby reducing the memory and computational resources required for the feature extraction. Reduction of the features search space may also increase the accuracy of the feature extraction process.
The object descriptors (D1) determined in 204 are then compared to the object descriptors (D2) generated in 210 to find any matching object descriptors (step 212). Different techniques may be used to compare the two sets of object descriptors to find matching object descriptors. As previously described, according to one technique, for each object descriptor in D2, a distance metric is calculated for that object descriptor and each object descriptor in D1, where the distance metric between two object descriptors provides a measure of the similarity or matching between the two object descriptors. The distance metric calculated for the pair may then be compared to a preset threshold to determine if the two object descriptors are to be considered as matching. Accordingly, two object descriptors may be considered as matching even though they are not exactly similar. A user may control the desired amount of similarity required for a match by setting the threshold to an appropriate value.
One or more actions, if any, may then be initiated or performed for an object descriptor in D2 that matches an object descriptor in D1 (step 214). Various techniques may be used to determine whether or not an action(s) is to be initiated for a matching object descriptor in D2. Various different types of actions may be initiated. In one embodiment, for a matching object descriptor in D2, the action to be initiated is specified by metadata associated with the matching object descriptor in D1. The action to be performed may be inherent or automatically determined from the contents of the metadata information associated with the matching object descriptor in D1. The action to be initiated may also be determined from the nature of the matching object descriptors or a combination of the different techniques. In yet other embodiments, a preconfigured action may be initiated or performed.
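One simple way to realize step 214 is a small dispatch table: use the action named in the metadata associated with the matching descriptor in D1 when present, otherwise fall back to a preconfigured default. The action names and handlers below are purely illustrative.

```python
# Illustrative action dispatch for step 214; handlers and action names are examples.
from typing import Callable, Dict


def annotate(media, metadata):
    print("annotating media with", metadata.get("label", ""))


def invoke_url(media, metadata):
    print("invoking URL", metadata.get("url", ""))


ACTIONS: Dict[str, Callable] = {"annotate": annotate, "invoke_url": invoke_url}


def perform_action(media, metadata: Dict[str, str], default_action: str = "annotate") -> None:
    """Run the action identified by the metadata, or a preconfigured default."""
    handler = ACTIONS.get(metadata.get("action", default_action), annotate)
    handler(media, metadata)
```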
Various different types of actions may be initiated and performed. Various types of information may be used in the performance of the action. For example, metadata information associated with the matching object descriptor in D1 may be used for performing the action. In other embodiments, the media content received in 206 or portions thereof may be used in performing the action. Information related to the object whose object descriptor matches an object descriptor in D1 may also be used for performing the action. For example, spatial information related to the object may be used in performing the action.
After the action has been initiated or performed, a check may be made to determine whether or not to stop processing (step 216). If further processing is to be performed then the flowchart processing continues with step 206 wherein new media content is received for analysis. Else, the processing is terminated.
As described above, a single machine readable identifier may be used to encode information for a collection of objects. The machine readable identifier may encode information identifying one or more object descriptors, each object descriptor identifying an object by identifying features or characteristics of the object. The machine readable identifier may also encode information identifying actions upon finding a matching object descriptor. Per the processing described above, objects determined from the media content are identified whose object descriptors match the object descriptors extracted from the machine readable identifier. Accordingly, objects from the media content are identified whose features match the features specified by the object descriptors decoded from the machine readable identifier. Actions may be performed for matching object descriptors. In this manner, actions may be performed for one or more objects determined from the media content without having to associate the machine readable identifier with the media content.
Since the object descriptors are encoded in the machine readable identifier itself, in one embodiment, the entire processing can be performed by the system by reading the machine readable identifier without requiring access to a server or some other system. For example, as depicted in
Further, the object descriptors and other information decoded from a machine readable identifier are used to guide the analysis of the media content. As a result, only specific objects and associated features need to be determined from the media content. This simplifies the process of analyzing the media content to recognize objects from the media content, especially where the media content may have several non-relevant objects. As a result of the simplification, the object determination may be performed using reduced processing and memory resources. Additionally, feature extraction is performed for only those objects that are determined from the media content. Further, only those features specified by object descriptors decoded from the machine readable identifier may be extracted. This simplifies the feature extraction process thereby reducing the memory and computational resources required for the feature extraction. Reduction of the features search space may also increase the accuracy of the feature extraction process. As a result, it may be feasible to run the feature extraction and object determination processes on a low power device such as a cellular phone.
All the objects identified by the object descriptors decoded from the machine readable identifier need not be present in the media content that is analyzed. The media content that is analyzed may comprise none of the objects described by object descriptors decoded from a machine readable identifier, a subset of the objects, or additional objects.
Various different applications may be based upon the teachings of the present invention. One such application is depicted in
Various different forms of barcodes may be used, such as QRCodes+MPEG-7 barcodes. In one embodiment, the capacity of a QRCode is approximately 3000 bytes and is sufficient for representation of object descriptors describing features based upon color histograms, color layouts, shape descriptors (Angular Radial Transform (ART) based, Curvature Scale Space (CSS) based), texture descriptors (Gabor, etc.), Mixed Document Reality (MDR) descriptors (e.g., OCR based, image based), etc. MPEG-7 provides a standard and compressed representation for object descriptors. Applicable standards include ISO/IEC 15938-3 (MPEG-7: Visual, 2002), ISO/IEC 15938-4 (MPEG-7: Audio, 2002), and ISO/IEC 15938-8 (MPEG-7: Extraction and Use of MPEG-7 Description, 2002). In one implementation, approximately 100 bytes may be used for each document fragment descriptor, and as a result approximately 30 document fragment descriptors may be described in a document with one machine readable identifier. Machine readable identifier 306 may take different forms in alternative embodiments, such as information stored on an RFID tag.
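A quick arithmetic check of the figures above (assuming roughly 3000 bytes of QRCode capacity and roughly 100 bytes per document fragment descriptor):

```python
# Back-of-the-envelope capacity check using the approximate figures above.
QR_CODE_CAPACITY_BYTES = 3000   # approximate binary capacity of a large QRCode
BYTES_PER_DESCRIPTOR = 100      # approximate size of one document fragment descriptor

print(QR_CODE_CAPACITY_BYTES // BYTES_PER_DESCRIPTOR)   # -> 30 descriptors per identifier
```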
A user may use a device equipped with an appropriate reader to read machine readable identifier 306. For example, as depicted in
The machine readable identifier read by phone 308 may then be decoded to extract decoded information 310. Information 310 may comprise information identifying a set of object descriptors 312 (D1). In
Decoded information 310 may also comprise metadata information 314 associated with object descriptors 312. In the embodiment depicted in
While paging through book 302, a user may use camera phone 308 (or some other device) to capture a digital image 320 of a portion of a page 322 in book 302. Digital image 320 corresponds to a document fragment and represents the media content to be searched for objects that match the features specified by object descriptor 312. Image 320 may then be processed to determine objects. In this example, since object descriptors D1 identify document fragments, only document fragments may be determined from image 320. Features may then be extracted from the document fragments. The feature extraction may be limited to word lengths, spaces, and new lines as specified by the object descriptors.
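The word-length, space, and new-line features mentioned above might be computed as in the following sketch. OCR of image 320 is assumed to have already produced the fragment's text; the function and field names are hypothetical.

```python
# Hypothetical extraction of word-length, space, and new-line features from a
# recognized document fragment (OCR output assumed to be available as text).
def fragment_features(fragment_text: str) -> dict:
    lines = fragment_text.splitlines()
    return {
        "word_lengths": [[len(word) for word in line.split()] for line in lines],
        "spaces_per_line": [line.count(" ") for line in lines],
        "line_count": len(lines),
    }


print(fragment_features("one or more objects\nmay be determined"))
```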
Object descriptors (D2) may then be generated for the objects recognized from image 320. The object descriptors (D2) generated for the objects recognized from image 320 are then compared to object descriptors (D1) decoded from machine readable identifier 306 to find matching object descriptors.
An action is then performed for an object descriptor in D2 that matches an object descriptor in D1. The action that is performed is identified by the metadata associated with the matching object descriptor in D1. In the example depicted in
In the example depicted in
As part of the action, URLs may be associated with the markers. For a particular identified hot zone, the marker overlaid on the hot zone may be associated with a URL specified by the metadata for the corresponding matching object descriptor from D1. One or more of the URLs may also be invoked and displayed. For example, a document 328 corresponding to a URL associated with marker 324 may be invoked and displayed by device 308 (or by some other device or system). In other embodiments, the URL corresponding to a marker may be invoked when a user selects the marker from annotated image 320′.
As the use of RFID tags becomes more prevalent, more and more products will come with RFID tags attached to them. People may also carry RFID tags that store personal identification information. For example, a person may carry an RFID tag that stores biometric information about the person, such as one or more object descriptors describing features of the person, including information describing the visual appearance of the person's face. Since the object descriptors describe face features they may also be referred to as face descriptors. A descriptor may also describe other features of a person such as biometric information for the person.
An RFID tag may also store metadata for the person. The metadata may be associated with one or more face descriptors stored in the RFID tag. For example, for a person, the metadata may identify the name of the person. The metadata associated with a face descriptor may also store other types of information such as the address of the person, health information for the person, telephone number, and other information related to the person. Other information may also be stored by the RFID tag. Several people may carry such RFID tags, each potentially storing a facial descriptor for the corresponding person and possibly associated metadata information.
In alternative embodiments, other types of object descriptors describing features of a person may be stored in an RFID tag for a person. These object descriptors may then be used to identify persons from the media content.
The RFID information read by the camera from the RFID tags may then be decoded by the camera to form decoded information 404. Decoded information 404 may comprise a list of face descriptors 406 and associated metadata information 408 specifying names of the persons.
Photo image 410 captured by the digital camera serves as the media content that is analyzed. In one embodiment, the analysis of the captured photo image may be performed by the camera. The camera may determine a set of objects from the captured image. The objects in this application are faces 412 of people occurring in the captured image. For each face 412 determined from image 410, the spatial location of the face in the photograph may also be determined. In alternative embodiments, the spatial information for a face in the photograph may be determined at the time of performing an action for the face. Features may be extracted for the faces. The camera may then generate a face descriptor (object descriptor) for each face (object) determined from image 410.
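For illustration, the face-determination step might look like the sketch below, which uses OpenCV's bundled Haar cascade to locate faces and keeps a small grayscale thumbnail as a stand-in descriptor. Both the detector choice and the thumbnail descriptor are assumptions, not the specific technique required here.

```python
# Hedged sketch of face detection plus a stand-in face descriptor (a 32x32
# grayscale thumbnail). Assumes the opencv-python package.
import cv2


def detect_faces_with_locations(image_path: str):
    """Return (x, y, w, h, thumbnail) tuples for each detected face."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = []
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        thumbnail = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
        faces.append((int(x), int(y), int(w), int(h), thumbnail))
    return faces
```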
The camera may then compare the face descriptors generated for the one or more faces determined from image 410 to face descriptors 406 decoded from information read from the RFID tags to find any matching object descriptors. As previously described, various different techniques may be used to perform the comparison and to determine whether two object descriptors are matching.
If a face descriptor for a face determined from image 410 is found to match a face descriptor 406 decoded from the RFID tags, it implies that a face in photo image 410 satisfies or matches the features specified by a face descriptor decoded or read from the RFID tags. Thus, a match indicates that a face described by a face descriptor decoded from information read from the RFID tags is found in the faces (objects) determined from the photo image.
An action may then be initiated or performed for each matching face descriptor generated for a face determined from image 410. In the example depicted in
The information may be added to the image in various ways. For example, a JPEG image 414 comprises image data 416 and also header information related to the image. The tag information for a face, including spatial coordinates information and the name, may be inserted in the header for the JPEG image. In another embodiment, image 410 may be annotated by adding information to the image data. For example, the name may be printed on top of (or proximal to, or partially overlapping with) the face corresponding to the name.
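The visible form of this annotation could be produced as in the sketch below, which draws each matched name at the face's spatial location using Pillow. Writing the same tag data into the JPEG header is equally possible but depends on the metadata library chosen, so only the drawn annotation is shown; the file names and coordinates are placeholders.

```python
# Illustrative annotation: draw each matched person's name at the face's
# spatial location and save the annotated image. Paths and coordinates are
# placeholders.
from PIL import Image, ImageDraw


def annotate_faces(image_path: str, tags, output_path: str) -> None:
    """tags: iterable of (name, x, y) tuples in image coordinates."""
    image = Image.open(image_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    for name, x, y in tags:
        draw.rectangle([x, y - 14, x + 8 * len(name), y], fill="white")
        draw.text((x, y - 14), name, fill="black")
    image.save(output_path, format="JPEG")


# Example usage (placeholder paths and coordinates):
# annotate_faces("photo.jpg", [("Alice", 120, 80), ("Bob", 400, 95)], "photo_tagged.jpg")
```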
Other actions may also be performed. For example, for a particular face located in the image, the action performed may comprise associating a link with the face in the image, wherein selection of the link invokes information (e.g., a web page of the person, a document authored/edited/viewed by the person, etc.) related to the person.
According to an embodiment of the present invention, the processing depicted in
In the example depicted in
As another example, a camera equipped with an RFID reader may be used to capture an image of books placed on a bookshelf in a library. Each book may be tagged with an RFID tag. A tag associated with a book may store the card catalog number that is printed on the spine of the book. The image may then be analyzed to identify card catalog numbers from the image. The spatial locations of books within the image may also be determined. The image may then be annotated with the card catalog numbers such that card catalog numbers are assigned to individual books in the image.
Similar processing techniques may also be used for applications comprising a collection of things such as books, products, etc. For example, a camera equipped with an RFID reader may be used to capture an image of products on a shelf in a store. For example, the image may depict different brands of toothpastes. An RFID tag may be associated with each toothpaste and may store information specifying a toothpaste descriptor describing features of the toothpaste such as color histograms of the package, etc. Each RFID tag attached to a toothpaste may also store metadata information specifying a brand of the toothpaste (or other information such as product identification information). Based upon the information read from the RFID tags attached to the toothpastes, the image of the toothpastes may be analyzed to determine locations of individual toothpastes in the image. The image may then be annotated with correct brand names for the toothpastes at various locations in the image.
As previously described, teachings of the present invention may be used to determine occurrences of particular objects (described by information decoded from one or more machine readable identifiers) in media content and also to determine spatial locations of the particular objects. For example, an image may be captured of physical objects, and the image may then be analyzed, guided by information read from machine readable identifiers read for the physical objects, to determine the spatial locations of the physical objects within the image, which in turn identifies the spatial locations of the physical objects. Embodiments of the present invention may thus be used to determine spatial locations of physical objects.
Various different applications may require determination of spatial locations of physical objects. One solution for determining the spatial locations of physical objects is to attach RFID tags to the physical objects. An RFID reader grid may then be used to read information from the RFID tags and to determine the locations of the physical objects. An RFID reader grid, however, may not always be available. Further, the costs of creating such a grid may be prohibitive.
Embodiments of the present invention provide a simpler solution to the problem of determining the location of the physical objects. An RFID tag may be attached to each physical object. The RFID tag attached to a physical object may store information specifying an object descriptor. The object descriptor may describe features of the physical object. For example, an object descriptor may specify visually distinctive features of the physical object such as pictures, text, etc. printed on the object, the color of the object, the dimensions or basic shape of the object, a color histogram for the object, text features, etc. A camera equipped with an RFID reader may be used to capture an image of the physical objects. While capturing the image, the camera may also be configured to read information from RFID tags attached to the physical objects. The information read from the RFID tags may be decoded to determine object descriptors (and possibly metadata information) specifying features of the physical objects. Based upon the decoded object descriptors, the image of the physical objects may be analyzed to determine one or more objects in the image and the spatial locations of the objects determined from the image. Object descriptors may be generated for the objects determined from the image. The object descriptors generated for objects determined from the image may then be compared to object descriptors decoded from information read from the RFID tags to identify matching object descriptors and associated spatial information. In this manner, the spatial positions of the physical objects may be determined by analyzing the image based upon information read from the RFID tags attached to the physical objects.
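Combining the pieces for this physical-object case, a sketch of the final step might map each RFID tag to the bounding box of its matching object in the captured image. The data shapes, distance function, and threshold are the same assumptions used in the matching sketches earlier.

```python
# Sketch: map each RFID tag's decoded descriptor to the location of its best
# matching object in the captured image. Data shapes are hypothetical.
from typing import Callable, Dict, List, Tuple

BoundingBox = Tuple[int, int, int, int]   # x, y, width, height


def locate_tagged_objects(
        tag_descriptors: Dict[str, dict],               # tag id -> decoded descriptor features
        image_objects: List[Tuple[dict, BoundingBox]],  # (generated features, spatial location)
        distance: Callable[[dict, dict], float],
        threshold: float) -> Dict[str, BoundingBox]:
    locations = {}
    for tag_id, tag_features in tag_descriptors.items():
        best = min(image_objects, key=lambda obj: distance(tag_features, obj[0]), default=None)
        if best is not None and distance(tag_features, best[0]) < threshold:
            locations[tag_id] = best[1]
    return locations
```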
An example of an application where spatial locations of physical objects are determined is described in U.S. application Ser. No. 11/396,375 filed Mar. 31, 2006. In that application, various actions are performed based upon spatial positions of media keys. Embodiments of the present invention may be used to determine the physical locations of the media keys.
The embodiments of the present invention described above assume that the object descriptors and the corresponding metadata information are decoded or extracted from one or more machine readable identifiers. In alternative embodiments, the object descriptors and the metadata may also be provided to the processing system, in which case the processing system does not have to process machine readable identifiers. The rest of the processing may be performed as described above. For example, in the example depicted in
Although specific embodiments of the invention have been described, various modifications, alterations, alternative constructions, and equivalents are also encompassed within the scope of the invention. The described invention is not restricted to operation within certain specific data processing environments, but is free to operate within a plurality of data processing environments. Additionally, although the present invention has been described using a particular series of transactions and steps, it should be apparent to those skilled in the art that the scope of the present invention is not limited to the described series of transactions and steps.
Further, while the present invention has been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are also within the scope of the present invention. The present invention may be implemented only in hardware, or only in software, or using combinations thereof.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. It will, however, be evident that additions, subtractions, deletions, and other modifications and changes may be made thereunto without departing from the broader spirit and scope of the invention as set forth in the claims.
20040264780 | Zhang et al. | Dec 2004 | A1 |
20050012960 | Eden et al. | Jan 2005 | A1 |
20050021478 | Gautier et al. | Jan 2005 | A1 |
20050069291 | Voss et al. | Mar 2005 | A1 |
20050080627 | Hennebert et al. | Apr 2005 | A1 |
20050080693 | Foss et al. | Apr 2005 | A1 |
20050080871 | Dinh et al. | Apr 2005 | A1 |
20050084154 | Li et al. | Apr 2005 | A1 |
20050086188 | Hillis et al. | Apr 2005 | A1 |
20050086224 | Franciosa et al. | Apr 2005 | A1 |
20050088684 | Naito et al. | Apr 2005 | A1 |
20050089246 | Luo | Apr 2005 | A1 |
20050097435 | Prakash et al. | May 2005 | A1 |
20050100219 | Berkner et al. | May 2005 | A1 |
20050108406 | Lee et al. | May 2005 | A1 |
20050111738 | Iizuka | May 2005 | A1 |
20050114325 | Liu et al. | May 2005 | A1 |
20050125390 | Hurst-Hiller et al. | Jun 2005 | A1 |
20050129293 | Acharya et al. | Jun 2005 | A1 |
20050135483 | Nair | Jun 2005 | A1 |
20050160115 | Starkweather | Jul 2005 | A1 |
20050160258 | O'Shea et al. | Jul 2005 | A1 |
20050165747 | Bargeron et al. | Jul 2005 | A1 |
20050165784 | Gomez et al. | Jul 2005 | A1 |
20050169511 | Jones | Aug 2005 | A1 |
20050169520 | Chen et al. | Aug 2005 | A1 |
20050182773 | Feinsmith | Aug 2005 | A1 |
20050185060 | Neven | Aug 2005 | A1 |
20050185225 | Brawn et al. | Aug 2005 | A1 |
20050187768 | Godden | Aug 2005 | A1 |
20050190273 | Toyama et al. | Sep 2005 | A1 |
20050198095 | Du et al. | Sep 2005 | A1 |
20050216257 | Tanabe et al. | Sep 2005 | A1 |
20050234851 | King et al. | Oct 2005 | A1 |
20050240381 | Seiler et al. | Oct 2005 | A1 |
20050244059 | Turski | Nov 2005 | A1 |
20050256866 | Lu et al. | Nov 2005 | A1 |
20050259866 | Jacobs et al. | Nov 2005 | A1 |
20050261990 | Gocht et al. | Nov 2005 | A1 |
20050262240 | Drees | Nov 2005 | A1 |
20050273812 | Sakai | Dec 2005 | A1 |
20050288859 | Golding et al. | Dec 2005 | A1 |
20050288911 | Porikli | Dec 2005 | A1 |
20050289182 | Pandian et al. | Dec 2005 | A1 |
20050289447 | Hadley et al. | Dec 2005 | A1 |
20060002607 | Boncyk | Jan 2006 | A1 |
20060012677 | Neven et al. | Jan 2006 | A1 |
20060014317 | Farnworth | Jan 2006 | A1 |
20060020630 | Stager et al. | Jan 2006 | A1 |
20060023945 | King et al. | Feb 2006 | A1 |
20060026140 | King et al. | Feb 2006 | A1 |
20060041605 | King et al. | Feb 2006 | A1 |
20060043188 | Kricorissian | Mar 2006 | A1 |
20060047639 | King et al. | Mar 2006 | A1 |
20060048059 | Etkin | Mar 2006 | A1 |
20060053097 | King et al. | Mar 2006 | A1 |
20060053101 | Stuart et al. | Mar 2006 | A1 |
20060056696 | Jun et al. | Mar 2006 | A1 |
20060056697 | Jun et al. | Mar 2006 | A1 |
20060061806 | King et al. | Mar 2006 | A1 |
20060070120 | Aoki et al. | Mar 2006 | A1 |
20060079214 | Mertama et al. | Apr 2006 | A1 |
20060080286 | Svendsen | Apr 2006 | A1 |
20060082438 | Bazakos et al. | Apr 2006 | A1 |
20060085477 | Phillips et al. | Apr 2006 | A1 |
20060085735 | Shimizu | Apr 2006 | A1 |
20060104515 | King et al. | May 2006 | A1 |
20060112092 | Ziou et al. | May 2006 | A1 |
20060114485 | Sato | Jun 2006 | A1 |
20060116555 | Pavlidis et al. | Jun 2006 | A1 |
20060119880 | Dandekar et al. | Jun 2006 | A1 |
20060122884 | Graham et al. | Jun 2006 | A1 |
20060122983 | King et al. | Jun 2006 | A1 |
20060123347 | Hewitt et al. | Jun 2006 | A1 |
20060140475 | Chin et al. | Jun 2006 | A1 |
20060140614 | Kim et al. | Jun 2006 | A1 |
20060143176 | Mojsilovic et al. | Jun 2006 | A1 |
20060147107 | Zhang et al. | Jul 2006 | A1 |
20060150079 | Albornoz et al. | Jul 2006 | A1 |
20060173560 | Widrow | Aug 2006 | A1 |
20060190812 | Ellenby et al. | Aug 2006 | A1 |
20060192997 | Matsumoto et al. | Aug 2006 | A1 |
20060200347 | Kim et al. | Sep 2006 | A1 |
20060200480 | Harris et al. | Sep 2006 | A1 |
20060206335 | Thelen et al. | Sep 2006 | A1 |
20060218225 | Hee Voon et al. | Sep 2006 | A1 |
20060227992 | Rathus et al. | Oct 2006 | A1 |
20060240862 | Neven et al. | Oct 2006 | A1 |
20060251292 | Gokturk et al. | Nov 2006 | A1 |
20060251339 | Gokturk et al. | Nov 2006 | A1 |
20060253439 | Ren et al. | Nov 2006 | A1 |
20060253491 | Gokturk et al. | Nov 2006 | A1 |
20060262352 | Hull et al. | Nov 2006 | A1 |
20060262962 | Hull et al. | Nov 2006 | A1 |
20060262976 | Hart et al. | Nov 2006 | A1 |
20060264209 | Atkinson et al. | Nov 2006 | A1 |
20060285172 | Hull et al. | Dec 2006 | A1 |
20060285755 | Hager et al. | Dec 2006 | A1 |
20060285772 | Hull et al. | Dec 2006 | A1 |
20060286951 | Nagamoto et al. | Dec 2006 | A1 |
20060294049 | Sechrest et al. | Dec 2006 | A1 |
20060294094 | King | Dec 2006 | A1 |
20070003147 | Viola et al. | Jan 2007 | A1 |
20070003166 | Berkner | Jan 2007 | A1 |
20070006129 | Cieslak et al. | Jan 2007 | A1 |
20070019261 | Chu | Jan 2007 | A1 |
20070036469 | Kim et al. | Feb 2007 | A1 |
20070041642 | Romanoff et al. | Feb 2007 | A1 |
20070041668 | Todaka | Feb 2007 | A1 |
20070047819 | Hull et al. | Mar 2007 | A1 |
20070052997 | Hull et al. | Mar 2007 | A1 |
20070053513 | Hoffberg | Mar 2007 | A1 |
20070063050 | Attia et al. | Mar 2007 | A1 |
20070076922 | Living et al. | Apr 2007 | A1 |
20070078846 | Gulli et al. | Apr 2007 | A1 |
20070106721 | Schloter | May 2007 | A1 |
20070115373 | Gallagher et al. | May 2007 | A1 |
20070118794 | Hollander et al. | May 2007 | A1 |
20070150466 | Brave et al. | Jun 2007 | A1 |
20070165904 | Nudd et al. | Jul 2007 | A1 |
20070174269 | Jing et al. | Jul 2007 | A1 |
20070175998 | Lev | Aug 2007 | A1 |
20070233613 | Barrus et al. | Oct 2007 | A1 |
20070236712 | Li | Oct 2007 | A1 |
20070237426 | Xie et al. | Oct 2007 | A1 |
20070242626 | Altberg | Oct 2007 | A1 |
20070271247 | Best et al. | Nov 2007 | A1 |
20070276845 | Geilich | Nov 2007 | A1 |
20070300142 | King | Dec 2007 | A1 |
20080004944 | Calabria | Jan 2008 | A1 |
20080009268 | Ramer et al. | Jan 2008 | A1 |
20080010605 | Frank | Jan 2008 | A1 |
20080037043 | Hull et al. | Feb 2008 | A1 |
20080059419 | Auerbach et al. | Mar 2008 | A1 |
20080071767 | Grieselhuber et al. | Mar 2008 | A1 |
20080071929 | Motte et al. | Mar 2008 | A1 |
20080078836 | Tomita | Apr 2008 | A1 |
20080106594 | Thurn | May 2008 | A1 |
20080120321 | Liu | May 2008 | A1 |
20080141117 | King | Jun 2008 | A1 |
20080177541 | Satomura | Jul 2008 | A1 |
20080229192 | Gear et al. | Sep 2008 | A1 |
20080267504 | Schloter et al. | Oct 2008 | A1 |
20080275881 | Conn et al. | Nov 2008 | A1 |
20080288476 | Kim et al. | Nov 2008 | A1 |
20080296362 | Lubow | Dec 2008 | A1 |
20080310717 | Saathoff et al. | Dec 2008 | A1 |
20080317383 | Franz et al. | Dec 2008 | A1 |
20090016564 | Ke et al. | Jan 2009 | A1 |
20090016604 | Ke et al. | Jan 2009 | A1 |
20090016615 | Hull et al. | Jan 2009 | A1 |
20090019402 | Ke et al. | Jan 2009 | A1 |
20090059922 | Appelman | Mar 2009 | A1 |
20090063431 | Erol et al. | Mar 2009 | A1 |
20090067726 | Erol et al. | Mar 2009 | A1 |
20090070110 | Erol et al. | Mar 2009 | A1 |
20090070302 | Moraleda et al. | Mar 2009 | A1 |
20090070415 | Kishi et al. | Mar 2009 | A1 |
20090074300 | Hull et al. | Mar 2009 | A1 |
20090076996 | Hull et al. | Mar 2009 | A1 |
20090080800 | Moraleda et al. | Mar 2009 | A1 |
20090092287 | Moraleda et al. | Apr 2009 | A1 |
20090100048 | Hull et al. | Apr 2009 | A1 |
20090100334 | Hull et al. | Apr 2009 | A1 |
20090152357 | Lei et al. | Jun 2009 | A1 |
20090228126 | Spielberg et al. | Sep 2009 | A1 |
20090235187 | Kim et al. | Sep 2009 | A1 |
20090248665 | Garg et al. | Oct 2009 | A1 |
20090254643 | Terheggen et al. | Oct 2009 | A1 |
20090265761 | Evanitsky | Oct 2009 | A1 |
20090285444 | Erol et al. | Nov 2009 | A1 |
20100013615 | Hebert et al. | Jan 2010 | A1 |
20100040296 | Ma et al. | Feb 2010 | A1 |
20100042511 | Sundaresan et al. | Feb 2010 | A1 |
20100046842 | Conwell | Feb 2010 | A1 |
20100057556 | Rousso et al. | Mar 2010 | A1 |
20100063961 | Guiheneuf et al. | Mar 2010 | A1 |
20100174783 | Zarom | Jul 2010 | A1 |
20100211567 | Abir | Aug 2010 | A1 |
20100239175 | Bober et al. | Sep 2010 | A1 |
20100306273 | Branigan et al. | Dec 2010 | A1 |
20110035384 | Qiu | Feb 2011 | A1 |
20110093492 | Sull et al. | Apr 2011 | A1 |
20110121069 | Lindahl et al. | May 2011 | A1 |
20110125727 | Zou et al. | May 2011 | A1 |
20110167064 | Achtermann et al. | Jul 2011 | A1 |
20110173521 | Horton et al. | Jul 2011 | A1 |
20110314031 | Chittar et al. | Dec 2011 | A1 |
20120166435 | Graham | Jun 2012 | A1 |
20120173504 | Moraleda | Jul 2012 | A1 |
20130027428 | Graham et al. | Jan 2013 | A1 |
20130031100 | Graham et al. | Jan 2013 | A1 |
20130031125 | Graham et al. | Jan 2013 | A1 |
20150139540 | Moraleda et al. | May 2015 | A1 |
20150287228 | Moraleda et al. | Oct 2015 | A1 |
20150324848 | Graham et al. | Nov 2015 | A1 |
20150350151 | Graham et al. | Dec 2015 | A1 |
Number | Date | Country |
---|---|---|
1245935 | Mar 2000 | CN |
0706283 | Apr 1996 | EP
1229496 | Aug 2002 | EP
1555626 | Jul 2005 | EP |
1662064 | May 2006 | EP |
1783681 | May 2007 | EP |
09-006961 | Jan 1997 | JP |
9134372 | May 1997 | JP |
10-228468 | Aug 1998 | JP |
10-0240765 | Sep 1998 | JP |
11-234560 | Aug 1999 | JP |
2000-165645 | Jun 2000 | JP |
2000-268179 | Sep 2000 | JP |
2001-211359 | Aug 2001 | JP |
2001-230916 | Aug 2001 | JP |
2002-513480 | May 2002 | JP |
2002521752 | Jul 2002 | JP |
2003-178081 | Jun 2003 | JP |
2004-055658 | Feb 2004 | JP |
2004234656 | Aug 2004 | JP |
2005-011005 | Jan 2005 | JP |
2005100274 | Apr 2005 | JP |
2005157931 | Jun 2005 | JP |
2005-242579 | Sep 2005 | JP |
2005-286395 | Oct 2005 | JP |
2006-053568 | Feb 2006 | JP |
2006-059351 | Mar 2006 | JP |
2006-229465 | Aug 2006 | JP |
2006-215756 | Aug 2006 | JP
2007-072573 | Mar 2007 | JP |
2007-140613 | Jun 2007 | JP |
2007-174270 | Jul 2007 | JP |
2007264992 | Oct 2007 | JP |
WO 9905658 | Feb 1999 | WO |
WO0005663 | Feb 2000 | WO |
WO 2004072897 | Aug 2004 | WO
WO 2005043270 | May 2005 | WO
WO2006092957 | Sep 2006 | WO |
2007023994 | Mar 2007 | WO |
WO 2007073347 | Jun 2007 | WO |
WO2008129373 | Oct 2008 | WO |
Entry |
---|
“Automatic Identification and Data Capture”, Unknown, Wikipedia Online Encyclopedia, Published: Jul. 21, 2008, pp. 1-2. |
Brassil, J. et al., “Hiding Information in Document Images,” Proc. Conf. Information Sciences and Systems (CISS-95), Mar. 1995, Johns Hopkins University, Baltimore, MD, pp. 482-489. |
Ho, T.K. et al., “Decision Combination in Multiple Classifier Systems,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Jan. 1994, pp. 66-75, vol. 16, No. 1. |
Hull, J., “Document Image Skew Detection: Survey and Annotated Bibliography,” Document Analysis Systems II, World Scientific, 1998, pp. 40-64. |
Khoubyari, S. et al., “Font and Function Word Identification in Document Recognition,” Computer Vision and Image Understanding, Jan. 1996, pp. 66-74, vol. 63, No. 1. |
Kopec, G.E. et al., “Document Image Decoding Using Markov Source Models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Jun. 1994, pp. 602-617, vol. 16, No. 6. |
McDonald, G., “Third Voice: Invisible Web Graffiti,” PC World, May 18, 1999, [online] [Retrieved on Nov. 14, 2006] Retrieved from the Internet<URL:http://www.pcworld.com/news/article/0,aid,11016,00.asp>. |
PCT International Search Report and Written Opinion, PCT/JP2006/316810, Oct. 10, 2006, 9 pages. |
PCT International Search Report and Written Opinion, PCT/JP2006/316811, Oct. 10, 2006, 9 pages. |
PCT International Search Report and Written Opinion, PCT/JP2006/316812, Oct. 10, 2006, 9 pages. |
PCT International Search Report and Written Opinion, PCT/JP2006/316814, Oct. 10, 2006, 11 pages. |
Aggarwal, M. et al., “On Cosine-fourth and Vignetting Effects in Real Lenses,” ICCV Proceedings, IEEE, 2001, vol. 1, pp. 472-479, [online] Retrieved from the Internet<URL: http://www.metaverselab.org/classis/635/reading/aggarwal-iccv.pdf>. |
Akenine-Moller, T. et al., “Real-Time Rendering,” A.K. Peters, Natick, MA, 2nd Edition, 2002, pp. 70-84. |
Archive of “Barcodepedia.com—the online barcode database,” [online] [Archived by http://archive.org on Jul. 9, 2006; Retrieved on Aug. 18, 2008] Retrieved from the Internet<http://web.archive.org/web/20060709101455/http://en.barcodepedia.com/>. |
Baba, M. et al., “Shadow Removal from a Real Image Based on Shadow Density,” Poster at SIGGRAPH2004, Updated Aug. 16, 2004, 4 pages, [online] Retrieved from the Internet<URL:http://www.cv.its.hiroshima-cu.ac.jp/baba/Shadow/poster04-02.pdf>. |
Baird, H.S., “Document Image Defect Models and Their Uses,” Proc., IAPR 2nd International Conference on Document Analysis and Recognition, Tsukuba Science City, Japan, Oct. 20-22, 1993, 7 pages. |
Baird, H., “Document Image Defect Models,” in Proc. of IAPR Workshop on Syntactic and Structural Pattern Recognition, Murray Hill, NJ, Jun. 1990, Structured Document Image Analysis, Springer-Verlag, pp. 546-556. |
Baird, H., “The State of the Art of Document Image Degradation Modeling,” in Proc. of the 4th IAPR International Workshop on Document Analysis Systems, Rio de Janeiro, Brazil, 2000, pp. 1-16, [online] Retrieved from the Internet<URL:http://www2.parc.xerox.com/istl/members/baird/das00.pas.gz>. |
Barney Smith, E.H. et al., “Text Degradations and OCR Training,” International Conference on Document Analysis and Recognition 2005, Seoul, Korea, Aug. 2005, 5 pages, [online] Retrieved from the Internet<URL:http://coen.boisestate.edu/EBarneySmith/Papers/ICDAR05—submit.pdf>. |
Bouget, J., “Camera Calibration Toolbox for Matlab,” Online Source, Updated Jul. 24, 2006, 6 pages, [online] Retrieved from the Internet<URL:http://www.vision.caltech.edu/bougetj/calib—doc/index.html#ref>. |
Boukraa, M. et al., “Tag-Based Vision: Assisting 3D Scene Analysis with Radio-Frequency Tags,” Jul. 8, 2002, Proceedings of the Fifth International Conference on Information Fusion, Piscataway, N.J., IEEE, Jul. 8-11, 2002, pp. 412-418. |
Boyd, S., “EE263: Introduction to Linear Dynamical Systems,” Online Lecture Notes, Stanford University, Spring Quarter, 2006-2007, Accessed on Sep. 11, 2006, 4 pages, [online] Retrieved from the Internet<URL:http://www.stanford.edu/class/ee263/#lectures>. |
“Call for Papers: ICAT 2007,” 17th International Conference on Artificial Reality and Telexistence, 2007, [Online] [Retrieved on Nov. 4, 2008] Retrieved from the Internet<URL:http://www.idemployee.id.tue.nl/g.w.m.rauterberg/conferences/ICAT2007-CfP.pdf>. |
Constantini, R. et al., “Virtual Sensor Design,” Proceedings of the SPIE, vol. 5301, 2004, pp. 408-419, Retrieved from the Internet<URL:http://ivrgwww.epfl.ch/publications/cs04.pdf>. |
Cover, T.M. et al., “Nearest Neighbor Pattern Classification,” IEEE Transactions on Information Theory, Jan. 1967, pp. 21-27, vol. IT-13, No. 1. |
Davis, M. et al., “Towards Context-Aware Face Recognition,” Proceedings of the 13th Annual ACM International Conference on Multimedia, Nov. 6-11, 2005, pp. 483-486, vol. 13. |
Doermann, D. et al., “Progress in Camera-Based Document Image Analysis,” Proceedings of the Seventh International Conference on Document Analysis and Recognition, ICDAR 2003, 11 pages, [online] Retrieved from the Internet<URL:http://www.cse.salford.ac.uk/prima/ICDAR2003/Papers/0111—keynote—III—doermann—d.pdf>. |
European Partial Search Report, European Application No. EP07015093.3, Dec. 17, 2007, 7 pages. |
European Search Report, European Application No. 08159971.4, Nov. 14, 2008, 6 pages. |
European Search Report, European Application No. 08160115.5, Nov. 12, 2008, 6 pages. |
European Search Report, European Application No. 08160130.4, Nov. 12, 2008, 7 pages. |
European Search Report, European Application No. 08160112.2, Nov. 10, 2008, 7 pages. |
Ho, T.K. et al., “Evaluation of OCR Accuracy Using Synthetic Data,” Proceedings of the 4th Annual Symposium on Document Analysis and Information Retrieval, Apr. 24-26, 1995, pp. 413-422, [online] Retrieved from the Internet<URL:http://citeseer.ist.psu.edu/cache/papers/cs/2303/http:zSzzSzcm.bell-labs.comzSzcmzSzcszSzwhozSzhsbzSzeoasd.pdf/ho95evaluation.pdf>. |
Hull, J.J. et al., “Paper-Based Augmented Reality,” 17th International Conference on Artificial Reality and Telexistence, Nov. 1, 2007, pp. 205-209. |
Kanungo, T. et al., “A Downhill Simplex Algorithm for Estimating Morphological Degradation Model Parameters,” University of Maryland Technical Report, LAMP-RT-066, Feb. 2001, 15 pages, [online] Retrieved from the Internet<URL:http://lampsrv01.umiacs.umd.edu/pubs/TechReports/LAMP—066/LAMP—066.pdf>. |
Kanungo, T. et al., “Global and Local Document Degradation Models,” Document Analysis and Recognition, 1993, Proceedings of the Second International Conference on Volume, Oct. 20-22, 1993, pp. 730-734. |
Khoubyari, S. et al., “Keyword Location and Noisy Document Images,” Second Annual Symposium on Document Analysis and Information Retrieval, Las Vegas, NV, Apr. 26-28, 1993, pp. 217-231. |
Li, Y. et al., “Validation of Image Defect Models for Optical Character Recognition,” IEEE Trans. Pattern Anal. Mach. Intell. 18, 2, Feb. 1996, pp. 99-108, [online] Retrieved from the Internet<URL: http://www.cs.cmu.edu/afs/cs/usr/andrewt/papers/Validate/journal.ps.gz>. |
Liang, J. et al., “Flattening Curved Documents in Images,” in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2005, 8 pages, [online] Retrieved from the Internet<URL:http://www.cfar.umd.edu/˜daniel/daniel—papersfordownload/liang-j—cpvr2005.pdf>. |
“Mobile Video Managed Service,” Vidiator, 2008, [online] [Retrieved on Aug. 29, 2008] Retrieved from the Internet<URL:http://www.vidiator.com/services/managed—mobile—video.aspx>. |
Pavlidis, T., “Effects of Distortions on the Recognition Rate of a Structural OCR System,” in Proc. Conf. on Comp. Vision and Pattern Recog., IEEE, Washington, DC, 1983, pp. 303-309. |
Sato, T. et al., “High Resolution Video Mosaicing for Documents and Photos by Estimating Camera Motion,” Proceedings of the SPIE 5299, 246, 2004, 8 pages, [online] Retrieved from the Internet<URL:http://yokoya.naist.jp/paper/datas/711/spie2004.pdf>. |
Schalkoff, R.J., “Syntactic Pattern Recognition (SYNTPR) Overview,” Pattern Recognition: Statistical, Structural and Neural Approaches, Jan. 1, 1992, pp. 127-150, vol. 3, Wiley. |
Sivic, J. et al., “Video Google: A Text Retrieval Approach to Object Matching in Videos,” Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV 2003), 2-Volume Set, 2003, IEEE, pp. 1-8. |
Stoyanov, D., “Camera Calibration Tools,” Online Source, Updated Aug. 24, 2006, Accessed Aug. 31, 2006, 12 pages, [online] Retrieved from the Internet<URL:http://ubimon.doc.ic.ac.uk/dvs/index.php?m=581>. |
Zhang, Z., “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov. 2000, pp. 1330-1334, vol. 22, No. 11. |
Zheng, Q.-F. et al., “Effective and Efficient Object-Based Image Retrieval Using Visual Phases,” Proceedings of the 14th Annual ACM International Conference on Multimedia, MM'06, Oct. 23-27, 2006, Santa Barbara, CA, pp. 77-80. |
Zi, G., “Groundtruth Generation and Document Image Degradation,” University of Maryland Language and Media Processing Laboratory Technical Report (LAMP-TR-121), May 2005, 72 pages, [online] Retrieved from the Internet<URL:http://lampsrv01.umiacs.umd.edu/pubs/TechReports/LAMP—121/LAMP—121.pdf>. |
Erol, B. et al., “Linking Multimedia Presentations with Their Symbolic Source Documents: Algorithm and Applications,” Nov. 2-8, 2003, pp. 498-507, [Online] [Retrieved on Oct. 15, 2008] Retrieved from the Internet<URL:http://rii.ricoh.com/~hull/pubs/p225—erol.pdf>. |
U.S. Appl. No. 10/813,901, filed Mar. 30, 2004, Erol et al. |
European Search Report, European Application No. 08160125.4, Oct. 13, 2008, 5 pages. |
European Search Report, European Application No. 06796845.3, Oct. 30, 2008, 12 pages. |
European Search Report, European Application No. 06796844.6, Oct. 30, 2008, 12 pages. |
European Search Report, European Application No. 06796848.7, Oct. 31, 2008, 12 pages. |
European Search Report, European Application No. 06796846.1, Nov. 5, 2008, 11 pages. |
European Search Report, European Application No. 07252397, Oct. 15, 2007, 7 pages. |
Hull, J.J., “Document Image Matching and Retrieval with Multiple Distortion-Invariant Descriptors,” International Association for Pattern Recognition Workshop on Document Analysis Systems, Jan. 1, 1995, pp. 375-396. |
Hull, J.J. et al., “Document Image Matching Techniques,” Apr. 30, 1997, pp. 31-35, [Online] [Retrieved on May 2, 1997] Retrieved from the Internet<URL:http://rii.ricoh.com/hull/pubs/hull—sdiut97.pdf>. |
Hull, J. J., “Document Image Similarity and Equivalence Detection,” International Journal on Document Analysis and Recognition, 1998, pp. 37-42, Springer-Verlag. |
Microsoft Computer Dictionary, 5th ed., “Hyperlink” Definition, 2002, pp. 260-261. |
“Mobile Search Engines,” Sonera MediaLab, Nov. 15, 2002, pp. 1-12. |
Mukherjea, S. et al., “AMORE: A World Wide Web Image Retrieval Engine,” C&C Research Laboratories, NEC USA Inc., Baltzer Science Publishers BV, World Wide Web 2, 1999, pp. 115-132. |
Veltkamp, R. et al., “Content-Based Image Retrieval Systems: A Survey,” Department of Computing Science, Utrecht University, Oct. 28, 2002, pp. 1-62. |
Wikipedia Online Definition, “Optical Character Recognition,” Sep. 14, 2008, pp. 1-7, [Online] [Retrieved on Sep. 14, 2008] Retrieved from the Internet<URL:http://en.wikipedia.org/wiki/Optical—character—recognition>. |
Archive of Scanbuy Solutions | Optical Intelligence for your Mobile Devices, Scanbuy® Inc., www.scanbuy.com/website/solutions—summary.htm, [Online] [Archived by http://archive.org on Jun. 19, 2006; Retrieved on Mar. 3, 2009] Retrieved from the Internet<URL:http://web.archive.org/web/20060619172549/http://www.scanbuy.com/website/solutions—su . . . >. |
Canny, J., “A Computational Approach to Edge Detection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Nov. 1986, pp. 679-714, vol. PAMI-8, No. 6. |
Di Stefano, L. et al., “A Simple and Efficient Connected Components Labeling Algorithm,” International Conference on Image Analysis and Processing, 1999, pp. 322-327. |
Duda, R. O. et al., “Use of the Hough Transformation to Detect Lines and Curves in Pictures,” Communications of the ACM, Jan. 1972, pp. 11-15, vol. 15, No. 1. |
Erol, B. et al., “Prescient Paper: Multimedia Document Creation with Document Image Matching,” 17th International Conference on Pattern Recognition, Aug. 23-26, 2004, Cambridge, UK. |
Erol, B. et al., “Retrieval of Presentation Recordings with Digital Camera Images,” IEEE Conference on Computer Vision and Pattern Recognition, Jun. 27-Jul. 2, 2004. |
Ezaki, N. et al., “Text Detection from Natural Scene Images: Towards a System for Visually Impaired Persons,” Proc. of 17th Int. Conf. on Pattern Recognition (ICPR 2004), IEEE Computer Society, Aug. 23-26, 2004, Cambridge, UK, pp. 683-686, vol. II. |
Fadoua, D. et al., “Restoring Ink Bleed-Through Degraded Document Images Using a Recursive Unsupervised Classification Technique,” Lecture Notes in Computer Science 3872, Document Analysis Systems VII, 7th International Workshop, DAS 2006, Feb. 13-15, 2006, Nelson, New Zealand, Bunke, H. et al. (eds.), pp. 38-49. |
Freund, Y. et al., “A Short Introduction to Boosting,” Journal of Japanese Society for Artificial Intelligence, Sep. 1999, pp. 771-780, vol. 14, No. 5. |
Hjelmas, E. et al., “Face Detection: A Survey,” Computer Vision and Image Understanding, 2001, pp. 236-274, vol. 83. |
Hull, J.J., “Document Image Matching on CCITT Group 4 Compressed Images,” SPIE Conference on Document Recognition IV, Feb. 8, 1997, pp. 82-87. |
Jagannathan, L. et al., “Perspective Correction Methods for Camera Based Document Analysis,” Proc. First Int. Workshop on Camera-based Document Analysis and Recognition, 2005, pp. 148-154. |
Jain, A.K. et al., “An Introduction to Biometric Recognition,” IEEE Transactions on Circuits and Systems for Video Technology, Jan. 2004, pp. 4-20, vol. 14, No. 1. |
Po, L-M. et al., “A Novel Four-Step Search Algorithm for Fast Block Motion Estimation,” IEEE Transactions on Circuits and Systems for Video Technology, Jun. 1996, pp. 313-317, vol. 6, Issue 3. |
Rangarajan, K. et al. “Optimal Corner Detector,” 1988, IEEE, pp. 90-94. |
Rosin, P.L. et al., “Image Difference Threshold Strategies and Shadow Detection,” Proceedings of the 6th British Machine Vision Conference, 1995,10 pages. |
Sezgin, M. et al., “Survey Over Image Thresholding Techniques and Quantitative Performance Evaluation,” Journal of Electronic Imaging, Jan. 2004, pp. 146-165, vol. 13, No. 1. |
Triantafyllidis, G.A. et al., “Detection of Blocking Artifacts of Compressed Still Images,” Proceedings of the 11th International Conference on Image Analysis and Processing (ICIAP '01), IEEE, 2001, pp. 1-5. |
U.S. Appl. No. 10/696,735, filed Oct. 28, 2003, Erol, B. et al., “Techniques for Using a Captured Image for the Retrieval of Recorded Information,” 58 pages. |
Zanibbi, R. et al. “A Survey of Table Recognition,” International Journal on Document Analysis and Recognition, 2004, pp. 1-33. |
Zhao, W. et al., Face Recognition: A Literature Survey, ACM Computing Surveys (CSUR), 2003, pp. 399-458, vol. 35, No. 4. |
European Search Report, European Application No. 09156089.6, Jun. 19, 2009, 8 pages. |
Marques, O. et al., “Content-Based Image and Video Retrieval, Video Content Representation, Indexing, and Retrieval, a Survey of Content-Based Image Retrieval Systems, CBVQ (Content-Based Visual Query),” Content-Based Image and Video Retrieval [Multimedia Systems and Applications Series], Apr. 1, 2002, pp. 15-117, vol. 21, Kluwer Academic Publishers Group, Boston, USA. |
JP Office Action for JP Patent Application No. 2009-119205 dated Feb. 19, 2013, 2 pages. |
U.S. Appeal Decision, U.S. Appl. No. 11/461,164, dated Feb. 27, 2013, 10 pages. |
U.S. Appeal Decision, U.S. Appl. No. 11/461,147, dated Mar. 4, 2013, 11 pages. |
US Non-Final Office Action for U.S. Appl. No. 12/060,200, dated Mar. 22, 2013, 47 pages. |
US Final Office Action for U.S. Appl. No. 11/461,279 dated Mar. 25, 2013, 36 pages. |
US Non-Final Office Action for U.S. Appl. No. 12/060,198 dated Apr. 2, 2013, 56 pages. |
US Notice of Allowance for U.S. Appl. No. 13/415,228 dated Apr. 30, 2013, 10 pages. |
US Notice of Allowance for U.S. Appl. No. 12/210,519 dated May 1, 2013, 24 pages. |
US Notice of Allowance for U.S. Appl. No. 13/273,189 dated May 9, 2013, 11 pages. |
US Notice of Allowance for U.S. Appl. No. 11/461,300 dated May 15, 2013, 13 pages. |
US Final Office Action for U.S. Appl. No. 13/273,186, dated Jun. 12, 2013, 24 pages. |
US Non-Final Office Action for U.S. Appl. No. 11/461,037, dated Jun. 24, 2013, 25 pages. |
US Non-Final Office Action for U.S. Appl. No. 12/719,437, dated Jun. 25, 2013, 22 pages. |
US Notice of Allowance for U.S. Appl. No. 11/461,279, dated Jul. 31, 2013, 14 pages. |
JP Office Action for JP Application No. 2009212242 dated Jul. 16, 2013, 2 pages. |
US Non-Final Office Action for U.S. Appl. No. 11/461,085, dated Jul. 9, 2013, 11 pages. |
European Office Action for Application No. 08 252 377.0, dated Aug. 9, 2013, 5 pages. |
European Search Report for Application No. 12159375.0 dated Sep. 12, 2013, 9 pages. |
Non-Final Office Action for U.S. Appl. No. 12/060,198, dated Nov. 7, 2013, 55 pages. |
Final Office Action for U.S. Appl. No. 12/060,200, dated Nov. 8, 2013, 58 pages. |
Non-Final Office Action for U.S. Appl. No. 13/273,186, dated Dec. 5, 2013, 25 pages. |
Final Office Action for U.S. Appl. No. 11/461,085, dated Dec. 10, 2013, 16 pages. |
Non-Final Office Action for U.S. Appl. No. 13/729,458, dated Dec. 17, 2013, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 12/253,715, dated Dec. 19, 2013, 38 pages. |
Notice of Allowance for U.S. Appl. No. 12/240,596, dated Dec. 23, 2013, 10 pages. |
Final Office Action for U.S. Appl. No. 11/461,164, dated Dec. 26, 2013, 17 pages. |
Final Office Action for U.S. Appl. No. 13/330,492, dated Jan. 2, 2014, 15 pages. |
Final Office Action for U.S. Appl. No. 12/719,437, dated Jan. 16, 2014, 22 pages. |
Non-Final Office Action for U.S. Appl. No. 13/789,669, dated Jan. 17, 2014, 6 pages. |
Final Office Action for U.S. Appl. No. 13/192,458, dated Jan. 27, 2014, 13 pages. |
Non-Final Office Action for U.S. Appl. No. 12/340,124, dated Jan. 29, 2014, 24 pages. |
Non-Final Office Action for U.S. Appl. No. 13/330,492, dated Aug. 27, 2013, 14 pages. |
Non-Final Office Action for U.S. Appl. No. 11/461,164, dated Aug. 30, 2013, 19 pages. |
Non-Final Office Action for U.S. Appl. No. 12/240,596, dated Sep. 5, 2013, 17 pages. |
Notice of Allowance for U.S. Appl. No. 13/273,189, dated Sep. 13, 2013, 15 pages. |
Non-Final Office Action for U.S. Appl. No. 11/461,147, dated Sep. 27, 2013, 15 pages. |
Non-Final Office Action for U.S. Appl. No. 12/210,532, dated Oct. 7, 2013, 18 pages. |
Non-Final Office Action for U.S. Appl. No. 12/247,205, dated Oct. 7, 2013, 19 pages. |
Final Office Action for U.S. Appl. No. 11/461,037, dated Oct. 24, 2013, 24 pages. |
Chi-Hung Chi et al., Context Query in Information Retrieval, dated 2002, Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI'02), 6 pages (http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1180793). |
Hirokazu Kato et al., A Registration Method for Augmented Reality based on Matching Templates Generated from Texture Image, Transactions of the Virtual Reality Society of Japan, The Virtual Reality Society of Japan, 2002, vol. 7, No. 2, pp. 119-128. |
Japanese Office Action, JP2008-180790, dated May 22, 2012, 3 pages. |
Japanese Office Action, JP2008-180791, dated May 22, 2012, 4 pages. |
Japanese Office Action, JP2008-180792, dated May 22, 2012, 3 pages. |
Japanese Office Action, JP2008-180793, dated May 29, 2012, 3 pages. |
Japanese Office Action, JP2008-180794, dated May 22, 2012, 3 pages. |
United States Final Office Action, U.S. Appl. No. 12/247,205, dated May 23, 2012, 16 pages. |
United States Final Office Action, U.S. Appl. No. 12/210,532, dated Jun. 5, 2012, 19 pages. |
United States Non-Final Office Action, U.S. Appl. No. 11/461,037, dated Jun. 13, 2012, 21 pages. |
United States Final Office Action, U.S. Appl. No. 12/240,596, dated Jun. 14, 2012, 17 pages. |
United States Non-Final Office Action, U.S. Appl. No. 12/340,124, dated Jun. 27, 2012, 17 pages. |
United States Final Office Action, U.S. Appl. No. 12/210,519, dated Jun. 28, 2012, 12 pages. |
United States Final Office Action, U.S. Appl. No. 12/491,018, dated Jun. 28, 2012, 7 pages. |
United States Final Office Action, U.S. Appl. No. 11/461,300, dated Jul. 13, 2012, 19 pages. |
United States Notice of Allowance, U.S. Appl. No. 11/461,294, dated Aug. 9, 2012, 22 pages. |
United States Final Office Action, U.S. Appl. No. 11/461,279, dated Aug. 10, 2012, 39 pages. |
United States Notice of Allowance, U.S. Appl. No. 11/461,286, dated Aug. 14, 2012, 3 pages. |
Non-Final Office Action for U.S. Appl. No. 13/933,078, dated Mar. 17, 2014, 9 pages. |
Notice of Allowance for U.S. Appl. No. 13/273,186, dated Mar. 26, 2014, 9 pages. |
Notice of Allowance for U.S. Appl. No. 11/461,037, dated Apr. 3, 2014, 10 pages. |
Non-Final Office Action for U.S. Appl. No. 12/060,200, dated Apr. 8, 2014, 65 pages. |
Non-Final Office Action for U.S. Appl. No. 11/461,085, dated Apr. 9, 2014, 16 pages. |
Final Office Action for U.S. Appl. No. 11/461,147, dated Apr. 24, 2014, 11 pages. |
Notice of Allowance for U.S. Appl. No. 12/210,511, dated Apr. 30, 2014, 11 pages. |
Final Office Action for U.S. Appl. No. 12/247,205, dated May 13, 2014, 17 pages. |
Notice of Allowance for U.S. Appl. No. 12/210,540, dated May 22, 2014, 20 pages. |
Final Office Action for U.S. Appl. No. 13/729,458, dated Jun. 2, 2014, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 13/192,458, dated Jun. 5, 2014, 12 pages. |
Final Office Action for U.S. Appl. No. 12/060,198, dated Jun. 5, 2014, 63 pages. |
Josef Sivic, “Video Google: A Text Retrieval Approach to Object Matching in Videos,” IEEE, Proceedings of the Ninth IEEE International Conference on Computer Vision (ICCV 2003), 8 pages, vol. 2. |
Japanese Office Action for JP Application No. 2013222652, dated May 20, 2014, 5 pages. |
Japanese Office Action for JP Application No. 2013222655, dated May 20, 2014, 4 pages. |
Adobe Acrobat Advanced Elements (for both PC and Mac Computers), 2002, pp. 1-19. |
A. Antonacopoulos et al., Flexible Page Segmentation Using the Background, Proceedings of the IAPR International Conference on Pattern Recognition, Jerusalem, Oct. 9-12, 1994; pp. 339-344. |
Baird, H. et al., Structured Document Image Analysis, 1992, pp. 546-556, Springer-Verlag Berlin Heidelberg. |
Chinese Office Action, Chinese Application No. 200910138044, Jan. 26, 2011, 6 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,037, Mar. 30, 2011, 16 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,037, Mar. 4, 2010, 16 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,037, Nov. 23, 2011, 17 pages. |
U.S. Supplemental Final Office Action, U.S. Appl. No. 11/461,109, Oct. 23, 2009, 22 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,279, Aug. 5, 2010, 35 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,279, Feb. 19, 2010, 33 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,279, Jan. 7, 2011, 36 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,279, Jul. 8, 2011, 37 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,279, Sep. 17, 2009, 25 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,286, Aug. 5, 2010, 26 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,286, Feb. 19, 2010, 23 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,286, Jan. 20, 2012, 27 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,286, Jan. 21, 2011, 26 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,286, Jul. 15, 2011, 38 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,286, Sep. 17, 2009, 22 pages. |
Hull, J.J. et al., Visualizing Multimedia Content on Paper Documents: Components of Key Frame Selection for Video Paper, Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR'03), IEEE, 2003, 4 pages. |
Japanese Office Action, Japanese Application No. 2004-293962, Aug. 24, 2010, 3 pages. |
Japanese Office Action, Japanese Application No. 2008-008112, Oct. 17, 2011, 3 pages. |
Liu, T. et al., A Fast Image Segmentation Algorithm for Interactive Video Hotspot Retrieval, IEEE, 2001, pp. 3-8. |
Liu, Y. et al., Automatic Texture Segmentation for Texture-Based Image Retrieval, Proceedings of the 10th International Multimedia Modelling Conference (MMM'04), IEEE, Jan. 5-7, 2004, pp. 285-288. |
Mae et al., Object Recognition Using Appearance Models Accumulated into Environment, Proc. 15-th Intl. Conf. on Pattern Recognition, 2000, vol. 4, pp. 845-848. |
Rademacher, View-Dependent Geometry, Computer Graphics Proceedings, Annual Conference Series, SIGGRAPH 99, Los Angeles, California, Aug. 8-13, 1999, pp. 439-446. |
Reniers et al., Skeleton-based Hierarchical Shape Segmentation, IEEE International Conference on Shape Modeling and Applications, SMI'07, Jun. 1, 2007, Computer Society, pp. 179-188. |
Roth, M.T. et al., The Garlic Project, Proc. of the 1996 ACM SIGMOD International Conference on Management of Data, Montreal, Quebec, Canada, Jun. 4, 1996, p. 557. |
U.S. Office Action, U.S. Appl. No. 13/273,189, dated Nov. 28, 2012, 26 pages. |
U.S. Office Action, U.S. Appl. No. 13/273,186, dated Dec. 17, 2012, 28 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,279, dated Dec. 19, 2012, 31 pages. |
U.S. Notice of Allowability, U.S. Appl. No. 12/240,590, dated Dec. 20, 2012, 4 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,037, dated Jan. 7, 2013, 21 pages. |
U.S. Appeal Decision, U.S. Appl. No. 11/461,085, dated Jan. 23, 2013, 8 pages. |
U.S. Office Action, U.S. Appl. No. 12/340,124, dated Jan. 23, 2013, 23 pages. |
U.S. Notice of Allowance, U.S. Appl. No. 13/415,756, dated Feb. 4, 2013, 7 pages. |
U.S. Office Action, U.S. Appl. No. 12/060,206, dated Feb. 8, 2013, 16 pages. |
Non-Final Office Action for U.S. Appl. No. 12/253,715, dated Jan. 7, 2015, 35 pages. |
Notice of Allowance for U.S. Appl. No. 13/192,458, dated Jan. 28, 2015, 9 pages. |
Final Office Action for U.S. Appl. No. 13/494,008, dated Feb. 10, 2015, 20 pages. |
Non-Final Office Action for U.S. Appl. No. 13/933,078, dated Feb. 26, 2015, 7 pages. |
Final Office Action for U.S. Appl. No. 11/461,164, dated Mar. 12, 2015, 19 pages. |
Non-Final Office Action for U.S. Appl. No. 12/060,198, dated Mar. 13, 2015, 22 pages. |
Notice of Allowance for U.S. Appl. No. 13/789,669, dated Mar. 16, 2015, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 11/461,147, dated Mar. 20, 2015, 11 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,300, Feb. 23, 2012, 15 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,300, Jun. 11, 2010, 17 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,300, Oct. 6, 2010, 17 pages. |
U.S. Office Action, U.S. Appl. No. 12/060,206, Dec. 15, 2011, 9 pages. |
U.S. Office Action, U.S. Appl. No. 12/059,583, Jan. 26, 2012, 29 pages. |
U.S. Office Action, U.S. Appl. No. 12/060,198, Sep. 1, 2011, 48 pages. |
U.S. Office Action, U.S. Appl. No. 12/060,200, Sep. 2, 2011, 27 pages. |
U.S. Office Action, U.S. Appl. No. 12/121,275, Apr. 20, 2011, 14 pages. |
U.S. Office Action, U.S. Appl. No. 12/121,275, Oct. 19, 2011, 16 pages. |
U.S. Office Action, U.S. Appl. No. 12/210,511, Apr. 4, 2011, 20 pages. |
U.S. Office Action, U.S. Appl. No. 12/210,511, Sep. 28, 2011, 17 pages. |
U.S. Office Action, U.S. Appl. No. 12/210,519, Jan. 5, 2012, 11 pages. |
U.S. Office Action, U.S. Appl. No. 12/210,519, Jun. 16, 2011, 12 pages. |
U.S. Office Action, U.S. Appl. No. 12/210,519, Mar. 14, 2011, 13 pages. |
U.S. Office Action, U.S. Appl. No. 12/210,532, Oct. 31, 2011, 15 pages. |
U.S. Office Action, U.S. Appl. No. 12/210,540, Oct. 14, 2011, 13 pages. |
U.S. Office Action, U.S. Appl. No. 12/240,596, Aug. 6, 2010, 15 pages. |
U.S. Office Action, U.S. Appl. No. 12/240,596, Feb. 2, 2012, 17 pages. |
Non-Final Office Action for U.S. Appl. No. 14/604,619, dated Oct. 7, 2015, Moraleda et al., 9 pages. |
Final Office Action for U.S. Appl. No. 13/330,492, dated Oct. 8, 2015, Graham et al., 20 pages. |
Final Office Action for U.S. Appl. No. 12/060,198, dated Oct. 8, 2015, Erol et al., 32 pages. |
Non-Final Office Action for U.S. Appl. No. 13/914,417, dated Oct. 14, 2015, Erol et al., 18 pages. |
Final Office Action for U.S. Appl. No. 11/461,164, dated Nov. 27, 2015, Hull et al., 20 pages. |
Notice of Allowance for U.S. Appl. No. 12/247,205, dated Apr. 8, 2015, 15 pages. |
Non-Final Office Action for U.S. Appl. No. 13/330,492, dated Apr. 8, 2015, 19 pages. |
Notice of Allowance for U.S. Appl. No. 12/719,437, dated Apr. 10, 2015, 16 pages. |
Notice of Allowance for U.S. Appl. No. 13/933,078, dated May 16, 2015, 7 pages. |
Non-Final Office Action for U.S. Appl. No. 11/461,164, dated Jun. 30, 2015, 20 pages. |
Non-Final Office Action for U.S. Appl. No. 12/059,583, dated Jul. 2, 2015, Jonathan J. Hull, 29 pages. |
Non-Final Office Action for U.S. Appl. No. 12/060,206, dated Jul. 23, 2015, Berna Erol et al., 23 pages. |
Non-Final Office Action for U.S. Appl. No. 13/494,008, dated Aug. 13, 2015, Jonathan J. Hull et al., 21 pages. |
U.S. Office Action, U.S. Appl. No. 12/059,583, dated Sep. 10, 2012, 28 pages. |
U.S. Notice of Allowance, U.S. Appl. No. 12/240,590, dated Oct. 1, 2012, 6 pages. |
U.S. Notice of Allowance, U.S. Appl. No. 12/491,018, dated Oct. 11, 2012, 7 pages. |
U.S. Office Action, U.S. Appl. No. 13/192,458, dated Oct. 11, 2012, 16 pages. |
U.S. Office Action, U.S. Appl. No. 13/415,756, dated Oct. 26, 2012, 10 pages. |
U.S. Office Action, U.S. Appl. No. 12/253,715, dated Nov. 14, 2012, 29 pages. |
U.S. Office Action, U.S. Appl. No. 11/461,300, dated Nov. 28, 2012, 23 pages. |
U.S. Notice of Allowance, U.S. Appl. No. 12/121,275, dated Nov. 28, 2012, 17 pages. |
JP Office Action, JP Application No. 2008-180789, dated Sep. 25, 2012, 3 pages. |
Tomohiro Nakai, “Document Image Retrieval Based on Cross-Ratio and Hashing,” IEICE Technical Report, The Institute of Electronics, Information and Communication Engineers, dated Mar. 11, 2005, vol. 104, No. 742, pp. 103-108. |
U.S. Office Action, U.S. Appl. No. 13/415,228, dated Dec. 3, 2012, 17 pages. |
Notice of Allowance for U.S. Appl. No. 13/729,458, dated Sep. 29, 2014, 8 pages. |
Final Office Action for U.S. Appl. No. 13/933,078, dated Oct. 6, 2014, 14 pages. |
Notice of Allowance for U.S. Appl. No. 12/060,200, dated Nov. 5, 2014, 8 pages. |
Non-Final Office Action for U.S. Appl. No. 13/789,669, dated Nov. 19, 2014, 13 pages. |
Final Office Action for U.S. Appl. No. 13/330,492, dated Nov. 26, 2014, 18 pages. |
Notice of Allowance for U.S. Appl. No. 12/340,124, dated Dec. 19, 2014, 12 pages. |
Yanagisawa Kiyoshi, “Access Control Management System using Face Recognition Technology” Nippon Signal Technical Journal, Japan, The Nippon Signal Co., Ltd., Mar. 1, 2002, vol. 26, No. 1, 9 pages (pp. 21-26). |
United States Final Office Action, U.S. Appl. No. 12/719,437, Mar. 1, 2012, 22 pages. |
United States Notice of Allowance, U.S. Appl. No. 11/461,126, Mar. 5, 2012, 8 pages. |
United States Notice of Allowance, U.S. Appl. No. 11/461,143, Mar. 8, 2012, 2 pages. |
Japan Patent Office, Office Action for Japanese Patent Application JP2007-199984, Mar. 13, 2012, 3 pages. |
United States Notice of Allowance, U.S. Appl. No. 11/776,530, Mar. 26, 2012, 5 pages. |
United States Non-Final Office Action, U.S. Appl. No. 12/240,590, Apr. 4, 2012, 12 pages. |
United States Notice of Allowance, U.S. Appl. No. 13/168,638, Apr. 4, 2012, 11 pages. |
United States Final Office Action, U.S. Appl. No. 12/265,502, Apr. 5, 2012, 23 pages. |
United States Final Office Action, U.S. Appl. No. 12/060,198, Apr. 12, 2012, 48 pages. |
United States Final Office Action, U.S. Appl. No. 12/060,200, Apr. 12, 2012, 38 pages. |
United States Final Office Action, U.S. Appl. No. 11/461,294, Apr. 13, 2012, 16 pages. |
United States Final Office Action, U.S. Appl. No. 11/461,286, Apr. 16, 2012, 30 pages. |
United States Non-Final Office Action, U.S. Appl. No. 11/461,279, Apr. 19, 2012, 35 pages. |
United States Notice of Allowance, U.S. Appl. No. 11/827,530, Apr. 24, 2012, 21 pages. |
China Patent Office, Office Action for Chinese Patent Application CN200680039376.7, Apr. 28, 2012, 11 pages. |
United States Non-Final Office Action, U.S. Appl. No. 12/121,275, May 18, 2012, 15 pages. |
U.S. Office Action, U.S. Appl. No. 12/240,596, Jan. 21, 2011, 13 pages. |
U.S. Office Action, U.S. Appl. No. 12/247,205, Oct. 6, 2011, 10 pages. |
U.S. Office Action, U.S. Appl. No. 12/253,715, Aug. 31, 2011, 20 pages. |
U.S. Office Action, U.S. Appl. No. 12/265,502, Oct. 14, 2011, 23 pages. |
U.S. Office Action, U.S. Appl. No. 12/340,124, Oct. 24, 2011, 22 pages. |
U.S. Office Action, U.S. Appl. No. 12/719,437, Dec. 9, 2010, 16 pages. |
U.S. Office Action, U.S. Appl. No. 12/879,933, Mar. 2, 2011, 7 pages. |
U.S. Office Action, U.S. Appl. No. 12/879,933, Oct. 28, 2011, 15 pages. |
Wikipedia Online Encyclopedia, Image Scanner, Last Modified Feb. 9, 2010, pp. 1-9, [Online] [Retrieved on Feb. 13, 2010] Retrieved from the Internet <URL:http://en.wikipedia.org/wiki/Image—scanner>. |
Wikipedia Online Encyclopedia, Waypoint, Last Modified Feb. 13, 2010, pp. 1-4, [Online] Retrieved on Feb. 13, 2010] Retrieved from the Internet <URL:http://en.wikipedia.org/wiki/Waypoint>. |
Esposito, F. et al., “Machine Learning Methods for Automatically Processing Historical Documents: From Paper Acquisition to XML Transformation,” Proceedings of the First International Workshop on Document Image Analysis for Libraries (DIAL '04), IEEE, 2004, pp. 1-8. |
Lu, Y. et al., “Document Retrieval from Compressed Images,” Pattern Recognition, 2003, pp. 987-996, vol. 36. |
Erol, B. et al., Linking Presentation Documents Using Image Analysis, IEEE, Nov. 9-12, 2003, pp. 97-101, vol. 1. |
European Search Report, European Application No. 09170045.0, Nov. 24, 2009, 4 pages. |
European Summons for Oral Proceedings, European Application No. 07015093.3, Sep. 16, 2011, 4 pages. |
Extended European Search Report, Application No. 09178280.5, Aug. 31, 2010, 6 pages. |
Extended European Search Report, European Patent Application No. 08252377.0, May 2, 2011, 6 pages. |
Notice of Allowance for U.S. Appl. No. 13/273,186 dated Jul. 10, 2014, 9 pages. |
Non-Final Office Action for U.S. Appl. No. 13/330,492 dated Jul. 17, 2014, 16 pages. |
Final Office Action for U.S. Appl. No. 12/253,715, dated Jul. 25, 2014, 40 pages. |
Final Office Action for U.S. Appl. No. 12/340,124, dated Aug. 21, 2014, 26 pages. |
Final Office Action for U.S. Appl. No. 13/789,669 dated Aug. 29, 2014, 9 pages. |
Non-Final Office Action for U.S. Appl. No. 13/494,008 dated Sep. 10, 2014, 18 pages. |
Non-Final Office Action for U.S. Appl. No. 11/461,164, dated Sep. 15, 2014, 18 pages. |
Notice of Allowance for U.S. Appl. No. 11/461,085, dated Sep. 17, 2014, 5 pages. |
Moghaddam et al., Visualization and User-Modeling for Browsing Personal Photo Libraries, Mitsubishi Electric Research Laboratories, dated Feb. 2004, 34 pages. |
Japanese Application Office Action for JP Publication No. 2013-192033, dated Jun. 24, 2014, 7 pages. |
Japanese Application Office Action for JP Publication No. 2013-222655, dated Aug. 26, 2014, 5 pages. |
Jonathan Hull, Mixed Media Reality (MMR) A New Method of eP-Fusion, Ricoh Technical Report, Ricoh Company, Ltd., dated Dec. 1, 2007, No. 33, p. 119-125; online search dated Aug. 22, 2013 <URL: http://www.ricoh.com/ja/technology/techreport/33/pdf/A3314.pdf >. |
Notice of Allowance for U.S. Appl. No. 12/059,583, dated Jan. 15, 2016, Hull et al., 9 pages. |
Notice of Allowance for U.S. Appl. No. 13/330,492, dated Jan. 26, 2016, Graham et al., 8 pages. |
Final Office Action for U.S. Appl. No. 12/060,206, dated Feb. 1, 2016, Erol et al., 26 pages. |
Notice of Allowance for U.S. Appl. No. 13/494,008, dated Feb. 16, 2016, Hull et al., 15 pages. |
Notice of Allowance for U.S. Appl. No. 14/604,619, dated Feb. 18, 2016, Moraleda et al., 8 pages. |
Notice of Allowance for U.S. Appl. No. 11/461,164, dated Mar. 29, 2016, Hull et al., 8 pages. |
Non-Final Office Action for U.S. Appl. No. 12/060,198, dated Apr. 8, 2016, Erol et al. 35 pages. |
Number | Date | Country | |
---|---|---|---|
20080027983 A1 | Jan 2008 | US |