Data processing systems, devices, and methods for content analysis

Information

  • Patent Grant
  • Patent Number
    10,325,011
  • Date Filed
    Friday, September 30, 2016
  • Date Issued
    Tuesday, June 18, 2019
Abstract
Systems, devices and methods operative for identifying a reference within a figure and an identifier in a text associated with the figure, the reference referring to an element depicted in the figure, the reference corresponding to the identifier, the identifier identifying the element in the text, and placing the identifier on the figure at a distance from the reference, the identifier being visually associated with the reference upon the placing, wherein the placing of the identifier on the figure is irrespective of the distance between the identifier and the reference.
Description
TECHNICAL FIELD

The present disclosure relates to systems, devices and methods for data processing. More particularly, the present disclosure relates to systems, devices and methods for aiding users in content analysis.


BACKGROUND

This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art. Likewise, in the present disclosure, where a document, act or item of knowledge is referred to or discussed, this reference or discussion is not an admission that the document, act or item of knowledge or any combination thereof was, at the priority date, publicly available, known to the public, part of common general knowledge or otherwise constitutes prior art under the applicable statutory provisions; or is known to be relevant to an attempt to solve any problem with which the present disclosure is concerned.


U.S. Pat. No. 5,774,833 is herein incorporated by reference in its entirety.


U.S. Pat. No. 5,845,288 is herein incorporated by reference in its entirety.


U.S. Pat. No. 8,160,306 is herein incorporated by reference in its entirety.


A typical figure, such as an anatomical figure, an engineering figure, an architectural figure or a patent figure, contains certain elements that indicate by shape and size the nature of the object the figure is intended to depict. Often, included with these figures are alphanumeric reference characters which point to, and are placed next to, the elements to which they correspond. A user viewing the figure typically has to read through a textual description of the figure, which may be many pages long or in a different location from the figure, to determine what element each alphanumeric reference character refers to, in order to understand the nature of the specific element, as well as the overall figure. This process may be time-consuming, expensive and error-prone.


While certain aspects of conventional technologies have been discussed to facilitate the present disclosure, no technical aspects are disclaimed. The claims may encompass one or more of the conventional technical aspects discussed herein.


BRIEF SUMMARY

The present disclosure addresses at least one of the above problems. However, the present disclosure may prove useful in addressing other problems and deficiencies in a number of technical areas. Therefore, the claims, as recited below, should not necessarily be construed as limited to addressing any of the particular problems or deficiencies discussed herein.


Example embodiments of the present disclosure provide systems, devices and methods for aiding users in content analysis.


An example embodiment of the present disclosure is a computer-implemented method which includes identifying a reference within a figure and an identifier in a text associated with the figure. The reference refers to an element depicted in the figure and corresponds to the identifier, which identifies the element in the text. The method further includes placing the identifier on the figure at a distance from the reference, the identifier being visually associated with the reference upon the placing. The placing of the identifier on the figure is irrespective of the distance between the identifier and the reference.


In an example embodiment of the present disclosure the identifier is visually associated with the reference via at least one line displayed on the figure irrespective of the distance between the identifier and the reference.


In an example embodiment of the present disclosure the at least one line is colored for visual distinction.


In an example embodiment of the present disclosure the identifier is visually associated with the reference via a geometric shape displayed on the figure, the shape enclosing the reference and the identifier on the figure.


In an example embodiment of the present disclosure the shape is colored for visual distinction.


In an example embodiment of the present disclosure the identifier is colored on the figure for visual distinction.


In an example embodiment of the present disclosure the computer-implemented method may further provide for printing the figure after the placing of the identifier on the figure, the printed figure including both the identifier and the reference.


In an example embodiment of the present disclosure the placing of the identifier on the figure is user-customizable.


In an example embodiment of the present disclosure the figure and the text are stored in different locations.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, if the text associates another identifier with the reference, placing the other identifier on the figure adjacent to the identifier without overlapping the identifier.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, receiving the figure from an image capture device before the identifying of the reference within the figure.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, performing a frequency analysis before the placing of the identifier on the figure when the identifier conflicts with another identifier in the text.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, performing optical character recognition on the text to aid in identifying the identifier.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, creating a bidirectional hyperlink relationship between the reference in the figure and the identifier in the text.


In an example embodiment of the present disclosure the identifier is placed on the figure on an axis of orientation such that a viewer avoids rotating the figure to read the identifier.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, translating the identifier into a language different from the text, the figure including the translated identifier.


In an example embodiment of the present disclosure the identifier and the reference are placed apart from each other in the figure so as to improve readability while having a proper scale and being compliant with at least one of preselected and customized margins.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, avoiding the placing of the identifier on the figure if the identifier is associated with at least one of length, width, depth, volume, diameter, radius, density and direction.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, repeating the process for a plurality of references within the figure.


An example embodiment of the present disclosure is a computer-implemented method which includes identifying a reference within a figure and an identifier in a text associated with the figure. The reference refers to an element depicted in the figure and corresponds to the identifier, which identifies the element in the text. The method further includes replacing the reference with the identifier on the figure.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, printing the figure after the replacing of the reference with the identifier, the printed figure including the identifier but not the reference.


In an example embodiment of the present disclosure the computer-implemented method may further provide for, if the text associates another identifier with the reference, placing the other identifier on the figure adjacent to the identifier without overlapping the identifier.


An example embodiment of the present disclosure is a computer-implemented method which includes identifying a reference within a figure and an identifier in a text associated with the figure. The reference refers to an element depicted in the figure and corresponds to the identifier, which identifies the element in the text. The method further includes placing the identifier within the element on the figure.


The present disclosure may be embodied in the form illustrated in the accompanying drawings. Attention is called to the fact, however, that the drawings are illustrative. Variations are contemplated as being part of the disclosure, limited only by the scope of the claims. The above and other features, aspects and advantages of the present disclosure will become better understood to one skilled in the art with reference to the following drawings, detailed description and appended claims.





BRIEF DESCRIPTION OF DRAWINGS

The accompanying drawings, which are incorporated into and form a part of the specification, illustrate example embodiments of the present disclosure. Together with the detailed description, the drawings serve to explain the principles of the present disclosure. The drawings are only for the purpose of illustrating example embodiments of the present disclosure and are not to be construed as necessarily limiting the disclosure. Like numbers can refer to like elements throughout. The above and other aspects, advantages and features of the present disclosure will become better understood to one skilled in the art with regard to the following description, appended claims and accompanying drawings where:



FIG. 1 is a flowchart of an example embodiment of a visual association process according to the present disclosure;



FIG. 2 is a flowchart of another example embodiment of a visual association process according to the present disclosure;



FIG. 3 is a flowchart of yet another example embodiment of a visual association process according to the present disclosure;



FIGS. 4a-4e are diagrams depicting an example embodiment of a process of visual association according to the present disclosure;



FIGS. 5a-5c are diagrams depicting another example embodiment of a process of visual association according to the present disclosure;



FIGS. 6a-6b are diagrams of an example embodiment of a figure before and after visual association according to the present disclosure;



FIG. 7 is a network diagram of an example embodiment of a network within which visual association is performed according to the present disclosure; and



FIGS. 8a-8b are diagrams of an example embodiment of a figure before and after visual association according to the present disclosure.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The present disclosure will now be described more fully with reference to the accompanying drawings, in which example embodiments of the disclosure are shown. The disclosure may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the disclosure to those skilled in the art.


According to principles of the present disclosure, any verbs as used herein can imply direct or indirect, full or partial, action or inaction. For example, when an element is referred to as being “on,” “connected” or “coupled” to another element, the element can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.


Although the terms first, second, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the present disclosure.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be necessarily limiting of the disclosure. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “includes” and/or “comprising,” “including” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.


Furthermore, relative terms such as “below,” “lower,” “above,” and “upper” may be used herein to describe one element's relationship to another element as illustrated in the accompanying drawings. It will be understood that relative terms are intended to encompass different orientations of the device in addition to the orientation depicted in the accompanying drawings. For example, if the device in the accompanying drawings is turned over, elements described as being on the “lower” side of other elements would then be oriented on “upper” sides of the other elements. Similarly, if the device in one of the figures is turned over, elements described as “below” or “beneath” other elements would then be oriented “above” the other elements. The example terms “below” and “lower” can, therefore, encompass both an orientation of above and below.


If any disclosures are incorporated herein by reference and such incorporated disclosures conflict in part or whole with the present disclosure, then to the extent of conflict, and/or broader disclosure, and/or broader definition of terms, the present disclosure controls. If such incorporated disclosures conflict in part or whole with one another, then to the extent of conflict, the later-dated disclosure controls.



FIG. 1 is a flowchart of a visual association method according to the first embodiment of the present disclosure. A process 100 includes blocks 110-120. Process 100 can be performed via a single core processor or a multi-core processor, irrespective of whether the cores are local to each other.


Block 110 includes matching a reference in a figure to a corresponding identifier found in a text describing the figure, where the identifier identifies the element referred to by the reference. The reference in the figure is an alphanumeric character visually referring to an element of the figure. One or more alphanumeric characters may be used, or even non-alphanumeric character references may be used. The reference can be or include symbols as well. An identifier is a name or brief description of the element, which is often textually described. Alternatively, the identifier can even include a plurality of terms, a sentence or even a paragraph.


Typically, the name of the element is disclosed in a description of the figure. For example, in a figure of a patent application, the number 10, which is a reference, can visually refer to an element of the figure, such as a chair. The word “chair” is an identifier that is disclosed in the specification describing the figure of the patent application.
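
For illustration only, the following Python sketch shows one way the matching of block 110 could resolve a reference such as 10 to an identifier such as “chair,” assuming the common drafting convention that an identifier immediately precedes its numeral; the function name, regular expression and example sentence are illustrative assumptions rather than part of the disclosure.

```python
import re

ARTICLES = {"a", "an", "the", "said"}

def match_identifier(sentence, reference):
    """Return the identifier immediately preceding `reference` in `sentence`,
    assuming the identifier directly precedes its numeric reference."""
    pattern = re.compile(r"((?:[A-Za-z]+\s+){1,3})" + re.escape(reference) + r"\b")
    found = pattern.search(sentence)
    if not found:
        return None
    words = found.group(1).split()
    while words and words[0].lower() in ARTICLES:
        words.pop(0)  # drop leading articles so "a chair 10" yields "chair"
    return " ".join(words) or None

# Example: reference "10" resolves to the identifier "chair".
print(match_identifier("In this embodiment, a chair 10 supports a user.", "10"))
```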


Block 120 includes visually associating in the figure the identifier with the reference. Block 120 can be performed close in time to, or far apart in time from, block 110. One way of visually associating the identifier with the reference is by placing the identifier adjacent to the reference. Alternatively, non-adjacent visual association is possible as well, where the identifier refers to the reference irrespective of where the identifier is placed on the figure. Thus, the term “chair” does not have to be adjacent to reference 10. As long as there is a visual association between the term “chair” and reference 10, even if the term “chair” is at a far distance from reference 10, such as, for example, at a corner of the page of the figure, a bottom center or top center of the page of the figure, or along a left or right side of the page of the figure, a user can easily identify what the reference 10 is referring to. An example of an adjacent visual association is if the number 10 in the figure refers to a chair, then the word “chair” is placed adjacent to the number 10. Thus, a viewer of the figure, such as a student, a scientist, a hobbyist, an engineer or a patent professional, can easily identify what the reference 10 is referring to without having to peruse the text to find the identifier. Visually associating an identifier with a corresponding reference, even when the two are not adjacent, is described herein.



FIG. 2 is a flowchart of a visual association method according to another example embodiment of the present disclosure. A process 200 includes blocks 210-250.


Block 210 includes searching within a figure for a reference referring to an element of the figure. One way of performing the searching is via computer vision or computer pattern recognition, such as optical character recognition (OCR). The computer searches the figure to locate references, such as alphanumeric and non-alphanumeric characters referring to elements in the figure. In an example embodiment, the searching within the figure for the reference can be within selective geographic regions of the figure. The determination of which selective geographic regions of the figure to search can be made automatically, via a preset rule, or manually. Alternatively, the figure can be searched via text searching.
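
For illustration only, the following Python sketch shows one way the OCR search of block 210 could be restricted to a selective geographic region of the figure; it assumes the Pillow and pytesseract packages are available, and the region coordinates and reference pattern are illustrative assumptions.

```python
import re
from PIL import Image
import pytesseract  # assumes the Tesseract OCR engine is installed

def find_references(figure_path, region=None):
    """OCR a figure (or a selective rectangular region of it) and return
    tokens that look like references, e.g. "10" or "20a"."""
    image = Image.open(figure_path)
    if region is not None:
        image = image.crop(region)  # (left, top, right, bottom) in pixels
    text = pytesseract.image_to_string(image)
    # Treat short tokens that start with a digit as candidate references.
    return re.findall(r"\b\d{1,4}[a-z]?\b", text)

# Example: search only the right half of a 2400-pixel-wide sheet.
# references = find_references("figure1.png", region=(1200, 0, 2400, 3300))
```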


Block 220 includes copying the found references into a data structure. One example of a data structure is a list or an array. One example of the found reference is an alphanumeric character, such as the number 10 or 20. Block 230 includes searching within a text describing the figure for an identifier corresponding to the reference. Although typically a single reference refers to a single identifier, a possibility exists of another type of correspondence, such as one to many, many to one, or many to many. In such a case, an error message can be generated and displayed, for example, adjacent to the reference. Alternatively, a mode computation/frequency analysis with respect to the reference or the identifier is made, from which it is determined which identifier should be displayed adjacent to a reference; the mode term is flagged and used for any subsequent blocks. The flagging can be used later to visually indicate potential imprecision of visual association.
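
For illustration only, the following Python sketch shows one way the mode computation/frequency analysis could pick among conflicting identifiers and flag the result; the candidate list is an illustrative assumption.

```python
from collections import Counter

def resolve_identifier(candidates):
    """Return (identifier, flagged): the most frequent candidate wins, and the
    result is flagged when the correspondence was not one-to-one."""
    counts = Counter(candidates)
    identifier, _ = counts.most_common(1)[0]
    flagged = len(counts) > 1  # remember potential imprecision for later display
    return identifier, flagged

# Example: reference 12 is described as "valve" three times and "pump" once.
print(resolve_identifier(["valve", "pump", "valve", "valve"]))  # ('valve', True)
```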


In an example embodiment of the present disclosure, the searching within the text describing the figure can be done within only selective portions of the description as selected by the user, whether a human or a machine/software. The determination of which selective portions of the description to search can be made automatically, via a preset rule, or manually.


Block 240 includes copying the found identifier into the data structure. Block 250 includes visually associating in the figure the identifier with the reference.



FIG. 3 is a flowchart of yet another example embodiment of a visual association process according to the present disclosure. A process 300 includes blocks 310-340.


Block 310 includes searching within a text describing a figure for an identifier corresponding to a reference referring to an element within the figure. The text searching can include OCR.


Block 320 includes storing the found identifier in a data structure.


Block 330 includes searching the figure for the reference. One way of searching the figure for the reference is to create a map of locations where in the figure the references are located. Another way of searching the figure for the reference is to search for a presence of just the reference.


Block 340 includes visually associating in the figure the stored identifier with the found reference.



FIGS. 4a-4e are diagrams depicting an example embodiment of a process of visual association according to the present disclosure.



FIG. 4a depicts a patent figure prior to visual association. Although the depicted figure is a patent figure, other types of figures, such as architectural, engineering, anatomical, scientific, historical, blueprint, financial or geographical figures, having a textual description of the figures can be used as well. Any type of content can be depicted in the figure. The figures can be any types of diagrams, flowcharts or tree diagrams. The figures can be depicted in any type of view, such as a side view, a perspective view, a top view or a bottom view. The figures can be grayscale, white/black or color. The figures can be linked or broken into a plurality of sub-figures depicting one object together. The figures can be drawn by hand, created via a computer or automatically drawn.



FIG. 4b depicts references stored within a data structure, such as a table or an array. The references are obtained from analyzing, via a computer, FIG. 32 as depicted in FIG. 4a. The analyzing can be performed via OCR or other processes as known in the art.



FIG. 4c depicts descriptive text, such as a patent detailed description, describing elements depicted in FIG. 4a. The elements are referenced by the references shown in FIG. 4a and stored within the data structure of FIG. 4b. The descriptive text can be stored in the same file as FIG. 32 as depicted in FIG. 4a, or the descriptive text can be stored in a different file, whether on the same computer or a different computer, from the file containing FIG. 32 as depicted in FIG. 4a.



FIG. 4d depicts the data structure after the descriptive text depicted in FIG. 4c has been parsed and matched accordingly, which can occur in one or more steps/processes. As shown in FIG. 4d, paragraph [0079] has been parsed according to the references stored in the data structure and corresponding identifiers are stored in the data structure. Thus, the data structure stores the identifiers corresponding to the references.
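
For illustration only, the following Python sketch shows one way the descriptive text of FIG. 4c could be parsed into a reference-to-identifier lookup like the data structure of FIG. 4d; the regular expression, article list and example paragraph are illustrative assumptions.

```python
import re

ARTICLES = {"a", "an", "the", "said"}

def parse_description(paragraph):
    """Build a lookup mapping each reference to the identifier that precedes it
    in the descriptive text; the first identifier seen for a reference wins."""
    lookup = {}
    for match in re.finditer(r"((?:[A-Za-z]+\s+){1,3})(\d{1,4}[a-z]?)\b", paragraph):
        words = [w for w in match.group(1).split() if w.lower() not in ARTICLES]
        if words:
            lookup.setdefault(match.group(2), " ".join(words))
    return lookup

paragraph = "The system comprises a first database 10, a CPU 20 and a second database 30."
print(parse_description(paragraph))
# {'10': 'first database', '20': 'CPU', '30': 'second database'}
```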



FIG. 4e depicts different ways of visually associating the identifiers with the references.


Identifier “first database” is placed adjacent to reference 10 using a line. The identifier can be in the same font or font size as the rest of the figure, as determined automatically via a computer or manually via a user, or the font or font size can be different, as determined automatically via a computer or manually via a user. The identifier, the reference or the line can be highlighted. The line can also visually associate a plurality of elements. The line can be a straight line or a curved/zigzag line. The line can be without any gaps or the line can be defined via a plurality of closely spaced elements, which can be iconic, symbolic or alphanumeric. The line can be a plurality of aligned or parallel lines. The line can be placed over other elements or avoid placement over other elements, references or identifiers. The computer can be programmed to determine how to properly place the line, such as to place or avoid placing over other elements, references or identifiers. Alternatively, the user can select how to properly place the line or maneuver/drag the line on the figure. The line, the reference or the identifier or any of their portions can be of any color. A user or a computer can automatically select colors. The line can be colored to be visually distinct from the reference, the identifier or the element or other elements, references or identifiers. The line can be hyperlinked, whether uni-directionally or bi-directionally. Upon clicking, the hyperlink can lead to other elements, references and identifiers whether in the present figure, other figures, the text description or other text descriptions. Upon clicking, the hyperlink can allow for popups, hover-overs or slide-outs to disclose information relating to the element, reference or identifier or other elements, references or identifiers.
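
For illustration only, the following Python sketch shows one way an identifier could be drawn on the figure and joined to its reference by a line; it assumes the Pillow package and that the pixel locations of the reference and of a free spot for the label are already known (for example, from OCR word boxes). The coordinates, colors and file names are illustrative assumptions.

```python
from PIL import Image, ImageDraw

def annotate_with_line(figure_path, out_path, identifier, ref_xy, label_xy, color="red"):
    """Place `identifier` at `label_xy` and draw a leader line back to the
    reference located at `ref_xy`, then save the annotated figure."""
    image = Image.open(figure_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    draw.text(label_xy, identifier, fill=color)           # place the identifier
    draw.line([label_xy, ref_xy], fill=color, width=2)    # visually associate it
    image.save(out_path)

# Example: label reference 10 as "first database" from a corner of the page.
# annotate_with_line("fig32.png", "fig32_annotated.png",
#                    "first database", ref_xy=(640, 410), label_xy=(40, 40))
```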


In an alternative example embodiment, visual association can be performed via placing the identifier adjacent to the reference and placing a shape, such as a rectangle, a box, a circle, an oval, a trapezoid or any other shape, over the reference and the identifier on the figure. The shape can fully or partially encompass the identifier and the reference. The shape delineation, the shape background or foreground, the identifier, the reference or any of their portions can be colored for visual distinction. The shape can be defined via a single line or a plurality of lines, dots, minuses, pluses or other visual elements, including alphanumeric characters. The shape can be a bubble, whether a popup, a hover-over or slide-out. The shape can be hyperlinked. The identifier can be hyperlinked, whether uni-directionally or bi-directionally. Upon clicking, the hyperlink can lead to other elements, references and identifiers whether in the present figure, other figures, the text description or other text descriptions. Upon clicking, the hyperlink can allow for popups, hover-overs or slide-outs to disclose information relating to the element, reference or identifier or other elements, references or identifiers.


Identifier “CPU” replaces the reference 20 as depicted in FIG. 4a. The identifier can be in the same font or font size as the rest of the figure, as determined automatically via a computer or manually via a user, or the font or font size can be different, as determined automatically via a computer or manually via a user. The identifier can also visually associate a plurality of elements. The identifier, the reference or the line can be highlighted. The identifier can be placed over other elements or avoid placement over other elements, references or identifiers. The computer can be programmed to determine how to properly place the identifier, such as to place or avoid placing over other elements, references or identifiers. Alternatively, the user can select how the identifier replaces the reference in the figure. The identifier or any of its portions can be of any color. A user can select colors manually or a computer can select colors automatically. The identifier can be colored to be visually distinct from the reference, the element or other elements, references or identifiers. The identifier can be hyperlinked, whether uni-directionally or bi-directionally. Upon clicking, the hyperlink can lead to other elements, references and identifiers whether in the present figure, other figures, the text description or other text descriptions. Upon clicking, the hyperlink can allow for popups, hover-overs or slide-outs to disclose information relating to the element, reference or identifier or other elements, references or identifiers.


Identifier “second database” is placed within the element corresponding to the reference 30. The element, such as its location, size or shape, is automatically determined by a computer using various software algorithms as known in the art. These algorithms can employ computer vision/pattern recognition. The algorithms can refer to an element library, whether publicly or privately available. Such a library can be stored on the computer or available via the Internet. The algorithms can also determine the element by determining the meaning of the identifier as looked up in an internal or external library/database. The element can be filled with color for visual distinction. The color can be manually selected by a user or the color can be automatically selected by a computer. A plurality of identifiers, whether identifying the same or different elements, can be placed within the element and can be visually distinct from other elements, references and identifiers. The identifier, the reference or the line can be highlighted. The identifier can be in the same font or font size as the rest of the figure, as determined automatically via a computer or manually via a user, or the font or font size can be different, as determined automatically via a computer or manually via a user. The identifier can also visually associate a plurality of elements. The identifier can be placed over other elements or avoid placement over other elements, references or identifiers. The computer can be programmed to determine how to properly place the identifier, such as to place or avoid placing over other elements, references or identifiers. Alternatively, the user can select how the identifier is placed within the element in the figure. The identifier, the reference or any of their portions can be of any color. The identifier can be colored to be visually distinct from the reference, the element or other elements, references or identifiers. The identifier can be hyperlinked, whether uni-directionally or bi-directionally. Upon clicking, the hyperlink can lead to other elements, references and identifiers whether in the present figure, other figures, the text description or other text descriptions. Upon clicking, the hyperlink can allow for popups, hover-overs or slide-outs to disclose information relating to the element, reference or identifier or other elements, references or identifiers.


Regardless of visual association, a user can select, or a computer can automatically decide, to shrink or compact the figure so as to allow for placement of the identifier or a plurality of identifiers while maintaining readability of the identifier or identifiers. For example, font sizes can be automatically increased.


Any method of visual association can allow for any portion of any element, identifier, reference, line, shape, character, symbol, tag, hyperlink or any other way of visual association to be of any color for visual distinction. Any of these types of visual association can be automatically or manually combined in any way, and any of these types of visual association can be automatically or manually made visually distinct from other references or identifiers. For example, a computer can automatically determine how to visually associate, and such determination can mix and match different types of visual associations. Such mix and match can depend on the context or content of the figure, such as whether to write over or avoid writing over other elements, references or identifiers. One element can be visually associated with all or fewer than all ways of visually associating.



FIGS. 5a-5c are diagrams depicting another example embodiment of a process of visual association according to the present disclosure.



FIG. 5a depicts descriptive text, such as a patent detailed description, describing various elements in a corresponding figure, in which the elements are referenced by references and named via identifiers.



FIG. 5b depicts a data structure after the descriptive text depicted in FIG. 5a has been parsed, matched and stored in the data structure. As shown in FIG. 5b, paragraph [0079] has been parsed, the references have been matched with corresponding identifiers, and the results have been stored in the data structure. Thus, the data structure stores the identifiers corresponding to the references.



FIG. 5c depicts different ways of visually associating the identifiers with the references. Identifier “first database” is adjacent to reference 10. Identifier “CPU” replaces the reference 20. Identifier “second database” is placed within the element corresponding to the reference 30.


Any of these types of visual association can be automatically or manually combined in any way, even with those of FIGS. 4a-4e, and any of these types of visual association can be automatically or manually made visually distinct from other references or identifiers.



FIGS. 6a-6b are diagrams of an example embodiment of a figure before and after visual association according to the present disclosure. FIG. 6a depicts a microphone pen before visual association. FIG. 6b depicts the microphone pen after visual association. Each identifier as depicted in FIG. 6b can be visually associated with a reference as depicted in FIG. 6a that the identifier replaced. For example, as shown in FIG. 6b, the identifier “chamber” can be visually associated with the reference 204 using any visual association methods as described herein.



FIG. 7 is a network diagram of an example embodiment of a network within which visual association is performed according to the present disclosure. A network 700 includes a user computer 710 connected to a network, such as the Internet. A first server 720 and a second server 730 are accessible via the network.


Any one or all of computer 710 and servers 720, 730 can be any type of computer, such as a desktop, a laptop, a mainframe, a cloud-computing system, a smartphone, a tablet computer or a workstation.


Visual association, as described herein, can be performed locally on user computer 710 by a program installed on a hard disk or can be run as a module within other software, such as a word processing application, a browser or a mobile app. Alternatively, visual association can be performed via a website or a web portal. Alternatively, visual association can be performed by first server 720, wherein a user of user computer 710 accesses first server 720, performs the visual association on a selected figure or file and then downloads the visually associated figure or file. More alternatively, visual association can be performed by first server 720 on a set of files which are then stored in a database on second server 730. Then, a user of user computer 710 accesses second server 730 to download a selected visually associated figure or a visually associated file.
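
For illustration only, the following Python sketch shows one possible server-side arrangement in which user computer 710 uploads a figure and its text and downloads the visually associated result; it assumes the Flask package, and the endpoint name and the stubbed annotate_figure() helper are hypothetical placeholders for the visual association process described herein.

```python
import io
from flask import Flask, request, send_file

app = Flask(__name__)

def annotate_figure(figure_bytes, description):
    # Stub standing in for the visual association process described herein;
    # a real implementation would place identifiers next to references.
    return figure_bytes

@app.route("/annotate", methods=["POST"])
def annotate():
    figure = request.files["figure"].read()        # uploaded figure image
    description = request.form["description"]      # associated descriptive text
    annotated_png = annotate_figure(figure, description)
    return send_file(io.BytesIO(annotated_png), mimetype="image/png")

# A user computer would POST the figure and text to /annotate and then
# download the visually associated figure produced by the server.
```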


The visual associating may include printing the visually associated file or a figure of the file or a section of the figure. When printing multiple pages with visually associated figures on the same sheet, the visual association of one page avoids interfering with the visual association of the other pages. Also, the visually associating can be performed according to a view of the figures, wherein the view is a portrait view or a landscape view.


In an example embodiment, the visually associating is performed according to a preset rule, such as placing the identifier a certain distance from the reference or visually associating in a way such that all identifiers fit on a single screen or a page. The distance can be manually selected. Alternatively, the distance can be automatically selected by a computer upon analysis of a section of the figure or the figure to determine optimal placement and/or method of visual association.
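
For illustration only, the following Python sketch shows one way such a preset rule could place an identifier a fixed distance from its reference while keeping it on the page; the offset and page size are illustrative assumptions.

```python
def place_identifier(ref_xy, page_size, offset=(60, -30)):
    """Return label coordinates a preset distance from the reference, clamped
    so the identifier stays within the page."""
    x = min(max(ref_xy[0] + offset[0], 0), page_size[0] - 1)
    y = min(max(ref_xy[1] + offset[1], 0), page_size[1] - 1)
    return (x, y)

print(place_identifier((640, 410), page_size=(2400, 3300)))  # (700, 380)
```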


In an example embodiment, in a computer network environment, one user can perform a visual association process, store the visually associated file (whether the old file or a newly created one) and allow other users access to it. Thus, other users can avoid repetition of the visual association process in order to improve efficiency.


In an example embodiment, upon matching of the references and identifiers, the method can further display or be used for data mining, such as determining which elements have missing references or identifiers.
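
For illustration only, the following Python sketch shows one way such data mining could report references that never received an identifier and identifiers that never appear in the figure; the inputs are illustrative assumptions.

```python
def find_mismatches(figure_refs, text_refs):
    """Compare the references found in the figure with those found in the text."""
    return {
        "in_figure_only": figure_refs - text_refs,  # drawn but never described
        "in_text_only": text_refs - figure_refs,    # described but never drawn
    }

print(find_mismatches({"10", "20", "30"}, {"10", "20", "40"}))
# {'in_figure_only': {'30'}, 'in_text_only': {'40'}}
```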


In an example embodiment, the visual associating can be performed on a section of a figure, a single figure, multiple figures within a file, a single figure within each of multiple files, multiple figures in multiple files, or via a directory within memory storing a plurality of files with figures.


Even though the figure and the description can be stored in one computer file, the figure and the description can be stored in multiple computer files, in one or multiple computers, and/or in one or multiple distinct locales. Further, the figure can be obtained from an image capture device, such as a scanner, and matched with the description. Likewise, the description can be automatically or manually obtained from a description database, such as a patent text database or a digital textbook, and then automatically or manually matched with the figure. Also, although a single figure and a single description are described, multiple figures can be described as one figure and one figure can be described in multiple descriptions, and if identifiers conflict, then a frequency analysis can be used or a preset rule can be invoked. Moreover, if a description of the figure is an image, then text within the image can be recognized via OCR technology and then parsed as described herein.


In an example embodiment, in the figure, by selecting the reference, the element or a portion of the element itself, such as by clicking on the element/reference or by hovering the mouse over the element/reference, the user may cause to be dynamically displayed, such as adjacent to the reference or element or visually associated with the reference/element, an identifier associated with a respective reference. Since each element or a portion thereof is associated with a different reference, moving the mouse from one element to another enables the identifier associated with the other element to be displayed as described herein.


In an example embodiment, in a patent application or a patent grant stored in a computer accessible file, if the user selects at least one word in a claim and that selected word is shown in a figure, as determined by the parsing and identification from the description and location via the reference in the figure, then the element in the figure and/or the numeral corresponding to the element will be highlighted or shown in a new window or bubble or any other type of data display that informs the user of the location of the at least one word in the claim. This allows the user to quickly find the associated figure or element in the figure for further understanding of the text. Similarly, the user can do the reverse, whereby the user selects an element of the figure, which highlights the corresponding associated text in the description or the concept in a claim, such as a noun, by showing a new window or bubble or any other type of data display that informs the user of the location of the identifier.
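
For illustration only, the following Python sketch shows one way selected claim language could be mapped to the references to highlight in a figure, assuming the reference-to-identifier lookup has already been built; the claim text and lookup are illustrative assumptions.

```python
def references_for_claim(claim, lookup):
    """Return the references whose identifiers appear in the claim text, so the
    corresponding elements can be highlighted in the figure."""
    claim_lower = claim.lower()
    return {ref for ref, identifier in lookup.items()
            if identifier and identifier.lower() in claim_lower}

claim = "A system comprising a first database coupled to a CPU."
lookup = {"10": "first database", "20": "CPU", "30": "second database"}
print(references_for_claim(claim, lookup))  # contains '10' and '20'
```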


In an example embodiment, in a patent application or a patent grant stored in a computer accessible file, after parsing and visually associating, the data can be viewed via calendar views, such as for a continuation-in-part patent application where a date or dates of filing or priority can be associated with references/identifiers to identify newly added subject matter; alerts, such as for conflicting references/identifiers; information bubbles associated with references/identifiers; and color variances for references/identifiers, such as user-customizable color palettes for any or all references/identifiers as desired.


In an example embodiment, in a figure, after parsing and matching references to identifiers or vice versa, a listing of references and corresponding identifiers can be displayed on a side of the figure or a corner of the page or anywhere else away from the actual figure, in the form of a table or any other representation of data that allows the user to easily identify which identifiers the references refer to. This can be done simultaneously with or alternatively to the visual association as described herein.
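
For illustration only, the following Python sketch shows one way the parsed references and identifiers could be rendered as a small text table for placement beside the figure; the input values are illustrative assumptions.

```python
def legend_table(lookup):
    """Render a reference/identifier listing suitable for a corner of the page."""
    width = max(len(ref) for ref in lookup)
    return "\n".join(f"{ref.rjust(width)}  {identifier}"
                     for ref, identifier in sorted(lookup.items()))

print(legend_table({"10": "first database", "20": "CPU", "30": "second database"}))
# 10  first database
# 20  CPU
# 30  second database
```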


In an example embodiment, a figure or a plurality of figures can be adjusted to have a same or similar axis of orientation to allow for convenient reading of the figure. Likewise, in one or more figures, references or identifiers can be placed or adjusted or rotated or moved to have a similar axis of orientation so as to be viewed without rotating the figure or the figures. Fonts or font sizes can be automatically adjusted as well.


In an example embodiment, after parsing and matching the identifiers and the references on at least one figure, the user can click the reference/identifier to jump or lead to a first instance of such reference/identifier in the description or the claims or the figure.


In an example embodiment, after parsing and matching the identifiers and the references on at least one figure, upon clicking/hovering over/selecting the reference, a scrollable/expandable window with a description of at least a portion of the figure or a specific description of the element corresponding to the selected reference is shown.


In an example embodiment, whether before, during or after parsing and matching the identifiers and the references on at least one figure or a section of the figure, at least one of the references or identifiers in the figure can be translated into any human language. The language can be selected from a menu provided to a user or automatically detected via a computer, whether local or remote, to facilitate translation. The translation can occur via an online translation engine, such as Google Translate, or locally, via a locally stored translation library or the computer's operating system. The translation can occur before, during or after the placing of the identifier on the figure.
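
For illustration only, the following Python sketch shows one way identifiers could be translated before placement; the translate() function is a stand-in for any online or locally stored translation engine, and the glossary is an illustrative assumption.

```python
GLOSSARY_EN_TO_DE = {"chair": "Stuhl", "first database": "erste Datenbank"}

def translate(identifier, target_language="de"):
    # Stand-in for an online translation engine or a locally stored library.
    return GLOSSARY_EN_TO_DE.get(identifier.lower(), identifier)

def translate_lookup(lookup):
    """Translate every identifier so the figure can be labeled in the selected language."""
    return {ref: translate(identifier) for ref, identifier in lookup.items()}

print(translate_lookup({"10": "chair"}))  # {'10': 'Stuhl'}
```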


In an example embodiment, in a patent application or a patent grant stored in a computer accessible file, a user can manually select a plurality of words from at least one independent claim, or at least one independent claim can be automatically parsed into a plurality of nouns, and the plurality of words or the parsed nouns can be matched, via searching, to at least one visually associated figure that contains the plurality of words or the parsed nouns in another patent application or another patent grant stored in another computer accessible file on a network source, such as a database hosted on a server. Any other visually associated figures in other literature, websites or applications can be matched as well. Thus, this method can be used to identify prior art relevant to an anticipation rejection under at least US patent law.


In an example embodiment, in a patent application or a patent grant stored in a computer accessible file, a user can manually select a plurality of words from at least one independent claim, or at least one independent claim can be automatically parsed into a plurality of nouns, and the plurality of words or the parsed nouns can be matched, via searching, to at least one figure in the same file. Then, the references or the identifiers in the figure that are not associated with or do not correspond to the nouns or the words are at least temporarily hidden, such as via placing white space or color contrasting over them or putting an X through them. Thus, only references having identifiers associated with the claim are shown in the figure.


In an example embodiment, references and identifiers are placed apart from each other in the figure so as to improve readability while being compliant with preselected or customized margins and having a proper scale.


In an example embodiment, some measurements, such as length, width, depth, volume, diameter, radius, density, direction, can remain unlabeled. Such measurements can be detected by presence of various signs, such as arrows on the figure or when the text identifies the identifiers as such.
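
For illustration only, the following Python sketch shows one way measurement-type identifiers could be detected and left unlabeled; the keyword set mirrors the measurements named above.

```python
MEASUREMENT_TERMS = {"length", "width", "depth", "volume",
                     "diameter", "radius", "density", "direction"}

def should_label(identifier):
    """Return False when the identifier denotes a measurement, so it is left
    unlabeled on the figure."""
    return not any(term in identifier.lower() for term in MEASUREMENT_TERMS)

print(should_label("drum"))            # True
print(should_label("inner diameter"))  # False
```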


In an example embodiment, via the disclosure the user can determine whether a claimed element is in the specification or the figure.


In an example embodiment, an examiner can put on 3D glasses, such as those made by Nvidia, and make use of any of the disclosures provided herein without running the visual association process on the actual file having references and identifiers. Rather, the disclosure as described herein is performed by the software for the glasses.


In an example embodiment, the disclosed technology can check a figure's compliance with 37 CFR 1.83 or 1.84 and display warnings if the figure is not compliant. For example, if the figure has improper margins, fonts, font sizes or colors, the disclosed technology can flag non-compliance with 37 CFR 1.83 or 1.84.
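
For illustration only, the following Python sketch shows one way the margin portion of such a compliance check could work, given measured margins in centimeters; the minimum values reflect 37 CFR 1.84 as understood here (top and left 2.5 cm, right 1.5 cm, bottom 1.0 cm) and should be confirmed against the current regulation.

```python
MINIMUM_MARGINS_CM = {"top": 2.5, "left": 2.5, "right": 1.5, "bottom": 1.0}

def margin_warnings(measured_cm):
    """Return a warning for each margin that falls below the assumed minimum."""
    warnings = []
    for side, minimum in MINIMUM_MARGINS_CM.items():
        measured = measured_cm.get(side, 0.0)
        if measured < minimum:
            warnings.append(f"{side} margin {measured} cm is below the {minimum} cm minimum")
    return warnings

print(margin_warnings({"top": 2.5, "left": 2.0, "right": 1.5, "bottom": 1.0}))
# ['left margin 2.0 cm is below the 2.5 cm minimum']
```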


In an example embodiment, the disclosure can be performed on one file, a plurality of files or portions retrieved from a plurality of files. Also, the disclosure can be performed via one or a plurality of computers or servers. Also, the files can be stored on one computer or a plurality of computers in any way. Also, the disclosure can be performed locally or remotely, on one computer or a software app, or over a computer network, such as the Internet.


In an example embodiment, visual association can be performed on a video showing a plurality of images or figures where the video is associated with text mentioning the elements as shown in the video. The video can have audio reading the text.


In an example embodiment, any portion of any embodiments or permutations thereof, as described herein, can be combined in any way according to the principles of the present disclosure.



FIGS. 8a-8b are diagrams of an example embodiment of a figure before and after visual association according to the present disclosure. Any of the methods of visual association can be combined in any way. For example, although one figure can be visually associated via one method of visual association, the one figure can include multiple methods of visual association. When a plurality of figures is desired to be visually associated, then all or fewer than all figures can be associated in the same or different ways. Any elements, references, identifiers, methods of visual association or portions thereof can be hyperlinked. When a computer decides which visual method to employ, the computer uses algorithms which look for the presence of empty space, such as white space, near the reference to place the identifier, the possibility of reference/visual association placement over other elements, references, identifiers or methods of visual association, the size of the figure or portion of the figure, screen space, font size, colors, speed/memory of the computer or web connection, user confusion (as defined by a user or programmed in advance) and other similar concerns.
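
For illustration only, the following Python sketch shows one way the empty-space heuristic described above could choose among visual association methods; it assumes the Pillow package and a known reference bounding box, and the probe window and thresholds are illustrative assumptions.

```python
from PIL import Image

def choose_method(figure_path, ref_box):
    """Pick a visual association method based on how much white space
    surrounds the reference."""
    image = Image.open(figure_path).convert("L")
    left, top, right, bottom = ref_box
    probe = image.crop((max(left - 80, 0), max(top - 40, 0),
                        min(right + 80, image.width), min(bottom + 40, image.height)))
    pixels = list(probe.getdata())
    white_ratio = sum(p > 240 for p in pixels) / max(len(pixels), 1)
    if white_ratio > 0.9:
        return "place identifier adjacent to reference"
    if white_ratio > 0.6:
        return "enclose reference and identifier in a shape"
    return "draw a leader line to empty space elsewhere on the page"

# method = choose_method("fig8.png", ref_box=(610, 395, 660, 425))
```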


Note that reference 180 within the element has been replaced with identifier “base” within the element corresponding to reference 180. The identifier can be colored for visual distinction or be the same color as at least a portion of the figure. A user can select, or a computer can automatically determine, how to most optimally visually associate. With the reference 180, the computer automatically chose to write within the element.


Note that references 96 and 120 outside of their elements have been replaced with identifiers “cylinder” and “center” outside of their elements corresponding to references 96 and 120. Alternatively, such replacement could be done within the elements, as done regarding reference 180. The identifiers can be colored for visual distinction or be the same color as at least a portion of the figure. A user can select, or a computer can automatically determine, how to most optimally visually associate. With the references 96 and 120, the computer automatically chose to replace the reference.


Note that reference 206 has been visually associated via an alphanumeric character corresponding to a plus symbol. Alternatively, a non-alphanumeric character, such as a symbol or an icon, can also be used. The character or the reference can be colored for visual distinction or be the same color as at least a portion of the figure. A user can select, or a computer can automatically determine, how to most optimally visually associate. With the reference 206, the computer automatically chose to write within the element.


Note that reference 140 has been visually associated via a broken line that associates over at least one element. The line indicates the identifier “drum” corresponding to the reference 140. The line, the reference or the identifier can be colored for visual distinction or be the same color as at least a portion of the figure. A user can select, or a computer can automatically determine, how to most optimally visually associate. With the reference 140, the computer automatically chose to use the broken line method over the at least one element.


Note that reference 128 has been visually associated via an identifier “rod” placed near the reference 128 and a shape placed around the reference 128 and the identifier “rod.” The shape can be a rectangle, a box, a circle, an oval, a trapezoid or any other shape. The shape can fully or partially encompass the identifier and the reference. The shape delineation, the shape background or foreground, the identifier or the reference can be colored for visual distinction. The shape can be defined via a single line or a plurality of lines, dots, minuses, pluses or other visual elements, including alphanumeric characters. The shape can be a bubble, whether a popup, a hover-over or a slide-out. The shape can be hyperlinked. The identifier can be hyperlinked, whether uni-directionally or bi-directionally. Upon clicking, the hyperlink can lead to other elements, references and identifiers whether in the present figure, other figures, the text description or other text descriptions. Upon clicking, the hyperlink can allow for popups, hover-overs or slide-outs to disclose information relating to the element, reference or identifier or other elements, references or identifiers. A user can select, or a computer can automatically determine, how to most optimally visually associate. With the reference 128, the computer automatically chose the shaping method.


Note that reference 126 has been visually associated with identifier “pipe” via an unbroken line in a non-adjacent manner, i.e., irrespective of the distance between the reference 126 and the identifier “pipe.” The element, the line, the reference, the identifier or any portions thereof can be colored for visual distinction or be the same color as at least a portion of the figure. A user can select, or a computer can automatically determine, how to most optimally visually associate. With the reference 126, the computer automatically chose the unbroken line method.


Note that reference 166 has been visually associated with identifier “plate” via a broken line in a non-adjacent manner, i.e., irrespective of the distance between the reference 166 and the identifier “plate.” The element, the line, the reference, the identifier or any portions thereof can be colored for visual distinction or be the same color as at least a portion of the figure. A user can select, or a computer can automatically determine, how to most optimally visually associate. With the reference 166, the computer automatically chose the broken line method to associate over other elements; association over other references, identifiers, methods of visual association or any portions thereof can also be performed.


As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.


Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus or device.


A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate or transport a program for use by or in connection with an instruction execution system, apparatus or device.


Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.


Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. Other types of programming languages include HTML5, Flash and other similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).


Aspects of the present disclosure are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
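By way of example only, two such blocks that do not depend on one another's results, such as extracting strings from a figure and extracting identifiers from the associated text, may be executed substantially concurrently. The following minimal Java sketch, whose placeholder tasks and values are hypothetical, illustrates one way such blocks may run concurrently before their results are combined.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative only: two flowchart blocks with no data dependency on one another
// ("extract strings from the figure" and "extract identifiers from the text")
// run substantially concurrently; the combining block waits for both results.
public final class ConcurrentBlocks {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<List<String>> figureStrings =
                    pool.submit(() -> List.of("102", "104"));           // stands in for OCR of the figure
            Future<Map<String, String>> textIdentifiers =
                    pool.submit(() -> Map.of("102", "housing",
                                             "104", "display screen")); // stands in for parsing the text

            // The combining block runs only after both concurrent blocks complete.
            for (String reference : figureStrings.get()) {
                System.out.println(reference + " -> " + textIdentifiers.get().get(reference));
            }
        } finally {
            pool.shutdown();
        }
    }
}
```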


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The embodiment was chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated.


The flow diagrams depicted herein are just examples. There may be many variations to these diagrams or the steps (or operations) described therein without departing from the spirit of the disclosure. For instance, the steps may be performed in a differing order, or steps may be added, deleted or modified. All of these variations are considered a part of the claimed disclosure.


While the preferred embodiment of the disclosure has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the disclosure first described.


The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations in techniques and structures will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure as set forth in the claims that follow. Accordingly, such modifications and variations are contemplated as being a part of the present disclosure. The scope of the present disclosure is defined by the claims, which include known equivalents and unforeseeable equivalents at the time of filing of this application.

Claims
  • 1. A method comprising:
    receiving, via a server, a first request from a first client;
    responsive to the first request:
        accessing, via the server, a first document based at least in part on the first request, wherein the first document includes a first figure and a first text, wherein the first figure includes a first alphanumeric string, wherein the first text includes a second alphanumeric string,
        identifying, via the server, the first alphanumeric string,
        identifying, via the server, the second alphanumeric string,
        forming, via the server, a first association between the first alphanumeric string and the second alphanumeric string,
        modifying, via the server, the first figure in a first manner based at least in part on the first association,
        presenting, via the server, the first document including the first figure, as modified, to the first client, and
        saving, via the server, the first document including the first figure, as modified, in a database;
    receiving, via the server, a second request from a second client, wherein the second request is after the first request;
    responsive to the second request:
        determining, via the server, whether the second request corresponds to the first document saved in the database with the first figure as modified,
        based on the second request corresponding to the first document saved in the database with the first figure as modified:
            retrieving, via the server, the first document including the first figure, as modified, from the database, and
            serving, via the server, the first document including the first figure, as modified, to the second client,
        based on the second request not corresponding to the first document saved in the database with the first figure as modified:
            accessing, via the server, a second document including a second figure with a third alphanumeric string and a second text with a fourth alphanumeric string,
            forming, via the server, a second association between the third alphanumeric string and the fourth alphanumeric string,
            modifying, via the server, the second figure in a second manner based at least in part on the second association, wherein the first manner is same as the second manner,
            presenting, via the server, the second document including the second figure, as modified, to the second client, and
            saving, via the server, the second document including the second figure, as modified, in the database.
  • 2. The method of claim 1, wherein at least one of:
    the first alphanumeric string is identified in the first figure based at least in part on searching, via the server, the first figure for the first alphanumeric string based at least in part on the second alphanumeric string, or
    the third alphanumeric string is identified in the second figure based at least in part on searching, via the server, the second figure for the third alphanumeric string based at least in part on the fourth alphanumeric string.
  • 3. The method of claim 1, wherein the first figure is stored in a first record of the database and the second figure is stored in a second record of the database.
  • 4. The method of claim 1, wherein the database is stored in a memory, wherein the memory is a random-access memory.
  • 5. The method of claim 1, wherein the second client includes a user input device, wherein the second request includes a set of alphanumeric symbols generated via the user input device, wherein the server determines whether the second request corresponds to the first document saved in the database based at least in part on the set of alphanumeric symbols corresponding to the first document.
  • 6. The method of claim 1, wherein at least one of the first figure or the second figure is modified based at least in part on writing, via the server, a shape onto the at least one of the first figure or the second figure such that the shape encloses at least one of the first alphanumeric string or the third alphanumeric string respectively.
  • 7. The method of claim 6, wherein the shape is colored differently than the at least one of the first alphanumeric string or the third alphanumeric string.
  • 8. The method of claim 1, wherein at least one of the first figure is modified based at least in part on a first characteristic associated with the first client or the second figure is modified based at least in part on a second characteristic associated with the second client.
  • 9. The method of claim 1, wherein the second request corresponds to the first document based at least in part on the second request referring to a same published patent document as the first request.
  • 10. The method of claim 1, wherein at least one of the first figure or the second figure is modified based at least in part on translating, via the server, at least one of the second alphanumeric string or the fourth alphanumeric string respectively.
  • 11. A method comprising:
    accessing, via a processor, a patent document with a claim sentence;
    parsing, via the processor, the claim sentence into a plurality of words including a first word;
    accessing, via the processor, a prior art document of a plurality of prior art documents stored in a database, wherein the prior art document includes a figure, wherein the figure shows an object and a second word, wherein the object is visually associated with the second word in the figure, wherein the second word is searchable in the figure, wherein the prior art documents are relative to the patent document;
    locating, via the processor, the second word in the figure;
    determining, via the processor, whether the first word matches the second word; and
    taking, via the processor, an action based at least in part on the first word matching the second word.
  • 12. The method of claim 11, wherein the action comprises generating, via the processor, a message, and requesting, via the processor, an output device to output the message.
  • 13. The method of claim 12, wherein the message contains a content informative of the first word matching the second word.
  • 14. The method of claim 11, wherein the patent document is remote from the database.
  • 15. The method of claim 11, wherein the action comprises at least one of:
    hyperlinking, via the processor, the first word to the second word,
    hyperlinking, via the processor, the second word to the first word,
    copying, via the processor, the first word,
    making, via the processor, the first word visually distinct, or
    highlighting, via the processor, the first word.
  • 16. The method of claim 11, further comprising at least one of:
    translating, via the processor, the first word before the action, or
    translating, via the processor, the second word before the action.
  • 17. The method of claim 11, wherein the patent document is opened in at least one of a word processor application or a browser.
  • 18. The method of claim 11, wherein the database is stored in a memory, wherein the memory is a random-access memory of a server.
  • 19. A method comprising:
    accessing, via a processor, a patent document with a claim sentence;
    parsing, via the processor, the claim sentence into a plurality of words including a first word;
    accessing, via the processor, a prior art document of a plurality of prior art documents stored in a database, wherein the prior art document includes a figure, wherein the figure shows an object and a second word, wherein the object is visually associated with the second word in the figure, wherein the second word is searchable in the figure, wherein the prior art documents are relative to the patent document;
    locating, via the processor, the second word in the figure;
    determining, via the processor, whether the first word does not match the second word; and
    taking, via the processor, an action based at least in part on the first word not matching the second word.
  • 20. The method of claim 19, wherein the action comprises generating, via the processor, a message, and requesting, via the processor, an output device to output the message.
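By way of illustration only, the following minimal Java sketch, in which every name and value is a hypothetical placeholder rather than part of the claims, follows the general flow recited in claims 11 and 19: a claim sentence is parsed into a plurality of words, each word is compared with words searchable in a prior art figure, and an action is taken based on whether the words match or do not match.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative only: parses a claim sentence into words and compares each word with
// words searchable in a prior art figure, taking an action (here, printing a message)
// for matching words and for non-matching words.
public final class ClaimWordMatcher {

    public static void main(String[] args) {
        String claimSentence = "A device comprising a housing and a display screen.";

        // Parse the claim sentence into a plurality of words.
        List<String> claimWords = Arrays.stream(claimSentence.split("\\W+"))
                .filter(w -> !w.isEmpty())
                .map(w -> w.toLowerCase(Locale.ROOT))
                .collect(Collectors.toList());

        // Words visually associated with objects in the prior art figure
        // (for example, labels previously written onto the figure next to reference characters).
        Set<String> figureWords = Set.of("housing", "antenna");

        for (String word : claimWords) {
            if (figureWords.contains(word)) {
                System.out.println("Match: \"" + word + "\" appears in the prior art figure.");
            } else {
                System.out.println("No match for: \"" + word + "\"");
            }
        }
    }
}
```
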
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 14/979,395, filed on Dec. 27, 2015, which is a continuation of U.S. application Ser. No. 13/623,251, filed on Sep. 20, 2012, which claims a benefit of priority to U.S. application 61/633,523, filed on Feb. 13, 2012 and a benefit of priority to U.S. application 61/537,314, filed on Sep. 21, 2011. All of the above-identified applications are herein fully incorporated by reference for all purposes.

US Referenced Citations (432)
Number Name Date Kind
4504972 Scherl et al. Mar 1985 A
5073953 Westdijk Dec 1991 A
5103489 Miette Apr 1992 A
5111408 Amjadi May 1992 A
5144679 Kakumoto et al. Sep 1992 A
5159667 Borrey et al. Oct 1992 A
5278980 Pedersen et al. Jan 1994 A
5321770 Huttenlocher et al. Jun 1994 A
5341469 Rossberg et al. Aug 1994 A
5369714 Withgott et al. Nov 1994 A
5508084 Reeves et al. Apr 1996 A
5514860 Berson May 1996 A
5579414 Fast et al. Nov 1996 A
5594809 Kopec et al. Jan 1997 A
5594815 Fast et al. Jan 1997 A
5594817 Fast et al. Jan 1997 A
5623679 Rivette et al. Apr 1997 A
5623681 Rivette et al. Apr 1997 A
5689585 Bloomberg et al. Nov 1997 A
5696963 Ahn Dec 1997 A
5713016 Hill Jan 1998 A
5721763 Joseph et al. Feb 1998 A
5726736 DeWolff et al. Mar 1998 A
5737740 Henderson et al. Apr 1998 A
5754840 Rivette et al. May 1998 A
5767978 Revankar et al. Jun 1998 A
5774833 Newman Jun 1998 A
5799325 Rivette et al. Aug 1998 A
5806079 Rivette et al. Sep 1998 A
5809318 Rivette et al. Sep 1998 A
5841900 Rahgozar et al. Nov 1998 A
5845288 Syeda-Mahmood Dec 1998 A
5845300 Comer et al. Dec 1998 A
5845301 Rivette et al. Dec 1998 A
5848409 Ahn Dec 1998 A
5850474 Fan et al. Dec 1998 A
5889886 Mahoney Mar 1999 A
5893126 Drews et al. Apr 1999 A
5895473 Williard et al. Apr 1999 A
5949752 Glynn et al. Sep 1999 A
5956468 Ancin Sep 1999 A
5982931 Ishimaru Nov 1999 A
5983180 Robinson Nov 1999 A
5991751 Rivette et al. Nov 1999 A
5991780 Rivette et al. Nov 1999 A
5995659 Chakraborty et al. Nov 1999 A
5999907 Donner Dec 1999 A
6002798 Palmer et al. Dec 1999 A
6008817 Gilmore, Jr. Dec 1999 A
6014663 Rivette et al. Jan 2000 A
6026388 Liddy et al. Feb 2000 A
6029177 Sadiq et al. Feb 2000 A
6038561 Snyder Mar 2000 A
6049811 Petruzzi et al. Apr 2000 A
6056428 Devoino et al. May 2000 A
6058398 Lee May 2000 A
6065026 Cornelia et al. May 2000 A
6120025 Hughes, IV Sep 2000 A
6154725 Donner Nov 2000 A
6167370 Tsourikov et al. Dec 2000 A
6175824 Breitzman et al. Jan 2001 B1
6189002 Roitblat Feb 2001 B1
6195459 Zhu Feb 2001 B1
6202043 Devoino et al. Mar 2001 B1
6249604 Huttenlocher et al. Jun 2001 B1
6263314 Donner Jul 2001 B1
6289341 Barney Sep 2001 B1
6321236 Zollinger et al. Nov 2001 B1
6339767 Rivette et al. Jan 2002 B1
6360236 Khan et al. Mar 2002 B1
6377965 Hachamovitch et al. Apr 2002 B1
6401118 Thomas Jun 2002 B1
6422974 Schimmel Jul 2002 B1
6434580 Takano Aug 2002 B1
6462778 Abram et al. Oct 2002 B1
6477524 Taskiran et al. Nov 2002 B1
6499026 Rivette et al. Dec 2002 B1
6516097 Pritt Feb 2003 B1
6546390 Pollack et al. Apr 2003 B1
6556992 Barney et al. Apr 2003 B1
6560590 Shwe et al. May 2003 B1
6565610 Wang et al. May 2003 B1
6584223 Shiiyama Jun 2003 B1
6594393 Minka et al. Jul 2003 B1
6621595 Fan et al. Sep 2003 B1
6623529 Lakritz Sep 2003 B1
6628285 Abeyta et al. Sep 2003 B1
6636249 Rekimoto Oct 2003 B1
6662178 Lee Dec 2003 B2
6665656 Carter Dec 2003 B1
6694331 Lee Feb 2004 B2
6724369 Slotta Apr 2004 B2
6731789 Tojo May 2004 B1
6738518 Minka et al. May 2004 B1
6757081 Fan et al. Jun 2004 B1
6766069 Dance et al. Jul 2004 B1
6793429 Arrison Sep 2004 B2
6799718 Chan et al. Oct 2004 B2
6801201 Escher Oct 2004 B2
6826305 Zhu Nov 2004 B2
6836883 Abrams et al. Dec 2004 B1
6845486 Yamada et al. Jan 2005 B2
6879990 Boyer et al. Apr 2005 B1
6959280 Risen, Jr. et al. Oct 2005 B1
6970860 Liu et al. Nov 2005 B1
6971619 Pearson Dec 2005 B2
6980680 Batchelder et al. Dec 2005 B2
6993708 Gillig Jan 2006 B1
6996295 Tyan et al. Feb 2006 B2
6996575 Cox et al. Feb 2006 B2
7005094 Jack Feb 2006 B2
7010751 Shneiderman Mar 2006 B2
7013433 Schorr et al. Mar 2006 B1
7024408 Dehlinger et al. Apr 2006 B2
7047255 Imaichi et al. May 2006 B2
7047487 Bates et al. May 2006 B1
7051277 Kephart et al. May 2006 B2
7082427 Seibel et al. Jul 2006 B1
7086028 Davis et al. Aug 2006 B1
7130848 Oosta Oct 2006 B2
7139755 Hammond Nov 2006 B2
7167823 Endo et al. Jan 2007 B2
7246104 Stickler Jul 2007 B2
7259753 Keely et al. Aug 2007 B2
7296015 Poltorak Nov 2007 B2
7321687 Yamamoto Jan 2008 B2
7333984 Oosta Feb 2008 B2
7365739 Hiromori Apr 2008 B2
7366705 Zeng et al. Apr 2008 B2
7418138 Ahmed Aug 2008 B2
7542934 Markel Jun 2009 B2
7561742 Boose et al. Jul 2009 B2
7581168 Boon Aug 2009 B2
7599580 King et al. Oct 2009 B2
7599867 Monroe et al. Oct 2009 B1
7606757 Poltorak Oct 2009 B1
7613626 Muniganti et al. Nov 2009 B1
7636886 Wyle et al. Dec 2009 B2
7640198 Albanese et al. Dec 2009 B1
7644360 Beretich, Jr. et al. Jan 2010 B2
7672022 Fan Mar 2010 B1
7680686 Tellefsen et al. Mar 2010 B2
7685042 Monroe et al. Mar 2010 B1
7711676 Stuhec May 2010 B2
7730061 Gruhl et al. Jun 2010 B2
7783637 Bitsch et al. Aug 2010 B2
7792728 Poltorak Sep 2010 B2
7792832 Poltorak Sep 2010 B2
7801909 Poltorak Sep 2010 B2
7818342 Stuhec Oct 2010 B2
7835966 Satchwell Nov 2010 B2
7844487 Chapman Nov 2010 B2
7853506 Satchwell Dec 2010 B2
7853572 Lundberg et al. Dec 2010 B2
7864365 Campbell et al. Jan 2011 B2
7865519 Stuhec Jan 2011 B2
7876959 Matsuda et al. Jan 2011 B2
7882002 Monroe et al. Feb 2011 B2
7890851 Milton, Jr. Feb 2011 B1
7894677 Konig et al. Feb 2011 B2
7904355 Johnson Mar 2011 B1
7904453 Poltorak Mar 2011 B2
7912792 Lehrman et al. Mar 2011 B2
7941468 Zellner et al. May 2011 B2
7953295 Vincent May 2011 B2
7958067 Schmidtler et al. Jun 2011 B2
7970213 Ruzon et al. Jun 2011 B1
7975214 Boegelund et al. Jul 2011 B2
7979358 Clem et al. Jul 2011 B1
7984047 Sukman Jul 2011 B2
8015492 Reid Sep 2011 B2
8031940 Vincent Oct 2011 B2
8036971 Aymeloglu et al. Oct 2011 B2
8098934 Vincent Jan 2012 B2
8112701 Gur et al. Feb 2012 B2
8136050 Sacher et al. Mar 2012 B2
8141036 Wagner et al. Mar 2012 B2
8160306 Neustel Apr 2012 B1
8171049 Ah-Pine et al. May 2012 B2
8174462 Rosander et al. May 2012 B2
8189917 Campbell May 2012 B2
8200487 Peters et al. Jun 2012 B2
8230326 Albornoz et al. Jul 2012 B2
8237745 Cornell et al. Aug 2012 B1
8239301 Monroe et al. Aug 2012 B2
8291386 Daniel Oct 2012 B2
8301487 Rapperport et al. Oct 2012 B2
8312067 Elias et al. Nov 2012 B2
8370143 Coker Feb 2013 B1
8370240 Monroe et al. Feb 2013 B2
8396814 Sundaram et al. Mar 2013 B1
8412598 Early et al. Apr 2013 B2
8429601 Andersen Apr 2013 B2
8458060 Esary et al. Jun 2013 B2
8463679 Kaplan et al. Jun 2013 B2
8504349 Manu et al. Aug 2013 B2
8539346 Albornoz et al. Sep 2013 B2
8543381 Connor Sep 2013 B2
8547330 Buck Oct 2013 B2
8560429 Buck Oct 2013 B2
8570326 Gorev Oct 2013 B2
8606671 Lee et al. Dec 2013 B2
8667609 Tan et al. Mar 2014 B2
8683439 Daniel Mar 2014 B2
8705863 Trauba Apr 2014 B1
8732060 Salomon et al. May 2014 B2
8744135 Roman Jun 2014 B2
8805093 Zuev et al. Aug 2014 B2
8805848 Bhatia et al. Aug 2014 B2
8806324 Theobald Aug 2014 B2
8843407 Tan Sep 2014 B2
8854302 Buck Oct 2014 B2
8855999 Elliot Oct 2014 B1
8875093 Balasubramanian et al. Oct 2014 B2
8884965 Chuang et al. Nov 2014 B2
8909656 Kumar et al. Dec 2014 B2
8930897 Nassar Jan 2015 B2
8938686 Erenrich et al. Jan 2015 B1
8954840 Theobald Feb 2015 B2
9015671 Johnson Apr 2015 B2
9104648 Glasgow Aug 2015 B2
9158507 Simonyi et al. Oct 2015 B2
9176944 Coker Nov 2015 B1
9183561 Hanumara et al. Nov 2015 B2
9201956 Lundberg et al. Dec 2015 B2
9229966 Aymeloglu et al. Jan 2016 B2
9236047 Rasmussen Jan 2016 B2
20010027452 Tropper Oct 2001 A1
20010027460 Yamamoto et al. Oct 2001 A1
20010039490 Verbitsky et al. Nov 2001 A1
20020007267 Batchilo et al. Jan 2002 A1
20020016707 Devoino et al. Feb 2002 A1
20020042784 Kerven et al. Apr 2002 A1
20020062302 Oosta May 2002 A1
20020077832 Leonid et al. Jun 2002 A1
20020077835 Hagelin Jun 2002 A1
20020077853 Boru et al. Jun 2002 A1
20020077942 Wilkinson Jun 2002 A1
20020083084 Sugiyama Jun 2002 A1
20020095368 Tran Jul 2002 A1
20020100016 Van De Vanter et al. Jul 2002 A1
20020107896 Ronai Aug 2002 A1
20020138297 Lee Sep 2002 A1
20020138465 Lee Sep 2002 A1
20020138473 Whewell et al. Sep 2002 A1
20020138474 Lee Sep 2002 A1
20020138475 Lee Sep 2002 A1
20020141641 Zhu Oct 2002 A1
20020147738 Reader Oct 2002 A1
20020161464 Weiner Oct 2002 A1
20020184130 Blasko Dec 2002 A1
20030004988 Hirasawa et al. Jan 2003 A1
20030007014 Suppan et al. Jan 2003 A1
20030026459 Won et al. Feb 2003 A1
20030028364 Chan et al. Feb 2003 A1
20030030270 Franko, Sr. et al. Feb 2003 A1
20030033270 Budka et al. Feb 2003 A1
20030033295 Adler et al. Feb 2003 A1
20030046307 Rivette et al. Mar 2003 A1
20030065606 Satchwell Apr 2003 A1
20030065607 Satchwell Apr 2003 A1
20030065774 Steiner et al. Apr 2003 A1
20030088573 Stickler May 2003 A1
20030088581 Maze et al. May 2003 A1
20030126128 Watson Jul 2003 A1
20030130837 Batchilo et al. Jul 2003 A1
20030187832 Reader Oct 2003 A1
20030208459 Shea et al. Nov 2003 A1
20030225749 Cox et al. Dec 2003 A1
20040003013 Coulthard et al. Jan 2004 A1
20040015481 Zinda Jan 2004 A1
20040017579 Lim Jan 2004 A1
20040021790 Iga Feb 2004 A1
20040037473 Ahmed et al. Feb 2004 A1
20040040011 Bosworth et al. Feb 2004 A1
20040049498 Dehlinger et al. Mar 2004 A1
20040059994 Fogel et al. Mar 2004 A1
20040078192 Poltorak Apr 2004 A1
20040078365 Poltorak Apr 2004 A1
20040088305 Kintzley et al. May 2004 A1
20040088332 Lee et al. May 2004 A1
20040098673 Riddoch et al. May 2004 A1
20040133562 Toong et al. Jul 2004 A1
20040158559 Poltorak Aug 2004 A1
20040174546 Guleryuz Sep 2004 A1
20040205540 Vulpe et al. Oct 2004 A1
20040205599 Whewell et al. Oct 2004 A1
20040220842 Barney Nov 2004 A1
20040225592 Churquina Nov 2004 A1
20040243566 Ogram Dec 2004 A1
20040249824 Brockway et al. Dec 2004 A1
20040261011 Stuckman et al. Dec 2004 A1
20050005239 Richards Jan 2005 A1
20050018057 Bronstein et al. Jan 2005 A1
20050071367 He et al. Mar 2005 A1
20050096999 Newell et al. May 2005 A1
20050108652 Beretich et al. May 2005 A1
20050108682 Piehler et al. May 2005 A1
20050114770 Sacher et al. May 2005 A1
20050119995 Lee Jun 2005 A1
20050149851 Mittal Jul 2005 A1
20050165736 Oosta Jul 2005 A1
20050177795 Weiss et al. Aug 2005 A1
20050187949 Rodenburg Aug 2005 A1
20050210009 Tran Sep 2005 A1
20050210382 Cascini Sep 2005 A1
20050216828 Brindisi Sep 2005 A1
20050234738 Hodes Oct 2005 A1
20050243104 Kinghorn Nov 2005 A1
20050256703 Markel Nov 2005 A1
20050267831 Esary et al. Dec 2005 A1
20050278227 Esary et al. Dec 2005 A1
20050283337 Sayal Dec 2005 A1
20060004861 Albanese et al. Jan 2006 A1
20060026146 Tvito Feb 2006 A1
20060031178 Lehrman et al. Feb 2006 A1
20060031179 Lehrman Feb 2006 A1
20060036542 McNair Feb 2006 A1
20060047574 Sundaram et al. Mar 2006 A1
20060059072 Boglaev Mar 2006 A1
20060106746 Stuhec May 2006 A1
20060106755 Stuhec May 2006 A1
20060112332 Kemp et al. May 2006 A1
20060136535 Boon Jun 2006 A1
20060150079 Albornoz et al. Jul 2006 A1
20060173699 Boozer Aug 2006 A1
20060173920 Adler et al. Aug 2006 A1
20060190805 Lin Aug 2006 A1
20060198978 Antonini Sep 2006 A1
20060221090 Takeshima et al. Oct 2006 A1
20060230333 Racovolis et al. Oct 2006 A1
20060248120 Sukman Nov 2006 A1
20070001066 Lane Jan 2007 A1
20070073625 Shelton Mar 2007 A1
20070073653 Raab Mar 2007 A1
20070078889 Hoskinson Apr 2007 A1
20070136321 Allen et al. Jun 2007 A1
20070198578 Lundberg et al. Aug 2007 A1
20070208669 Rivette Sep 2007 A1
20070226250 Mueller et al. Sep 2007 A1
20070255728 Abate et al. Nov 2007 A1
20070291120 Campbell et al. Dec 2007 A1
20070294192 Tellefsen Dec 2007 A1
20080059280 Tellefsen et al. Mar 2008 A1
20080126264 Tellefsen et al. May 2008 A1
20080154848 Haslam et al. Jun 2008 A1
20080183639 DiSalvo Jul 2008 A1
20080183759 Dehlinger Jul 2008 A1
20080189270 Takimoto et al. Aug 2008 A1
20080195604 Sears Aug 2008 A1
20080215354 Halverson et al. Sep 2008 A1
20080216013 Lundberg et al. Sep 2008 A1
20080222512 Albornoz et al. Sep 2008 A1
20080243711 Aymeloglu et al. Oct 2008 A1
20080256428 Milton Oct 2008 A1
20080281860 Elias et al. Nov 2008 A1
20080310723 Manu et al. Dec 2008 A1
20080313560 Dalal Dec 2008 A1
20090006327 Pamp Jan 2009 A1
20090037804 Theobald Feb 2009 A1
20090037805 Theobald Feb 2009 A1
20090044090 Gur et al. Feb 2009 A1
20090044091 Gur et al. Feb 2009 A1
20090044094 Rapp Feb 2009 A1
20090070738 Johnson Mar 2009 A1
20090083055 Tan Mar 2009 A1
20090086601 McClellan et al. Apr 2009 A1
20090094016 Mao Apr 2009 A1
20090138466 Henry et al. May 2009 A1
20090138812 Ikedo et al. May 2009 A1
20090144696 Andersen Jun 2009 A1
20090157679 Elias et al. Jun 2009 A1
20090192877 Chapman Jul 2009 A1
20090210828 Kahn Aug 2009 A1
20090228777 Henry Sep 2009 A1
20090259522 Rapperport et al. Oct 2009 A1
20090259523 Rapperport et al. Oct 2009 A1
20090276694 Henry et al. Nov 2009 A1
20090327946 Stignani et al. Dec 2009 A1
20100005094 Poltorak Jan 2010 A1
20100050157 Daniel Feb 2010 A1
20100050158 Daniel Feb 2010 A1
20100070495 Gruhl et al. Mar 2010 A1
20100080461 Ferman Apr 2010 A1
20100106642 Tan Apr 2010 A1
20100131427 Monroe et al. May 2010 A1
20100191564 Lee et al. Jul 2010 A1
20100241691 Savitzky et al. Sep 2010 A1
20100250340 Lee et al. Sep 2010 A1
20100262512 Lee et al. Oct 2010 A1
20100262901 DiSalvo Oct 2010 A1
20100293162 Odland et al. Nov 2010 A1
20110016431 Grosz et al. Jan 2011 A1
20110019915 Roman Jan 2011 A1
20110035364 Lipsey Feb 2011 A1
20110054884 Drakwall et al. Mar 2011 A1
20110066644 Cooper et al. Mar 2011 A1
20110091109 Zuev et al. Apr 2011 A1
20110093373 Monroe et al. Apr 2011 A1
20110109632 Gorev May 2011 A1
20110137822 Chapman Jun 2011 A1
20110138338 Glasgow Jun 2011 A1
20110145120 Lee et al. Jun 2011 A1
20110184726 Connor Jul 2011 A1
20110188759 Filimonova et al. Aug 2011 A1
20110196809 Salomon et al. Aug 2011 A1
20110208610 Halverson et al. Aug 2011 A1
20110225489 Simonyi et al. Sep 2011 A1
20110231325 Allen et al. Sep 2011 A1
20110238684 Krause Sep 2011 A1
20110239151 Allen et al. Sep 2011 A1
20110288863 Rasmussen Nov 2011 A1
20110295893 Wu Dec 2011 A1
20110307499 Elias et al. Dec 2011 A1
20120076415 Kahn Mar 2012 A1
20120109638 Xiao et al. May 2012 A1
20120109813 Buck May 2012 A1
20120128251 Petrou May 2012 A1
20120144499 Tan et al. Jun 2012 A1
20120176412 Stuebe et al. Jul 2012 A1
20120191757 Gross et al. Jul 2012 A1
20120216107 Iwabuchi Aug 2012 A1
20120271748 DiSalvo Oct 2012 A1
20120278341 Ogilvy et al. Nov 2012 A1
20130144810 Simpson Jun 2013 A1
20130246435 Yan et al. Sep 2013 A1
20130246436 Levine Sep 2013 A1
20130318090 Bhatia et al. Nov 2013 A1
20140019329 Newell et al. Jan 2014 A1
20140195904 Chang et al. Jul 2014 A1
20140258927 Rana et al. Sep 2014 A1
20140358973 Roman Dec 2014 A1
Foreign Referenced Citations (6)
Number Date Country
102609606 Jul 2012 CN
WO2005048055 May 2005 WO
WO2006031952 Mar 2006 WO
WO2011011002 Jan 2011 WO
WO2013141886 Sep 2013 WO
WO2015148410 Oct 2015 WO
Non-Patent Literature Citations (187)
Entry
TeamPatent—Secure Patent R&D Workspace, <http://www.teampatent.com/index.html, printed on Feb. 28, 2017>.
EDYT, <http://www.edyt.com/, printed on Feb. 28, 2017>.
C Riedl et al., Detecting Figure and Part Labels in Patents: Competition-Based Development of Image Processing Algorithms, pp. 1-14 (2014).
J Zhang and R Kasturi, Text detection using edge gradient and graph spectrum, ICPR, pp. 3979-3982 (2010).
Alvestrand, H. “Tags for the Identification of Languages”. Network Working Group Request for Comments: 3066. Jan. 2001. Retrieved from http://www.ietf.org/rfc/rfc3066.txt on Feb. 21, 2016.
Oracle. “JSR-000175 A Metadata Facility for the Java TM Programming Language” Dec. 5, 2003. Oracle. Downloaded from http://jcp.org/aboutJava/communityprocess/review/jsr175/index.html on Feb. 21, 2016.
W3C, Dave Raggett, Arnaud Le Hors, and Ian Jacobs, editors. “HTML 4.01 Specification,” Dec. 24, 1999. W3C. Retrieved from https://www.w3.org/TR/html4/on Feb. 21, 2016.
Patentcafe, Advanced Technology Patent Search, Patent Analytics and Intellectual Property Management Solutions, <available at http://www.patentcafe.com, printed on Nov. 20, 2012>.
Neustel Software, Inc., PatentHunter, <available at http://www.patenthunter.com, printed on Nov. 20, 2012>.
United States Patent and Trademark Office, USPTO Partners with NASA's Center for Collaborative Innovation and TopCoder on Competition to Modernize Tools for Patent Examination <available at http://www.uspto.gov/news/pr/2012/12-19.jsp, printed on Nov. 20, 2012>.
White House, New Center for Excellence Fuels Prize to Help Modernize Tools for Patent Examination <http://www.whitehouse.gov/blog/2011/12/16/new-center-excellence-fuels- --prize-help-modernize-tools-patent-examination, printed on Nov. 20, 2012>.
Top Coder, Contest: USPTO Algorithm Challenge, Problem: Patent Labeling, <http://community.topcoder.com/longcontest/?module=ViewProblemStatemen- -t&rd=15027&pm=11645, printed on Nov. 20, 2012>.
Top Coder, Contest: USPTO Algorithm Followup Challenge Problem: Patent Labeling2, <http://community.topcoder.com/longcontest/?module=ViewProblemStatemen- -t&compid=24976&rd=15087, printed on Nov. 20, 2012>.
Top Coder, $10,000 USPTO Algorithm Challenge, <http://community.topcoder.com/ntl/?page.sub.--id=743, printed on Nov. 20, 2012>.
Cronje, Jaco, “Figure Detection and Part Label Extraction From Patent Drawing Images,” Twenty-third Annual Symposium of the Pattern Recognition Association of South Africa, Nov. 29-30, 2012.
Vrochidis et al., “Towards content-based patent image retrieval: A framework perspective,” World Patent Information, 2010, vol. 32, pp. 94-106.
Tiwari et al., “PATSEEK: Content Based Image Retrieval System for Patent Database,” Proceedings of International Conference on Electronic Business, Beijing, China, 2004, pp. 1167-1171.
Huet et al., “Relational skeletons for retrieval in patent drawings,” IEEE International Conference on Image Processing, 2001, vol. 2, pp. 737-740.
Zhiyuan et al., An Outward-Appearance Patent-Image Retrieval Approach Based on the Contour-Description Matrix, Frontier of Computer Science and Technology, Japan-China Joint Workshop, 2007, pp. 86-89.
Worring et al., “Content based hypertext creation in text/figure databases,” Image Databases and Multimedia Search, Series on software engineering and knowledge engineering, 1997, vol. 8, pp. 87-96.
Li et al., “Graphics Image Processing System,” Eighth IAPR International Workshop on Document Analysis Systems, 2008, pp. 455-462.
Li et al., “Associating figures with descriptions for patent documents,” Ninth IAPR International Workshop on Document Analysis Systems, 2010, pp. 385-392.
Kang, Le et al. “Local Segmentation of Touching Characters using Contour based Shape Decomposition”, Document Analysis Systems, 2012: 460-464.
Zhou, Shusen et al. “An Empirical Evaluation on Online Chinese Handwriting Databases”, Document Analysis Systems 2012: 455-459.
Impedovo, Sebastiano et al. “A New Cursive Basic Word Database for Bank-Check Processing Systems”, Document Analysis Systems 2012: 450-454.
Fang, Jing et al. “Dataset, Ground-Truth and Performance Metrics for Table Detection Evaluation”, Document Analysis Systems 2012: 445-449.
Dendek, Piotr Jan et al. “Evaluation of Features for Author Name Disambiguation Using Linear Support Vector Machines”, Document Analysis Systems 2012: 440-444.
Alves, N. F. et al. “A Strategy for Automatically Extracting References from PDF Documents”, Document Analysis Systems 2012: 435-439.
Mazalov, V. et al. “Linear Compression of Digital Ink via Point Selection”, Document Analysis Systems 2012: 429-434.
Anh Khoi Ngo Ho et al. “Panel and Speech Balloon Extraction from Comic Books”, Document Analysis Systems 2012: 424-428.
Malik, M.I. et al. “A Signature Verification Framework for Digital Pen Applications”, Document Analysis Systems 2012: 419-423.
Ramakrishnan, K. et al. “Learning Domain-Specific Feature Descriptors for Document Images”, Document Analysis Systems 2012: 415-418.
Bart, E. “Parsing Tables by Probabilistic Modeling of Perceptual Cues”, Document Analysis Systems 2012: 409-414.
Ui-Hasan, A. et al. “OCR-Free Table of Contents Detection in Urdu Books”, Document Analysis Systems 2012: 404-408.
Chazalon, J. et al. “A Simple and Uniform Way to Introduce Complimentary Asynchronous Interaction Models in an Existing Document Analysis System”, Document Analysis Systems 2012: 399-403.
Afzal, M.Z. et al. "Improvements to Uncalibrated Feature-Based Stereo Matching for Document Images by Using Text-Line Segmentation", Document Analysis Systems 2012: 394-398.
Kumar, D. et al. “OTCYMIST: Otsu-Canny Minimal Spanning Tree for Born-Digital Images”, Document Analysis Systems 2012: 389-393.
Quan Meng et al. “Text Detection in Natural Scenes with Salient Region”, Document Analysis Systems 2012: 384-388.
Louloudis, G. et al. “Efficient Word Retrieval Using a Multiple Ranking Combination Scheme”, Document Analysis Systems 2012: 379-383.
Philippot, E. et al. “Use of PGM for form recognition”, Document Analysis Systems 2012: 374-379.
Chanda, S. et al. “Text Independent Writer Identification for Oriya Script”, Document Analysis Systems 2012: 369-373.
Liwicki, M. et al. “Seamless Integration of Handwriting Recognition into Pen-Enabled Displays for Fast User Interaction”, Document Analysis Systems 2012: 364-369.
Kitadai, A. et al. “Similarity Evaluation and Shape Feature Extraction for Character Pattern Retrieval to Support Reading Historical Documents”, Document Analysis Systems 2012: 359-363.
Cong Kinh Nguyen et al. “Web Document Analysis Based on Visual Segmentation and Page Rendering”, Document Analysis Systems 2012: 354-359.
Ahmed, S. et al. “Extraction of Text Touching Graphics Using SUR”, Document Analysis Systems 2012: 349-353.
Truyen Van Phan et al. “Collecting Handwritten Nom Character Patterns from Historical Document Pages”, Document Analysis Systems 2012: 344-349.
Ahmed, S. et al. “Automatic Room Detection and Room Labeling from Architectural Floor Plans”, Document Analysis Systems 2012: 339-343.
Kobayashi, T. et al. “Recognizing Words in Scenes with a Head-Mounted Eye-Tracker”, Document Analysis Systems 2012: 333-338.
Tsukada, M. et al. “Expanding Recognizable Distorted Characters Using Self-Corrective Recognition”, Document Analysis Systems 2012: 327-332.
Porwal, U. et al. “Ensemble of Biased Learners for Offline Arabic Handwriting Recognition”, Document Analysis Systems 2012: 322-326.
Shahab, A. et al. “How Salient is Scene Text?”, Document Analysis Systems 2012: 317-321.
Ramaiah, C. et al. “Accent Detection in Handwriting Based on Writing Styles”, Document Analysis Systems 2012: 312-316.
Papandreou, A. et al. “Word Slant Estimation Using Non-horizontal Character Parts and Core-Region Information”, Document Analysis Systems 2012: 307-311.
Tianyi Gui et al. “A Fast Caption Detection Method for Low Quality Video Images”, Document Analysis Systems 2012: 302-306.
Diem, Markus et al. “Skew Estimation of Sparsely Inscribed Document Fragments”, Document Analysis Systems 2012: 292-301.
Xiaoyan Lin et al. “Performance Evaluation of Mathematical Formula Identification”, Document Analysis Systems 2012: 287-291.
Pal, S. et al. “Off-Line Bangla Signature Verification”, Document Analysis Systems 2012: 282-286.
Ohta, M. et al. “CRF-based Bibliography Extraction from Reference Strings Focusing on Various Token Granularities”, Document Analysis Systems 2012: 276-281.
Xi Luo et al. “Impact of Word Segmentation Errors on Automatic Chinese Text Classification”, Document Analysis Systems 2012: 271-275.
Wang Song et al. “Toward Part-Based Document Image Decoding”, Document Analysis Systems 2012: 266-270.
Vu Nguyen et al. “A Compact Size Feature Set for the Off-Line Signature Verification Problem”, Document Analysis Systems 2012: 261-265.
Mori, M. et al. “How Important is Global Structure for Characters?”, Document Analysis Systems 2012: 255-260.
Smith, E.H.B. et al. “Effect of “Ground Truth” on Image Binarization”, Document Analysis Systems 2012: 250-254.
Bloechle, J.-L. et al. “OCD Dolores—Recovering Logical Structures for Dummies”, Document Analysis Systems 2012: 245-249.
Ghorbel, A. et al. “Optimization Analysis Based on a Breadth-First Exploration for a Structural Approach of Sketches Interpretation”, Document Analysis Systems 2012: 240-244.
Chattopadhyay, T. et al. “On the Enhancement and Binarization of Mobile Captured Vehicle Identification Number for an Embedded Solution”, Document Analysis Systems 2012: 235-239.
Matsushita, T. et al. “Effect of Text/Non-text Classification for Ink Search Employing String Recognition”, Document Analysis Systems 2012: 230-234.
Takeda, K. et al. “Real-Time Document Image Retrieval on a Smartphone”, Document Analysis Systems 2012: 225-229.
Pirlo, G. et al. “Voronoi-Based Zoning Design by Multi-objective Genetic Optimization”, Document Analysis Systems 2012: 220-224.
Biswas, S. et al. “Writer Identification of Bangla Handwritings by Radon Transform Projection Profile”, Document Analysis Systems 2012: 215-219.
Cunzhao Shi et al. “Graph-Based Background Suppression for Scene Text Detection”, Document Analysis Systems 2012: 210-214.
Cutter, M.P. et al. “Capture and Dewarping of Page Spreads with a Handheld Compact 3D Camera”, Document Analysis Systems 2012: 205-209.
Zhang, J. et al. “A Hybrid Network Intrusion Detection Technique Using Random Forests”, in Proceedings of IEEE First International Conference on Availability, Reliability and Security (ARES'06) 2006.
Shahab, A. et al. “ICDAR 2011 robust reading competition challenge 2: Reading text in scene images”, in Proc. Int. Conf. Document Analysis and Recognition (ICDAR'11) 2011: 1491-1496.
Casey, R. et al. “Strategies in character segmentation: a survey”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, No. 7 (1996): 690-706.
Zheng, Y. et al. “Machine printed text and handwriting identification in noisy document images”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, No. 3 (2004): 337-353.
Chou, P. A. et al. "Recognition of equations using a two-dimensional stochastic context-free grammar", in Visual Communications and Image Processing IV, ser. SPIE Proceedings Series, W. A. Pearlman, Ed., vol. 1199 (1989): 852-863.
Lu, T. et al. “A novel knowledge-based system for interpreting complex engineering drawings: Theory, representation, and implementation”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, No. 8 (2009): 1444-1457.
Lu, Z. “Detection of text regions from digital engineering drawings”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, No. 4 (1998): 431-439.
Zanibbi, R. et al. “A survey of table recognition”, Document Analysis and Recognition, vol. 7, No. 1 (2004): 1-16.
Fletcher, L. et al. “A robust algorithm for text string separation from mixed text/graphics images”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 10, No. 6 (1988): 910-918.
Lai, C. et al. “Detection of dimension sets in engineering drawings”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, No. 8 (1994): 848-855.
Abbas, A. et al. “A literature review on the state-of-the-art in patent analysis”, World Patent Information 2014: 1-11.
Caihong, J. et al. “Ontology-based Patent Abstracts' Knowledge Extraction”, New Technology of Library and Information Service, 2 (2009):23-28, Abstract at: http://search.scirp.org/paper/1468741#.VctgJ.sub.--mzJBk (accessed on Aug. 12, 2015).
Matsuo, Y. et al. “Keyword Extraction from a Single Document using Word Co-occurrence Statistical Information”, Proceedings of the Seventeenth International Florida Artificial Intelligence Research Society Conference, Miami, Florida, 2004: 392-396.
Fan, J. et al. “Automatic knowledge extraction from documents”, IBM Journal of Research and Development, vol. 56, No. 314 (2012) 5:1-5:10.
Toussaint, G. T. “The use of context in pattern recognition”, Pattern Recognition, vol. 10, No. 3 (1978): 189-204.
Basu, K. et al. "Recognition of Similar Shaped Handwritten Characters Using Logistic Regression", Document Analysis Systems 2012: 200-204.
Bhowmik, T.K. et al. "Lexicon Reduction Technique for Bangla Handwritten Word Recognition", Document Analysis Systems 2012: 195-199.
Shirai, K. et al. "Removal of Background Patterns and Signatures for Magnetic Ink Character Recognition of Checks", Document Analysis Systems 2012: 190-194.
Shiraishi, S. et al. “A Part-Based Skew Estimation Method”, Document Analysis Systems 2012: 185-189.
Richarz, J. et al. "Towards Semi-supervised Transcription of Handwritten Historical Weather Reports", Document Analysis Systems 2012: 180-184.
Ferrer, M.A. et al. “Is It Possible to Automatically Identify Who Has Forged My Signature? Approaching to the Identification of a Static Signature Forger”, Document Analysis Systems 2012: 175-179.
Shaus, A. et al. “Quality Evaluation of Facsimiles of Hebrew First Temple Period Inscriptions”, Document Analysis Systems 2012: 170-174.
Boumaiza, A. et al. “Symbol Recognition Using a Galois Lattice of Frequent Graphical Patterns”, Document Analysis Systems 2012: 165-169.
Baolan Su et al. “An Effective Staff Detection and Removal Technique for Musical Documents”, Document Analysis Systems 2012: 160-164.
Aguilar, F.D.J. et al. “ExpressMatch: A System for Creating Ground-Truthed Datasets of Online Mathematical Expressions”, Document Analysis Systems 2012: 155-159.
Roy, P.P. et al. “An Efficient Coarse-to-Fine Indexing Technique for Fast Text Retrieval in Historical Documents”, Document Analysis Systems 2012: 150-154.
Fiel, S. et al. “Writer Retrieval and Writer Identification Using Local Features”, Document Analysis Systems 2012: 145-149.
Weihan Sun et al. “Similar Fragment Retrieval of Animations by a Bag-of-Features Approach”, Document Analysis Systems 2012: 140-144.
Jain, R. et al. “Logo Retrieval in Document Images”, Document Analysis Systems 2012: 135-139.
Dutta, S. et al. “Robust Recognition of Degraded Documents Using Character N-Grams”, Document Analysis Systems 2012: 130-134.
Aiquan Yuan et al. “Offline handwritten English character recognition based on convolutional neural network”, Document Analysis Systems 2012: 125-129.
Elagouni, K. et al. “Combining Multi-scale Character Recognition and Linguistic Knowledge for Natural Scene Text OCR”, Document Analysis Systems 2012: 120-124.
Dar-Shyang Lee et al. “Improving Book OCR by Adaptive Language and Image Models”, Document Analysis Systems 2012: 115-119.
Qiu-Feng Wang et al. “Improving Handwritten Chinese Text Recognition by Unsupervised Language Model Adaptation”, Document Analysis Systems 2012: 110-114.
Rashid, S.F. et al. “Scanning Neural Network for Text Line Recognition”, Document Analysis Systems 2012: 105-109.
Khayyat, M. et al. “Arabic Handwritten Text Line Extraction by Applying an Adaptive Mask to Morphological Dilation”, Document Analysis Systems 2012: 100-104.
Garz, A. et al. “Binarization-Free Text Line Segmentation for Historical Documents Based on Interest Point Clustering”, Document Analysis Systems 2012: 95-99.
Wei Fan et al. “Local Consistency Constrained Adaptive Neighbor Embedding for Text Image Super-Resolution”, Document Analysis Systems 2012: 90-94.
Messaoud, I.B. et al. “Document Preprocessing System—Automatic Selection of Binarization”, Document Analysis Systems 2012: 85-89.
The-Anh Pham et al. “A Robust Approach for Local Interest Point Detection in Line-Drawing Images”, Document Analysis Systems 2012: 79-84.
Sharma, N. et al. “A New Method for Arbitrarily-Oriented Text Detection in Video”, Document Analysis Systems 2012: 74-78.
Bo Bai et al. “A Fast Stroke-Based Method for Text Detection in Video”, Document Analysis Systems 2012: 69-73.
Sharma, N. et al. “Recent Advances in Video Based Document Processing: A Review”, Document Analysis Systems 2012: 63-68.
Cunzhao Shi et al. “Adaptive Graph Cut Based Binarization of Video Text Images”, Document Analysis Systems 2012: 58-62.
Dong Liu et al. “A Prototype System of Courtesy Amount Recognition for Chinese Bank Checks”, Document Analysis Systems 2012: 53-57.
Yalniz, I.Z. et al. “An Efficient Framework for Searching Text in Noisy Document Images”, Document Analysis Systems 2012: 48-52.
Busagala, L.S.P. et al. “Multiple Feature-Classifier Combination in Automated Text Classification”, Document Analysis Systems 2012: 43-47.
Zhao, D. et al. “New Spatial-Gradient-Features for Video Script Identification”, Document Analysis Systems 2012: 38-42.
Gordo, A. et al. “Document Classification Using Multiple Views”, Document Analysis Systems 2012: 33-37.
Lamiroy, B. et al. “The Non-geek's Guide to the DAE Platform”, Document Analysis Systems 2012: 27-32.
Stamm, K. et al. “Attentive Tasks: Process-Driven Document Analysis for Multichannel Documents”, Document Analysis Systems 2012: 22-26.
Liwicki, M. et al. “Koios++: A Query-Answering System for Handwritten Input”, Document Analysis Systems 2012: 17-21.
Tkaczyk, D. et al. “A Modular Metadata Extraction System for Born-Digital Articles”, Document Analysis Systems 2012: 11-16.
Forcher, B. et al. “Towards Understandable Explanations for Document Analysis Systems”, Document Analysis Systems 2012: 6-10.
Lopresti, D. et al. “Adapting the Turing Test for Declaring Document Analysis Problems Solved”, Document Analysis Systems 2012: 1-5.
Vrochidis, S. et al. “Concept-based patent image retrieval”, World Patent Information 34 (2012): 292-303.
Fang, C. “Deciphering Algorithms for Degraded Document Recognition,” PhD dissertation, State Univ. of New York at Buffalo 1997: 1-211.
Nagy, G. et al. “Optical character recognition: An illustrated guide to the frontier”, Procs. Document Recognition and Retrieval VII, SPIE vol. 3967 (2000): 58-69.
Nagy, G. “Twenty years of document image analysis in PAMI”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, No. 1, (2000): 58-69.
Tassey, G. et al. “Economic impact assessment of NIST's text REtrieval conference (TREC) program”, National Institute of Standards and Technology, Gaithersburg, Maryland 2010.
Smith, R. “An overview of the tesseract OCR engine”, in Proc. Int. Conf. Document Analysis and Recognition, vol. 2, Curitiba, Brazil 2007: 629-633.
Russell, B.C. et al. “LabelMe: a database and web-based tool for image annotation”, Int. J. Computer Vision, vol. 77, No. 1-3, (2008): 157-173.
Rice, S.V. et al. “The fifth annual test of OCR accuracy”, Information Science Research Institute 1996: 1-44.
Gobeill, J. et al. “Report on the TREC 2009 experiments: Chemical IR track”, in Text Retrieval Conf. 2009.
Bosch, A. et al. “Image classification using random forests and ferns”, in ICCV 2007:1-8.
Carreras, X. et al. “Hierarchical Recognition of Propositional Arguments with Perceptrons”, In Proceedings of CoNLL-2004 Shared Task 2004.
Csurka, G. et al. “XRCE's Participation at Patent Image Classification and Image-based Patent Retrieval Tasks of the Clef-IP 2011” in: Proceedings of CLEF 2011, Amsterdam 2011.
Couasnon, B. “DMOS, a generic document recognition method: application to table structure analysis in a general and in a specific way”, Int. J. Document Analysis and Recognition, vol. 8, No. 2-3 (2006): 111-122.
Do, T-H. et al. “Text/graphic separation using a sparse representation with multi-learned dictionaries”, in Int. Conf. Pattern Recognition, Tsukuba, Japan 2012: 689-692.
Gold, E. “Language identification in the limit”, Information and Control, vol. 10 (1967): 447-474.
Zanibbi, R. et al. “Historical recall and precision: summarizing generated hypotheses”, in Proc. Int. Conf. Document Analysis and Recognition, Seoul, South Korea, vol. 1 (2005): 202-206.
Coates, A. et al. “Text detection and character recognition in scene images with unsupervised feature learning”, in Document Analysis and Recognition (ICDAR), 2011 International Conference (2011): 440-445.
Tiwari, A. et al. “PATSEEK: Content Based Image Retrieval System for Patent Database”, in Proceedings International Conference on Electronic Business, Beijing, China 2004.
Meng, L. et al. “Research of Semantic Role Labeling and Application in Patent knowledge Extraction”, Proceedings of the First International Workshop on Patent Mining and Its Applications (IPAMIN) 2014, Hildesheim 2014.
Kanungo, T. et al. “Understanding engineering drawings: A survey”, in Proc. Work. Graphics Recognition 1995: 217-228.
Kavukcuoglu, K. et al. “Learning convolutional feature hierarchies for visual recognition”, Advances in Neural Information Processing Systems 2010: 1090-1098.
Liang, J. et al. “Camera-based analysis of text and documents: A survey”, Int. J. Document Analysis and Recognition, vol. 7, No. 2-3 (2005): 84-104.
Chawla, N. et al. “SMOTE: Synthetic Minority Over-Sampling Technique”, Journal of Artificial Intelligence Research 16 (2002): 321-357.
Dori, D. et al. “Automated CAD conversion with the machine drawing understanding system: concepts, algorithms, and performance”, IEEE Trans. Syst., Man, Cybern. A, vol. 29, No. 4 (1999): 411-416.
Pradhan, S. et al. “Support Vector Learning for Semantic Argument Classification”, Machine Learning Journal. 60, 1/3 (2005): 11-39.
Wu, V. “Textfinder: an automatic system to detect and recognize text in images”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, No. 11 (1999): 1224-1229.
Moosmann F. et al. “Randomized clustering forests for image classification”, IEEE Transactions on PAMI, 30(9) (2008): 1632-1646.
Gao, M. et al. “A combined SMOTE and PSO based RBF classifier for two-class imbalanced problems”, Neurocomputing 74 (2011): 3456-3466.
Coates, A. et al. “An analysis of single-layer networks in unsupervised feature learning”, in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS) 2011: 215-223.
Karaoglu, S. et al. “Object Reading: Text Recognition for Object Recognition”, ECCV Workshops 3 (2012): 456-465.
Handley, J. et al. “Document understanding system using stochastic context-free grammars”, in Proc. Int. Conf. Document Analysis and Recognition, Seoul, South Korea 2005: 511-515.
Palmer, M. et al. “The Proposition Bank: An annotated corpus of semantic roles”, Computational Linguistics 31, 1 (2004): 71-105.
Chawla, N. et al. “SMOTEBoost: Improving prediction of the minority class in boosting”, in 7th European Conference on Principles and Practice of Knowledge Discovery in Databases 2003: 107-119.
Pradhan, S. et al. “Semantic Role Labeling Using Different Syntactic Views”, Association for Computational Linguistics Annual Meeting, Ann Arbor, Michigan 2005: 581-588.
Zhou, W. et al. “Principal visual word discovery for automatic license plate detection”, IEEE Trans. Image Process., vol. 21, No. 9 (2012): 4269-4279.
Koomen, P. et al. “Generalized Inference with Multiple Semantic Role Labeling Systems”, Proceedings of CoNLL-2005 Ann Arbor, Michigan 2005: 181-184.
Breiman, L. “Random Forests”, in Machine Learning, 45(1) 2001: 5-32.
Karatzas, D. et al. “ICDAR 2011 robust reading competition-challenge 1: reading text in born-digital images (web and email)”, in Proc. Int. Conf. Document Analysis and Recognition (ICDAR'11) 2011: 1485-1490.
Niemeijer, M. et al. “Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs”, IEEE Trans. Med. Imag., vol. 29, No. 1 (2010): 185-195.
Roller, S. et al. “A multimodal LDA model integrating textual, cognitive and visual modalities”, in Proceedings of the 2013 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Seattle, Washington 2013: 1146-1157.
Zanibbi, R. et al. “Decision-based specification and comparison of table recognition algorithms”, in Machine Learning in Document Analysis and Recognition, Berlin, Germany: Springer 2008: 71-103.
Terwiesch, C. et al. “Innovation Tournaments: Creating and Selecting Exceptional Opportunities”, Boston, MA: Harvard Business Press, 2009.
Epshtein, B. et al. “Detecting text in natural scenes with stroke width transform”, in IEEE Conf. Computer Vision and Pattern Recognition 2010: 2963-2970.
Jung, K. et al. “Text information extraction in images and video: a survey”, Pattern Recognition, vol. 37, No. 5 (2004): 977-997.
Tombre, K. et al. “Text/graphics separation revisited”, in Document Analysis Systems, ser. Lecture Notes in Computer Science, Lopresti, D.P. et al. Eds., vol. 2423. Springer 2002: 200-211.
Viola P. et al. “Rapid object detection using a boosted cascade of simple features”, in Proc. of CVPR 2001, vol. 1 (2001): 511-518.
Breiman, L. “Manual—Setting up, using and understanding random forests V4.0”, 2003: 1-33.
Nielsen, R.D., et al. “Mixing Weak Learners in Semantic Parsing”, 42nd Annual Meeting of the Association for Computational Linguistics, Barcelona, Spain 2004: 1-8.
Wagner, R. et al. “The String-to-String Correction Problem”, ACM, vol. 21, No. 1 (1974): 168-173.
Archak N. “Money, glory and cheap talk: analyzing strategic behavior of contestants in simultaneous crowdsourcing contests on topcoder.com”, in Proc. Int. Conf. World Wide Web (WWW'10) 2010: 21-30.
Chan, K.F. et al. “Error detection, error correction and performance evaluation in on-line mathematical expression recognition”, Pattern Recognition, vol. 34, No. 8 (2001): 1671-1684.
Schapire, R.E. et al. "Improved Boosting Algorithms Using Confidence-rated Predictions", Proceedings of the Eleventh annual conference on Computational learning theory, Madison, Wisconsin 1998: 80-91.
Wang, J. et al. “Classification of imbalanced data by using the SMOTE algorithm and locally linear embedding”, in 8th International Conference on Signal Processing, 3 (2006):16-20.
Wu, V. et al. “Textfinder: an automatic system to detect and recognize text in images”, IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, No. 11 (1999): 1224-1229.
Sidiropoulos, P. et al. “Content-based binary image retrieval using the adaptive hierarchical density histogram”, Pattern Recognition Journal, 44(4) 2011: 739-750.
Vrochidis, S. et al. “Towards Content-based Patent Image Retrieval; A Framework Perspective”, World Patent Information Journal, 32(2) 2010: 94-106.
Wang, H-Y. “Combination approach of SMOTE and biased-SVM for imbalanced datasets”, Proc. of the IEEE Int. Joint Conf. on Neural Networks, IJCNN 2008, Hong Kong (PRC) 2008: 22-31.
Xu, B. et al. “An improved random forest classifier for image classification”, in Information and Automation (ICIA), 2012 International Conference on IEEE 2012: 795-800.
LexisNexis, LexisNexis® PatentOptimizer™ Quick Reference Card, 54 pages, Aug. 13, 2010, retrieved from https://law.lexisnexis.com/literature/patentoptimizerquickreference.pdf on Oct. 25, 2017.
Paula J. Hane, LexisNexis Adds TotalPatent to Its Suite of Solutions, 6 pages, Sep. 17, 2007, retrieved from http://newsbreaks.infotoday.com/NewsBreaks/LexisNexis-Adds-TotalPatent-to-Its-Suite-of-Solutions-39623.asp on Oct. 25, 2017.
Related Publications (1)
Number Date Country
20170017620 A1 Jan 2017 US
Provisional Applications (2)
Number Date Country
61633523 Feb 2012 US
61537314 Sep 2011 US
Continuations (2)
Number Date Country
Parent 14979395 Dec 2015 US
Child 15281270 US
Parent 13623251 Sep 2012 US
Child 14979395 US