1. Field of the Invention
The present invention relates to an image processing device and an image processing method that generate electronic document data including two-way link information from a paper document or electronic document data.
2. Description of the Related Art
In general, a paper document and an electronic document include characters, graphics and the like. For example, there is a paper document, an electronic document or the like that includes an “object” (region 1614), an “anchor expression accompanying the object (for example, an expression such as a “figure number”, “Figure 1” or “FIG. 1”)” (region 1612) and a “text including the anchor expression” (region 1613) shown in
However, when the reader has difficulty in grasping the correspondence relationship between the “object” and the “description text for the object” in a document, the reader needs much time to read the document correctly and extra time to understand its content. Here, as an example of a paper document in which the correspondence relationship between the “object” and the “description text for the object” is difficult to grasp, an example of
As described above, in a document in which the correspondence relationship between the “object” and the “description text for the object” is difficult to grasp, the reader of such a document disadvantageously takes much time to read it, and also takes extra time to understand the content of the document.
To overcome the problem, Japanese Patent Laid-Open No. H11-066196(1999) discloses an invention in which a paper document is optically read and converted into a document that can be utilized on various computers according to the purpose of use. Specifically, an electronic document is generated by producing hypertext between figures and figure numbers. Then, when the “figure number” in the text is clicked with a mouse or the like, a figure corresponding to the “figure number” can be displayed on a screen.
However, in Japanese Patent Laid-Open No. H11-066196(1999), link information from an “anchor expression in a text” to an “object” is generated, whereas link information in the opposite direction, from the “object” to the “anchor expression in the text” or to a “description text for the object”, is not generated. Thus, it is time-consuming to search for the “description text for the object” from the “object”.
It is also time-consuming for the reader to first read the “description text for the object” and reference the “anchor expression in the text” to find the “object” and thereafter return to the “description text for the object” that has been immediately previously read. In other words, it is time-consuming to search for the position (what page, what paragraph and what line) of the “description text for the object”.
To overcome the foregoing problems, an image processing device according to the present invention includes: an input unit configured to input document image data; a region division unit configured to divide the document image data into a plurality of regions according to attributes, the divided regions including a text region, a caption region and an object region which is accompanied by the caption region; a character recognition unit configured to obtain character information by executing a character recognition process for each character within each of the text region and the caption region divided by the region division unit; an anchor expression extraction unit configured to extract, from the character information in the caption region, an anchor expression which includes a predetermined character string identifying the object region; a text search unit configured to search the character information in the text region for the anchor expression extracted by the anchor expression extraction unit; a link information generation unit configured to generate two-way link information associating an anchor expression peripheral region and an image peripheral region with each other, the anchor expression peripheral region being a region including the anchor expression found by the text search unit in the text region, the image peripheral region being a region including the object region; and a format conversion unit configured to generate electronic document data including the document image data and the two-way link information.
In the present invention, electronic document data including two-way link information between an “object” and a “description text for the object” is automatically produced, and thus the following effects can be obtained. When a reader reads a “text including an anchor expression”, that is, the “description text for the object”, and searches for the corresponding “object”, it is possible to display the “object” with a simple operation.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Preferred embodiments of the present invention will be explained below with reference to drawings.
In
An image bus I/F 212 is a bus bridge that connects the system bus 221 and an image bus 222, which transfers image data at a high speed, and that converts the data structure. The image bus 222 is formed with, for example, a PCI bus or an IEEE 1394 bus. On the image bus 222, the following devices are arranged. A raster image processor (RIP) 213 analyzes a PDL (page description language) code and expands it into a bitmap image having a specified resolution, that is, performs so-called rendering processing. When this expansion is performed, attribute information is added in units of pixels or in units of regions. This is called image region determination processing. The image region determination processing adds, for each pixel or each region, attribute information indicating the type of object, such as characters (text), lines, graphics and images. For example, an image region signal is output from the RIP 213 according to the type of object in the PDL description within a PDL code, and attribute information corresponding to the attribute indicated by its signal value is stored such that it is associated with the pixel or region corresponding to the object. Hence, the image data includes the associated attribute information. A device I/F 214 connects the scanner portion 201, which is an image input device, to the control unit 204 through a signal line 223, connects the printer portion 202, which is an image output device, to the control unit 204 through a signal line 224, and performs synchronous/asynchronous conversion of image data. A scanner image processing portion 215 corrects, processes and edits input image data. A printer image processing portion 216 performs correction, resolution conversion and the like, corresponding to the printer portion 202, on print output image data that needs to be output to the printer portion 202. An image turning portion 217 turns input image data so that the image is erect, and outputs it.
A data processing portion 218 will be described later.
The configuration and operation of the data processing portion 218 shown in
The image data scanned by the scanner portion 201 shown in
In this case, a known method can be used as a region division method. An example thereof will be explained. An input image is first binarized to generate a binary image, and the resolution of the binary image is reduced to generate a thinned-out image (reduced image). For example, when a 1/(M×N) thinned-out image is produced, the binary image is divided for every M×N pixels and, if a black pixel is present within the M×N pixels, the corresponding pixel after reduction is set to a black pixel whereas, if no black pixel is present, it is set to a white pixel; thus the thinned-out image is produced. Then, in the thinned-out image, each set of connected black pixels (coupled black pixels) is extracted, and a rectangle circumscribing the coupled black pixels is produced. When rectangles close to the size of one character image are arranged side by side, or when a rectangle whose height or width is close to the size of a character image (a rectangle of coupled black pixels in which a few characters are connected) has similar rectangles arranged near its short side, those rectangles are highly likely to be character images constituting one character string. In this case, the rectangles are coupled with each other, and a rectangle representing one character string is obtained. Since a group of rectangles representing character strings whose short sides have substantially the same length and which are spaced substantially regularly in the column direction is highly likely to be a text portion, such rectangles are coupled and a text region is extracted. A photograph region, a figure region and a table region are extracted from coupled black pixels that are larger in size than a character image. Consequently, the image is divided into, for example, regions 501 to 506 shown in
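The thinning-out and coupled-black-pixel extraction steps described above can be sketched as follows. This is a minimal illustration only: the pure-Python list representation, the 8-connected flood fill, and the function names are implementation assumptions, not part of the disclosed method.

```python
def thin_out(binary, m, n):
    """Produce a 1/(M x N) thinned-out image: a reduced pixel is black (1)
    if any pixel inside its m x n source block is black, else white (0)."""
    h, w = len(binary), len(binary[0])
    out = []
    for y in range(0, h, n):
        row = []
        for x in range(0, w, m):
            has_black = any(binary[yy][xx]
                            for yy in range(y, min(y + n, h))
                            for xx in range(x, min(x + m, w)))
            row.append(1 if has_black else 0)
        out.append(row)
    return out

def bounding_boxes(binary):
    """Extract a circumscribing rectangle (x0, y0, x1, y1) for each group
    of 8-connected black pixels (the "coupled black pixels") via flood fill."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack = [(y, x)]
                seen[y][x] = True
                x0, y0, x1, y1 = x, y, x, y
                while stack:
                    cy, cx = stack.pop()
                    x0, x1 = min(x0, cx), max(x1, cx)
                    y0, y1 = min(y0, cy), max(y1, cy)
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                boxes.append((x0, y0, x1, y1))
    return boxes
```

The circumscribing rectangles returned here would then be merged by the character-string and text-region heuristics described in the text.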
The attribute information addition portion 302 adds an attribute to each of the regions obtained by division by the region division portion 301. The processing operation of the attribute information addition portion 302 will now be explained using, as an example, input image data 500 shown in
On the other hand, when the other regions are extremely small in size, the attribute information addition portion 302 determines the regions to be “noise”. When the attribute information addition portion 302 performs white pixel outline tracking on the interior of coupled black pixels having a small pixel density and the rectangles circumscribing the white pixel outlines are regularly arranged, the attribute information addition portion 302 determines the region to be a “table”, whereas when they are not regularly arranged, it determines the region to be a “line figure (figure)”. The other regions having a high pixel density are determined to be pictures or photographs, and a “photograph” attribute is added to them. The regions to which the attributes of the “table”, the “line figure” and the “photograph” are added correspond to the “object” described above, and are characterized in that they have attributes other than characters. Furthermore, when a character region determined not to be a text is present in the vicinity of (for example, above or below) a region to which the attribute of the “table”, the “line figure” or the “photograph” is added, the attribute information addition portion 302 determines it to be a character region that describes the region of the “table”, the “line figure” or the “photograph”. Then, the attribute information addition portion 302 adds a “caption” attribute to the region. In order to identify the region (the object of the “table”, the “line figure” or the “photograph”) that the “caption” accompanies, the region to which the “caption” attribute is added and the region it accompanies are stored such that they are associated with each other. Specifically, as shown in
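The attribute determination rules above amount to a small decision procedure. The following sketch assumes illustrative threshold values (`min_size`, `density_threshold`) and a precomputed flag for whether the interior white-outline rectangles are regularly arranged; none of these names or values appear in the embodiment itself.

```python
def classify_region(width, height, pixel_density, regular_white_grid=False,
                    min_size=10, density_threshold=0.4):
    """Heuristic attribute assignment for a non-text region, following the
    rules described above. Thresholds are illustrative assumptions."""
    if width < min_size and height < min_size:
        return "noise"                      # extremely small region
    if pixel_density < density_threshold:
        # Low-density coupled black pixels: inspect interior white outlines.
        return "table" if regular_white_grid else "line figure"
    return "photograph"                     # high pixel density: picture/photo
```

A region classified as “table”, “line figure” or “photograph” by such a procedure would then be treated as an “object”, and a nearby non-text character region would receive the “caption” attribute.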
When the attribute information addition processing described above is performed, in the image data shown in
The character recognition portion 303 performs known character recognition processing on a region including a character image (that is, a region whose attribute is the “character”, the “text”, the “heading”, the “small heading”, the “caption” or the like), and associates the result as character information with a region of interest and stores it in the storage portion 211. For example, as shown in
As described above, information on the position, the size and the region attribute, information on the page, the character information on the character recognition result (character code information) and the like extracted by the region division portion 301, the attribute information addition portion 302 and the character recognition portion 303 are associated for each of the regions and are stored in the storage portion 211. For example, when an input image data example shown in
The link processing portion 304 generates link information between a region (region whose attribute is the “photograph”, the “line figure”, the “table”, the “illustration” or the like) accompanying the caption detected by the attribute information addition portion 302 and the “text including an anchor expression”. Then, the link processing portion 304 stores the generated link information in the storage portion 211. The details of the link processing portion 304 will be explained later.
The format conversion portion 305 uses information obtained from the region division portion 301, the attribute information addition portion 302, the character recognition portion 303 and the link processing portion 304 to convert the input image data 300 into the electronic document data 310. Examples of the electronic document data 310 include file formats such as SVG, XPS, PDF and Office Open XML. The converted electronic document data 310 is either stored in the storage portion 211 or transmitted to the client PC 101 through the LAN 102. The user of the document reads the electronic document data 310 with an application (for example, Internet Explorer, Adobe Reader or MS Office (registered trademarks)) installed in the client PC 101. The reading of the electronic document data 310 with the application will be described in detail later. The electronic document data 310 includes page display information (such as an image for display) by graphics or the like and content information (such as metadata) by semantic description of a character or the like.
The format conversion portion 305 mainly performs two types of processing. One is to perform, on each region, flattening and smoothing, edge enhancement, color quantization, binarization and the like, to convert the region into a specified format and to store it in the electronic document data 310. Conversion into graphic data (vector data) of vector path description or graphic data (JPEG data) of bitmap description is performed on, for example, the region whose attribute is the “character”, the “line figure” or the “table”. As a technology for conversion into vector data, a known vectorization technology can be used. Region information (position, size and attribute) stored in the storage portion 211, character information within the region and link information are associated with the converted data, and conversion into the electronic document data 310 is performed.
In the format conversion portion 305, the conversion processing method performed on each region differs depending on the region attribute. For example, the vector conversion processing is suitable for graphics, such as a character and a line figure, whose colors are composed of black and white or a few colors whereas the vector conversion processing is not suitable for an image region having gradation such as a photograph. In order for the appropriate conversion to be performed as described above and according to the attribute of each region, a corresponding table shown in
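One possible encoding of such a corresponding table is a simple attribute-to-processing mapping. The attribute names and processing labels below are illustrative assumptions chosen to match the examples in this description, not the actual table of the embodiment.

```python
# Hypothetical corresponding table: region attribute -> conversion processing.
# Vector conversion suits black-and-white or few-color graphics; regions with
# gradation (e.g. photographs) are clipped and kept as bitmap data instead.
CONVERSION_TABLE = {
    "character":   "vector",
    "line figure": "vector",
    "table":       "vector",
    "photograph":  "image_clipping",
}

def conversion_for(attribute, table=CONVERSION_TABLE):
    """Return the conversion processing for a region attribute; regions with
    no entry are left untouched in the background image part data."""
    return table.get(attribute, "background")
```

Switching tables (for example, one per intended use of the electronic document) then only requires passing a different mapping.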
For example, according to the corresponding table shown in
In the corresponding table shown in
The purpose of performing the erasing processing described above is that the image data 300 after completion of the processing of each region (after completion of the painting-out processing) can be utilized as part data on the image of the “background”. In this image data (background image) for background, parts (for example, a pixel corresponding to base within the image data 300) other than the region obtained by division by the region division processing are left. When the electronic document data 310 is described, the description is performed by superimposing graphic data (foreground image) obtained by the vector conversion processing or the image clipping processing on the background image part data (background image) and displaying it. In this way, it is possible to prevent lack of information on the background image (the color of the base) and form graphic data without redundancy.
Hence, although the image clipping processing using binarization and the image erasing processing from the image data 300 are performed on the region (character region) of the “character” attribute, the vectorization processing and the image clipping processing do not have to be performed on the regions of the other attributes. In other words, the pixels on which the processing is not performed (pixel information within the regions whose attribute is the “photograph”, the “line figure” or the “table”) are left within the background image part data, and the image parts of the “character” are described such that they are superimposed on the background image.
A plurality of corresponding tables shown in
An example of the generated electronic document data 310 is shown in
Descriptions 601 to 606 of
An anchor expression extraction portion 402 analyzes character information in a caption region accompanying the object selected by the link information supply target selection portion 401, and extracts an anchor expression from the analyzed character information. When an anchor expression is found, the anchor expression extraction portion 402 extracts the corresponding portion of the character information as the anchor expression and the remaining portions as the caption expression. The anchor expression extraction portion 402 also has the function of removing insignificant character strings (such as meaningless symbol strings) using characteristics of character codes, a dictionary and the like. This function copes with erroneous character recognition in which a decoration, a division line or an image appearing at the boundary of the text portion of a document is interpreted as characters. By storing in the dictionary multilingual character string patterns, such as figure numbers, together with the corresponding erroneous recognition patterns of character recognition, it is possible to enhance the extraction accuracy of the anchor expression and to correct the characters of the anchor expression. The caption expression can be processed in the same manner: an analysis using natural-language processing and correction of erroneous character recognition can be performed, and the function of correcting or removing symbols, character decorations and the like appearing at the boundary with the anchor expression and immediately before and after the anchor expression can also be provided.
A text search portion 403 uses the anchor expression extracted by the anchor expression extraction portion 402 to search the character information in each text region of the document and detect the same anchor expression. The text search portion 403 then specifies the corresponding region, that is, the description expression in the text that includes the extracted anchor expression and describes the object: the “description text for the object.” Here, it is possible to produce search indices for achieving a high-speed search (as technologies for producing indices and for utilizing them to achieve a high-speed search, known index producing/searching technologies can be used). Moreover, with a batch search using a large number of anchor expressions, it is also possible to achieve a high-speed search. By storing and utilizing multilingual character string patterns, such as figure numbers, and the corresponding erroneous recognition patterns of character recognition for the “description text for the object”, it is possible to enhance the search accuracy and perform correction.
A link information generation portion 404 generates link information that associates the object selected by the link information supply target selection portion 401 with the “description text for the object” searched and extracted by the text search portion 403. Specifically, the link information generation portion 404 generates the link information indicating the specified “description text for the object” from the selected “object.” At the same time, the link information generation portion 404 generates the link information of the opposite way, that is, the link information indicating the “object” from the “description text for the object” (mainly the anchor expression in the text). The generated link information is stored as link information 413 in the storage portion 211. In the present embodiment, link information associated with one way is referred to as one-way link information, and link information associated with two ways is referred to as two-way link information.
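The distinction between one-way and two-way link information can be made concrete with a small sketch. The identifier convention (`image_NN`, `text_NN`) follows the examples given later in this description; the mapping structure itself is an illustrative assumption.

```python
def generate_two_way_links(object_id, text_region_ids):
    """Build two-way link information: the object points to each
    "description text for the object" region, and each of those regions
    points back to the object."""
    links = {object_id: list(text_region_ids)}   # object -> description texts
    for rid in text_region_ids:
        links[rid] = [object_id]                 # description text -> object
    return links
```

With only the first direction stored, a reader could jump from the text to the object but not back; storing both directions is what makes the link information two-way.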
A link information collection/output portion 405 uses the link information 413 generated by the link information generation portion 404, converts it into a format that can be processed by the format conversion portion 305 and outputs it. Thus, the format conversion portion 305 generates the electronic document data 310.
A link processing control portion 406 entirely controls the link processing portion 304. Mainly, the link processing control portion 406 allocates each region of the image data 300 to appropriate processing portions 401 to 405, along with region information 411 (information on the position, the size and the attribute associated with each region) stored in the storage portion 211 of
The operation of each portion of the link processing portion 304 will be explained again in more detail with an example where processing is actually performed.
The outline of the entire processing performed by the image processing system of the first embodiment will now be explained with reference to the flowchart of
At step S701, the region division portion 301 divides one page of input image data into regions to extract the regions. For example, a region 908 is extracted from image data 901 (page 1) shown in
At step S702, the attribute information addition portion 302 adds an attribute to each region according to the type of region divided at step S701. For example, on page 1 shown in
At step S703, the character recognition portion 303 performs the character recognition processing on the region to which the attribute of the character (such as a text, a caption, a heading or a small heading) is added at step S702, associates the result with the region like character information, and stores it in the storage portion 211. For example, at step S703, the “character information” shown in
At step S704, the data processing portion 218 determines whether or not the processing at steps S701 to S703 is performed on all pages. If the processing is performed on all pages (yes at step S704), the process proceeds to step S705. If an unprocessed page is present (no at step S704), the process returns to step S701. As described above, the processing at steps S701 to S704 is performed on four pages of the image data 901 to 904 shown in
Then, at step S705, the link processing portion 304 performs the link processing for the extraction of the anchor expression, the generation of graphic data and the generation of the link information. The details of the link processing performed by the link processing portion 304 at step S705 will be described later with reference to the flowchart of
At step S706, the format conversion portion 305 converts, based on the information stored in the storage portion 211 as shown in
Here, the explanation of
The details of the link processing at step S705 in
At step S801, the link information supply target selection portion 401 references the region information 411 stored in the storage portion 211, and selects one of regions on which the link information generation processing has not been performed from the regions (regions such as a figure, a photograph and an illustration) indicating the “object.” In other words, if there is an object that has not been processed, the non-processed object is selected as an object to be processed, and the process proceeds to step S802. If there is no object or all objects have been processed, the process proceeds to step S812. For example, the photograph region 911 is first selected from the image data 901 to 904 of four pages shown in
At step S802, with respect to the object selected by the link information supply target selection portion 401, the anchor expression extraction portion 402 extracts an anchor expression and a caption expression from character information in a caption region accompanying the object. Here, the anchor expression refers to character information (character string) for identifying an object, and the caption expression refers to character information (character string) for describing an object. In the character information included in the caption region accompanying the object, the following cases are possible: a case where an anchor expression alone is described therein; a case where a caption expression alone is described therein; a case where both expressions are described therein; and a case where neither of those expressions is described therein. For example, the anchor expression is often expressed as a combination of a specific character string such as a “figure” or “Fig” with a number or a symbol. Hence, an anchor character string dictionary in which those specific character strings are registered is prepared in advance, and it is possible to identify an anchor expression (that is, an anchor character string+a number/symbol) by comparing the caption expression with the dictionary. Among character strings in the caption region, character strings other than the anchor expression are determined to be the caption expression. In other words, with respect to character information “Figure 1 AAA” in the caption region 912, the anchor expression is “Figure 1”, and the caption expression is “AAA.” Specifically, as shown in
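The comparison against an anchor character string dictionary described at step S802 can be sketched with a regular expression. The dictionary entries below are illustrative only; a real dictionary would also hold multilingual patterns and common character recognition misreads, as noted earlier.

```python
import re

# Illustrative anchor character string dictionary (assumption).
ANCHOR_STRINGS = ["Figure", "Fig.", "FIG.", "Table"]
ANCHOR_PATTERN = re.compile(
    r"(?:%s)\s*\d+" % "|".join(re.escape(s) for s in ANCHOR_STRINGS))

def split_caption(caption):
    """Split caption character information into (anchor expression,
    caption expression). Either part may be empty, covering the four
    cases described above."""
    m = ANCHOR_PATTERN.search(caption)
    if m is None:
        return "", caption.strip()       # no anchor: caption expression only
    anchor = m.group(0)                   # anchor string + number
    rest = (caption[:m.start()] + caption[m.end():]).strip()
    return anchor, rest
```

For the caption “Figure 1 AAA” this yields the anchor expression “Figure 1” and the caption expression “AAA”, matching the example in the text.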
At step S803, the link processing control portion 406 determines whether or not an anchor expression is extracted from the caption region at step S802. If the anchor expression is extracted (yes at step S803), the process proceeds to step S804 whereas, if the anchor expression is not extracted (no at step S803), the process returns to step S801. Since, in the image data shown in
At step S804, the text search portion 403 searches the character information in the text region stored in the storage portion 211 for an anchor expression identical to the anchor expression extracted by the anchor expression extraction portion 402. For example, the text search portion 403 searches the character information in the text regions 908, 910 and 913 shown in
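The search at step S804 reduces to scanning the stored character information of each text region for the anchor expression. In this sketch, the region numbers and their character information are hypothetical stand-ins for the text regions 908, 910 and 913 discussed above.

```python
def find_anchor_regions(anchor, text_regions):
    """Return the identifiers of the text regions whose character
    information contains the given anchor expression.
    text_regions: mapping of region id -> recognized character string."""
    return [rid for rid, chars in text_regions.items() if anchor in chars]
```

A production implementation could instead consult a prebuilt search index, or run one batch search over many anchor expressions, to achieve the high-speed search mentioned earlier.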
At step S805, the text search portion 403 determines whether or not an anchor expression is detected from the character information in the text region at step S804. If the anchor expression is detected (yes at step S805), the process proceeds to step S806 whereas, if the anchor expression is not detected (no at step S805), the process returns to step S801. If, at step S805, an anchor expression is detected from the text region, this text region is associated with the anchor expression, and it is stored in the storage portion. For example, as shown in
Hereinafter, in steps S806 and S807, the processing of the object selected at step S801 is performed. Also, in steps S808 and S809, the processing of the text region from which the anchor expression is detected at step S804 is performed.
At step S806, the link information generation portion 404 generates a link identifier on an object, associates it with the object selected by the link information supply target selection portion 401 and stores it in the storage portion 211. For example, as shown in
At step S807, the link information generation portion 404 generates graphic data on an object, associates the graphic data with the link identifier generated at step S806, and stores it in the storage portion 211. Here, the graphic data generated at step S807 indicates an image peripheral region including at least a figure, a table or the like within the object. For example, as shown in
At step S808, the link information generation portion 404 generates a link identifier on a text region, associates it with the text region having the “anchor expression” detected by the text search portion 403 and stores it in the storage portion 211. For example, as shown in FIG. 9B, the link information generation portion 404 generates a link identifier “text_01”, and associates the link identifier with the text region 908. If there are N text regions having the same anchor expression, the link information generation portion 404 generates N link identifiers, “text_01” to “text_N”, and associates them with the corresponding regions.
Then, at step S809, the link information generation portion 404 generates graphic data, associates the graphic data with the link identifier generated at step S808 and stores it in the storage portion 211. Here, the graphic data generated at step S809 indicates an anchor expression peripheral region including at least the anchor expression extracted at step S804. For example, graphic data (the “coordinate X”, the “coordinate Y”, the “width W” and the “height H”)=(“X14”, “Y14”, “W14” and “H14”) associated with the link identifier “text_01” shown in
At step S810, the link information generation portion 404 generates a link from the “object” to the “description text for the object.” The generated link information includes information indicating a response operation performed when the reader of the electronic document in the present embodiment takes some action on the object. The response operation refers to an operation in which, for example, when the reader clicks an object within the electronic document being read with a mouse or the like, the application moves to the page on which the description expression for the object is present, and displays and highlights the graphic data portion corresponding to the link destination in a specified color. Here, the production of the link at step S810 is related to an “action of the reader” and an “action of the application” in the link information 915 shown in
The link information 915 shown in
At step S811, the link information generation portion 404 generates a link from the “description text for the object” to the “object.” The generated link includes information indicating a response operation performed when the reader of the electronic document in the present embodiment takes some action on the “description text for the object” (mainly a region of graphic data indicating the vicinity of the anchor expression in the text). Here, the production of the link at step S811 is related to the “action of the reader” and the “action of the application” in the link information 914 shown in
The link information 914 shown in
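The pair of links produced at steps S810 and S811 can be represented as two records, each pairing a trigger region with the application's response. The field names and identifier values below are hypothetical; they merely mirror the roles of link information 914 and 915 described above.

```python
# Hypothetical encoding of the two link information records (assumption):
# 914 links the description text to the object; 915 links the object back.
LINK_INFO = {
    "914": {"trigger": "text_01", "reader_action": "click anchor expression",
            "response": {"move_to": "image_01", "highlight": True}},
    "915": {"trigger": "image_01", "reader_action": "click object",
            "response": {"move_to": "text_01", "highlight": True}},
}

def respond(link_id):
    """Return the region the application should move to and display when
    the reader acts on the trigger region of the given link record."""
    return LINK_INFO[link_id]["response"]["move_to"]
```

Because both records exist, the reader can jump from the anchor expression to the object and back again, which is exactly the two-way behavior summarized in the next paragraph.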
As described above, at steps S810 and S811, the link information from the “object” to the “description text for the object” and the link information in the opposite way, from the “description text for the object” (mainly the anchor expression in the text) to the “object” are generated. The link information generated in the present embodiment is link information of two ways, that is, two-way link information.
Thereafter, the processing at steps S802 to S811 is performed on all the objects, and, if at step S801, no unprocessed object is determined to be present, the process proceeds to step S812.
At step S812, the information stored in the storage portion 211 shown in
As described above, the explanation of
An operation performed when the reader of the document reads the electronic document data generated in the present embodiment with an application will now be described with reference to
In a conventional technology, when the reader reads the electronic document data and searches for the object indicated by the anchor expression (e.g. “Figure 1”) included in an anchor expression peripheral region 1006, a method of searching for it by pressing down the page scroll button 1002 is generally used. Another common method is to search for it by entering “Figure 1” as a search keyword. Then, the reader reads the object indicated by the anchor expression, thereafter presses down the page scroll button 1002 to return to page 1 and reads the subsequent sentences.
On the other hand, in the present invention, when the reader reads the electronic document data including the link information described above, the reader clicks the anchor expression peripheral region 1006 including the anchor expression shown in
When the image data shown in
As described above, in the present embodiment, the electronic document data including the two-way link information between the “object” and the “description text for the object” is automatically generated in a paper document including the “object” and the “description text for the object”, and thus the following effects are obtained. When the reader reads the “text including the anchor expression”, that is, the “description text for the object”, and searches for the corresponding “object”, it is possible to display the “object” with a simple operation. Moreover, by utilizing the drawing information (graphic data) highlighting the “object”, it is possible to make the position of the “object” legible. It is also possible to return to the “description text for the object” with a simple operation. Furthermore, at the time of the return to the “description text for the object”, it is possible to make the immediately previously read position (what page, what paragraph and what line) legible. Likewise, even when the “object” is first read, it is possible to display the “description text for the object” with a simple operation.
In the present embodiment, an explanation is mainly given of a case where, in a document having a plurality of pages, a page having the “object” and a page having the “description text for the object” are separated from each other. However, the present invention is not limited to this; the same effects are obtained even in a paper document in which it is difficult to understand a correspondence between the “object” and the “description text for the object.” One example is a paper document in which a page having the “object” and a page having the “description text for the object” are the same but they are located apart from each other. Another example is a paper document in which at least one of the “object” and the “description text for the object” is described as a small description (including small characters). Yet another example is a document having a complicated layout.
The first embodiment deals with a case where, as shown in
At step S1301, the link information generation portion 404 determines whether or not the number of anchor expressions extracted from the text at step S804 is two or more. If it is one (no at step S1301), the process proceeds to step S1302 whereas if it is two or more (yes at step S1301), the process proceeds to step S1303.
At step S1302, the link information generation portion 404 generates information indicating an instruction to move to a link destination on the “action of the application.” Then, the link information generation portion 404 associates the generated information with a link identifier, and stores it in the storage portion 211.
At step S1303, the link information generation portion 404 generates, as information for the “action of the application”, information indicating an instruction to display a list, associates it with candidate display information and stores it in the storage portion 211. The display of the list is information for giving an instruction to display a list of choices of destinations so that, since two or more anchor expressions for one “object” are present in the text, the reader can select which of the anchor expression positions to move to. This information on the display of the list includes information on a link identifier associated with each extracted anchor expression. In the second embodiment, the information for giving an instruction to move to the link destination is referred to as link information, the information for giving an instruction to display the list is referred to as candidate display information, and they are distinguished from each other. In other words, the key point is that, if the number of anchor expressions for one “object” is one, the link information is generated, whereas if a plurality of anchor expressions is present, the candidate display information is generated.
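The branch at steps S1301 to S1303 can be sketched as follows: a single anchor expression yields link information (“movement to the link destination”), while two or more yield candidate display information (“display of the list”). The function name and dictionary keys are hypothetical.

```python
# Sketch of steps S1301-S1303: choose between link information and
# candidate display information based on the number of anchor
# expressions extracted for one object. All names are illustrative.

def generate_action_info(link_id, anchor_expressions):
    """Return the "action of the application" record for one object."""
    if len(anchor_expressions) >= 2:               # yes at step S1301
        # Step S1303: candidate display information listing every
        # anchor expression as a choice of destination.
        return {
            "id": link_id,
            "app_action": "display_list",
            "candidates": list(anchor_expressions),
        }
    # Step S1302: ordinary link information (single destination).
    return {
        "id": link_id,
        "app_action": "move_to",
        "destination": anchor_expressions[0],
    }
```

For example, an object whose anchor expression “Figure 1” appears three times in the text would receive candidate display information with three entries.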
The result of processing according to the second embodiment and performed by the image processing device will now be described. By performing the processing according to the second embodiment, two anchor expressions “Figure 1” are extracted from the text region 1208 on page 1 shown in
On the other hand, an object 1211 is associated with the link identifier “(image_01)” by the processing at step S806, and is further associated with graphic data corresponding to the link identifier “(image_01)” by the processing at step S807. Then, since a plurality of anchor expressions is extracted from the text, at step S1303 in the flowchart of
An operation performed when the reader of the document reads the electronic document data of the second embodiment with the application will now be described with reference to
The flowchart shown in
At step S1401, when the reader clicks the anchor expression peripheral region on the application, the application displays, according to the link information, a page including a region associated with an identifier indicated by the “movement to the link destination.” Specifically, the operation is performed according to the information on the “action of the application” in the link information 1216 to 1218 shown in
At step S1402, the application temporarily holds position information on the anchor expression peripheral region selected by the reader at step S1401. In other words, when the image peripheral region is displayed at step S1401, the application holds position information that identifies which of the anchor expression peripheral regions 1216 to 1218 was selected by the reader. Specifically, the position information is information on the clicked anchor expression peripheral region (position information indicating the position of the anchor expression peripheral region, identification information for identifying the anchor expression peripheral region or the link identifier corresponding to the anchor expression peripheral region). The temporarily held position information is used in processing performed when the image peripheral region shown in
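The holding of position information at steps S1401 and S1402 can be sketched as a small piece of application state: following a link records which anchor expression peripheral region was the origin, so that a later click on the image peripheral region can return there. The class and method names below are assumptions for illustration only.

```python
# Sketch of steps S1401-S1402: the application remembers which anchor
# expression peripheral region the reader selected before jumping to the
# object. All names here are hypothetical.

class LinkNavigator:
    def __init__(self):
        self._held_position = None   # no position held until a link is followed

    def follow_link(self, anchor_region_id):
        """Step S1401/S1402: jump to the object page and hold the origin."""
        self._held_position = anchor_region_id
        return "object_page_displayed"

    def held_position(self):
        """Step S1403: reference the temporarily held position, if any."""
        return self._held_position

    def clear(self):
        """Release the held position once the reader has returned."""
        self._held_position = None
```

In this sketch the held position is a single value, matching the “immediately previous” selection described above; it is overwritten each time a link is followed.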
The flowchart shown in
At step S1403, when the reader clicks the image peripheral region, the application makes a reference to check whether or not the position information is held. If the position information is held, this indicates that the image peripheral region was displayed by the immediately previous selection of one of the anchor expression peripheral regions by the reader.
At step S1404, the application determines, based on the position information referenced at step S1403, whether or not the image peripheral region is displayed according to the link information. If the position information is present, the image peripheral region is determined to be displayed according to the link information from the anchor expression peripheral region, and the process proceeds to step S1408. If the position information is not present, the process proceeds to step S1405.
At step S1408, the application references, based on the position information, the position of the anchor expression peripheral region selected by the reader before the image peripheral region is displayed, and displays the anchor expression peripheral region corresponding to the position. A description will be given of a case where the image peripheral region 1501 shown in
Then, at step S1405, the application determines whether the information on the “action of the application” associated with the link identifier corresponding to the image peripheral region clicked by the reader is the “movement to the link destination” or the “display of the list.” As described above, the electronic document data describes the “action of the application” such that, if the number of anchor expressions is one, the “movement to the link destination” is performed, whereas if the number of anchor expressions is two or more, the “display of the list” is performed. If it is the “movement to the link destination” (step S1405; the movement to the link destination), the process proceeds to step S1406, whereas if it is the “display of the list” (step S1405; the display of the list), the process proceeds to step S1407.
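The dispatch performed at steps S1403 to S1408 can be sketched as a single decision function: if position information is held, return to the previously selected anchor expression peripheral region; otherwise branch on the stored “action of the application.” The function and the returned labels are illustrative, not part of the embodiment.

```python
# Sketch of steps S1403-S1408: decide what the application does when the
# reader clicks the image peripheral region. All names are hypothetical.

def on_image_region_click(held_position, app_action):
    """Return the operation the application performs for this click."""
    if held_position is not None:
        # Steps S1403-S1404: position information is held, so the image
        # peripheral region was reached via a link; return to the origin.
        return ("show_anchor_region", held_position)        # step S1408
    if app_action == "move_to":
        # Step S1405 -> S1406: a single anchor expression exists.
        return ("move_to_link_destination", None)
    # Step S1405 -> S1407: two or more anchor expressions exist.
    return ("display_list", None)
```

As noted at the end of the second embodiment, the order of the two checks may be reversed without changing the reachable outcomes.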
At step S1406, the application displays the text region including the anchor expression peripheral region associated with the link identifier indicated by the “movement to the link destination”, and highlights it in red so that the anchor expression peripheral region can be identified. Since the movement to the link destination is the same as in the first embodiment, its description will not be repeated.
At step S1407, the application references the link identifier included in the information on the “display of the list”, and displays, as a list, character information before and after each of the anchor expressions from the character information on the text region associated with the link identifier. For example, when the image peripheral region 1501 shown in
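The list built at step S1407, which presents the character information before and after each anchor expression, can be sketched by collecting a fixed-width snippet around every occurrence of the anchor expression in the text. The function name and the context width of 20 characters are assumptions for illustration.

```python
# Sketch of step S1407: gather the character information before and
# after each occurrence of the anchor expression so the application can
# present the occurrences as a list of choices. Names are hypothetical.

def build_candidate_list(text, anchor, context=20):
    """Return a context snippet for every occurrence of the anchor."""
    snippets = []
    start = 0
    while True:
        i = text.find(anchor, start)
        if i < 0:
            break
        # Keep up to `context` characters on each side of the anchor.
        snippets.append(text[max(0, i - context): i + len(anchor) + context])
        start = i + len(anchor)
    return snippets
```

Each snippet corresponds to one entry of the list display, letting the reader judge from the surrounding sentence which occurrence to move to.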
At step S1409, the application determines whether or not the reader selects an entry from the list displayed at step S1407. If an entry is selected, the process proceeds to step S1410, whereas if it is not selected, the process ends. For example, if the document 1503 shown in
Although, in the second embodiment described above, steps S1404 and S1405 are explained in this order, the order may be reversed (the determination at step S1405 is first performed, and if the “display of the list” is determined at step S1405, the processing at step S1404 may be performed).
The explanation of
As described above, the different types of link information are generated depending on whether the number of anchor expressions is one or two or more, and thus the following effects can be obtained. Even when the number of anchor expressions for the “object” is two or more, the reader can move, with a simple operation, from a page having the “object” to a page that is desired by the reader and that has the “description text for the object.” Furthermore, since the character information before and after each of the anchor expressions is presented in a list format, the reader can easily determine and select to which page to move. Furthermore, even when the reader moves from the page of the “description text for the object” to the page of the “object”, it is possible to return, with a simple operation, to the page of the “description text for the object” that has been immediately previously seen.
The first and second embodiments deal with a case where a paper document including the “object” and the “description text for the object” is scanned by a scanner into image data, this image data is input and electronic document data having two-way link information is produced. However, the document that is input is not limited to a paper document; it may be an electronic document. In other words, an electronic document such as SVG, XPS, PDF or Office Open XML that does not include two-way link information is input, and electronic document data having two-way link information can be produced. Furthermore, when the electronic document that is input already has region information (position, size and attribute) and character information, the processing performed in the region division portion 301, the attribute information addition portion 302 and the character recognition portion 303 is omitted, and thus it is possible to enhance the efficiency of the processing.
Although, in the second embodiment, an example of the candidate display information is the “display of the list”, it is not limited to a display in a list format. For example, a “message display” or an “error display” that indicates a plurality of choices of destinations may be used.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer, for example, via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2010-088657, filed Apr. 7, 2010, which is hereby incorporated by reference herein in its entirety.
Number | Date | Country | Kind |
---|---|---|---|
2010-088657 | Apr 2010 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
5204947 | Bernstein et al. | Apr 1993 | A |
5832476 | Tada et al. | Nov 1998 | A |
5892843 | Zhou et al. | Apr 1999 | A |
6178434 | Saitoh | Jan 2001 | B1 |
6353840 | Saito et al. | Mar 2002 | B2 |
7170647 | Kanatsu | Jan 2007 | B2 |
7287222 | Takata et al. | Oct 2007 | B2 |
7298900 | Kanatsu | Nov 2007 | B2 |
7340092 | Tanaka et al. | Mar 2008 | B2 |
7382939 | Kanatsu | Jun 2008 | B2 |
7715625 | Tagawa et al. | May 2010 | B2 |
7965892 | Kanatsu | Jun 2011 | B2 |
20010042083 | Saito et al. | Nov 2001 | A1 |
20050027745 | Sohma et al. | Feb 2005 | A1 |
20050232484 | Tagawa et al. | Oct 2005 | A1 |
20070047813 | Simske et al. | Mar 2007 | A1 |
20070118794 | Hollander et al. | May 2007 | A1 |
20080080769 | Kanatsu | Apr 2008 | A1 |
20090290192 | Kosaka | Nov 2009 | A1 |
20090324065 | Ishida et al. | Dec 2009 | A1 |
20090324080 | Yamazaki et al. | Dec 2009 | A1 |
Number | Date | Country |
---|---|---|
1677435 | Oct 2005 | CN |
H11-066196 | Mar 1999 | JP |
2005-135164 | May 2005 | JP |
Entry |
---|
Office Action issued on Aug. 31, 2012, in counterpart CN application No. 20110083039.0. |
Number | Date | Country | |
---|---|---|---|
20110252315 A1 | Oct 2011 | US |