The exemplary embodiments described herein relate to an added image processing system, an image forming apparatus, and an added image getting-in method.
A paper document printed by an image forming apparatus may be written on by a user. Techniques have been proposed for scanning a document written on by a user in this way and extracting, from the scanned image, the image of the written portion for use.
As one such technique, the art disclosed in U.S. Patent Application Publication No. 2006/0044619 may be cited. In U.S. Patent Application Publication No. 2006/0044619, when an electronic document is printed, information identifying the electronic document is given to the paper document; the written image is then extracted, and a document reflecting the entry to the electronic document, such as a document containing only the writing made on the paper document or the original document, can be selected and printed. Further, Japanese Patent Application Publication No. 2006-65524 describes giving related information and access authority information of each entry person to an entry in a paper document, thereby preparing a document in which only a part of the entries is left.
However, in U.S. Patent Application Publication No. 2006/0044619, the entry in the paper document is related to the document itself, so that the entry cannot be carried into another document that reuses a part of the text of that document. The reusability of the entry is therefore restricted.
Furthermore, if the text and the entry are merely related to each other, then when the same text appears many times in a document, the entry is applied to all occurrences. That is, among identical texts in the document, the entry cannot be added only to the occurrence intended by the user, so the reusability of the entry lacks convenience.
An aspect of the present disclosure relates to an added image processing system, containing: a document storing portion configured to store a document file which is electronic information; an added image obtaining portion configured to obtain, as an added image, a difference found when comparing a scanned image obtained by scanning a paper document with the document file that is stored in the document storing portion and identified on the basis of the scanned image; a corresponded text obtaining portion configured to obtain a text corresponding to the added image obtained by the added image obtaining portion; a text metadata obtaining portion configured to obtain text metadata of the corresponded text; an added image storing portion configured to store the corresponded text, the text metadata, and the added image in relation to each other; an added image getting-in portion configured to add, on the basis of the text metadata, the added image stored in the added image storing portion to a new document file; and a text metadata selecting portion configured to select an attribute to be considered when the added image getting-in portion adds the added image to the new document file.
Further, an aspect of the present disclosure relates to an added image processing system, containing: a document storing memory to store a document file which is electronic information; a scanned image memory to store a scanned image obtained by scanning a paper document; an added image storing memory to store, as an added image, a difference found when comparing the scanned image with the document file of the document storing memory identified on the basis of the scanned image, and to store the text metadata of the corresponded text corresponding to the added image and the added image in relation to each other; and a controller to control, on the basis of the text metadata, addition of the added image stored in the added image storing memory to a new document file and to select an attribute to be considered when adding the added image.
Further, an aspect of the present disclosure relates to an added image getting-in method, containing: storing a document file which is electronic information; obtaining a scanned image of a scanned paper document; obtaining, from the stored document files, a document file corresponding to the scanned image as an originated print document file; obtaining, as an added image, a difference found when comparing the originated print document file with the scanned image, and obtaining a text in the originated print document file corresponding to the added image; storing the text metadata of the corresponded text and the added image in relation to each other; selecting an attribute to be considered when adding the added image to a document file; and adding the stored added image to the document file on the basis of the selected attribute and the text metadata.
Hereinafter, the embodiments will be explained with reference to the drawings.
First Embodiment
The first embodiment will be explained by referring to
The added image processing system is composed of an image forming apparatus 1, a document administration server 2, and a client PC 3. These units are connected by a network 4 and exchange information.
The image forming apparatus 1 includes a printer portion 11 for printing a document and a scanned image obtaining portion 12 for scanning a paper document and obtaining a scanned image. The document administration server 2 includes a document storing portion 13, an originated print document file obtaining portion 14, an added image obtaining portion 15, a corresponded text character string obtaining portion 16, a text metadata obtaining portion 17, an added image storing portion 18, a text metadata selecting portion 19, and an added image getting-in portion 20. The client PC 3 has a document browsing portion 21.
Next, the processing portions included in the document administration server 2 and the client PC 3 will be explained.
The document storing portion 13 stores a document file, which is electronic information, together with metadata such as an ID for uniquely identifying the document file, the creator of the document file, the creation date, and categories.
The originated print document file obtaining portion 14 obtains the originated print document file, which is the document file from which the scanned image obtained by the scanned image obtaining portion 12 originates.
The added image obtaining portion 15 compares the scanned image obtained by the scanned image obtaining portion 12 with the originated print document file obtained by the originated print document file obtaining portion 14 and obtains the differing portion as an added image.
The corresponded text character string obtaining portion 16 identifies the text character string of the originated print document file to which the added image obtained by the added image obtaining portion 15 corresponds and obtains the identified character string. The text metadata obtaining portion 17 then analyzes the metadata included in the text character string obtained by the corresponded text character string obtaining portion 16 and obtains the analyzed metadata.
The added image storing portion 18 stores the added image obtained by the added image obtaining portion 15 together with the corresponded text character string and its text metadata. The text metadata selecting portion 19 enables the added image to be applied only to the text intended by the user. The added image getting-in portion 20 can then, when the document data contains a text character string to which the added image is related, add the added image stored in the added image storing portion 18 to the document file.
The document browsing portion 21 is a portion for presenting the information stored in the document storing portion 13 and the added image storing portion 18 to the user.
An added image can be added to a document file using the respective processing portions explained above. The detailed flow up to the addition of the added image to the document file is described below.
The user accesses the document administration server 2 from the client PC 3 and can refer to the list of documents stored in the document storing portion 13 on the display of the client PC 3. The user then designates the document file to be printed from the document list at the client PC 3. The document file designated by the user is printed by the printer portion 11 of the image forming apparatus 1, and a paper document is output.
Here, when the document file stored in the document storing portion 13 is printed, information capable of identifying the printed document file, such as the file name of the target document file, the storing folder, and the printed page range, is converted to a code such as a bar code, added to the paper document, and output. When the paper document is scanned, the bar code allows the originated print document file to be identified.
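For illustration only, the round trip from document-identifying information to a code payload and back can be sketched as follows; the JSON field names and the base64 encoding are assumptions of this sketch, not the format actually used by the embodiment:

```python
import base64
import json

def encode_document_id(file_name, folder, page_range):
    # Pack the identifying fields into a compact payload that a
    # bar code generator could render onto the printed page.
    # (Field names are illustrative, not the embodiment's format.)
    payload = json.dumps(
        {"file": file_name, "folder": folder, "pages": page_range},
        separators=(",", ":"),
    )
    return base64.b64encode(payload.encode("utf-8")).decode("ascii")

def decode_document_id(token):
    # Recover the identifying fields from a scanned bar code payload.
    payload = base64.b64decode(token.encode("ascii"))
    return json.loads(payload)

token = encode_document_id("design_memo.doc", "/projects/alpha", "1-3")
info = decode_document_id(token)
```

A bar code generator would render the returned token onto the page at print time; decoding the scanned token recovers the identifying fields.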
The user can write on the paper document printed in this way using a writing tool. For example, the document file D1 shown in
Next, the process of extracting the added image from the paper document to which the handwritten postscript was added will be explained using the flow chart shown in
First, at ACT 101, the scanned image obtaining portion 12 obtains the scanned image. Here, the scanned image of the paper document to which the postscript of the handwritten image shown in
Next, at ACT 102, the originated print document file obtaining portion 14 obtains the document file that is the origin of the paper document scanned at ACT 101 (hereinafter referred to as the originated print document file). When the document shown in
As one concrete method for obtaining the originated print document file, reading the bar code that identifies the document file and is recorded on the paper document may be cited. This method is enabled, as mentioned above, by adding the bar code for identifying the document file when the paper document is printed.
Further, when no bar code is given to the paper document, the originated print document file obtaining portion 14 may obtain the document file closest to the scanned image using similar image retrieval executed by the document storing portion 13. Alternatively, the originated print document file obtaining portion 14 may permit the user to directly designate the originated print document file from among the document files stored by the document storing portion 13. In this case, the originated print document file obtaining portion 14 presents the list of document files stored in the document storing portion 13 to the user and provides a UI (user interface) for the selection.
At ACT 103, the originated print document file obtaining portion 14 judges whether the originated print document file of the scanned image obtained in this way is stored in the document storing portion 13. When the originated print document file of the scanned image is not stored in the document storing portion 13 (NO at ACT 103), the extraction process of the added image is finished.
When the originated print document file is judged to be stored in the document storing portion 13 (YES at ACT 103), the process goes to ACT 104, and the added image obtaining portion 15 compares the scanned image with the originated print document file and extracts the image added to the paper document as an added image.
That is, the added image obtaining portion 15 compares the scanned image obtained at ACT 101 with the originated print document file obtained at ACT 102 and detects a difference (ACT 104). The difference detected here is treated as an added image. In this case, the added image obtaining portion 15 compares the image shown in
Next, at ACT 105, the added image obtaining portion 15 judges whether there is an added image. When no difference between the scanned image and the originated print document file is detected at ACT 104, the added image obtaining portion 15 judges that there is no added image (NO at ACT 105) and finishes the added image extracting process.
When a difference between the scanned image and the originated print document file is detected at ACT 104, it is judged that there is an added image (YES at ACT 105), and the process goes to ACT 106.
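The difference detection of ACT 104 and the judgment of ACT 105 can be sketched on a simplified binary-image model; real scans would require alignment and noise handling that this illustration omits:

```python
def extract_added_image(original, scanned):
    # Mask of pixels present in the scan but not in the original
    # rendering: a crude stand-in for the difference detected by
    # the added image obtaining portion (1 = ink, 0 = blank).
    return [
        [1 if s and not o else 0 for o, s in zip(orow, srow)]
        for orow, srow in zip(original, scanned)
    ]

def has_added_image(mask):
    # ACT 105: decide whether any difference was found at all.
    return any(any(row) for row in mask)

original = [[1, 0], [0, 0]]
scanned = [[1, 1], [0, 0]]
mask = extract_added_image(original, scanned)
```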
At ACT 106, the corresponded text character string obtaining portion 16 obtains the text character string in the originated print document file corresponding to the added image extracted at ACT 104. In the added image 501 of the image shown in
From this process, the text character string "Trial system" is judged to correspond to the added image 501. The corresponded text character string obtaining portion 16 performs such a process for all the added images extracted at ACT 104 and obtains the text character strings (corresponded text character strings) corresponding to the added images. For the added image 502 as well, the underlined portion is similarly detected, and the corresponded text character string "XML" can be extracted from the originated print document file.
Further, for the added image 501 and the added image 502, the underline is extracted and the corresponded text character string is obtained. However, instead of an underline, a circle mark enclosing the text character string may be detected to obtain the corresponded text character string. Further, a threshold value for the distance between the added image and a text character string may be set; if the distance between the added image and the text character string is equal to or smaller than the threshold value, the text character string may be judged to be the corresponded text character string for the added image.
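The distance-threshold variant described above can be sketched as follows, with bounding boxes given as `(x0, y0, x1, y1)` tuples; the center-to-center distance measure is an assumption of this sketch:

```python
def center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def nearest_text(added_box, text_boxes, threshold):
    # Relate an added image to the closest text character string,
    # but only if it lies within the distance threshold; otherwise
    # no corresponded text character string is obtained.
    ax, ay = center(added_box)
    best, best_d = None, None
    for string, box in text_boxes.items():
        tx, ty = center(box)
        d = ((ax - tx) ** 2 + (ay - ty) ** 2) ** 0.5
        if best_d is None or d < best_d:
            best, best_d = string, d
    return best if best_d is not None and best_d <= threshold else None

texts = {"Trial system": (0, 0, 100, 10), "XML": (0, 200, 30, 210)}
```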
Next, at ACT 107, the corresponded text character string obtaining portion 16 judges whether a text character string corresponding to the added image could be obtained at ACT 106. If no corresponded text character string corresponding to any added image is obtained (NO at ACT 107), the corresponded text character string obtaining portion 16 finishes the added image extracting process. If even one corresponded text character string corresponding to an added image can be obtained (YES at ACT 107), the process goes to ACT 108. Among the added images obtained at ACT 104, any added image whose corresponded text character string could not be obtained at ACT 106 is ignored in the subsequent processing.
At ACT 108, the text metadata obtaining portion 17 obtains the metadata of the corresponded text character string obtained at ACT 106. As one kind of metadata of the corresponded text character string, layout attributes such as "Heading", "Text", and "Header" may be cited. The layout attributes are judged from the size and position of the text character string in the document file. For example, if the text character string has a large font size and is located in the upper part of the page, the text character string is judged to be a "Heading".
As other metadata of the corresponded text character string, items such as the "storing folder" indicating the folder storing the originated print document file, the "creator" who prepared the document file, or the "category" of the document decided by the user may be obtained.
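The layout-attribute judgment of ACT 108 can be sketched as a simple heuristic; the numeric cut-offs and the coordinate convention (y measured from the top of the page) are illustrative assumptions, not values fixed by the embodiment:

```python
def layout_attribute(font_size, y_position, page_height, body_size=10.5):
    # Judge the layout attribute from the size and position of the
    # text character string, as the text metadata obtaining portion
    # might. Cut-offs below are assumptions of this sketch.
    if font_size > 1.5 * body_size and y_position < 0.2 * page_height:
        return "Heading"  # large font near the top of the page
    if y_position < 0.05 * page_height:
        return "Header"   # ordinary font in the topmost margin
    return "Text"         # everything else is body text
```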
The extraction of the added image added to the paper document is performed in the aforementioned flow.
Next, the process of adding the added image stored in the added image storing portion 18 to a new document file will be explained.
The user can browse the document files stored in the document storing portion 13 of the document administration server 2 through the document browsing portion 21 of the client PC 3. For example, a document browsing application having a screen as shown in
The document browsing application shown in
The display in
Further, if a print button 909 is clicked, the document file under browsing can be printed by the printer portion 11. If an added image getting-in button 910 is clicked, the screen relating to the process of adding the added image stored in the added image storing portion 18 to the document file is displayed.
Next, the control for text metadata selection and getting-in performed by the text metadata selecting portion 19 and the added image getting-in portion 20 will be explained. If the added image getting-in button 910 shown in
In
An example is shown in
The added image 1201 is an added image indicated by an added image storing format 801 shown in
In this case, on the evaluation metadata item selecting screen shown in
Similarly, the added image 1202 is an added image indicated by an added image storing format 802 shown in
Here, in the added image storing portion 18, as shown in
Further, when the added image is a handwritten character string, the added image storing portion 18 has a text character string conversion portion for converting the handwritten character string, from its character information, into a text character string of the same kind as the text of the originated print document file. Instead of the handwritten added image, the added image converted to a text character string may be added. As an example, the document file obtained when the handwritten added images 1201 and 1202 shown in
By use of the embodiment described above, an added image corresponding to a text character string in a document file can be added to the document file.
The added image corresponds to a text character string in the document, so that when the text character string corresponding to the added image is inserted into a document different from the one from which the added image was extracted, the added image can be attached to that text character string. Further, because the added image is related to the text metadata of the corresponded text character string, even when the same text appears many times in the document, the added image can be added only to the occurrence consistent with the text metadata designated by the user. In addition, the added image is inserted only when the category of the document and the category of the added image coincide, so insertion of an added image unrelated to the category of the document can be prevented. That is, the added image can be added only to the text intended by the user, and the reusability of the added image is improved.
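The metadata-filtered getting-in described above can be sketched as follows; the dictionary shapes used for text runs and stored added images are assumptions of this sketch:

```python
def attach_added_images(document_runs, stored_images, selected_attrs):
    # For each text run in the new document, attach every stored
    # added image whose corresponded text matches the run and whose
    # metadata agrees on all attributes the user selected.
    result = []
    for run in document_runs:
        attached = [
            img["image"]
            for img in stored_images
            if img["text"] == run["text"]
            and all(img["meta"].get(a) == run["meta"].get(a)
                    for a in selected_attrs)
        ]
        result.append({"text": run["text"], "added": attached})
    return result

runs = [
    {"text": "XML", "meta": {"layout": "Heading"}},
    {"text": "XML", "meta": {"layout": "Text"}},
]
stored = [{"text": "XML", "meta": {"layout": "Text"}, "image": "underline.png"}]
out = attach_added_images(runs, stored, ["layout"])
```

With the "layout" attribute selected, only the body-text occurrence of "XML" receives the underline, even though the heading contains the same text.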
Further, in this embodiment, among the added images extracted from a scanned paper document, an added image judged to have no corresponded text character string is ignored in the subsequent processing. However, an added image having no corresponded text character string may instead be stored in relation to the metadata of the document file itself. If, instead of a corresponded text character string, the added image is stored in relation to its position information in the document file, an added image having no corresponded text character string can also be used.
Second Embodiment
Next, the second embodiment will be explained by referring to
Hereinafter, to the same portions as those of the first embodiment, the same numerals are assigned and only the characteristic portions of this embodiment will be explained.
In this embodiment, the processing portions included in the document administration server 2 of the first embodiment are all included in the image forming apparatus 1.
The added image processing system is composed of the image forming apparatus 1 and the client PC 3, and these units exchange information via the network 4.
The processing portions having the same names as those of the first embodiment bear the same functions, respectively. The image forming apparatus 1, similarly to the first embodiment, includes the printer portion 11 and the scanned image obtaining portion 12, and furthermore the document storing portion 13, the originated print document file obtaining portion 14, the added image obtaining portion 15, the corresponded text character string obtaining portion 16, the text metadata obtaining portion 17, the added image storing portion 18, the text metadata selecting portion 19, and the added image getting-in portion 20. The client PC 3, similarly to the first embodiment, has the document browsing portion 21.
The added image extracting process from the scanned paper document and the added image getting-in process to the document file are performed in the same flow by the same processing portions as in the first embodiment. In this embodiment, the image forming apparatus 1 includes the processing portions that the document administration server 2 includes in the first embodiment, so the scanned image of the paper document read by the image forming apparatus 1 does not need to be sent to a server via the network and is processed within the image forming apparatus 1. Further, when printing a document file for which the added image getting-in process has been performed, there is no need for a server to communicate with the image forming apparatus 1 via the network.
Further, in this embodiment, the document browsing portion 21, with which the user browses the data stored in the document storing portion 13 and the added image storing portion 18, is included in the client PC 3, though the document browsing portion may instead be included in the image forming apparatus 1. This enables, for example, the data stored in the document storing portion 13 and the added image storing portion 18 to be displayed on the control panel of the image forming apparatus 1, allowing the user to instruct printing and to instruct the added image getting-in process for a document file.
Third Embodiment
Next, the third embodiment will be explained by referring to
Hereinafter, to the same portions as those of the first and second embodiments, the same numerals are assigned and only the characteristic portions of this embodiment will be explained.
A block diagram of the processing portions included in the added image processing system of the third embodiment is shown in
The selection of the added image getting-in method will be explained concretely by referring to the flow chart shown in
If the user clicks the added image getting-in button 910 on the display screen by the document browsing application shown in
If the evaluation metadata item is selected on the evaluation metadata item selecting screen and then an OK button 1005 is clicked, the added image getting-in method selecting screen shown in
Next, the added image getting-in methods of “Overwrite”, “Insert”, and “Mark” which are shown as an example in
“Overwrite”, as shown in
"Insert" is a method in which, when the added image is added, the image in the document file that would come under the added image and be hidden is shifted and displayed, thereby eliminating portions of the document file that would be hidden by the added image. Namely, as shown in
Next, in the "Mark" method, as shown in
As described in this embodiment, if the user can select the added image getting-in method, an added image getting-in format suited to the way the user uses the document file can be selected.
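On a simplified line-based document model, the three getting-in methods can be sketched as follows; the exact rendering of each method in the embodiment is richer than this illustration, and the marker text is an assumption:

```python
def get_in(lines, index, added, method):
    # Apply an added image to a line-based document model using one
    # of the three getting-in methods. `added` stands in for the
    # added-image content; the line model is a simplification.
    doc = list(lines)
    if method == "Overwrite":
        doc[index] = added            # added image covers the content beneath
    elif method == "Insert":
        doc.insert(index, added)      # underlying content shifts, nothing hidden
    elif method == "Mark":
        doc[index] = doc[index] + " *"  # leave only a mark where writing exists
    else:
        raise ValueError("unknown getting-in method: " + method)
    return doc
```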
This application is based upon and claims the benefit of priority from U.S. provisional application 61/147268, filed on Jan. 26, 2009, the entire contents of which are incorporated herein by reference. This application is also based upon and claims the benefit of priority from Japanese Patent Application No. 2009-231172, filed on Oct. 5, 2009, the entire contents of which are incorporated herein by reference.