This invention relates to a document processing system, a document processing method, a computer readable medium and a data signal, and in particular to a user interface that uses an actual document.
According to an aspect of the invention, a document processing system includes a first acquisition section, a storage section, a second acquisition section and a data processor. The first acquisition section acquires a position specified by a user in an actual document. The storage section stores layout data of the document and content data of the document associated with the layout data. The second acquisition section acquires data in the specified position from the content data of the document stored in the storage section, based on the layout data stored in the storage section and the specified position. The data processor processes the data acquired by the second acquisition section.
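For illustration only, the following minimal Python sketch shows one way the four sections could fit together, assuming rectangular description regions and plain-text content data; the class and function names (Region, DocumentRecord, acquire_region_data) are illustrative and not part of the claimed embodiment.

```python
# Minimal sketch of the claimed data flow, assuming rectangular description
# regions and plain-text content; all names are illustrative.
from dataclasses import dataclass

@dataclass
class Region:
    region_id: str
    x: float          # left edge, in document coordinates
    y: float          # top edge
    width: float
    height: float

@dataclass
class DocumentRecord:
    layout: list[Region]        # layout data of the document
    content: dict[str, str]     # content data keyed by region_id

def acquire_region_data(record: DocumentRecord, px: float, py: float):
    """Second acquisition section: map a specified position to content data."""
    for region in record.layout:
        if (region.x <= px <= region.x + region.width
                and region.y <= py <= region.y + region.height):
            return record.content.get(region.region_id)
    return None
```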
Exemplary embodiments of the invention will be discussed in detail based on the accompanying drawings.
To realize this function, the document processing system 10 includes a computer 12 to which the projector 18 is connected. A camera 14 and the pointing device 16 are also connected to the computer 12.
The computer 12 is connected to a communication network 32 such as the Internet and can perform data communication with a server 34 connected to the communication network 32. Another computer 38 is also connected to the communication network 32 and can also perform data communication with the server 34. The computer 38 is a computer for creating document data. The computer 38 includes a printer 40, which prints document data on a print medium such as a sheet of paper to prepare the actual document 24. When creating a document, the computer 38 generates identification information of the document, that is, a document ID. The document ID may be, for example, a hash value generated based on the document data. The computer 38 assigns, to each document, information representing the document ID, and the printer 40 prints that information on the actual document 24. In the exemplary embodiment, a one-dimensional or two-dimensional bar code 26 is adopted as the information representing the document ID. The bar code 26 is printed in a corner of the actual document 24.
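As a minimal sketch of the document-ID generation described above, the following assumes SHA-256 as the hash function applied to the serialized document data; the function name is illustrative.

```python
# A sketch of generating a document ID as a hash of the document data;
# SHA-256 is an assumed choice of hash function.
import hashlib

def make_document_id(document_data: bytes) -> str:
    return hashlib.sha256(document_data).hexdigest()

doc_id = make_document_id(b"...serialized document data...")
# The ID would then be encoded as the one- or two-dimensional bar code 26
# and printed in a corner of the actual document 24.
```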
When creating the document data, the computer 38 transmits the document data to the server 34 together with the document ID. Upon reception of the document data and the document ID from the computer 38, the server 34 registers them in a document database 36. In doing so, the server 34 extracts the layout data 44 contained in the document data.
When the user gets the actual document 24 printed out by the printer 40, the user captures the bar code 26 formed in the corner of the actual document 24 with the camera 14. The captured image of the bar code 26 is input to the computer 12. Then, the computer 12 generates the document ID represented by the captured image of the bar code 26. The computer 12 transmits the generated document ID to the server 34 through the communication network 32. The server 34 reads the layout data 44, which is associated with the received document ID, from the document database 36, and returns the read layout data 44 to the computer 12.
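A sketch of the server-side registration and layout lookup, reusing the Region and DocumentRecord types from the earlier sketch and assuming an in-memory dictionary stands in for the document database 36; a real server 34 would persist these records and expose them over the communication network 32.

```python
# In-memory stand-in for the document database 36.
from dataclasses import dataclass, field

@dataclass
class DocumentDatabase:
    records: dict = field(default_factory=dict)   # document ID -> DocumentRecord

    def register(self, doc_id: str, record) -> None:
        # Registration performed when the computer 38 uploads document data.
        self.records[doc_id] = record

    def layout_for(self, doc_id: str):
        # Layout data 44 returned to the computer 12 after it decodes the bar code 26.
        record = self.records.get(doc_id)
        return record.layout if record else None
```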
The computer 12 is configured to project a calibration image (not shown), which specifies a location at which the actual document 24 is to be placed, onto the projection target region 22. The calibration image may include four L-shaped images indicating positions with which the four corners of the actual document 24 are to be aligned. The user places the actual document 24 at a predetermined position on the projection table 20, relying on the position guide of the calibration image. The computer 12 retains data for specifying the region where the actual document 24 thus guided by the position guide is placed, that is, actual-document specifying data. Furthermore, the computer 12 manages the current position of the cursor image 30. The computer 12 calculates the relative position of the current position of the cursor image 30 with respect to the region specified by the actual-document specifying data, thereby determining which position in the actual document 24 the user specifies. The computer 12 then determines which description region the user specifies, based on the acquired position and the layout data 44 received from the server 34. That is, the computer 12 determines which of the one or more description regions specified by the layout data 44 the cursor image 30 is currently located in.
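The hit test described above can be sketched as follows, assuming the actual-document specifying data is an axis-aligned rectangle in projection coordinates and the layout data 44 expresses description regions in normalized document coordinates (0 to 1); the Rect type and function names are illustrative.

```python
# Sketch of mapping the cursor image 30 to a description region of the document.
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    def contains(self, px: float, py: float) -> bool:
        return (self.x <= px <= self.x + self.width
                and self.y <= py <= self.y + self.height)

def cursor_to_document(cursor_x: float, cursor_y: float, doc_area: Rect):
    """Relative (normalized) position of the cursor within the placed document 24."""
    return ((cursor_x - doc_area.x) / doc_area.width,
            (cursor_y - doc_area.y) / doc_area.height)

def find_description_region(cursor_x, cursor_y, doc_area: Rect, layout):
    """Return the index of the description region under the cursor, if any."""
    u, v = cursor_to_document(cursor_x, cursor_y, doc_area)
    for index, region in enumerate(layout):
        if region.contains(u, v):
            return index          # region to highlight with the identification image
    return None
```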
If the cursor image 30 is positioned in any of the description regions, the computer 12 causes the projector 18 to project an image identifying that description region (a region identification image) onto the actual document 24. At this time, the projection position of the region identification image is adjusted so that the description formed on the actual document 24, which corresponds to the identified description region, falls within the region indicated by the region identification image.
That is, if the user performs the drag-and-drop operation of the pointing device 16 as described above while any of the description regions is indicated by the cursor image 30 and the region identification image 42 identifying that description region is projected onto the actual document 24, data specifying the indicated description region (description-region specifying data) and the document ID already generated from the bar code 26 are transmitted through the communication network 32 to the server 34. The server 34 acquires, from the document data stored in the document database 36 in association with the received document ID, the data indicating the description in the description region specified by the received description-region specifying data. Then, the server 34 returns the acquired data to the computer 12. The data is passed as the processing target of the program relating to the program window 28. Thus, a program such as a document processing program executes data processing using the data received from the server 34.
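A sketch of this hand-off, assuming the description-region specifying data is simply the index of the region within the layout data and reusing the DocumentDatabase sketch above; the network transport between the computer 12 and the server 34 is abstracted into a direct function call.

```python
def fetch_description_data(db, doc_id: str, region_index: int):
    """Server 34 side: look up the description data for the dropped region."""
    record = db.records.get(doc_id)
    if record is None:
        return None
    region = record.layout[region_index]
    return record.content.get(region.region_id)

def on_drop(db, doc_id: str, region_index: int, program_window_handler) -> None:
    """Computer 12 side: hand the returned data to the program of the program window 28."""
    data = fetch_description_data(db, doc_id, region_index)
    if data is not None:
        program_window_handler(data)   # e.g. insert the text into a document editor
```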
Thereafter, coordinates of the current position of the cursor image 30 are calculated in accordance with an output signal of the pointing device 16 (S105). It is determined whether or not the coordinates of the current position of the cursor image 30 are in any of the description regions of the actual document 24 (S106). If the coordinates of the current position of the cursor image 30 are not in any of the description regions, the process returns to S105. On the other hand, if the coordinates of the current position of the cursor image 30 are in any of the description regions, the projector 18 projects a region identification image, which identifies that description region, onto the description region (S107). Further, it is determined whether or not the user clicks the pointing device 16 (S108). If the user does not click the pointing device 16, the process returns to S105. On the other hand, if the user clicks the pointing device 16, the coordinates of the current position of the cursor image 30 at the timing when the clicking is released, that is, when the user releases the button of the pointing device 16, are acquired as the coordinates of an end position of the drag-and-drop operation (S109). It is determined whether or not the coordinates of the end position of the drag-and-drop operation are in the program window 28 (S110). If the coordinates of the end position of the drag-and-drop operation are not in the program window 28, the process returns to S105. On the other hand, if the coordinates of the end position of the drag-and-drop operation are in the program window 28, the description-region specifying data, which specifies the description region identified by the projected region identification image, is transmitted to the server 34, and the data of the description returned from the server 34 is received (S111). Processing of the received data is started by the program corresponding to the program window 28 (S112).
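Steps S105 through S112 can be sketched as a polling loop, assuming a hypothetical pointing-device API with cursor_position(), button_pressed() and a blocking wait_for_release(), and reusing find_description_region and fetch_description_data from the earlier sketches; none of these device calls refer to a real library.

```python
def drag_and_drop_loop(device, projector, program_window, doc_area, layout,
                       db, doc_id, handler):
    while True:
        x, y = device.cursor_position()                                   # S105
        region_index = find_description_region(x, y, doc_area, layout)    # S106
        if region_index is None:
            continue
        projector.show_region_identification(region_index)                # S107
        if not device.button_pressed():                                   # S108
            continue
        end_x, end_y = device.wait_for_release()                          # S109
        if not program_window.contains(end_x, end_y):                     # S110
            continue
        data = fetch_description_data(db, doc_id, region_index)           # S111
        if data is not None:
            handler(data)                                                 # S112
```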
Various modified embodiments of the invention are possible. For example, when the data of the description determined by the description-region specifying data is read from the document database 36, the server 34 may record that fact in association with the user of the computer 12. This makes it possible, for example, to charge the user for acquiring the data of each description. As one example of a service in which the system serves as a portal UI in a ubiquitous computing environment, the user may use a multifunction machine installed in a convenience store to purchase the electronic version of any desired photo, text, or the like published in magazines and newspapers and transfer it to his or her own mobile device.
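One way to record such an access, sketched under the assumption that the server identifies the requesting user by a user ID sent with each request; the record structure is illustrative, and a billing process could later aggregate the log per user.

```python
import datetime

access_log: list = []

def record_access(user_id: str, doc_id: str, region_index: int) -> None:
    # Each entry notes who acquired which description, and when.
    access_log.append({
        "user": user_id,
        "document": doc_id,
        "region": region_index,
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
```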
Further, a display for displaying the processing result of the computer 12 on a display screen may be connected to the computer 12. In this case, the projector 18 may further project an image indicating the position currently specified by the user onto the projection target region, and if the specified position falls outside a predetermined projection area, the program window 28 may be displayed on the display screen of the display.
Further, the data storage means may store a storage address of the content data in place of the content data itself, and the data acquisition means may acquire the content data from the storage location indicated by the storage address.
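A sketch of this modification: only a storage address (for example, a URL or file path) is kept per region, and the content data is resolved on demand through any fetch callable supplied by the caller; all names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class RegionReference:
    region_id: str
    storage_address: str      # e.g. a URL or file path to the content data

def acquire_by_address(references: dict, region_id: str, resolve):
    """`resolve` is any callable that fetches the content stored at an address."""
    ref = references.get(region_id)
    return resolve(ref.storage_address) if ref else None
```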