The present invention relates to a technique to embed information into image data.
Electronic data formats include formats, such as PDF (Portable Document Format) standardized by the ISO, that are capable of embedding an object, such as a moving image and sound, into a file. Embedment of an object can be performed by an application, running on a personal computer (PC), that is compatible with the electronic file.
In recent years, electronic files are frequently generated in an apparatus other than a PC, such as an MFP (Multi Function Peripheral) including a scan function to optically read a document. For electronic files generated in the MFP etc., it is therefore desired to enable association with data, such as a moving image and sound, by some method.
As to this point, for example, Japanese Patent Laid-Open No. 2008-306294 discloses a method for attaching image data generated by an image processing apparatus to an electronic mail together with moving image data by utilizing the file attachment function of electronic mails.
At present, a PC is necessary separately from an image processing apparatus in order to embed a moving image file etc. into image data obtained by scanning. Further, a compatible application that runs on the PC (in the case where the file format of the image data is PDF, Acrobat etc.) is also necessary. Consequently, a user is required to perform a procedure that takes time and effort. Specifically, the user is required to perform the following task. First, the user generates image data by scanning a document in the image processing apparatus and sends the image data to an arbitrary PC. On the PC, the user opens the received image data using a compatible application, specifies a moving image file etc. to be embedded, and embeds it into the image data. Then, the user transmits the image data into which the moving image file is embedded from the PC to a target destination.
Further, the GUI and the input I/F of a general image processing apparatus, such as an MFP, are not as developed as those of a PC. Consequently, in an attempt to achieve the above-mentioned series of tasks with the image processing apparatus alone, there is a problem that it is difficult to perform the detailed operation of specifying an area at the time of embedding a moving image file etc.
An image processing apparatus according to the present invention includes a unit configured to optically read a document and to digitize it in accordance with a predetermined file format, an area determining unit configured to determine whether there is an area into which an object can be embedded in image data obtained by the digitization, and a unit configured to, in a case where the area determining unit determines that there is the area into which an object can be embedded, embed an image representing an object into the area.
According to the present invention, it is made possible to embed data, such as a moving image file, into image data generated by an image processing apparatus by a simple operation in the image processing apparatus.
Further features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Hereinafter, an aspect for executing the present invention is explained using the drawings.
An MFP 100 includes a CPU 101, a RAM 102, a storage unit 103, a GUI 104, a reading unit 105, a printing unit 106, and a communication unit 107, and is connected with another external device, such as a PC (not shown schematically), via a network 200.
The CPU 101 totally controls each unit by reading control programs and executing various kinds of processing.
The RAM 102 is used as a temporary storage area, such as a main memory and a work area, of the CPU 101.
The storage unit 103 is used as a storage area of programs read onto the RAM, various kinds of settings, files, etc.
The GUI 104 includes a touch panel LCD display device etc., displays various kinds of information, and receives inputs of various kinds of operation instructions (commands).
The reading unit 105 optically reads a document set on a document table, not shown schematically, and generates image data (electronic file) in a predetermined format.
The printing unit 106 forms an image on a recording medium, such as paper, by using generated image data etc.
The communication unit 107 performs communication with an external device, such as a PC, via the network 200, such as a LAN.
A main bus 108 is a bus that connects each unit described above.
At step 201, the CPU 101 determines whether there is a scan request from a user via the GUI 104. In the case where there is a scan request, the procedure proceeds to step 202. On the other hand, in the case where there is no scan request, the CPU 101 stands by until a scan request is made.
At step 202, the CPU 101 sets a file format at the time of digitization and its transmission destination. This setting is performed in accordance with selection of a user made on a screen for selecting a file format (File format selection screen) and a screen for setting a transmission destination of generated image data (Transmission destination setting screen) displayed on the GUI 104.
At step 203, the CPU 101 instructs the reading unit 105 to read (scan) a document and the reading unit 105 scans the document and generates image data in accordance with a file format selected via the GUI 104.
At step 204, the CPU 101 determines whether the file format of the generated image data is a file format capable of embedding an object. This format determination is performed by, for example, referring to a determination table (a table that associates each file format with information, such as a flag, indicating whether embedment can be performed) stored in the storage unit 103.
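As a rough illustration, such a determination table could be modeled as a simple lookup (a minimal sketch in Python; the entries other than PDF are assumptions, since the text does not enumerate the formats actually stored in the storage unit 103):

```python
# Sketch of the format determination at step 204. The table below mirrors
# the determination table stored in the storage unit 103; only the PDF
# entry is confirmed by the text, the others are assumed examples.
EMBEDDABLE_FORMATS = {
    "PDF": True,    # PDF can contain embedded objects
    "XPS": True,    # assumed example entry
    "JPEG": False,  # plain raster formats cannot embed objects
    "TIFF": False,  # assumed example entry
}

def can_embed(file_format: str) -> bool:
    """Return True if the selected file format supports object embedment."""
    return EMBEDDABLE_FORMATS.get(file_format.upper(), False)
```

An unknown format defaults to "cannot embed", which matches the flowchart's behavior of skipping the embedment branch when the format check fails.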
At step 205, the CPU 101 determines whether a user has given instructions to embed an object into the generated image data. The user's instructions whether to embed the object into the image data are given via the Embedment setting screen displayed on the GUI 104.
At step 206, the CPU 101 performs area separation processing on the image data obtained by scanning. The area separation processing is a technique to separate image data, for each attribute, into a character area, a figure area, an image area, another area (such as a blank area or other area in which color does not change or changes only slightly, that is, an area in which the amount of change is equal to or less than a fixed value), etc.
At step 207, the CPU 101 determines, for each page included in the image data, whether there exists an area into which an object can be embedded (hereinafter, referred to as an “embeddable area”) among the areas extracted by the area separation processing. This area determination is performed, for example, based on whether there exists an area of the kind specified in advance as embeddable, such as the above-described area in which the amount of change is small or a blank area. In the case where there are one or more pages determined to have an embeddable area, the procedure proceeds to step 208. On the other hand, in the case where it is determined that there is no page having an embeddable area, the procedure proceeds to step 216.
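The blank-area criterion used at steps 206 and 207 (an area in which the amount of change is equal to or less than a fixed value) can be sketched as follows. This is a minimal illustration, assuming the page is subdivided into square tiles; the tile size and the change threshold are illustrative assumptions, as the embodiment does not specify how the separation is implemented:

```python
def find_embeddable_tiles(page, tile=4, max_change=8):
    """Mark tiles whose pixel-value change is at most `max_change`
    (blank or nearly uniform areas), following the criterion of
    step 207. `page` is a 2D list of grayscale values; `tile` and
    `max_change` are assumed parameters for illustration."""
    h, w = len(page), len(page[0])
    flags = []
    for ty in range(0, h, tile):
        row = []
        for tx in range(0, w, tile):
            # Collect all pixel values in this tile.
            vals = [page[y][x]
                    for y in range(ty, min(ty + tile, h))
                    for x in range(tx, min(tx + tile, w))]
            # A tile is embeddable when its value range is small.
            row.append(max(vals) - min(vals) <= max_change)
        flags.append(row)
    return flags
```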
At step 208, the CPU 101 sets a page of the generated image data into which an object is embedded (embedment destination of an object). This setting is performed in accordance with selection of a user made on a screen for specifying an embedment target page (Embedment page setting screen) displayed on the GUI 104.
At step 209, the CPU 101 sets an object to be embedded (target object) into the set page. This setting is performed in accordance with selection of a user made on a screen for specifying a moving image file, a sound file, etc., to be embedded (Embedment object setting screen) displayed on the GUI 104.
At step 210, the CPU 101 compares the embeddable area extracted at step 206 with the area of the image that represents the target object set at step 209 and includes a reproduction button for the object (hereinafter, referred to as an “object image”). Then, the CPU 101 determines whether there is sufficient space for embedding the object image within the embeddable area. This determination is performed by, for example, sequentially checking, starting from the end of the embeddable area, whether a rectangular area of the same size as the object image (for example, 640×480 pixels) is included in the embeddable area. In the case where it is determined that there is sufficient space for embedding the object image within the embeddable area, the procedure proceeds to step 212. On the other hand, in the case where it is determined that there is not sufficient space, the procedure proceeds to step 211.
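The fit check at step 210 can be sketched as a scan over a boolean mask of the embeddable area (a minimal illustration; the mask representation and the top-left scan order are assumptions, since the text only says the check proceeds sequentially from the end of the embeddable area):

```python
def find_embed_position(mask, obj_w, obj_h):
    """Return the first (x, y) at which an obj_w x obj_h rectangle
    (the object image) fits entirely inside the embeddable area, or
    None when no such position exists (step 210). `mask` is a 2D list
    of booleans, True where embedment is possible."""
    h, w = len(mask), len(mask[0])
    for y in range(h - obj_h + 1):
        for x in range(w - obj_w + 1):
            # Accept the position only if every cell under the
            # candidate rectangle is embeddable.
            if all(mask[y + dy][x + dx]
                   for dy in range(obj_h) for dx in range(obj_w)):
                return (x, y)
    return None
```

A `None` result corresponds to the "not sufficient space" branch that leads to the reduction processing of step 211.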
At step 211, the CPU 101 performs conversion processing to reduce the object image (to reduce the number of pixels) so that the object image fits within the embeddable area. At this time, it is desirable to set in advance, for each type of object, the extent to which the object to be embedded can be reduced. For example, in the case of a moving image file, 320×240 pixels are set as the lower limit value; in the case of a sound file, 32×32 pixels are set; and so on. Then, it may also be possible to cause the procedure to proceed to step 216, to be described later, in the case where the size of the extracted embeddable area is smaller than the lower limit value.
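The reduction with per-type lower limits at step 211 might look like the following sketch. Uniform, aspect-ratio-preserving scaling is an assumption (the text only says the number of pixels is reduced); the lower limit values are the ones given above:

```python
# Per-type lower limits for the reduction at step 211
# (320x240 for a moving image file, 32x32 for a sound file).
MIN_SIZE = {"movie": (320, 240), "sound": (32, 32)}

def shrink_to_fit(obj_size, area_size, obj_type):
    """Uniformly reduce the object image so it fits in the embeddable
    area. Return the new (width, height), or None when the result
    would fall below the per-type lower limit, in which case the
    procedure falls through toward step 216."""
    ow, oh = obj_size
    aw, ah = area_size
    # Never enlarge; scale down just enough to fit both dimensions.
    scale = min(aw / ow, ah / oh, 1.0)
    nw, nh = int(ow * scale), int(oh * scale)
    min_w, min_h = MIN_SIZE[obj_type]
    if nw < min_w or nh < min_h:
        return None
    return (nw, nh)
```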
At step 212, the CPU 101 determines whether a floating window is specified as the execution type of the target object. The specification of the execution type of the target object is performed in accordance with selection of a user made on a screen for specifying an execution type of a moving image file etc. to be embedded (Embedment object execution type setting screen) displayed on the GUI 104.
At step 213, the CPU 101 performs settings in the meta information of the object to be embedded so that the execution takes place in the area of the object image at the time of the execution, and embeds the object image into the embeddable area on the page specified at step 208.
At step 214, the CPU 101 performs settings in the meta information of the object to be embedded so that the execution takes place in the floating window at the time of the execution, and embeds the object image into the embeddable area on the page specified at step 208.
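Steps 213 and 214 differ only in the execution type recorded in the meta information of the embedded object. As a rough illustration, that choice could be modeled as follows (a minimal sketch; the field names `object`, `rect`, and `execution` are illustrative assumptions, not the actual keys of the file format's meta information):

```python
def build_object_meta(obj_name, rect, floating):
    """Sketch of the meta information set at steps 213/214.
    `rect` is the position of the embedded object image on the page;
    `floating` selects execution in a floating window (step 214)
    instead of within the area of the object image (step 213)."""
    return {
        "object": obj_name,
        "rect": rect,
        "execution": "floating_window" if floating else "in_area",
    }
```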
At step 215, the CPU 101 determines whether the user has further given instructions to embed another object. The user's instructions whether to embed another object are given via the Embedment setting screen displayed on the GUI 104.
At step 216, the CPU 101 determines whether the user has given instructions to attach an object to a generated electronic file. The user's instructions whether to attach the object to the electronic file are given via an Attachment setting screen displayed on the GUI 104.
At step 217, the CPU 101 displays an Attachment object setting screen (not shown schematically) similar to the Embedment object setting screen described above, and attaches the object selected by the user to the generated electronic file.
At step 218, the CPU 101 instructs the communication unit 107 to transmit the generated image data to a specified transmission destination. In the case of the present embodiment, any of the image data into which the object is embedded (steps 213, 214), the image data to which the object is attached (step 217), and the image data into which no object is embedded and to which no object is attached (No at step 204 or 205) is transmitted.
By the processing described above, it is made possible for a user to embed an object, such as a moving image and sound, into image data generated by scan, without requiring the user to perform complicated operations on the GUI and the input I/F of the image processing apparatus.
Further, it is also possible to make use of the various functions (specification of a transmission destination in the transmission address list, electronic signature function, etc.) of the image processing apparatus for the image data into which an object is embedded, and therefore, the convenience of the user is further improved.
Aspects of the present invention can also be realized by a computer of a system or apparatus (or devices such as a CPU or MPU) that reads out and executes a program recorded on a memory device to perform the functions of the above-described embodiment(s), and by a method, the steps of which are performed by a computer of a system or apparatus by, for example, reading out and executing a program recorded on a memory device to perform the functions of the above-described embodiment(s). For this purpose, the program is provided to the computer for example via a network or from a recording medium of various types serving as the memory device (e.g., computer-readable medium).
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
This application claims the benefit of Japanese Patent Application No. 2012-274584, filed Dec. 17, 2012, which is hereby incorporated by reference herein in its entirety.