1. Field of the Invention
The present invention relates to an image processing system configured to estimate the time required to print data, and to a control method therefor.
2. Description of the Related Art
In recent years, there has been a demand for connecting a great number of image processing apparatuses, such as printers, scanners, digital copiers, and facsimile machines, and coordinating them to enhance their functions and to realize high productivity. In order to meet such a demand, an image data format used for transmitting images between image processing apparatuses has been developed. This image data format (hereinafter referred to as vector data) is independent of print resolution.
The image processing apparatus which receives the vector data rasterizes the vector data into a bitmapped image. Since the bitmapped image most suitable for each image processing apparatus is generated, image degradation due to resolution conversion does not occur and a fine image can be acquired. The vector data technique is important in coordinating various types of devices having different capabilities. Further, a technique has been developed in which various types of information that are not targeted for printing are associated with the vectorized image data for easier processing and easier image search.
Further, by storing an image that is input by an image input apparatus as a file in a secondary storage device of an image output apparatus, the image can be repeatedly output whenever a user wishes to output the image. A function of an image output apparatus in which data is stored in a file format in a secondary storage device for the purpose of reuse is called a box function and the file system is called a box system. The box function enables a user to repeatedly reuse previously generated image data, for example, to reprint the stored image data or to send the stored image data to other image processing apparatuses with different capabilities.
For a user of such an image processing apparatus, it would be useful if the end time of a job could be precisely estimated while the job is being processed. Japanese Patent Application Laid-Open No. 2001-22544 discusses a digital copying machine that includes a user interface that displays an estimated end time, and a technique in which a user is notified of the estimated end time by an application on a host computer.
The time necessary for printing, from receiving the data until the print is output, is classified into two periods:
(1) Rasterization of vector data and generation of a bitmapped image; and
(2) Transmission of bitmapped images of all pages to a printer engine and formation of the images.
A precise estimation of period (2) can be calculated based on the number of pages to be printed and the capability of the printer engine (print speed). However, estimation of period (1) is not simple, for a variety of reasons. For example, period (1) varies greatly depending on the data content and the rasterization capability of the image processing apparatus. Generally, rasterization of character data (a text region) is quicker than rasterization of image data. Further, the amount of processing needed for image data differs greatly depending on the number of rendering objects. Furthermore, processing time varies greatly between an apparatus having dedicated hardware for rasterization and an apparatus which rasterizes data using software. In addition, if software is used for rasterization, processing time differs greatly depending on the processing capability of the central processing unit (CPU) and the memory capacity of the apparatus.
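As a purely illustrative aid (not part of the related art or the disclosed embodiments), the following Python sketch shows how period (2) can be estimated from the page count and the engine speed alone; the function name and parameters are assumptions of this sketch, and no comparable simple formula exists for period (1) without analyzing the data content.

```python
# Illustrative sketch only: estimating period (2), the engine output time,
# from the page count and the print speed. Names are assumptions of this sketch.
def estimate_engine_time_seconds(page_count: int, pages_per_minute: float) -> float:
    """Time for the printer engine to form images for all pages."""
    return page_count / pages_per_minute * 60.0

# Example: 120 pages on a 40 ppm engine takes about 180 seconds.
print(estimate_engine_time_seconds(120, 40))  # 180.0
```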
Japanese Patent Application Laid-Open No. 2001-22544 discusses a technique by which estimated processing time is calculated considering the type of rendering object included in page description data and the processing capability of the image processing apparatus which outputs the data. The estimated processing time is added to the image data, or stored in the apparatus on the image data generation side, as additional information that is not printed. Thus, estimated processing time which has once been calculated in association with certain image data can be reused if the same image data is output from the same image processing apparatus. In addition, Japanese Patent Application Laid-Open No. 2001-22544 discusses a solution to a timing problem that occurs when the time is calculated by the receiving apparatus, which conventionally could not calculate the time until analysis of the whole content of the page description data was finished.
However, in Japanese Patent Application Laid-Open No. 2001-22544, the apparatus that sends the data needs to know the processing capability of the output image processing apparatus in advance in order to calculate period (1). In future coordination of image processing apparatuses, it is required that various devices connected on a network send, receive, and store images in a flexible manner. However, when a new image processing apparatus is added, or an optional feature is added to or removed from an image processing apparatus, the capability of the apparatus changes. Accordingly, when an image is transmitted to a great number of image processing apparatuses, or when a destination image processing apparatus is changed, the capability of the destination apparatus needs to be collected each time.
Further, since the estimated processing time for each destination apparatus is calculated by the image sending apparatus using the above-described capability information, the processing load of the image sending apparatus increases when the number of destination apparatuses increases.
Furthermore, when image data stored in a box system is reused, the estimated processing time of the image data cannot be used even if the image data includes the estimated processing time as additional information.
The present invention is directed to realizing efficient calculation and reuse of estimated time information.
According to an aspect of the present invention, an image processing apparatus includes a processing amount index calculation unit configured to analyze content of image data that is independent of print resolution and to calculate a processing amount index indicating a processing amount necessary in converting the image data into a bitmapped image, a storing unit configured to store the calculated processing amount index as additional information associated with the image data, and a sending unit configured to send the image data and the additional information.
According to an exemplary embodiment of the present invention, efficient calculation and reuse of estimated time information can be performed, and the estimate can be calculated at an earlier stage of the print processing.
Further features and aspects of the present invention will become apparent from the following detailed description of exemplary embodiments with reference to the attached drawings.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments, features, and aspects of the invention and, together with the description, serve to explain the principles of the invention.
Various exemplary embodiments, features, and aspects of the present invention are described in detail below with reference to the drawings.
A configuration of a one-drum (1D) color MFP according to an exemplary embodiment of the present invention is described with reference to
The 1D color MFP is configured to form an image on a sheet as a physical medium. The 1D color MFP includes a scanner unit 101, a laser exposure unit 102, a photosensitive drum 103, an image forming unit 104, a fixing unit 105, a paper feed/convey unit 106, and a printer control unit (not shown) controlling all of these units.
The scanner unit 101 illuminates a document placed on a document positioning plate to optically scan the document image, converts the image into an electric signal, and forms image data.
The laser exposure unit 102 directs a light beam which is modulated depending on the image data, such as a laser beam, to a polygonal mirror which rotates at a constant angular speed. The reflected light is emitted to the photosensitive drum 103 as reflected scanning light.
The image forming unit 104 is configured to form an image by a series of electrophotographic processes including rotating the photosensitive drum 103, charging it by a charging unit, developing a latent image formed on the photosensitive drum 103 by the laser exposure unit 102 with toner, and transferring the toner image to a sheet. The image forming unit 104 also recovers the minute amount of toner which remains untransferred on the photosensitive drum 103. While a transfer drum 107 makes four rotations, the sheet is set at a predetermined position on the transfer drum 107, and developing units (developing stations) for magenta (M), cyan (C), yellow (Y), and black (K) toner sequentially repeat the aforementioned electrophotographic process. After the four rotations, the sheet, onto which a full-color toner image of the four colors has been transferred, is conveyed from the transfer drum 107 to the fixing unit 105.
The fixing unit 105 includes a combination of rollers and belts and a heat source, such as a halogen heater. The fixing unit 105 applies heat and pressure to fix the toner which is transferred to the sheet by the image forming unit 104.
The paper feed/convey unit 106 includes one or more sheet storage spaces represented by a sheet cassette or a paper deck. According to an instruction from the printer control unit, one sheet out of a plurality of sheets stored in a sheet storage space 108 is separated and conveyed to the image forming unit 104. The sheet is wound around the transfer drum 107 of the image forming unit 104 and conveyed to the fixing unit 105 after the transfer drum 107 makes four rotations. During the four rotations, a toner image of each of the aforementioned M, C, Y, and K colors is transferred to the sheet. Further, when images are formed on both sides of a sheet, the sheet which has passed through the fixing unit 105 is conveyed to the image forming unit 104 again through a conveyance path 109.
The printer control unit communicates with an MFP control unit which controls the entire MFP. Based on an instruction from the MFP control unit, the printer control unit controls each state of the above-described scanner, laser exposure, image forming, fixing, and paper feed/convey units so that the entire printing process is operated smoothly. The MFP is programmed in accordance with the present invention as described in detail below.
The PC generates image data, generates a processing amount index of the image data as additional information associated with the image data, and sends the image data and the additional information to each MFP.
MFP-a, MFP-b, and MFP-c include a hard disk drive (HDD) H1, an HDD H2, and an HDD H3, respectively. Each HDD is a secondary storage device. Each printer engine (hereinafter referred to as the “engine”) installed in each MFP has a different print resolution and a different print speed (number of printed pages per minute: ppm). The MFP-a has a print resolution of 600 dots per inch (dpi) and a print speed of 40 ppm, the MFP-b has a print resolution of 1200 dpi and a print speed of 120 ppm, and the MFP-c has a print resolution of 600 dpi and a print speed of 20 ppm.
The processing capability and the type of the renderer (or rasterizer) installed in the MFP differ for each MFP. The MFP-a and the MFP-b include renderers of a similar type (referred to as “Ra” in
Generally, a renderer is not capable of processing a rendering instruction group generated for a different type of renderer. The rendering instruction group is generally called a Display List (hereinafter referred to as a “DL”). The DL, which is generated by software from vector data having a complex rendering description, is a set of instructions that can be processed by hardware. The DL is dependent on print resolution.
The MFP-a, the MFP-b, the MFP-c, and the PC can communicate with each other using a network protocol. The arrangement of the MFPs connected via the LAN N1 is not limited to the above-described physical arrangement. Further, an apparatus other than an MFP, such as a server or a printer for example, can be additionally connected to the LAN N1.
A CPU 205 is a central processing unit configured to control the entire MFP. A RAM 206 is a system work memory used in operation of the CPU 205. The RAM 206 is also an image memory used as a temporary storage of the input image data. Further, a ROM 207 is a boot ROM where a system boot program is stored. An HDD 208 is a hard disk drive in which system software used for various types of processing and input image data can be stored. The system software stored in the HDD 208 includes program code for implementing processing in accordance with the present invention.
An operation unit interface (I/F) 209 is an interface unit for an operation unit 210. The operation unit 210 has a display screen configured to display data, such as the image data and text data. The operation unit I/F 209 is configured to transmit operation screen data to the operation unit 210. Further, the operation unit I/F 209 is used for transmitting information input by an operator via the operation unit 210 to the CPU 205. A network interface 211 includes, for example, a LAN card or the like. When the network interface 211 is connected to the LAN 10, the network interface 211 sends and receives information to and from an external apparatus. Further, a modem 212, which is connected to the public line 204, sends and receives information to and from an external apparatus. The above-described units are arranged on a system bus 213.
An image bus I/F 214 is an interface configured to connect the system bus 213 to an image bus 215 which is used for transferring image data at a high speed. The image bus I/F 214 is also a bus bridge configured to convert a data structure. Other units connected to the image bus 215 are a raster image processor (RIP) 216, a device I/F 217, a scanner image processing unit 218, a printer image processing unit 219, an image processing unit for image editing 220, and a color management module (CMM) 230.
The RIP 216 is configured to rasterize PDL code or vector data into a bitmapped image. The device I/F unit 217 connects the control unit 200 to the scanner 201 and the printer engine 202. The device I/F unit 217 is used for synchronous/asynchronous conversion of the image data.
The scanner image processing unit 218 is configured to perform various types of processing on image data which is output from the scanner 201, such as correction, processing, and editing. The printer image processing unit 219 makes corrections to the image data to be printed and converts its resolution according to the capability of the printer engine 202. The image processing unit for image editing 220 is configured to perform various types of image processing, such as rotation, reduction, and enlargement of the image data. The CMM 230 is a hardware module dedicated to color conversion processing, also referred to as color space conversion processing, of the image data based on a profile or calibration data. The profile is function-like information used for converting color image data expressed in a device-dependent color space into a device-independent color space, such as the Lab color space (also commonly known as the CIE 1976 (L*, a*, b*) color space and CIELAB). The calibration data is used for adjusting color reproduction characteristics of the scanner 201 and the printer engine 202 in the color multifunction peripheral.
A printer interface 1200 is a unit configured to transfer input/output data to and from an external apparatus. A protocol control unit 1101 is a unit configured to perform communication with an external apparatus by analyzing and sending a network protocol.
A vector data generation unit 1102 generates vector data, or vectorizes a bitmapped image into vector data. The vector data is independent of print resolution.
A metadata generation unit 1103 generates secondary information acquired during the vectorization process as metadata. The metadata is additional data used, for example, for searching but not used for rendering. A processing amount index necessary in rendering the vector data is also generated as metadata.
A PDL analysis unit 1104 is a unit configured to analyze the PDL data and convert the PDL data into intermediate code (Display List) which is a format that enables easier processing. The intermediate code generated by the PDL analysis unit 1104 is sent to a data rendering unit 1105. The data rendering unit 1105 rasterizes the above-described intermediate code into bitmapped data. The bitmapped data is successively stored in a page memory 1106.
The page memory 1106 is a volatile memory configured to temporarily store the bitmapped data rendered by the data rendering unit 1105. A panel input/output control unit 1020 controls input/output to and from the operation panel.
A document storage unit 1030 is a unit configured to store a data file which contains vector data, a Display List, and metadata in units of input document job. The document storage unit 1030 is a secondary storage device, such as a hard disk. This data file is referred to as a “document” in the present exemplary embodiment.
A scan control unit 1500 is configured to perform various processing on the image data output from the scanner, such as correction, processing, and editing.
A print control unit 1300 converts content of the page memory 1106 into a video signal and transfers the video signal to a printer engine unit 1400. The printer engine unit 1400 is a printing mechanism unit configured to form an image on a recording sheet from the received video signal.
A system control unit 1010 organizes the above-described various types of software control units and controls the entire MFP as a system. Further, the system control unit 1010 collects print speed (ppm) of the printer engine and processing capability (index) of the renderer at the time of system start-up and stores the data in the RAM 206 (processing capability index storage).
Further, the system control unit 1010 controls operation in units, such as print operation and scan operation, as a job, controls the panel input/output control unit 1020, and displays a processing status of the job on the operation unit 210.
Next, generation of the vector data, Display List (DL), and metadata, which are included in a document, will be described.
Next, a document in which the vector data is associated with the metadata is generated by document generation processing d3. Subsequently, by DL generation processing d5, a DL is generated from the vector data in the document. The generated DL is stored in the document and transferred to rendering processing d7 to be converted into a bitmapped image.
The bitmapped image is recorded on a paper medium by print processing d8 and is output as a print product. The entire processing starting with the scanning processing d1 can be repeated by setting the print product on the document exposure unit.
First, a region segmentation of the bitmapped image is performed by region segmentation processing d1.
The region segmentation is performed by analyzing the bitmapped image data which is input, segmenting the data into regions according to an object included in the image, and determining and classifying an attribute of each region. The attribute of the regions is, for example, text (TEXT), photo (PHOTO), line (LINE), picture (PICTURE), or table (TABLE).
Referring now also to
Referring still to
On the other hand, a region having an image attribute, among the regions classified by attribute, is converted into image information by image information extraction processing d3. The image information is a character string that expresses a feature of the image, such as “flower” or “face”. A conventional image processing technique using image feature quantity detection or face recognition can be used for extracting the image information. The image feature quantity is, for example, a frequency or density of pixels included in an image. Further, according to the attribute of each region-segmented object, an index of the processing amount necessary in rendering the object is calculated (processing amount index calculation). The calculation of the index is based on the attribute and on the number of characters or lines included in the object. For PHOTO and PICTURE objects, the color scale and paint type (gradation, translucency) are added to the processing amount index. Further, a processing amount required for superposition of a plurality of objects within a page or for processing of a translucent image is generated as metadata in page units.
The generated character strings, the image information, and their processing amount indexes are arranged into a data format by format conversion processing d4 to generate metadata.
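As a purely illustrative aid, the following Python sketch shows one way a per-object processing amount index could be derived from the attribute and features described above; the function signature and the weighting values are assumptions of this sketch, not values taken from the present disclosure.

```python
# Illustrative sketch only: a per-object processing amount index.
# The weights and parameter names are assumptions of this sketch.
def object_processing_index(attribute: str, char_count: int = 0, line_count: int = 0,
                            color_scale: int = 0, has_gradation: bool = False,
                            is_translucent: bool = False) -> int:
    if attribute == "TEXT":
        return char_count            # index grows with the number of characters
    if attribute == "LINE":
        return line_count            # index grows with the number of lines
    if attribute in ("PHOTO", "PICTURE"):
        index = color_scale          # color scale contributes to the index
        if has_gradation:
            index += 10              # gradation paint adds processing (assumed weight)
        if is_translucent:
            index += 20              # translucency adds processing (assumed weight)
        return index
    return 0                         # e.g. an IMAGE region is not rendered from vectors
```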
First, the PDL data generation by a printer driver on a PC will be described referring to
Next, a data flow on the side of the image processing apparatus which has received the PDL data will be described with reference to
Further, from the bitmapped image generated by the rendering processing d3, a character string and image information are generated as metadata by metadata generation processing d5, which is described above referring to
Some of the various types of PDL, such as LIPS and PS, include character string information. With such a PDL, additional metadata is generated from the character string when the PDL data is analyzed in PDL data analysis d1. The metadata generated in metadata generation d5 and the additional metadata from PDL data analysis d1 are stored in the document during document generation d6.
The vector data generated in the PDL data analysis d1 and the DL generated in DL generation d2 are stored in the document by the document generation processing d6.
Next, document generation processing and print processing will be described referring to
In step S1301, the system control unit 1010 executes the aforementioned region segmentation processing on the input image data. In the following description, a segmented region may be referred to as an “object”. In step S1302, the system control unit 1010 classifies the type or attribute of each region into TEXT, GRAPHIC, or IMAGE. The TEXT, GRAPHIC, and IMAGE regions undergo different processing. For example, regarding attributes which are classified into TEXT, PHOTO, LINE, PICTURE, and TABLE in
If the region attribute is TEXT, then the process proceeds to step S1310. The system control unit 1010 executes OCR processing in step S1310, extracts a character string in step S1311, and converts a recognized character contour into vector data in step S1312. Then in step S1313, the system control unit 1010 calculates a processing amount necessary in rendering as an index. In step S1314, the system control unit converts the character string extracted in step S1311 as well as the processing amount index calculated in step S1313 into metadata.
The metadata generated from the character string is a collection of character codes, which is information necessary for a keyword search. However, even if the character codes are recognized in the OCR processing, font types, such as “Mincho” or “Gothic”, character sizes, such as “10 pt” or “12 pt”, and font attributes, such as “italic” or “bold”, are not recognized. Thus, not the character codes but the character contours are stored as vector data for rendering.
On the other hand, if the region attribute is IMAGE in step S1302, the process proceeds to step S1320. In step S1320, the system control unit 1010 extracts image information. In this step, a feature of the image is detected by using a conventional image processing technique, such as image feature quantity detection or face recognition. In step S1321, the system control unit 1010 converts the detected image feature into a character string. This conversion is easy if a table that associates feature parameters with character strings is prepared in advance. Vectorization is not performed on a region having the IMAGE attribute. Since the image data can be stored in the vector data as it is, rendering processing is unnecessary. Thus, in the case of IMAGE, the system control unit 1010 neither calculates a processing amount index nor converts such an index into metadata, and converts only the feature character string into metadata in step S1322. Alternatively, in order to generate a format similar to that of the other attributes, metadata with a processing amount index of zero can be generated and added.
If the region attribute is GRAPHIC in step S1302, the process proceeds to step S1330. In step S1330, the system control unit 1010 vectorizes the data. Subsequently in step S1331, the system control unit 1010 calculates the processing amount index necessary in rendering. If the object has a special effect, such as a gradation paint or translucency, the index is calculated taking the processing amount of the special effect into consideration. Next, in step S1332, the system control unit converts the processing amount index into metadata.
If vectorization of the object and conversion of the processing amount index into metadata are completed, then in step S1350, the system control unit 1010 determines whether processing for one page is completed. If it is not completed (NO in step S1350), then in step S1360, the system control unit 1010 adds the processing amount index of the object to the processing amount of the page being processed. Then, the process returns to step S1302, and processing of the next object is performed (processing amount index in page units).
If it is determined in step S1350 that the page processing is completed (YES in step S1350), then in step S1351, the system control unit 1010 adds the processing amount index of the processed page to the entire processing amount index of the document. In step S1352, the system control unit 1010 determines whether the processing of the last page is completed. If it is determined that the last page is not processed (NO in step S1352), the process returns to step S1301 and the system control unit processes the next page. If it is determined that the last page is processed in step S1352 (YES in step S1352), then in step S1353, conversion of the whole data into the document format is completed (whole processing amount index) and the processing of
It is to be noted that processing for generating a document including PDL data from a printer driver of a PC is the same as generating a document from an input image data except that the input data is data output by an application. Thus, further detailed description of the process flow of the document generation by the printer driver is omitted.
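As a purely illustrative aid, the accumulation performed in steps S1360 and S1351 can be pictured with the following Python sketch; the data layout and names are assumptions of this sketch.

```python
# Illustrative sketch only: per-object indexes are summed into a page index (S1360),
# and page indexes into the whole-document index (S1351).
def accumulate_indexes(pages: list) -> tuple:
    """pages: per-page lists of per-object processing amount indexes."""
    page_indexes = [sum(object_indexes) for object_indexes in pages]  # page units
    document_index = sum(page_indexes)                                # whole document
    return page_indexes, document_index

page_indexes, total = accumulate_indexes([[12, 30, 5], [120], [8, 8]])
# page_indexes == [47, 120, 16], total == 183 (example values only)
```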
In step S1401, the system control unit 1010 receives the document data. Subsequently, rasterization of vector data starting from step S1402 and analysis of additional information starting from step S1420 are started in parallel. In step S1402, the system control unit 1010 generates a DL from the vector data in the document.
Next, in step S1403, the system control unit 1010 adds the generated DL to the document and renders the DL to a bitmapped image in step S1404. In step S1405, the system control unit 1010 executes print processing on a paper medium and the processing ends.
On the other hand, in the additional information analysis starting from step S1420, the system control unit 1010 analyzes the metadata acquired from the document data. In step S1421, the system control unit 1010 acquires the processing amount index from the metadata. The processing amount index may include physical print page count as well as processing amount necessary in rendering. Next, in step S1422, the system control unit 1010 acquires processing capability information of the processing apparatus that prints the document. The capability information includes a rendering capability and a print speed (ppm) of the printer engine (acquisition of apparatus processing capability).
In step S1423, the system control unit 1010 calculates estimated time necessary in rendering based on the processing amount index necessary in the rendering processing and the rendering capability information of the processing apparatus which prints the document acquired in step S1422 (image processing time estimation). Further, the system control unit 1010 calculates actual time necessary in forming the image by the printer engine from the page count and the engine speed. In step S1424, the calculated estimated processing time is notified to the user or notified to a PC or another apparatus that is connected to the network. Further, as the rendering processing and the print processing proceed, the system control unit 1010 updates the processing time. The system control unit 1010 makes necessary notification until the whole processing is completed. Details of step S1424 will be described below.
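As a purely illustrative aid, the calculation of steps S1421 to S1423 can be pictured with the following Python sketch; the linear model relating the processing amount index to rendering time, and all names and capability values, are assumptions of this sketch.

```python
# Illustrative sketch only: estimating rendering time from the processing amount
# index and the apparatus rendering capability, and engine time from page count
# and print speed (ppm). The linear model is an assumption of this sketch.
def estimate_job_time(total_index: float, page_count: int,
                      rendering_capability: float, ppm: float) -> tuple:
    rendering_seconds = total_index / rendering_capability   # image processing time estimation
    printing_seconds = page_count / ppm * 60.0                # engine image formation time
    return rendering_seconds, printing_seconds

# Example with assumed capability values for a 120-page document.
render_s, print_s = estimate_job_time(total_index=4000, page_count=120,
                                      rendering_capability=30.0, ppm=40)
# render_s is about 133 seconds, print_s is 180 seconds with these assumed values.
```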
First, in step S1501, the system control unit 1010 analyzes the PDL data. In step S1502, the system control unit 1010 determines whether metadata is included in the PDL data. If metadata, such as character string information, is included in the PDL data (YES in step S1502), then the process proceeds to step S1510. In step S1510, the system control unit 1010 adds the metadata of the PDL data to the metadata of the document, and the process proceeds to step S1503.
On the other hand, if metadata is not included in the PDL data (NO in step S1502), then the process proceeds to step S1503. In step S1503, the system control unit 1010 processes data other than the metadata. This processing is the same as (or alternatively similar to) the document print processing described referring to
Since the vector data (a) is rendering data that is independent of print resolution, layout information, such as page size and orientation, is included in the page header (x2). An object (x4) that is rendering data, such as line, polygon, and Bezier curve, is linked one by one to the summary information (x3). As a whole, a plurality of objects (x4) are linked to the summary information (x3). The summary information (x3) describes a feature of the plurality of objects as a whole and includes attribute information of a segmented region that is described with reference to
The metadata (b) is additional information that is unrelated to the rendering processing. The metadata (b) includes information necessary for estimating processing time, such as processing amount index and page count, as well as information used for search. The page information (x5) includes processing amount index necessary in rendering the rendering data included in the page. The detailed information (x6) includes object details including OCR information and a generated character string (character code string) as image information.
Further, the metadata (b) includes total information (x20), which contains information on the entire document, such as the processing amount index and the total page count. The total information (x20) is designed so that the processing amount and the page count of the whole document can be acquired at an early stage when the document is processed. Thus, the total information (x20) is configured so that it can be directly referred to from the document header (x1). Similarly, the page information (x5) is linked to each page header (x2) so that the processing amount index of the relevant page can be smoothly acquired for each page (addition of the whole processing amount or in page units).
Further, since metadata is linked to the summary information (x3) of the vector data (a), the detailed information (x6) can be searched from the summary information (x3).
The DL (c) is intermediate code which is used by the renderer when the renderer rasterizes data into bitmapped data. A page header (x7) includes a management table of rendering information (instruction) in a page and the instruction (x8) includes rendering information dependent on print resolution.
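As a purely illustrative aid, the document structure described above (document header, page headers, per-page information, total information, and the DL) can be pictured with the following Python sketch; the class and field names are assumptions of this sketch, chosen to mirror the elements x1 to x8 and x20.

```python
# Illustrative sketch only: a simplified in-memory model of the document.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PageInfo:          # x5: processing amount index necessary in rendering the page
    processing_index: int

@dataclass
class TotalInfo:         # x20: totals for the whole document, reachable from the header
    processing_index: int
    page_count: int

@dataclass
class Page:              # x2: page header, with linked objects (x4) and page info (x5)
    layout: str
    objects: List[str] = field(default_factory=list)
    page_info: Optional[PageInfo] = None

@dataclass
class Document:          # x1: document header linking pages, total metadata, and the DL
    pages: List[Page]
    total_info: TotalInfo
    display_list: List[str] = field(default_factory=list)
```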
As illustrated in a data structure 17-1, a header region a1, a vector data region a2, a metadata region a3, and a DL region a4 of the document are arranged at arbitrary addresses in the memory.
As illustrated in a data structure 17-2, the vector data region, the metadata region, and the DL region of the document are serialized in a file.
The processing amount index is stored for each page so that while the image processing apparatus is processing the document, a processing status or the remaining processing time can be notified to the user. For example, although pages 1 and 100 illustrated in
Further, a detailed configuration of the document data will be described taking page 1 as an example. Summary information of page 1 includes “TEXT” and “IMAGE”. Character contours “H,e,l,l,o” (object t1) and “W,o,r,l,d” (object t2) are linked to the summary information of the “TEXT” as vector data. In addition, character code strings (metadata mt) “Hello” and “World” are referred to from the summary information.
Further, a photo image of a butterfly (object i1) in Joint Photographic Experts Group (JPEG) format is linked to the summary information of the “IMAGE”. Furthermore, image information (metadata mi) “butterfly” is referred to from the summary information. Thus, for example, if a text is searched using a keyword “World”, the search will be made by acquiring vector page data sequentially from a document header and then searching metadata which is linked to “TEXT” from the summary information linked to the page header.
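As a purely illustrative aid, the per-page processing amount index (x5) allows the remaining processing time to be updated while the document is processed, as in the following Python sketch; the names and the linear model are assumptions of this sketch.

```python
# Illustrative sketch only: once some pages are rendered, only the indexes of the
# remaining pages contribute to the remaining rendering time.
def remaining_rendering_seconds(page_indexes: list, pages_done: int,
                                rendering_capability: float) -> float:
    remaining_index = sum(page_indexes[pages_done:])
    return remaining_index / rendering_capability   # assumed linear model, as above
```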
The estimation and display of the PDL printing and the job end time according to the present exemplary embodiment will now be described referring to
In
The processing of the MFP-a will now be described with reference to the flowchart in
In step S2003, the system control unit 1010 acquires a processing amount index of rendering and number of pages to be printed from the total information portion MA of metadata of the document in order to estimate job end time. In step S2004, the system control unit 1010 acquires capability information of the MFP-a that is also necessary in calculating the job end time. In step S2005, the system control unit 1010 calculates time necessary in rendering and time necessary in printing. The estimated end time which is calculated is displayed on the operation unit 210 in step S2050.
Referring now also to
Referring still to
When the last page is printed (YES in step S2014), then in step S2015, the system control unit 1010 ends the processing, updates the display of the operation unit 210, and ends the job. The same type of processing is performed for PrintData2 and PrintData3. The processing time of PrintData2 and PrintData3 is displayed on the operation unit 210 according to the data content and updated as the processing proceeds.
Reference is now made again to
In
A screen 701 shows detailed information of the job 620, a screen 751 shows detailed information of the job 621, and a screen 771 shows detailed information of the job 622. A pause button 720 can be selected to temporarily stop the processing, a close button 721 can be selected to return to the job list screen (
The screen 701 is a detail screen of the job 620, in other words, a job detail screen of PrintData1. Job information 702 includes information such as a user name and a document name. A field 703 shows a status of the rendering processing. According to the screen 701, the rendering of all 120 pages is completed. A field 704 shows the number of output pages. Printing of 28 pages out of 120 pages is completed. A field 705 shows an output time of the job. The printing will take approximately two minutes until it is completed. A field 706 shows a total time necessary in completing the printing of the job 620 and the processing of other jobs which the MFP-a holds. Since the job 620 is the first of all jobs, “approximately 2 minutes” is displayed in both fields 705 and 706.
The screen 751 is a detail screen of the job 621, in other words, a job detail screen of PrintData2. Job information 752 includes information such as a user name and a document name. A field 753 shows a status of the rendering processing. According to the screen 751, rendering of 75 pages out of 150 pages is completed. A field 754 shows the number of output pages. Printing of not even one page of 150 pages is completed. A field 755 shows an output time of the job. The rendering processing has approximately 5 minutes remaining and the printing processing has approximately 4 minutes remaining. A field 756 shows a total time necessary in completing the printing of the job 621 and the processing of other jobs that the MFP-a holds. Since the job 621 will be started after the job 620, “approximately 11 minutes” is displayed in the field 756.
The screen 771 is a detail screen of the job 622, in other words, a job detail screen of PrintData3. Job information 772 includes information such as a user name and a document name. A field 773 shows a status of the rendering processing and a field 774 shows the number of output pages. None of the pages is completed. A field 775 shows an output time of the job. The rendering will take approximately 2 minutes and the printing will take approximately 1 minute. A field 776 shows a total time necessary in completing the printing of the job 622 and the processing of other jobs that the MFP-a holds. Since the job 622 will be started after the jobs 620 and 621, “approximately 13 minutes” is displayed in the field 776.
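As a purely illustrative aid, the total times shown in the fields 706, 756, and 776 can be pictured as a running sum of the estimated times of the queued jobs, as in the following Python sketch; the function name and the rounded example values are assumptions of this sketch.

```python
# Illustrative sketch only: each job's displayed total is the sum of the estimated
# times of the jobs ahead of it plus its own remaining time.
def queue_wait_minutes(job_estimates: list) -> list:
    totals, elapsed = [], 0.0
    for estimate in job_estimates:
        elapsed += estimate
        totals.append(elapsed)
    return totals

# Jobs of roughly 2, 9, and 2 minutes give displayed totals of about 2, 11, and 13 minutes.
print(queue_wait_minutes([2.0, 9.0, 2.0]))  # [2.0, 11.0, 13.0]
```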
The displays of the estimated end time of the rendering processing and the estimated end time of the print processing are updated as the processing in steps S2011 and S2013 proceeds, and the operator is notified via the operation unit 210. In step S2050, the system control unit 1010 not only displays the estimated end time on the operation unit 210 of the MFP-a but also can notify the operator of the estimated end time via an application on the PC.
In the first exemplary embodiment, a PDL printing operation directed from a PC to an MFP is described. According to a second exemplary embodiment of the present invention, a document stored in an MFP is reused and printed out on a plurality of MFPs, namely MFP-a, MFP-b, and MFP-c. According to the present exemplary embodiment, the metadata of the processing amount included in the generated document can be used repeatedly, and even if the number of destinations is increased, the load of the data sending apparatus does not increase. Further, since the processing time is estimated by each MFP based on the processing amount of the received document and the capability of the MFP itself, precise estimated processing time can be calculated even if the capabilities of the connected apparatuses differ. The rendering and printing times for MFP-a, MFP-b, and MFP-c are shown respectively at 2201, 2202, and 2203, and described further below.
Next,
In addition to performing the printing operation, each MFP calculates the estimated time necessary in rendering and printing based on the metadata included in the document and the capability information of the MFP itself. The estimated time of processing of PrintData1 calculated by the MFP-a, the MFP-b, and the MFP-c is illustrated in
Each MFP updates the display of the waiting time or the notification to the operator according to the progress made in the rendering and printing until the print processing of PrintData1 is completed.
PrintData1 can be reused by the MFP-a, the MFP-b, and the MFP-c even after the printing is completed if PrintData1 is stored in the HDD. The processing amount index and page count information added to the metadata can be reused when PrintData1 is printed again or even when PrintData1 is sent to another apparatus for printing. Further, the apparatus sending PrintData1, which is the MFP-S in the present exemplary embodiment, does not need to calculate the processing amount index for each apparatus to which PrintData1 is output. Accordingly, even if PrintData1 is output to a great number of apparatuses having different processing capabilities, the processing amount in the MFP-S remains unchanged.
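As a purely illustrative aid, the following Python sketch pictures how the same metadata yields a different estimate on each MFP; the print speeds correspond to the values given above (40, 120, and 20 ppm), while the rendering capability values are assumptions of this sketch.

```python
# Illustrative sketch only: one document's metadata, evaluated against each MFP's
# own capability values. Rendering capabilities are assumed figures.
document_metadata = {"processing_index": 4000, "page_count": 120}
mfp_capabilities = {
    "MFP-a": {"rendering_capability": 30.0, "ppm": 40},
    "MFP-b": {"rendering_capability": 60.0, "ppm": 120},
    "MFP-c": {"rendering_capability": 15.0, "ppm": 20},
}
for name, cap in mfp_capabilities.items():
    render_s = document_metadata["processing_index"] / cap["rendering_capability"]
    print_s = document_metadata["page_count"] / cap["ppm"] * 60.0
    print(f"{name}: rendering ~{render_s:.0f} s, printing ~{print_s:.0f} s")
```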
The present exemplary embodiment describes that the processing amount index which is added to the metadata at the time of document generation can be used in the estimation of processing time when a plurality of documents are combined.
The document 1 is a 120-page document with a rendering processing amount (ProcIndex) of 4000. The document 2 is a 10-page document with a rendering processing amount of 1000. If the two documents are combined, in addition to the vector data portion which is actually printed, the metadata portion is also combined. The rendering processing amount and the page count of the newly generated document 3, which are stored in the total information portion MA of the metadata of the document 3, are a simple addition of the rendering processing amounts and the page counts of the documents 1 and 2. The processing amount index and the page count of the newly generated document can be used as is, just as in the first exemplary embodiment and the second exemplary embodiment. Further, in a case where the combined document is combined again with another document, the processing amount of the newly combined document can also be acquired by a simple calculation. Complex processing, such as analyzing the vector data content, is not necessary.
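As a purely illustrative aid, the combining of the total information can be pictured with the following Python sketch, which reflects the simple addition described above; the field names are assumptions of this sketch.

```python
# Illustrative sketch only: when documents are merged, the new total information is
# the simple sum of the source totals, with no re-analysis of the vector data.
def combine_totals(doc1: dict, doc2: dict) -> dict:
    return {"processing_index": doc1["processing_index"] + doc2["processing_index"],
            "page_count": doc1["page_count"] + doc2["page_count"]}

doc3 = combine_totals({"processing_index": 4000, "page_count": 120},
                      {"processing_index": 1000, "page_count": 10})
# doc3 == {"processing_index": 5000, "page_count": 130}
```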
The present invention can be applied to a system including a plurality of devices, or to an apparatus including a single device. For example, a scanner, a printer, a PC, a copier, a multifunction peripheral or a facsimile machine can constitute exemplary embodiments of the present invention.
The above-described exemplary embodiments can also be achieved by supplying a software program that realizes each function of the aforementioned exemplary embodiments, directly or by remote operation, to a system or an apparatus, and by a computer included in the system or the apparatus reading out and executing the supplied program code.
Thus, the program code itself which is installed in the computer to realize the function and the processing of the present invention on the computer constitutes the above-described embodiments. In other words, the computer-executable program configured to realize the function and the processing of the present invention itself constitutes an exemplary embodiment of the present invention.
In this case, the program can be in any form, such as object code, a program executed by an interpreter, or script data supplied to an operating system (OS), so long as it has the function of a program.
A storage medium for storing the program includes a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a compact disc read-only memory (CD-ROM), a compact disc-recordable (CD-R), a compact disc-rewritable (CD-RW), a magnetic tape, a non-volatile memory card, a ROM, and a digital versatile disc (DVD), such as a DVD-read only memory (DVD-ROM) and a DVD-recordable (DVD-R).
Further, the program can be downloaded from an Internet/intranet website using a browser of a client computer. The computer-executable program of an exemplary embodiment of the present invention itself, or a compressed file of the program having an automated install function, can be downloaded from the website to a recording medium, such as a hard disk. Further, the present invention can be realized by dividing the program code of the program into a plurality of files and then downloading the files from different websites. In other words, a World Wide Web (WWW) server from which a program file used for realizing a function of the exemplary embodiments on a computer is downloaded to a plurality of users can also constitute an exemplary embodiment of the present invention.
Furthermore, the program of an exemplary embodiment of the present invention can be encrypted, stored in a recording medium, such as a CD-ROM, and distributed to users. In this case, the program can be configured such that only the user who satisfies a predetermined condition can download an encryption key from a website via the Internet/intranet, decrypt the encrypted program by the key information, execute the program, and install the program on a computer.
Further, the functions of the aforementioned exemplary embodiments can be realized by a computer which reads and executes the program. An operating system (OS) or the like running on the computer can perform a part or whole of the actual processing based on the instruction of the program. This case can also realize the functions of the aforementioned exemplary embodiments.
Further, a program read out from a storage medium can be written in a memory provided in a function expansion board of a computer or a function expansion unit connected to the computer. Based on an instruction of the program, the CPU of the function expansion board or a function expansion unit can execute a part or all of the actual processing. The functions of the aforementioned exemplary embodiments can be realized in this manner.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures, and functions.
This application claims priority from Japanese Patent Application No. 2007-019468 filed Jan. 30, 2007, which is hereby incorporated by reference herein in its entirety.