Reference is made to commonly-assigned copending U.S. patent application Ser. No. ______ (Attorney Docket No. K000543US01NAB), filed herewith, entitled PAGE COMPLEXITY ANALYZER, by Lax; the disclosure of which is incorporated herein.
The present invention relates to high speed variable data printing systems and, more particularly, to assuring the supply of sufficient data to the print engine of a high speed printing system so the engine can operate continuously and efficiently.
High speed printing systems must assure the supply of sufficient data to the print engine so that the engine can run without stopping, since it is impractical to stop and restart the printer in a short period of time. Typically, the system is designed to run at full rate for most documents. However, a certain document may contain very complex pages for which the system will not be capable of providing the data at full rate. The complexity of a page originates from raster image processing (RIP) complexity, which for page description languages (PDL) such as PostScript is highly data dependent and may consume a considerable amount of time per page. Most printers buffer a small number of rasterized pages in memory to help smooth the data stream to the printer. This may ease short or erratic increases in page complexity, but cannot solve the problem for long sequences of complex pages. In this case the printer must be slowed down to accommodate the increase in complexity.
A solution to the problem is described in U.S. Pat. No. 6,762,855 (Goldberg et al.), which suggests using a large image buffer memory for holding completely processed (previously rasterized) images in combination with an intelligent control system. The raster image processed pages can be stored in compressed form to increase storage capacity. This enables the system to accumulate slack time left over from raster image processing of non-complex pages and allocate it to complex pages, so that printing speed is optimized against the average ripping time. This accommodates situations where, on average, the complexity of pages and the system bandwidth are comparable, but the system may be overtaxed for some periods. The printer controller uses information from the buffer manager to control the speed of the transport, so that the speed at which image buffers are consumed is matched to the speed at which pages are ripped. The printer controller continually inspects the backlog of pages in the buffer memory. If the backlog of pages grows, the printer controller instructs the transport to increase the web speed. If the backlog of completed pages fills the memory buffers (or some other predefined amount of pages is reached), the speed is increased to maximum web speed. However, if the backlog of completed pages decreases, the controller assumes that the amount of processing is such that the printer cannot maintain the current web speed. Hence it will ramp the transport down to maintain the queue at a steady point.
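By way of illustration only, the Python sketch below shows one possible form of such a backlog-driven web-speed controller. The buffer capacity, thresholds, speed step, and function names are hypothetical and are not taken from the cited patent.

# Hypothetical sketch of a backlog-driven web-speed controller in the
# spirit of U.S. Pat. No. 6,762,855; all names and thresholds are
# illustrative only.

MAX_SPEED = 1.0        # full web speed (normalized)
MIN_SPEED = 0.2        # slowest speed at which the press can still run
SPEED_STEP = 0.05      # increment used when ramping up or down
HIGH_WATERMARK = 90    # backlog (pages) at which full speed is commanded

def next_web_speed(current_speed, previous_backlog, current_backlog):
    """Return the web speed for the next control interval.

    The controller only looks at the trend of the backlog of completed
    (rasterized) pages in the buffer memory: a growing backlog means the
    RIP is outpacing the press, so the web can be sped up; a shrinking
    backlog means the RIP cannot keep up, so the web is ramped down.
    """
    if current_backlog >= HIGH_WATERMARK:
        return MAX_SPEED
    if current_backlog > previous_backlog:
        return min(MAX_SPEED, current_speed + SPEED_STEP)
    if current_backlog < previous_backlog:
        return max(MIN_SPEED, current_speed - SPEED_STEP)
    return current_speed  # backlog steady: hold the queue at its current point

if __name__ == "__main__":
    speed, backlog = 0.5, 40
    for new_backlog in (45, 50, 48, 46, 95):
        speed = next_web_speed(speed, backlog, new_backlog)
        backlog = new_backlog
        print(f"backlog={backlog:3d}  speed={speed:.2f}")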
The invention described below uses a digital front end (DFE) architecture in which a job is ripped (rasterized) and stored in a ready to print (RTP) format on disk during the process stage. The processing can be done offline if desired. The RTP format is highly structured and therefore the complexity of an RTP format page can be predicted more precisely than that of a page described in a PDL.
Briefly, according to one aspect of the present invention, a method for estimating the complexity of pages and controlling the printing speed of a printing machine, wherein the speed is set according to the estimated complexity of the pages, includes the steps of: processing page elements; storing the processed page elements in an input memory; estimating the complexity of page assembly according to the processed page elements of the pages being estimated; assembling the pages from those page elements to produce ready to print pages; transmitting the ready to print pages to a print engine for printing; and setting the print engine speed, wherein the speed is selected according to the estimated complexity of page assembly.
The invention and its objects and advantages will become more apparent in the detailed description of the preferred embodiment presented below.
The present invention will be directed in particular to elements forming part of, or cooperating more directly with, the apparatus in accordance with the present invention. It is to be understood that elements not specifically shown or described may take various forms well known to those skilled in the art.
Print controller 104 includes a control station 112 configured to control the data feeding sequence to print engine 108, coordinated with press controller 140. Processing print station 116 receives a digital print job for processing. The processing of the print jobs is performed by RIP element 132. The print jobs are ripped (rasterized) into an RTP format and stored on disk storage 120 during the processing stage. The processing can be done offline if desired. While printing, the RTP data is read from storage 120 into dedicated hardware elements (124, 128, 136), which prepare and send the processed pages to print heads 144 to be printed using transport element 148.
The RTP format is a highly efficient format which contains both fixed and variable elements in compressed form. The ripping of a fixed element is performed only once per job, thus saving processing time, and a single copy is stored per job, thus saving storage space. This is a significant advantage in common VDP (variable data printing) jobs, where many pages share the same master background and the unique variable information is only a small part of the page.
During the backend processing stage 208, the RTP data is read from storage 120 by the data feeder 232. The RTP data and related elements are further provided to hardware elements 236, to be assembled and delivered to the printer for printing.
During the printing stage as is shown in
Similar to the cited prior art U.S. Pat. No. 6,762,855, input memory 304 is a large raster image buffer of processed elements that can be used to resolve the ripping time complexity, which is inherently undetermined. Although this invention does not use complete raster image processed pages, the available page layout allows a determination to be made for each page whether all elements required for page assembly are loaded into input memory 304. Monitoring the number of pages that are loaded into input memory 304 also resolves the issue of the time it takes to read elements from storage 120.
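By way of illustration only, the following Python sketch shows how such a per-page readiness determination might be made, assuming each page layout lists the identifiers of the elements it requires and input memory 304 is modeled as a set of loaded element identifiers; the data model and names are hypothetical.

# Hypothetical sketch: deciding from the page layout whether all elements
# required to assemble a page are already loaded into input memory.
# The data model (element identifiers, layouts) is illustrative only.

def page_is_ready(page_layout, loaded_element_ids):
    """Return True if every element the page layout references is loaded."""
    return all(element_id in loaded_element_ids for element_id in page_layout)

def count_ready_pages(page_layouts, loaded_element_ids):
    """Count consecutive pages, in print order, that are ready for assembly."""
    ready = 0
    for layout in page_layouts:
        if not page_is_ready(layout, loaded_element_ids):
            break
        ready += 1
    return ready

if __name__ == "__main__":
    input_memory = {"master_bg", "logo", "name_field_p1", "name_field_p2"}
    layouts = [["master_bg", "logo", "name_field_p1"],
               ["master_bg", "name_field_p2"],
               ["master_bg", "photo_p3"]]          # photo_p3 not loaded yet
    print(count_ready_pages(layouts, input_memory))  # -> 2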
The accumulation of elements in the input memory 304 cannot simply be used to control the print speed, as there are additional sources of complexity which arise from the fact that the pages are not fully assembled, and these complexities are not yet resolved. Pages with many elements or with several very large elements may be complex pages because they have an ‘area coverage’ (i.e., total number of pixels) that is greater than what the system was designed for. Additionally, the total compressed size of all elements may be very large, requiring more bandwidth than designed for. Hence, although the merger 300 hardware is designed to handle complex pages, i.e., pages with high area coverage or with high bandwidth requirements (low compression ratio), there may always be a case which exceeds the design. In these extreme and rare cases a problem may still arise, which will require ramping down the printer speed. The relatively small FIFO 316 does not allow its backlog information to be used for controlling the printing speed, because there is not enough time to slow down the printer gradually. Moreover, it introduces an acute problem of ensuring that the small FIFO 316 does not empty out during printing due to a sequence of very complex pages.
Fortunately, as opposed to ripping time, an upper bound on the time required for decompression and page assembly can be estimated and determined from several simple parameters of the elements that make up a page. As stated above, the system is designed to handle a certain level of complexity. Pages that stay within the designed complexity will be set to a complexity level of 1, and will be counted as one ready page. Pages that exceed the designed complexity will be set to a complexity level greater than 1, and will be counted as less than a ready page. A detailed description of the RTP page complexity calculation is given below. This information is used to control the press speed in a manner described in the following sections, thus enabling the smooth printing of complex jobs and RTP pages.
A fundamental assumption is that the merger is designed and tuned to handle most jobs at full printer speed. That is, cases of pages with complexity greater than 1 are uncommon and sporadic. The chance for a sequence of complex pages, i.e., pages with complexity greater than 1, is very low. Hence, the cases that require the printer to slow down due to page assembly complexity are rare. This enables a stricter but simpler approach to be taken, handling the worst case situation, as it does not happen often. For instance, although some pages may in principle have a complexity that is less than 1, which could theoretically be used to compensate for pages with complexity greater than 1, a strict but simplified approach is taken and the minimal complexity is set to 1.
The goal is to ensure that the FIFO buffer 316 never empties out. The basic idea is to look ahead and analyze the complexity of a block of pages, rather than a single page, and adjust the printer's speed so that it prints the block at a speed that is appropriate to the block's complexity. For example, a block that has a calculated complexity that is twice the normal complexity will be printed at a printer speed that is half the full speed, as the overall maximum time required to prepare and assemble the pages may be up to twice the time required for a normal page. This guarantees that the backlog of pages in the FIFO 316 does not decrease while printing the block of pages. Assuming the FIFO buffer 316 was full before the print started, it should remain full at the end of printing each block regardless of its complexity.
The block size is chosen as a fraction of the FIFO buffer 316 capacity, typically around half the size (5-10 pages). On one hand, this gives a high enough resolution of the complexity to manage the printer's speed, and on the other hand it is a large enough block to relax the influence of sporadic complex pages.
For each block a block percent is calculated. The calculated block percent represents the average of the page percents of the pages in the block. The page percent is the reciprocal of the page complexity, as defined in the page complexity explanation below. For example, a block percent of 100 indicates that the pages in the block can be printed at the machine's maximal speed, whereas a block percent of 50 indicates that the pages in the block are more complicated and therefore may require lowering the speed of the printer for effective printing.
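A minimal Python sketch of this calculation, assuming the page complexities have already been determined as described in the page complexity explanation below:

# Hypothetical sketch: page percent is the reciprocal of page complexity,
# and the block percent is the average page percent over the block.

def page_percent(page_complexity):
    """Page percent in (0, 100]; complexity is clamped to a minimum of 1."""
    return 100.0 / max(page_complexity, 1.0)

def block_percent(page_complexities):
    """Average page percent over the pages of a block."""
    percents = [page_percent(pc) for pc in page_complexities]
    return sum(percents) / len(percents)

if __name__ == "__main__":
    print(block_percent([1.0, 1.0, 1.0, 1.0]))   # 100.0 -> full speed
    print(block_percent([2.0, 2.0, 2.0, 2.0]))   # 50.0  -> half speed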
The block percent gives the maximal percent of full speed at which the block can be printed. Furthermore, a linked list of block percents for all pages that are loaded into input memory 304 is constructed. By analyzing the list of block percents, the printer controller can determine an actual printing speed per block that does not exceed the maximal printing speed calculated from the page complexity on one side, and on the other side stays within the transport's acceleration capabilities with smooth changes.
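The following Python sketch illustrates one possible way of deriving such a speed plan from a list of block percents, assuming a simple per-block limit on how much the speed may change between consecutive blocks; the acceleration limit and the two smoothing passes are illustrative and are not taken from the description above.

# Hypothetical sketch: turning a list of per-block maximum speeds (block
# percents) into an actual speed plan that never exceeds the per-block
# maximum and never changes by more than MAX_DELTA between blocks, so the
# transport can follow it with smooth acceleration.  MAX_DELTA is an
# illustrative stand-in for the transport's acceleration capability.

MAX_DELTA = 10.0  # maximum speed change (percent of full speed) per block

def plan_block_speeds(block_percents, max_delta=MAX_DELTA):
    speeds = list(block_percents)
    n = len(speeds)
    # Backward pass: ramp down early enough ahead of a slow (complex) block.
    for i in range(n - 2, -1, -1):
        speeds[i] = min(speeds[i], speeds[i + 1] + max_delta)
    # Forward pass: ramp up gradually after a slow block.
    for i in range(1, n):
        speeds[i] = min(speeds[i], speeds[i - 1] + max_delta)
    return speeds

if __name__ == "__main__":
    # A single complex block (50%) in the middle of an otherwise easy job.
    print(plan_block_speeds([100, 100, 100, 50, 100, 100, 100]))
    # -> [80.0, 70.0, 60.0, 50, 60.0, 70.0, 80.0]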
In
By setting a minimal amount of ready blocks required for full printing speed (e.g., 10 blocks), the printer controller will ramp down the speed when the number of ready blocks drops below this threshold. The printer controller will set the printing speed to the minimum of the speed dictated by the complexity and the speed that matches the rate at which the page buffer is emptying out, as is shown in
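A minimal Python sketch of this selection rule, assuming a simple linear mapping from the number of ready blocks to a buffer-driven speed limit; the threshold value and the mapping are illustrative only.

# Hypothetical sketch: the commanded speed is the minimum of (a) the speed
# dictated by the complexity of the current block and (b) a speed that
# matches how fast the backlog of ready blocks is emptying out.
# READY_BLOCKS_FOR_FULL_SPEED and the linear mapping are illustrative.

READY_BLOCKS_FOR_FULL_SPEED = 10  # e.g. full speed requires 10 ready blocks

def buffer_speed_limit(ready_blocks, full_speed=100.0):
    """Speed limit derived from the backlog of ready blocks in input memory."""
    fraction = min(ready_blocks, READY_BLOCKS_FOR_FULL_SPEED) / READY_BLOCKS_FOR_FULL_SPEED
    return full_speed * fraction

def commanded_speed(block_percent, ready_blocks):
    """Print at the lower of the complexity-dictated and buffer-dictated speeds."""
    return min(block_percent, buffer_speed_limit(ready_blocks))

if __name__ == "__main__":
    print(commanded_speed(100.0, 12))  # plenty of ready blocks -> 100.0
    print(commanded_speed(100.0, 6))   # backlog shrinking      -> 60.0
    print(commanded_speed(50.0, 12))   # complex block           -> 50.0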
The page complexity calculation is explained below. The system is designed to handle a certain level of “complexity”, generally an overall 8:1 compression ratio (CRD=8) and 300% coverage (CVD=300). Hence, pages that are within this complexity are processed in real time and do not require the slowing down of the print engine. Pages that are more complex will require more processing time and may require the slowing down of the print engine.
Pages that stay within the designed complexity will have a complexity of 1, and will be counted as one page. Pages that exceed the designed complexity will have a complexity greater than 1, and will be counted as less than a page. The functions for calculating the complexity of a page and for deriving the percent of a page that is done are given below.
For each page a total compressed size (CST) is defined as the total size in bytes of all compressed elements that are going to be decompressed for the page:

CST = Σ CSi,

where the sum is taken over all elements of the page, N is the number of elements in a page, and CSi is the compressed size (in bytes) of element i.
For each page an effective decompressed size (DSE) is defined. Generally, the decompressed size is the number of pixels that are written to memory during decompression of the page. However, DSE also accounts for some overhead that exists in the system and consumes time that could have been used for writing more pixels. This is achieved by adding to DSE an amount of pixels whose memory-write time corresponds to the time lost on overheads:

DSE = Σ DSEi,

where the sum is taken over all elements of the page, N is the number of elements in a page, and DSEi is the effective decompressed size of element i, given by the formula below:
DSEi = (xEi * yi) + PINIT
In order to determine DSEi its components are to be defined:
xEi = MAX{xi, BURSTmin}

PINIT = Tinit · Freq · Factor
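Putting these definitions together, the following Python sketch computes DSE for a page from the width and height of each of its elements; the numeric values chosen for BURSTmin, Tinit, Freq, and Factor are illustrative placeholders, not values prescribed by the description.

# Hypothetical sketch of the effective decompressed size (DSE) calculation.
# BURST_MIN, T_INIT, FREQ and FACTOR below are illustrative values, not the
# values used by any particular system.

BURST_MIN = 256      # minimal burst width in pixels (illustrative)
T_INIT = 2.0e-6      # per-element initialization overhead in seconds (illustrative)
FREQ = 200e6         # pixel write frequency in pixels per second (illustrative)
FACTOR = 1.0         # system-dependent correction factor (illustrative)

P_INIT = T_INIT * FREQ * FACTOR   # overhead expressed as an equivalent pixel count

def effective_decompressed_size(elements):
    """DSE of a page: sum of DSEi = (xEi * yi) + PINIT over all elements.

    Each element is given as a (width, height) pair in pixels; the effective
    width xEi is the element width, but never less than the minimal burst.
    """
    dse = 0.0
    for width, height in elements:
        x_effective = max(width, BURST_MIN)
        dse += x_effective * height + P_INIT
    return dse

if __name__ == "__main__":
    # One full-width background plus two small variable-data elements.
    page_elements = [(6000, 9000), (120, 300), (800, 200)]
    print(effective_decompressed_size(page_elements))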
The actual compression ratio is then defined as:

CRA = PS/CST,

and the actual coverage as:

CVA = DSE/PS · 100,

where PS is the page size.
The page complexity (PC) is defined as:
PC = MAX{CRD/CRA, CVA/CVD, 1},

PC >= 1
and the page percent (PP) is given by:
PP = 1/PC * 100,

0% < PP <= 100%
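The following Python sketch pulls the above definitions together. The page complexity is computed here as the larger of the two ratios CRD/CRA and CVA/CVD, floored at 1, which is one reading of the definition above; the example page values are illustrative only.

# Hypothetical sketch of the page complexity (PC) and page percent (PP)
# calculation from the compressed and effective decompressed sizes of a
# page.  The designed values CRD = 8 (8:1 compression) and CVD = 300
# (300% coverage) are taken from the description; the example page values
# are illustrative only.

CRD = 8.0     # designed overall compression ratio (8:1)
CVD = 300.0   # designed coverage in percent

def page_complexity(cst, dse, page_size):
    """PC >= 1, where cst is the total compressed size (bytes), dse the
    effective decompressed size (pixels) and page_size the page size."""
    cra = page_size / cst            # actual compression ratio
    cva = dse / page_size * 100.0    # actual coverage in percent
    return max(CRD / cra, cva / CVD, 1.0)

def page_percent(pc):
    """PP = 1/PC * 100, so 0% < PP <= 100%."""
    return 100.0 / pc

if __name__ == "__main__":
    page_size = 6000 * 9000                 # pixels on the page
    easy = page_complexity(cst=page_size / 10, dse=2.5 * page_size, page_size=page_size)
    hard = page_complexity(cst=page_size / 4,  dse=4.0 * page_size, page_size=page_size)
    print(easy, page_percent(easy))          # 1.0 100.0  (within design)
    print(hard, page_percent(hard))          # 2.0  50.0  (twice as complex)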
The invention has been described in detail with particular reference to certain preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the scope of the invention.