This relates generally to digital video compression systems.
In commonly-owned patent application Ser. No. 10/260,534 (the '534 application), published as U.S. Publication No. US 2005-0069034, a common inventor described a new technology for encoding digital video that exhibited particular success in the computer video arts. The contents of that publication are presumed to be known to the reader and are incorporated herein by reference.
In the typical computer video scenario, digital pixel information is prepared by a server 7.
The video compressor 17 can be a local hardware component near or in the server 7 (anywhere, such as on a daughter card, a hang-off device, an external dongle, on the motherboard, etc.), a software component (anywhere, such as in a local CPU, a video processor, loaded in the motherboard, etc.), or an external pod communicating with the server via a communication link, network, wireless, or other coupling protocol.
Inside the video compressor 17, one of the frame buffers 11 and 12 receives the serial pixels from the source video 10 and loads them into the frame buffer to (typically) mimic the local frame buffer 6. A switch ahead of the frame buffers 11 and 12 loads a current (or “new”) frame into one of the frame buffers 11 or 12 while the other of the frame buffers 11 or 12 retains the previous (or “old”) frame that the switch had just previously directed to it. In that way, at any given time, one of the frame buffers 11/12 retains a complete old frame while the other is being fed a new frame. The frame buffers then alternate, frame-by-frame, storing/loading the old/new frames.
The old and new frames are used by the video compressor 17 to determine relationships between pixels in the current frame compared to the previous frame. An encoder 13 within the video compressor 17 determines those relationships between the pixels in the current frame (drawn from the new frame buffer 11/12) and pixels in the prior frame (drawn from the old frame buffer 11/12). The encoder 13 may also determine relationships between pixels located within the current frame. In each case, the relationships can include run-length relationships or series relationships.
Run-length relationships identify runs of pixels in the serial pixel stream (from the source video 10) that have pixel values related to already known pixel values. By identifying the relationship, the decoder is instructed to “copy” the known pixel(s) for the identified run-length, rather than writing the independently identified pixel values. The run-length relationships can include any relationship determined between pixels of the current frame or between pixels of the current and previous frames. They may include the so-called (1) “copy old,” (2) “copy left,” (3) “copy above,” or other locational relationship commands. The “copy old” (CO) command is particularly appropriate for the present disclosure. In it, the pixel values for pixel locations in the current run-length of the current frame are determined to be the same as the pixel values of the previous frame in the same pixel locations. The CO command simply tells the decoder to copy, for a run of X number of pixels, the pixels in the same run location of the previous frame. Similarly, the “copy left” (CL) command and “copy above” (CA) command indicate that the present run of pixels is the same as the pixels to the left of the current pixels (in the case of the CL command), or the same as the pixels above the current pixels (in the case of the CA command). Of course, other kinds of locational relationships (other than “old,” “left,” and “above”) can be and are envisioned as well.
In the preferred run-length cases, the format for the encoding can include (using eight bit bytes by way of example only):
(1) For a first byte in the encoding, the byte can begin with a number of first bits identifying a code indicative of the run-length type (CO, CL, CA, etc.) followed by a remaining number of bits identifying the run length itself. For example, an eight-bit byte can employ the first three bits for the code indication followed by the next five bits indicating in a binary word the run length (up to a 2^5 = 32 pixel run length).
(2) Another following byte of encoding if the run length exceeds 2^5 pixels, where the first bit is a code indicating that the byte continues the previous run, followed by seven more bits of the binary word (which, when strung with the five bits of the previous word, make a 12-bit word indicative of up to a 2^12 pixel run length).
(3) A number of additional following bytes like those in (2) where the run length exceeds the 2^n pixel run length of the string of previous bits (a sketch of this byte layout appears below).
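As an illustration only, a minimal C sketch of the byte layout just listed follows. The command codes match the key given later in this description (000 = CO, 001 = CL, 010 = CA); the choice of which run-length bits land in which byte (low bits first here) and the value of the continuation flag are assumptions, since the text does not spell them out.

```c
#include <stdint.h>
#include <stddef.h>

enum { CMD_COPY_OLD = 0, CMD_COPY_LEFT = 1, CMD_COPY_ABOVE = 2 };

/* Emit the first byte (CCCRRRRR) plus continuation bytes (XRRRRRRR) for a
 * run of run_len pixels; returns the number of bytes written to out[]. */
static size_t encode_run(uint8_t cmd, uint32_t run_len, uint8_t *out)
{
    size_t n = 0;
    out[n++] = (uint8_t)(((cmd & 0x7u) << 5) | (run_len & 0x1Fu)); /* CCCRRRRR */
    run_len >>= 5;
    while (run_len != 0) {
        /* top bit set = "this byte continues the previous run" */
        out[n++] = (uint8_t)(0x80u | (run_len & 0x7Fu));           /* XRRRRRRR */
        run_len >>= 7;
    }
    return n;
}
```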
“Series” commands are a little different from the run-length commands and can contribute remarkable efficiency to the video compression. They are described in more detail in the '534 application, so only a brief description is provided here. In essence, the series commands instruct the decoder to write a run of pixels using just two prior-known colors. An example series encoding format (again using eight-bit bytes by way of example only) is described later in this description.
If neither run-length nor series encoding is available or plausible, then the encoder will resort to higher overhead single-pixel color commands (usually requiring three bytes per pixel color for five bit color, and more for higher quality color) to instruct the decoder on a particular pixel value.
The video compressor 17 communicates with a client 19, typically by a network connection via a standard network interface (not shown) such as an Ethernet or other suitable network communication system. Of course, the video compressor 17 and client 19 could also communicate by other communication means such as a hard wire, wireless, etc.
At the client 19, the decoder 18 is usually an application or script function in the local processing system 21 already in the client 19. If the client 19 is a computer workstation, for example, the decoder 18 is an application that runs on the local CPU employing some local memory 22. The client 19 also usually contains a frame buffer 20 (sometimes on a separate video processing board) that receives the pixel information for a frame from the decoder 18. In practice, the objective is to move the information from the frame buffer 6 in the server 7 to the frame buffer 20 in the client 19 through the frame buffers 11/12 in the video compressor 17. Along the way, the video compressor 17 reduces the size of the frame of information by the run-length, series, and pixel encoding, and the decoder 18 restores the size of the frame by decoding it.
Presently, the cost of the frame buffer chip 16 drives the cost of the video compressor 17. As the price of FPGAs for the FPGA chip 14 (or alternatively ASICs, etc.) falls, the price of the frame buffer chip has come to dominate the parts cost. We have developed a way to eliminate the frame buffer chip 16 without altering the kinds of encoding commands used, and thus advantageously without altering the decoder function 18 in any way.
Instead of storing all pixels in a video frame buffer, the disclosed encoder stores check values for groups of pixels and the minimum number of pixels necessary to execute the implemented commands. Grouping of pixels can be done in any fashion; for example, a group can be all pixels on a single video line, a portion of a line, multiple lines, or screen rectangles containing portions of multiple lines. For purposes of example only, the embodiment described below defines all pixels on each single video line as a group of pixels; therefore, a video screen of 1024 by 768 pixels would have 1024 groups of pixels and 1024 check values stored in memory.
In order to take advantage of the “copy old” commands, we must have a reference to a previous frame. Using a 16-bit CRC calculation for each line allows us to use the “copy old” command on a line-by-line basis. For 1024 lines, this requires 1024 CRC values taking up 2048 bytes of memory, which will still fit inside today's FPGAs.
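A minimal sketch of such a per-line check value follows, assuming a CRC-16-CCITT polynomial; the text does not specify which 16-bit CRC the encoder actually uses.

```c
#include <stdint.h>
#include <stddef.h>

/* Bit-serial CRC-16-CCITT over the two bytes of each pixel in a line. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* One stored value per line: 1024 lines * 2 bytes = 2048 bytes, as above. */
static uint16_t line_crcs[1024];
```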
To avoid needing a large storage space for a frame's worth of compressed data, we can compress different lines of pixel data over multiple video frames. We can compress lines only as fast as they can be sent across the network. This can be done by having a small compressor buffer, for example 4 k bytes. If there is enough buffer space in the compressor to compress the worst-case line (all make pixels), then the line of video is compressed (using the make pixel, make series, copy left, copy above, and copy old commands, the last with a matching CRC) and placed in the compressor buffer; otherwise, the compressor compresses the line using the “copy old” command, which will only take a few bytes of data for the entire line. In other words, in the latter case the line is simply held over from a previous frame until the network can accommodate new video compression. If a line does not get compressed on this frame, the corresponding line will get compressed on a following frame and the user experience will be satisfactory.
While the encoder encodes the first line of the frame, it also computes a check value from all the incoming pixels on that line and stores the check value for that line in the local buffer 15 of the FPGA. The check value can be computed from the incoming pixels before or after the pixels are encoded. It then sends the encoding to the decoder 18, which decodes the information in its normal manner and loads the resultant pixel values for that line in the frame buffer 20, as usual. The encoder then continues with the next line of the frame until each line of the frame is encoded, a corresponding check value is stored in the local buffer 15, and the encoding is sent to the decoder 18.
When the first line of the next frame arrives, its check value is determined. If the check value is the same as the check value stored for the frame previously sent to the decoder, then the encoder encodes the line as a “copy old” command using the entire line as the run length. The stored check value remains the same in the local buffer 15 and will be used again for the same line when the next frame arrives. Subsequent lines in this frame that have the same check values as corresponding lines in previously sent frames will also be encoded completely as “copy old” and added to the run length in the previously started “copy old” command. The decoder, receiving the “copy old” command, operates on it as it normally would: it copies the old pixels from the prior known frame for the entire line.
If the check value for a line is different from that stored in the local buffer 15, then the encoder overwrites the new check value for that line in the local buffer 15 and then sends the new encoded line to decoder 18. Decoder 18 again decodes the line as it normally would.
If the network throughput is insufficient for an encoded line to be sent, the encoder leaves the check value for that group unchanged and sends a command instructing the decoder not to change those pixels (even though they did change). This defers the updating of that line until the next frame. (This form of flow control would not be required if a frame buffer were used to hold all pixels from all lines until the network throughput was sufficient to resume sending).
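The per-line behavior described above can be summarized in a short sketch. The helper names (net_has_room, emit_copy_old_line, encode_line_dvc) are hypothetical stand-ins for the real encoder and network path, and the restrictions on the line that follows a deferred line are omitted for brevity.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical helpers standing in for the real encoder and network path. */
uint16_t crc16_ccitt(const uint8_t *data, size_t len);
bool     net_has_room(int worst_case_pixels);
void     emit_copy_old_line(int pixels);
void     encode_line_dvc(const uint16_t *line, int pixels);

void encode_frame(const uint16_t *frame, int lines, int width,
                  uint16_t *stored_crc)
{
    for (int y = 0; y < lines; y++) {
        const uint16_t *line = frame + (size_t)y * width;
        uint16_t crc = crc16_ccitt((const uint8_t *)line, (size_t)width * 2);

        if (crc == stored_crc[y]) {
            emit_copy_old_line(width);        /* line unchanged since last frame */
        } else if (!net_has_room(width)) {
            emit_copy_old_line(width);        /* deferred: stored CRC kept as-is */
        } else {
            stored_crc[y] = crc;              /* remember the new line           */
            encode_line_dvc(line, width);     /* CA / CL / MS / MP commands      */
        }
    }
}
```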
The first line to be encoded following a line that had its updating deferred (as described above) cannot use encoding commands that reference pixels in the deferred line. For example, the Copy Above (CA) command cannot be used for any of the line, the Copy Left (CL) command cannot be used on the first pixel of the line, and the Make Series (MS) command cannot be used anywhere on the line if it references a pixel on the deferred line. Other than these restrictions, all other encoding commands can be used.
As can be seen, the decoder 18 has no ability to realize when the encoder has chosen to encode the line based on the normal encoding procedure versus the mandated “copy old for a line run-length” procedure. It simply writes pixels as it is told by the same kinds of run-length, series, and pixel commands normally sent to it. The encoder sends the normal run-length, series, and pixel commands line-by-line unless it determines for a particular line that a check value is the same or network throughput is insufficient to send the encoded commands, in which case it mandates the “copy old for a line run-length” command.
In the end, the encoder no longer has to store entire frames of information, so the frame buffer chip can be eliminated. All of its encoding can be accomplished by receiving and encoding just a line or so at a time using just “copy left,” “copy above,” “make series,” and “draw pixel” commands until the check value determination reveals that a “copy old” is appropriate. In that instance, the encoder does not even have to know (and could not find out anyway) what the “old” pixels were—only that whatever the decoder has stored as the “old” pixels are ones that it should copy. The encoder then stores only a line (or few lines), which is a small enough amount of data (compared to one or two entire frames, for example) that it can be stored in the local buffer 15 and the frame buffer chip can be eliminated from the video compressor.
The present invention, in order to be easily understood and practiced, is set out in the following non-limiting examples shown in the accompanying drawings, in which:
a is a flow chart of an example video compression process;
b is a flow chart of an example send determination process;
In an example video compression system, a client 19 with a workstation monitor is expected to receive video signals for display on the monitor from a distant server 7. Video signals are notoriously high-volume signals. A single screen of video at a common resolution of 1024 by 768 is around one million pixels. Each pixel has a defined color, and each color has a defined red component, blue component, and green component (other color schemes are also known and can be used, but the so-called RGB system will be used herein by way of illustration and not limitation). Each red, blue, and green color component is defined by a numeric value written as a binary word, sometimes five bits long (providing 2^5 = 32 possible color values for each of red, green, and blue) but as long as the system can reasonably accommodate. With five-bit component values, a minimum of 15 bits is required to define each single pixel color, usually embodied in two eight-bit bytes. The one-million pixels for a video screen thus require two-million bytes to define the colors. A screen usually refreshes every 1/60th of a second, so transportation of 120 megabytes per second would be required to deliver streaming video without compression. And that assumes a relatively low 5-bit color scheme, where many users would prefer a higher quality of color composition. Some communication links may accommodate such large volumes of constantly streaming data—but not many—especially if there are multiple simultaneous users employing the same communication link.
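The arithmetic behind those figures, using the same round numbers quoted above, can be checked directly:

```c
#include <stdio.h>

int main(void)
{
    /* Round figures used above: ~1,000,000 pixels per screen,
     * two bytes per pixel, 60 screens per second. */
    long bytes_per_frame  = 1000000L * 2L;         /*   2,000,000 */
    long bytes_per_second = bytes_per_frame * 60L; /* 120,000,000 */
    printf("%ld bytes/frame, %ld bytes/s uncompressed\n",
           bytes_per_frame, bytes_per_second);
    return 0;
}
```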
To alleviate the video data volume, a video compressor 17 receives the video from a source video 10 and reduces it. Each frame of the video is alternately loaded into a new/old frame buffer 11/12, where it is retained for use by the encoder 13 programmed into an FPGA chip 14. According to one example, the encoder 13 encodes the video by a hierarchical choice of run-length encoding, series encoding, or, as a last resort, individual pixel encoding. The run-length encoding essentially identifies a run of pixels the color of which can be identified on the basis of pixel colors that are already known. Thus, a CO command will instruct the decoder, for a current pixel, to copy the color of the pixel at the same pixel location as the current pixel but from the previous frame. A CA command will instruct the decoder, for a current pixel, to copy the color of the pixel immediately above the current pixel in the same frame. A CL command will instruct the decoder, for a current pixel, to copy the color of the pixel immediately to the left of the current pixel in the same frame. Assuming again, by way of example only, a five-bit color scheme in which each pixel would require three eight-bit bytes to identify its individual color, that same pixel may be identifiable by a Copy command in only a single byte. Further, if a continuous run of pixels can be identified as all conforming to a common command condition (such as each pixel in a run of 100 pixels being the same as its corresponding old, above, or left pixel), then a code can be written to tell the decoder in a byte or two that a Copy command applies to a run of 100 pixels. In such a case, a run of 100 pixels that could require 300 bytes of coding to individually identify each pixel color can be accurately encoded with only a byte or two.
Example formats for copy encoding can be found in the '534 application. One such example is described below for purpose of convenience to the reader. In it, an eight-bit byte is assumed, although the encoding can be used in any byte size of any number of bits. For copy commands, each byte is in the format: CCCRRRRR, where the first three C-bits identify the command type according to the following key:
000=Copy Old Command
001=Copy Left Command
010=Copy Above Command
The next five R-bits identify the run length. If the current run is determined by the encoder 13 to be less than 2^5 (i.e., 32 continuous pixels), then the five R-bits of one byte will encode the run length. If the run length is more than 2^5, then a following eight-bit byte is encoded with the same three command bits followed by another five bits that are combined with the five bits of the preceding byte to make a ten-bit word accommodating a 2^10 run length (i.e., 1024 continuous pixels). A third byte can be added to encode a run of 2^15 run length (i.e., 32,768 continuous pixels), and a fourth byte can accommodate an entire screen as a continuous run of 2^20 run length (i.e., 1,048,576 continuous pixels, which is more than the pixels in one full screen of 768×1024).
Thus, a continuous run amounting to an entire screen, which would otherwise take a few million bytes to write the individual pixel colors, can be written in just four bytes of encoding.
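A small sketch of decoding such a multi-byte copy run follows; how the five-bit fields from successive bytes are ordered into the final run-length word is an assumption, since the text does not spell it out.

```c
#include <stdint.h>
#include <stddef.h>

/* Combine the five R bits of each byte of a multi-byte copy command into a
 * single run length; *cmd receives the three-bit command key. */
static uint32_t decode_copy_run(const uint8_t *in, size_t nbytes, uint8_t *cmd)
{
    uint32_t run = 0;
    *cmd = in[0] >> 5;                         /* CCC from the first byte     */
    for (size_t i = 0; i < nbytes; i++)
        run = (run << 5) | (in[i] & 0x1Fu);    /* append five R bits per byte */
    return run;                                /* 5, 10, 15 or 20 bits total  */
}
```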
The series command is used whenever a run of pixels of just two colors is found. In it, the two colors are first encoded using one of the copy or pixel-draw commands so the decoder knows the actual value of the two possible colors (in essence, the decoder knows that the two colors immediately preceding the series command are the two colors to be used in the series, with the first color assigned the “0” value and the second color assigned the “1” value). The first byte in the series bytes has the following format: CCCXDDD, where C is the command key identifying the series command, X is the multiple byte indicator, and the D-bits indicate which of the two possible colors the next three consecutive pixels are. In this case, the CCC key for the series command is 011 (a code unique compared to the copy command keys). The X bit is set to “0” if the run of two-color series is just three pixels (corresponding to the three D-bits) long, and to “1” if the next byte continues the two-color series. Each subsequent series byte then takes the form of: XDDDDDDD, where the X-bit again indicates whether the next byte continues the series (by indicating “1” until the last byte in the series is reached in which case it is set to “0” to indicate the next byte is the last one) and each D-bit again indicates the “0” or “1” color for the next seven pixels in the continuous series.
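A hedged decoding sketch of the series format follows. The text gives a seven-bit layout (CCCXDDD) for an eight-bit byte, so the exact bit positions chosen here, and the reading that an X bit of “0” marks the final byte, are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Decode a series (MS) run into out[]; color0/color1 are the two previously
 * decoded colors.  Returns the number of pixels written. */
static size_t decode_series(const uint8_t *in, uint16_t color0, uint16_t color1,
                            uint16_t *out)
{
    size_t nin = 0, nout = 0;
    uint8_t first = in[nin++];                 /* CCCXDDD, CCC == 011        */
    int more = (first >> 4) & 0x1;             /* X: another series byte?    */
    for (int b = 2; b >= 0; b--)               /* three D bits, three pixels */
        out[nout++] = ((first >> b) & 1) ? color1 : color0;
    while (more) {                             /* XDDDDDDD continuation      */
        uint8_t next = in[nin++];
        more = (next >> 7) & 0x1;
        for (int b = 6; b >= 0; b--)           /* seven D bits, seven pixels */
            out[nout++] = ((next >> b) & 1) ? color1 : color0;
    }
    return nout;
}
```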
If neither a copy command nor a series command will effectively encode the next pixel(s), the encoder resorts to encoding the next pixel as a make pixel command. In this command, the pixel color is communicated using the traditional Red, Green, Blue color values. For five-bit color, the two-byte pixel command takes the form of: CRRRRRGG GGGBBBBB, where C is a key, for example “1,” that identifies the make-pixel command (note that none of the other commands began with a “1”), RRRRR is the five-bit red color value, GGGGG is the five-bit green color value, and BBBBB is the five-bit blue color value. The encoder 13 tries to encode pixels using other, more efficient encoding before it resorts to make-pixel encodings, using as few of them as possible before returning to a copy or series command.
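The two-byte make-pixel layout described above can be packed as in this short sketch of the CRRRRRGG GGGBBBBB arrangement:

```c
#include <stdint.h>

/* Pack a 5-bit-per-component color into the two-byte make-pixel command:
 * byte 0 = 1RRRRRGG, byte 1 = GGGBBBBB. */
static void encode_make_pixel(uint8_t r5, uint8_t g5, uint8_t b5, uint8_t out[2])
{
    out[0] = (uint8_t)(0x80u | ((r5 & 0x1Fu) << 2) | ((g5 >> 3) & 0x03u));
    out[1] = (uint8_t)(((g5 & 0x07u) << 5) | (b5 & 0x1Fu));
}
```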
The above descriptions, especially with respect to the copy old command, require the video compressor 17 to store a previous frame as it receives the current frame from the source video 10, in order to compare the current pixel values with the pixel values in the same locations of the previous frame.
On the client side, a decoder 18 receives the copy, series and make-pixel commands and re-writes the pixel colors based on those commands into a frame buffer 20. The decoder 18 can be a script function or an application written in the existing local processor 21 of the client 19.
The video compressor 17 can be employed as a hang-on device to the server 7, or it can be included in the server 7 as a daughter card, as a built-in to the video processor, as an embedded process in the motherboard, or any other hardware or software accommodation. In any event, it is advantageous to reduce the cost of the components in the video compressor 17, including the frame buffer chip 16. The embodiment described below eliminates that chip.
To utilize copy commands in serial video encoding where pixel values are received only once in sequence (where pixels of a frame are received from the top left corner of the frame to the bottom right corner of the frame going left-to-right/top-to-bottom), the pixels to be compared and all the pixels therebetween must be stored. Thus, all copy commands can be thought of as specifying the “distance” a reference pixel is from the pixel to be encoded. In general, the copy old (CO) command requires storing roughly a full previous frame of pixels, the copy above (CA) command requires storing roughly the previous line of pixels, and the copy left (CL) command requires storing only the immediately preceding pixel.
Thus, for copy pixel commands, the memory required to utilize such commands is a function of the number of pixels between the pixels to be compared and the amount of data per pixel. Further, it should be appreciated that it does not require any additional memory to implement a new copy command when another copy command is already implemented for a pixel that is “further away” from the current pixel than the new reference pixel; e.g., no additional pixels need to be stored to implement CA when CO is already implemented, and no additional pixels need to be stored to implement CL when CA is already implemented.
More generally, the memory requirements described above can be applied to comparison encoding of groups of consecutive pixels, where the memory requirements to implement a type of group encoding are a function of the number of groups between the groups to be compared and the amount of data per group. For example, if a copy command based on whether two corresponding lines are the same is to be implemented, values representing those lines need to be stored, as do values representing the lines therebetween. Further, like the copy pixel commands described above, no additional memory is required to implement a new copy command for a given type of group of pixels when another copy command for that type of group is already implemented, provided the command that is already implemented compares a group that is located “further away” from the current group than the group referenced by the new command. For example, if a copy command is implemented that compares a current line of pixels to the corresponding line in the previous frame, no additional memory is required to implement a copy command that compares the current line of pixels to the line of pixels directly above it.
Typically, storing a value representing a group of pixels requires less memory than storing a value for each pixel in the group; i.e., storing one 16-bit value representing a line requires less memory than storing a 16-bit value for each pixel in the line. Thus, when memory constraints do not allow a particular pixel that is “too far” from the current pixel to be compared with the current pixel, a compromise can be made: the particular pixel can be placed in a group of pixels, and that group can be compared to the corresponding group containing the current pixel. For example, when there is not enough memory to implement the traditional CO command (not enough memory to store a frame's worth of pixels), the pixels in the old frame can be grouped into lines and the line from the old frame can be compared to the current line. Further, copy pixel commands can still be implemented for pixels that are “closer” to the current pixel. Thus, memory can be optimized by choosing a combination of groups which comprise different numbers of pixels at different “distances” from the current group. One way to optimize memory is to choose a set of groups where there are more pixels in the groups that are “further” from the current pixel and fewer pixels in the groups that are “closer” to the current pixel.
The following example illustrates one method of choosing groups to optimize memory. Suppose each pixel and each group of pixels can be represented by two bytes and a frame is 1024 lines by 768 pixels. To implement a CO command for individual pixels, approximately 1.5 MB of data needs to be stored. To implement a CA command for individual pixels, approximately 1.5 KB of data needs to be stored. To implement a copy old command for a line of pixels, 2050 bytes need to be stored. Thus, to implement a CA command for pixels and a copy old command for lines, approximately 3.5 KB of memory is required: 2050 bytes for line values ((1024+1)×2) and 1538 bytes for pixel values ((768+1)×2). Thus, if there is not enough memory to implement the CO command for a pixel, the CA command can be implemented for pixels and a copy command can be implemented for lines. Further, if there is sufficient memory, pixels could be grouped into half lines instead of full lines.
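The arithmetic in this example can be reproduced directly; the byte counts printed below match the figures quoted above.

```c
#include <stdio.h>

int main(void)
{
    const long lines = 1024, pixels_per_line = 768, bytes_per_value = 2;

    long co_per_pixel = lines * pixels_per_line * bytes_per_value; /* ~1.5 MB */
    long ca_per_pixel = (pixels_per_line + 1) * bytes_per_value;   /* 1538    */
    long co_per_line  = (lines + 1) * bytes_per_value;             /* 2050    */

    printf("CO per pixel: %ld bytes\n", co_per_pixel);
    printf("CA per pixel + CO per line: %ld bytes\n", ca_per_pixel + co_per_line);
    return 0;
}
```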
It should be appreciated that when the first pixel of a group of pixels is the same as the individual pixel that would previously have been compared to the current pixel, no modification needs to be made to the decoder. For example, the decoder is able to interpret a CO command for a line of pixels, starting with the first pixel in the line and encoded using the same header as the pixel-level CO command, simply by specifying the number of pixels in the run. As a further example, the following encoding scheme could be implemented without modifying the decoder 18: CO could be used to encode groups comprising half a line of pixels, CA could be used to encode groups of four consecutive pixels, and CL could be used for single pixels.
The decoder 18 operates the same as the decoder did with the frame buffers present. In other words, it does not know or care whether the coding commands were produced by standard encoding or by check value replacement encoding. It simply decodes the run-length, series, and pixel commands exactly as it would have done otherwise.
The encoding steps are shown schematically in the accompanying drawings.
The check value operation can be any kind of determinative operation. The simplest may be a check sum in which the bit values of the pixels are summed. Any other kind of determinative operation could also be employed. Check value algorithms are widely known and vary widely. Any of the known check-sum, cyclic redundancy check, or other determinative algorithms may be employed, and check values or determinative algorithms designed in the future may be employable as well. Whichever check value algorithm is chosen, it should in the ideal situation yield a value that is uniquely associated with that object, like a fingerprint. There is, however, a trade-off between degree of distinction and size/complexity of the check value. A check value that is long and complex may be virtually guaranteed to uniquely correspond to a particular line encoding, but it may also be so long and complex that its determination or storage impedes the desirable results of encoding the video quickly and storing the check value locally. That is, a check value that takes too long to compute will hold up the delivery of the video line to the decoder (tens of thousands of lines may be moving each second). Also, a check value that is itself too long may fill the local buffer 15 of the FPGA with check sum values, leaving no further buffer space available for general FPGA processing.
With any check value algorithm, it is possible that the same value could be inadvertently created for two different screens, so the preferred check value algorithm would be one that minimized this probability. Check value algorithms that include pixel position in the calculation are preferred because they minimize inadvertently creating the same value for different video screens that typically occur sequentially, such as a cursor moving horizontally and relocating along a video line. A method of periodically updating the decoder's video screen without relying on the check value could also be included since the chance of inadvertently creating the same value for two different screens can be minimized but never completely eliminated. A 16-bit check value has a 1 in 65,536 chance of inadvertently creating the same value for two different screens.
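As one illustration of a position-dependent check value (not necessarily the algorithm the encoder uses), each pixel can be weighted by its position before folding the sum down to 16 bits, so that a pattern merely shifted along the line yields a different value:

```c
#include <stdint.h>
#include <stddef.h>

/* Weight each pixel by its position, then fold the sum down to 16 bits. */
static uint16_t positional_check(const uint16_t *pixels, size_t count)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < count; i++)
        sum += (uint64_t)pixels[i] * (uint64_t)(i + 1);
    return (uint16_t)(sum ^ (sum >> 16) ^ (sum >> 32));
}
```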
The trade-offs for selecting pixel group size interact with the trade-offs for selecting the length of the check value. Larger groups require fewer check values per frame but are more wasteful when network throughput is insufficient. The amount of buffer memory available and the expected network throughput are key factors in selecting the optimal value for both of these values.
Once an appropriate check value algorithm is chosen, it is employed on the pixels of the incoming line 31.
An example video compression process is shown in flow chart a. In step 60, a line of pixel values is received from the source video 10 by the encoder 23 and the check value for the line is calculated. In step 61, a determination is made whether a RESET condition has occurred. Any of the following events can trigger a RESET condition: the first line of the frame being received, a reset, a resolution change, or a frame initialization reset. In the event of a RESET condition, a black frame is generated, all check values are reset, and all lines are marked as UPDATED at step 62. The black frame is then encoded and packets corresponding to the initial black frame are sent to the decoder 18. This ensures that the decoder 18 has an initial reference for the lines of the next frame that is received.
At step 63, a determination is made as to whether all lines are marked as UPDATED. A line is marked as UPDATED when it has been encoded and sent to the decoder 18 (using step 74 for the first frame and steps 74 and 75 for the rest of the frames). An “all lines marked as UPDATED” condition occurs when all of the lines have been encoded and sent to the decoder 18. This means the decoder 18 has been refreshed, or UPDATED, with new pixel data for each line of an entire frame. If all lines are not marked as UPDATED, then only the lines that have not been UPDATED will be encoded and sent to the decoder 18; all of the other lines will be encoded using the copy old command. Once all of the lines have been marked as UPDATED, it is time to start over with a new frame, so at step 64 all the lines will be marked as NOT UPDATED to allow new information for each line to be sent to the decoder 18.
At step 65, a determination is made as to whether the line has been marked as UPDATED. If the line has already been marked as UPDATED, all the pixels in the line are marked as NOT CHANGED at step 72. If the line has not been marked as UPDATED, at step 66, the check value of the line is compared with the previously-stored check value for the corresponding line. If the check values are equal (i.e. the current line is the same as the corresponding line within a threshold amount of deviation) the line is marked as UPDATED at step 67 and the pixels are marked as NOT CHANGED at step 72.
If the check values are not equal, at step 68 a determination is made as to whether there is enough space in an output buffer to encode the current line using all the encoding commands. Lack of space in the output buffer can occur when network throughput is insufficient to transmit previously encoded lines. When there is not enough space in the output buffer, buffer space is conserved by marking the pixels of the current line as NOT CHANGED at step 72, which will generally allow the line to be encoded using less buffer space. Since the pixels of the line are inaccurately marked as NOT CHANGED, the line is marked as NOT UPDATED at step 69.
If there is enough space in the output buffer to encode the line using all the encoding commands, the line is marked as UPDATED and the stored check value is overwritten by the current check value at step 70. Further, the pixels in the line are marked as CHANGED at step 71.
Once the pixels of the line have been marked as CHANGED or NOT CHANGED, the encoding process begins at step 73. At step 73, a determination is made as to whether the pixels are marked as CHANGED or NOT CHANGED. When the pixels are marked as CHANGED, the pixels of the line are encoded using the standard DVC commands (i.e., CA, CL, MS, and MP) at step 74. To encode using the copy above command, the pixel values need to be compared to the pixel values from the previous line. When the pixels are marked as NOT CHANGED, as is the case when the line has already been marked as UPDATED, the check values are equal, or there is not enough buffer space to encode the line using the standard DVC commands, the line is encoded by copying the corresponding line from the previous frame at step 75.
Once a line is encoded, using either the standard DVC commands or by copying the corresponding line from the previous frame, the encoded line is sent to the output buffer and a determination is made as to whether the line should be sent to the decoder at step 76. The send determination step is shown in more detail in flow chart b.
To make encoding more efficient, lines should not be sent immediately once they are encoded, because an encoding command may extend to the next line. For example, a copy above command may continuously apply to the last pixels of the current line and the first pixels of the next line. Thus, unfinished runs are not sent even though they continue from one frame to another. Further, an encoded finished run may not have been sent due to insufficient network throughput. In serial video, encoded runs need to be sent in order. Thus, at step 76a, a determination is made as to whether any prior encoded finished runs remain unsent. If so, the encoded finished runs are sent at step 76b. After the prior encoded finished runs are sent, or if no unsent prior encoded finished runs exist, the finished encoded runs of the current line are sent and unfinished runs are delayed at step 76c.
At step 77, it is determined whether all the lines are encoded. If all the lines are encoded, the process is done. If not, the process repeats until all the lines are encoded.
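The per-line decision of steps 65 through 72 can be sketched compactly; the flag names, the structure type, and the output-buffer test are illustrative, not taken from the flow chart itself.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint16_t stored_crc;   /* check value kept in the local buffer 15 */
    bool     updated;      /* line already refreshed during this pass */
} line_state_t;

/* Returns true if the line should be encoded with the full DVC commands
 * (pixels CHANGED), false if it should go out as a whole-line copy old. */
static bool classify_line(line_state_t *st, uint16_t crc, bool buffer_has_room)
{
    if (st->updated)                 /* step 65 -> 72                      */
        return false;
    if (crc == st->stored_crc) {     /* step 66 -> 67, 72                  */
        st->updated = true;
        return false;
    }
    if (!buffer_has_room)            /* step 68 -> 69: defer, keep old CRC */
        return false;
    st->stored_crc = crc;            /* step 70                            */
    st->updated = true;
    return true;                     /* step 71: pixels CHANGED            */
}
```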
If the encoder finds no match between the current check value 81 and the stored value at step 82, the serial pixel stream 31 is encoded using standard DVC commands, and the mismatched check value from step 82 is then overwritten into the local buffer 15 at the location corresponding to the line (in this case, Check Value AA).
Thus, using standard DVC encoding or encoding using the check sum and “Copy Old” commands, encoded line 80 is generated. When a command applies to the last pixels of a line (e.g., CA, CL, CO, MS), the encoder 23 does not immediately send the pixels to which the command applies, but instead increments a running counter that counts the number of pixels in the previous and current lines that qualify for the command. This improves the efficiency of the compression by not terminating runs prematurely and thus increasing the run lengths where appropriate. The process of incrementing the counter and waiting for the next line is shown at step 86 (and again, later, at step 97).
Then, the next line 32 arrives at the encoder 23 and its associated check value is determined and compared with the previously stored value of 0x67. Again, if the check value matches, at step 93, then the “copy old” command is again presumed at step 94. If a match does not occur, at step 92, the serial pixel stream 32 is encoded using standard DVC commands at step 95 and the check value stored in the local buffer 15 corresponding to the line (in this case, Check Value AB) is overwritten. If there is sufficient network throughput, shown at step 96, then any unsent encodings (from, for example, encoded line 80) are compiled and sent based on a run equal to the counter value, followed by the finished encodings of current code stream 90. The process then continues from line-to-line and frame-to-frame, indefinitely.
Although the disclosure describes and illustrates various embodiments of the invention, it is to be understood that the invention is not limited to these particular embodiments. Many variations and modifications will now occur to those skilled in the art. For full definition of the scope of the invention, reference is to be made to the appended claims.
This is a continuation-in-part of U.S. patent application Ser. No. 10/260,534 to Dambrackas, entitled “Video Compression System,” filed Oct. 1, 2002 (Dambrackas Video Compression). This case is related to U.S. patent application Ser. No. 11/282,688, entitled Video Compression Encoder, filed on Nov. 21, 2005, publication number US-2006-0126718-A1.
Number | Name | Date | Kind |
---|---|---|---|
3710011 | Altemus et al. | Jan 1973 | A |
3925762 | Heitlinger et al. | Dec 1975 | A |
3935379 | Thornburg et al. | Jan 1976 | A |
4005411 | Morrin, II | Jan 1977 | A |
4134133 | Teramura et al. | Jan 1979 | A |
4142243 | Bishop et al. | Feb 1979 | A |
4369464 | Temime | Jan 1983 | A |
4384327 | Conway et al. | May 1983 | A |
4667233 | Furukawa | May 1987 | A |
4764769 | Hayworth et al. | Aug 1988 | A |
4774587 | Schmitt | Sep 1988 | A |
4855825 | Santamaki et al. | Aug 1989 | A |
4873515 | Dickson et al. | Oct 1989 | A |
4959833 | Mercola et al. | Sep 1990 | A |
5046119 | Hoffert et al. | Sep 1991 | A |
5083214 | Knowles | Jan 1992 | A |
5235595 | O'Dowd | Aug 1993 | A |
5251018 | Jang et al. | Oct 1993 | A |
5325126 | Keith | Jun 1994 | A |
5339164 | Lim | Aug 1994 | A |
5418952 | Morley et al. | May 1995 | A |
5430848 | Waggener | Jul 1995 | A |
5465118 | Hancock et al. | Nov 1995 | A |
5497434 | Wilson | Mar 1996 | A |
5519874 | Yamagishi et al. | May 1996 | A |
5526024 | Gaglianello et al. | Jun 1996 | A |
5566339 | Perholtz et al. | Oct 1996 | A |
5572235 | Mical et al. | Nov 1996 | A |
5630036 | Sonohara et al. | May 1997 | A |
5659707 | Wang et al. | Aug 1997 | A |
5664029 | Callahan et al. | Sep 1997 | A |
5664223 | Bender et al. | Sep 1997 | A |
5721842 | Beasley et al. | Feb 1998 | A |
5731706 | Koeman et al. | Mar 1998 | A |
5732212 | Perholtz et al. | Mar 1998 | A |
5754836 | Rehl | May 1998 | A |
5757973 | Wilkinson et al. | May 1998 | A |
5764479 | Crump et al. | Jun 1998 | A |
5764924 | Hong | Jun 1998 | A |
5764966 | Mote, Jr. | Jun 1998 | A |
5781747 | Smith et al. | Jul 1998 | A |
5796864 | Callahan | Aug 1998 | A |
5799207 | Wang et al. | Aug 1998 | A |
5805735 | Chen et al. | Sep 1998 | A |
5812169 | Tai et al. | Sep 1998 | A |
5812534 | Davis et al. | Sep 1998 | A |
5828848 | MacCormack et al. | Oct 1998 | A |
5844940 | Goodson et al. | Dec 1998 | A |
5861764 | Singer et al. | Jan 1999 | A |
5864681 | Proctor et al. | Jan 1999 | A |
5867167 | Deering | Feb 1999 | A |
5870429 | Moran et al. | Feb 1999 | A |
5898861 | Emerson et al. | Apr 1999 | A |
5946451 | Soker | Aug 1999 | A |
5948092 | Crump et al. | Sep 1999 | A |
5967853 | Hashim | Oct 1999 | A |
5968132 | Tokunaga et al. | Oct 1999 | A |
5997358 | Adriaenssens et al. | Dec 1999 | A |
6003105 | Vicard et al. | Dec 1999 | A |
6008847 | Bauchspies | Dec 1999 | A |
6012101 | Heller et al. | Jan 2000 | A |
6016316 | Moura et al. | Jan 2000 | A |
6032261 | Hulyalkar | Feb 2000 | A |
6038346 | Ratnakar | Mar 2000 | A |
6040864 | Etoh | Mar 2000 | A |
6055597 | Houg | Apr 2000 | A |
6060890 | Tsinker | May 2000 | A |
6065073 | Booth | May 2000 | A |
6070214 | Ahern | May 2000 | A |
6084638 | Hare et al. | Jul 2000 | A |
6094453 | Gosselin et al. | Jul 2000 | A |
6097368 | Zhu et al. | Aug 2000 | A |
6124811 | Acharya et al. | Sep 2000 | A |
6134613 | Stephenson et al. | Oct 2000 | A |
6146158 | Peratoner et al. | Nov 2000 | A |
6154492 | Araki et al. | Nov 2000 | A |
6195391 | Hancock et al. | Feb 2001 | B1 |
6202116 | Hewitt | Mar 2001 | B1 |
6233226 | Gringeri et al. | May 2001 | B1 |
6240481 | Suzuki | May 2001 | B1 |
6240554 | Fenouil | May 2001 | B1 |
6243496 | Wilkinson | Jun 2001 | B1 |
6304895 | Schneider et al. | Oct 2001 | B1 |
6327307 | Brailean et al. | Dec 2001 | B1 |
6345323 | Beasley et al. | Feb 2002 | B1 |
6360017 | Chiu et al. | Mar 2002 | B1 |
6370191 | Mahant-Shetti et al. | Apr 2002 | B1 |
6373890 | Freeman | Apr 2002 | B1 |
6377313 | Yang et al. | Apr 2002 | B1 |
6377640 | Trans | Apr 2002 | B2 |
6404932 | Hata et al. | Jun 2002 | B1 |
6418494 | Shatas et al. | Jul 2002 | B1 |
6425033 | Conway et al. | Jul 2002 | B1 |
6453120 | Takahashi et al. | Sep 2002 | B1 |
6470050 | Ohtani et al. | Oct 2002 | B1 |
6496601 | Migdal et al. | Dec 2002 | B1 |
6512595 | Toda | Jan 2003 | B1 |
6516371 | Lai et al. | Feb 2003 | B1 |
6522365 | Levantovsky et al. | Feb 2003 | B1 |
6539418 | Schneider et al. | Mar 2003 | B2 |
6542631 | Ishikawa | Apr 2003 | B1 |
6567464 | Hamdi | May 2003 | B2 |
6571393 | Ko et al. | May 2003 | B1 |
6574364 | Economidis et al. | Jun 2003 | B1 |
6584155 | Takeda et al. | Jun 2003 | B2 |
6590930 | Greiss | Jul 2003 | B1 |
6661838 | Koga et al. | Dec 2003 | B2 |
6664969 | Emerson et al. | Dec 2003 | B1 |
6701380 | Schneider et al. | Mar 2004 | B2 |
6754241 | Krishnamurthy et al. | Jun 2004 | B1 |
6785424 | Sakamoto | Aug 2004 | B1 |
6829301 | Tinker et al. | Dec 2004 | B1 |
6833875 | Yang et al. | Dec 2004 | B1 |
6871008 | Pintz et al. | Mar 2005 | B1 |
6898313 | Li et al. | May 2005 | B2 |
6940900 | Takamizawa | Sep 2005 | B2 |
6972786 | Ludwig | Dec 2005 | B1 |
7006700 | Gilgen | Feb 2006 | B2 |
7013255 | Smith, II | Mar 2006 | B1 |
7020732 | Shatas et al. | Mar 2006 | B2 |
7031385 | Inoue et al. | Apr 2006 | B1 |
7085319 | Prakash et al. | Aug 2006 | B2 |
7093008 | Agerholm et al. | Aug 2006 | B2 |
7143432 | Brooks et al. | Nov 2006 | B1 |
7221389 | Ahern et al. | May 2007 | B2 |
7222306 | Kaasila et al. | May 2007 | B2 |
7272180 | Dambrackas | Sep 2007 | B2 |
7277104 | Dickens et al. | Oct 2007 | B2 |
7321623 | Dambrackas | Jan 2008 | B2 |
7336839 | Gilgen | Feb 2008 | B2 |
7373008 | Clouthier et al. | May 2008 | B2 |
7457461 | Gilgen | Nov 2008 | B2 |
7466713 | Saito | Dec 2008 | B2 |
7515632 | Dambrackas | Apr 2009 | B2 |
7515633 | Dambrackas | Apr 2009 | B2 |
7542509 | Dambrackas | Jun 2009 | B2 |
7609721 | Rao et al. | Oct 2009 | B2 |
7720146 | Dambrackas | May 2010 | B2 |
7782961 | Shelton et al. | Aug 2010 | B2 |
7941634 | Georgi et al. | May 2011 | B2 |
20010048667 | Hamdi | Dec 2001 | A1 |
20030005186 | Gough | Jan 2003 | A1 |
20030048943 | Ishikawa | Mar 2003 | A1 |
20030202594 | Lainema | Oct 2003 | A1 |
20030231204 | Haggie et al. | Dec 2003 | A1 |
20040017514 | Dickens et al. | Jan 2004 | A1 |
20040062305 | Dambrackas | Apr 2004 | A1 |
20040064198 | Reynolds et al. | Apr 2004 | A1 |
20040122931 | Rowland et al. | Jun 2004 | A1 |
20040228526 | Lin et al. | Nov 2004 | A9 |
20050005102 | Meggitt et al. | Jan 2005 | A1 |
20050025248 | Johnson et al. | Feb 2005 | A1 |
20050057777 | Doron | Mar 2005 | A1 |
20050069034 | Dambrackas | Mar 2005 | A1 |
20050089091 | Kim et al. | Apr 2005 | A1 |
20050108582 | Fung | May 2005 | A1 |
20050135480 | Li et al. | Jun 2005 | A1 |
20050157799 | Raman et al. | Jul 2005 | A1 |
20050198245 | Burgess et al. | Sep 2005 | A1 |
20050231462 | Chen | Oct 2005 | A1 |
20050249207 | Zodnik | Nov 2005 | A1 |
20050286790 | Gilgen | Dec 2005 | A1 |
20060039404 | Rao et al. | Feb 2006 | A1 |
20060092271 | Banno et al. | May 2006 | A1 |
20060120460 | Gilgen | Jun 2006 | A1 |
20060126718 | Dambrackas et al. | Jun 2006 | A1 |
20060126720 | Dambrackas | Jun 2006 | A1 |
20060126721 | Dambrackas | Jun 2006 | A1 |
20060126722 | Dambrackas | Jun 2006 | A1 |
20060126723 | Dambrackas | Jun 2006 | A1 |
20060161635 | Lamkin et al. | Jul 2006 | A1 |
20060262226 | Odryna et al. | Nov 2006 | A1 |
20070019743 | Dambrackas et al. | Jan 2007 | A1 |
20070165035 | Duluk et al. | Jul 2007 | A1 |
20070180407 | Vahtola | Aug 2007 | A1 |
20070248159 | Dambrackas | Oct 2007 | A1 |
20070253492 | Shelton et al. | Nov 2007 | A1 |
20090290647 | Shelton et al. | Nov 2009 | A1 |
Number | Date | Country |
---|---|---|
0395416 | Oct 1990 | EP |
0495490 | Jul 1992 | EP |
0780773 | Jun 1997 | EP |
0 844 567 | May 1998 | EP |
0270896 | Jun 1998 | EP |
0899959 | Mar 1999 | EP |
2318956 | Jun 1998 | GB |
2350039 | Nov 2000 | GB |
2388504 | Nov 2003 | GB |
64-077374 | Sep 1987 | JP |
62-077935 | Oct 1987 | JP |
63-108879 | May 1988 | JP |
01-162480 | Jun 1989 | JP |
01-303988 | Dec 1989 | JP |
H03-130767 | Apr 1991 | JP |
3192457 | Aug 1991 | JP |
6-77858 | Mar 1994 | JP |
H8-223579 | Feb 1995 | JP |
08-033000 | Feb 1996 | JP |
08-263262 | Oct 1996 | JP |
09-233467 | Sep 1997 | JP |
09-321672 | Dec 1997 | JP |
11-308465 | Apr 1998 | JP |
10-215379 | Aug 1998 | JP |
10-257485 | Sep 1998 | JP |
11-184800 | Jul 1999 | JP |
11-184801 | Jul 1999 | JP |
11-203457 | Jul 1999 | JP |
11-308465 | Nov 1999 | JP |
11-313213 | Nov 1999 | JP |
2000-125111 | Apr 2000 | JP |
2001-053620 | Feb 2001 | JP |
2001-148849 | May 2001 | JP |
2001-169287 | Jun 2001 | JP |
2002-043950 | Feb 2002 | JP |
2002-165105 | Jun 2002 | JP |
2003-174565 | Jun 2003 | JP |
2003-244448 | Aug 2003 | JP |
2003-250053 | Sep 2003 | JP |
2004-032698 | Jan 2004 | JP |
2004-220160 | Aug 2004 | JP |
589871 | Jun 2004 | TW |
I220036 | Aug 2004 | TW |
WO 9741514 | Nov 1997 | WO |
WO 9826603 | Jun 1998 | WO |
WO 9854893 | Dec 1998 | WO |
WO 9950819 | Oct 1999 | WO |
WO 0122628 | Mar 2001 | WO |
WO 02062050 | Aug 2002 | WO |
WO 03055094 | Jul 2003 | WO |
WO 03071804 | Aug 2003 | WO |
WO 2004032356 | Apr 2004 | WO |
WO 2004081772 | Sep 2004 | WO |
Entry |
---|
International Preliminary Examination Report in Corresponding PCT Application No. PCT/US2003/030650, mailed Aug. 25, 2006. |
Office Action Issued Jul. 11, 2006, in Corresponding Japanese Patent Application No. 2006-024444. |
Office Action Issued Jul. 4, 2006, in Corresponding Japanese Patent Application No. 2006-024442. |
Office Action Issued Jul. 4, 2006, in Corresponding Japanese Patent Application No. 2006-024443. |
Office Action Issued Mar. 7, 2006, in Corresponding Japanese Patent Application No. 2004-541433. |
Office Action Issued Mar. 7, 2006, in Corresponding Japanese Patent Application No. 2006-024442. |
Office Action Issued Mar. 7, 2006, in Corresponding Japanese Patent Application No. 2006-024443. |
Office Action Issued Mar. 7, 2006, in Corresponding Japanese Patent Application No. 2006-024444. |
PCT International Search Report and Written Opinion mailed Jan. 3, 2006 in PCT/US05/17626, International filing Jan. 3, 2006. |
PCT International Search Report and Written Opinion mailed Oct. 25, 2005 in PCT/US05/19256, International filing Oct. 25, 2005. |
PCT International Search Report for PCT/US03/10488, International filing Jul. 28, 2003. |
PCT International Search Report in corresponding PCT Application No. PCT/US2003/030650 mailed Apr. 20, 2006. |
CN Appln. No. 200710167085.2—Dec. 27, 2010 SIPO Office Action translation. |
JP 2007-518071—Feb. 8, 2011 JIPO Office Action. |
JP Appln. No. 2007-518086—Feb. 15, 2011 JPO Office Action. |
JP Appln. No. 2005-510478—Jul. 6, 2010 Decision of Rejection (English translation). |
CA Appln. No. 2,524,001—Aug. 30, 2010 CIPO Office Action. |
CA Appln. No. 2,627,037—Sep. 9, 2010 CIPO Office Action. |
CN Appln. No. 200710167085.2—Aug. 6, 2010 SIPO Office Action. |
JP Appln. No. 2007-098267—Jul. 27, 2010 JPO Notice of Reasons for Rejection with translation. |
Office Action issued Jul. 31, 2007 in corresponding Japanese Patent Application No. 2006-024444 (with English translation). |
U.S. Appl. No. 10/260,534, filed Oct. 1, 2002, Dambrackas. |
U.S. Appl. No. 10/434,339, filed May 9, 2003, Dambrackas. |
U.S. Appl. No. 10/629,855, filed Jul. 30, 2003, Johnson et al. |
U.S. Appl. No. 10/875,678, filed Jun. 25, 2004, Gilgen. |
U.S. Appl. No. 10/875,679, filed Jun. 25, 2004, Gilgen. |
U.S. Appl. No. 11/282,688, filed Nov. 21, 2005, Dambrackas et al. |
U.S. Appl. No. 11/339,537, filed Jan. 26, 2006, Dambrackas. |
U.S. Appl. No. 11/339,541, filed Jan. 26, 2006, Dambrackas. |
U.S. Appl. No. 11/339,542, filed Jan. 26, 2006, Dambrackas. |
U.S. Appl. No. 11/340,476, filed Jan. 27, 2006, Dambrackas. |
U.S. Appl. No. 11/528,569, filed Sep. 28, 2006, Dambrackas et al. |
U.S. Appl. No. 11/707,863, filed Feb. 20, 2007, Hickey et al. |
U.S. Appl. No. 11/707,879, filed Feb. 20, 2007, Hickey et al. |
U.S. Appl. No. 11/707,880, filed Feb. 20, 2007, Hickey et al. |
U.S. Appl. No. 11/819,047, filed Jun. 25, 2007, Dambrackas. |
U.S. Appl. No. 11/889,268, filed Aug. 10, 2007, Hickey et al. |
U.S. Appl. No. 60/774,186, filed Feb. 17, 2006, Hickey. |
U.S. Appl. No. 60/795,577, filed Apr. 28, 2006, Shelton, Gary. |
U.S. Appl. No. 60/836,649, filed Aug. 10, 2006, Hickey. |
U.S. Appl. No. 60/836,930, filed Aug. 11, 2006, Hickey. |
U.S. Appl. No. 60/848,488, filed Sep. 29, 2006, Hickey. |
“Avocent Install and Discovery Protocol Specification,” Version 1.3, Avocent Corporation Jul. 9, 2003 [30 pages]. |
“Avocent Secure Management Protocol Specification,” Version 1.8, Avocent Corporation Apr. 8, 2005 [64 pages]. |
CA Appln. No. 2,382,403—Dec. 8, 2009 CIPO Office Action. |
CA Appln. No. 2,476,102—Apr. 16, 2009 CIPO Office Action. |
CA Appln. No. 2,676,794—Feb. 12, 2010 CIPO Office Action. |
Digital Semiconductor 21152 PCI-TO-PCI Bridge, Data Sheet, Feb. 1996, Digital Equipment Corporation, Maynard, Mass. |
Duff et al. (ed.) “Xilinx Breaks One Million-Gate Barrier with Delivery of New Virtex Series”, Press Kit, Xilinx, Oct. 1998. |
EP Appln. No. 03818864.5—May 8, 2009 EPO Office Action. |
EP Appln. No. 05756603.6—Nov. 23, 2010 EPO Supplementary Search Report. |
European Office Action in European application 99960160.2, dated Mar. 16, 2006. |
European Office Action in European Application No. 99960160.2-2212 dated Nov. 15, 2006. |
Hill, T. “Virtex Design Methodology Using Leonardo Spectrum 1999.1”, Applications Note, Exemplar Logic Technical Marketing, Apr. 1999, Rev. 3.0, pp. 1-47. |
Hsieh et al “Architectural Power Optimization by Bus Splitting”, Design, Automation and Test in Europe Conference and Exhibition 2000. Proceedings, pp. 612-616. |
IBM Tech. Disc. Bull. “Procedure and Design to Facilitate”, Apr. 1994, vol. 37, Issue 4B, pp. 215-216. |
IBM Technical Disclosure Bulletin, “Multimedia System Packaging”, Sep. 1993, vol. 36, Issue 9A, pp. 525-530. |
IL Appln. No. 184058—Jul. 14, 2010 Ministry of Justice Commissioner of Patents Office Action with translation. |
International Preliminary Examination Report in PCT application PCT/US2003/04707, mailed Oct 1, 2004. |
International Search Report and Written Opinion mailed Aug. 1, 2008 in PCT Application PCT/US05/46352. |
International Search Report and Written Opinion mailed Jul. 29, 2008 in PCT Application PCT/US07/17700. |
International Search Report relating to PCT/US03/04707 dated Jul. 11, 2003. |
International Search Report, PCT/US99/25290 mailed Feb. 10, 2000. |
International Search Report, PCT/US99/25291 mailed Feb. 7, 2000. |
JP Appln. No. 2003-570573—Jul. 21, 2009 Office Action with English translation. |
McCloghrie, K., “Management Information Base for Network Management of TCP/IP-based internets: MIB II,” Network Working Group, Performance Systems International, (RFC 1213) Mar. 1991 [70 pages]. |
Mobility Electronics, Inc. Brochure for PCI Split Bridge, Scottsdale, AZ, 1999. |
MY Appln. No. PI20030508—Mar. 6, 2007 MYIPO Substantive Examination Adverse Report. |
MY Appln. No. PI20056038—Sep. 11, 2009 MyIPO Substantive Examination Adverse Report. |
MY Appln. No. PI20056038—Aug. 30, 2010 MyIPO Substantive Examination Adverse Report. |
MY Appln. No. PI20062195—Sep. 11, 2009 MYIPO Adverse Examination Report. |
Official Action mailed Feb. 6, 2008 in Japanese Application No. 2003-570573 [with English translation]. |
PCI Local Bus Specification, Revision 2.1, Jun. 1995, The PCI Special Interest Group, Portland, Oregon. |
PCT Written Opinion in PCT application PCT/US2003/04707, mailed Mar. 4, 2004. |
Schutti et al. “Data Transfer between Asynchronous Clock Domains without Pain”, RIIC, SNUG Europe 2000, Rev. 1.1, Feb. 2000, pp. 1-12. |
Search Report and Written Opinion mailed Jul. 16, 2008 in PCT Appln. No. PCT/US2007/17699. |
Supplementary European Search Report dated Aug. 9, 2010 in EP Appln. No. 03709132.9. |
U.S. Appl. No. 11/303,031—Apr. 29, 2009 PTO Office Action. |
U.S. Appl. No. 12/533,073—Mar. 9, 2011 PTO Office Action. |
CA Appln. No. 2,524,001—Nov. 30, 2009 CIPO Office Action. |
IL Appln. No. 167787—Jul. 21, 2009 Office Action. |
JP Appln. No. 2005-510478—Jul. 7, 2009 Notice of Reasons for Rejection [English translation]. |
JP Appln. No. 2006-271932—Oct. 6, 2009 Office Action with English summary. |
U.S. Appl. No. 11/790,994—Feb. 2, 2010 PTO Office Action. |
EP Appln. No. 05756603.6—Apr. 7, 2011 EPO Office Action. |
European Office Action in EP Appln. No. 057566036, dated Apr. 7, 2011 [6 pages]. |
U.S. Appl. No. 11/282,688—Apr. 28, 2010 PTO Office Action. |
Chrysafis et al., “Line-Based, Reduced Memory, Wavelet Image Compression,” Mar. 2000 [retrieved on Aug. 8, 2008], Retrieved from the internet: <http://sipi.usc.edu/˜ortega/Students/chrysafi/doc/ChristosChrysafis—line—based—I—P2000.pdf>. |
IL Appln. No. 171878—Apr. 28, 2009 Translation of Office Action. |
International Search Report and Written Opinion mailed Sep. 4, 2008 in PCT/US07/10376. |
Matsui et al., “High-speed Transmission of Sequential Freeze-pictures by Exchanging Changed Areas”, IEEE Transactions on Communications, IEEE Service Center, Piscataway, NJ, vol. COM-29, No. 12, Dec. 1, 1981, XP002089584, ISSN: 0090-6778. |
Official Action issued Dec. 9, 2008 in JP Appln. No. 2005-510478 [with English translation]. |
Search Report and Written Opinion mailed Aug. 26, 2008 in PCT Appln. No. PCT/US2006/021182. |
Thyagarajan K. S. et al., “Image Sequence Coding Using Interframe VDPCM and Motion Compensation”, ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing-proceedings 1989 Publ. by IEEE, vol. 3, 1989, pp. 1858-1861, XP010082957. |
U.S. Appl. No. 11/339,537—Mar. 13, 2009 PTO Office Action. |
U.S. Appl. No. 11/790,994—Jun. 2, 2009 PTO Office Action. |
Office Action Issued Aug. 5, 2008 in Corresponding Japanese Patent Application No. 2006-271932. |
Chinese Appln. No. 03816346.2—Jun. 12, 2009 SIPO Office Action (Translation). |
EP Appln. No. 03818864.5—Aug. 24, 2009 EPO Office Action. |
U.S. Appl. No. 11/819,047—Aug. 26, 2009 PTO Office Action. |
International Search Report and Written Opinion mailed Sep. 14, 2010 in PCT Appln. No. PCT/US2010/001941. |
CN Appln. No. 200710167085.2—Jun. 26, 2009 SIPO Office Action. |
U.S. Appl. No. 12/318,074—Sep. 14, 2009 PTO Office Action. |
U.S. Appl. No. 11/707,879—Nov. 28, 2011 PTO Office Action. |
U.S. Appl. No. 12/458,818—Nov. 22, 2011 PTO Office Action. |
CA Appln. No. 2,625,658—Oct. 12, 2011 CIPO Office Action. |
JP Appln. No. 2007-098267—Oct. 11, 2011 JPO Final Notice of Reasons for Rejection. |
Chinese Appln. No. 03816346.2—Jul. 6, 2011 SIPO Office Action (Translation). |
CN Appln. No. 200710167085.2—Aug. 2, 2011 SIPO Office Action (with English Translation). |
EP Appln. No. 03723915—Aug. 1, 2011 EPO Supplementary European Search Report. |
EP Appln. No. 07776613—Jul. 25, 2011 EPO Supplementary European Search Report. |
JP Appln. No. 2007-518071—Sep. 6, 2011 Decision of Rejection with statement of relevancy. |
U.S. Appl. No. 12/801,293—Aug. 15, 2011 PTO Office Action. |
CA Appln. No. 2,630,532—Jul. 25, 2011 CIPO Office Action. |
U.S. Appl. No. 12/533,073—May 23, 2011 PTO Office Action. |
JP Appln. No. 2007-518086—Jun. 21, 2011 JIPO Office Action. |
CN Appln. No. 03816346.2—Nov. 23, 2011 Office Action (with English translation). |
CA Appln. No. 2,650,663—Feb. 1, 2012 CIPO Office Action. |
CA Appln. No. 2,625,462—Feb. 5, 2012 CIPO Office Action. |
EP Appln. No. 05756603.6—Dec. 13, 2011 EP Office Action. |
IL191529—Dec. 4, 2011 Ministry of Justice Commissioner of Patents Office Action with translation. |
U.S. Appl. No. 12/458,818—Jan. 26, 212 PTO Office Action. |
“Lossless and Near-Lossless Coding of Continuous Tone Still Images (JPEG-LS)”, ISO/IEC JTC1/SC29/WG1 FCD 14495—Public Draft, XX, XX Jul. 16, 1997, pages I-IV,1, XP002260316, Retrieved from the Internet: URL:http://www.jpeg.org/public/fcd14495p.pdf paragraphs [04.2], [04.4], [A. |
CN Appln. No. 200910223192.1—May 10, 2012 SIPO Office Action with English translation. |
EP Appln. No. 05754824.0—Jul. 6, 2012 EPO Supplementary Search Report. |
Kyeong Ho Yang et al: “A contex-based predictive coder for lossless and near-lossless compression of video”, Image Processing, 2000. Proceedings. 2000 International Conference on Sep. 10-13, 2000, IEEE, Piscataway, NJ, USA, vol. 1, Sep. 10, 2000 pp. 144-1. |
CA Appln. No. 2,571,478—Jan. 31, 2012 CIPO Office Action. |
CA Appln. No. 2,625,658—Jun. 4, 2012 CIPO Office Action. |
CA Appln. No. 2,630,532—Mar. 20, 2012 CIPO Office Action. |
CN Appln. No. 200910223192.1—May 10, 2010 Office Action. |
Extended European Search Report mailed Feb. 14, 2012 in EPO Appln. No. 05854984.1. |
Official Letter mailed Mar. 16, 2012 in TW 94117242. |
U.S. Appl. No. 11/889,525—Feb. 15, 2012 PTO Office Action. |
Cagnazzo, Marco et al., “Low-Complexity Scalable Video Coding through Table Lookup VQ and Index Coding,” IDMS/PROMS 2002, LNCS 2515, pp. 166-175, 2002. |
EP Appln. No. 05754824.0—Sep. 14, 2012 EPO Office Action. |
EP Appln. No. 06849789.0—Oct. 23, 2012 Supplementary European Search Report. |
Goldberg, Morris et al., “Image Sequence Coding Using Vector Quantization,” IEEE Transactions on Communications., vol. COM-34, No. 7, Jul. 1986. |
MY Appln. No. PI20084298—Aug. 30, 2012 MyIPO Substantive Examination Adverse Report. |
Official Letter mailed Sep. 17, 2012 in TW Appln. No. 95120039. |
TW Appln. No. 94145077—May 17, 2012 Taiwanese Patent Office Official Letter. |
Bowling, Carl D. et al., “Motion Compensated Image Coding with a Combined Maximum A Posteriori and Regression Algorithm,” IEEE Transactions on Communications, New York, USA, vol. COM-33, No. 8, Aug. 1, 1985, pp. 844-857. |
EP Appln. No. 03818864.5—Sep. 20, 2012 EPO Office Action. |
Puri, A. et al., “Motion-compensated transform coding based on block motion-tracking algorithm,” International Conferences on Communications 1987, IEEE, pp. 136-140. |
This application, Ser. No. 11/528,569, was published as US 2007/0019743 A1 in January 2007. It is a continuation of parent application Ser. No. 10/260,534, filed October 2002.