The invention relates to spatially scalable encoding and decoding processes that use a method for deriving coding information. More particularly, it relates to a method, also called inter-layer prediction method, for deriving coding information for high resolution images from the coding information of low resolution images.
State-of-the-art scalable hierarchical coding methods allow information to be encoded hierarchically so that it can be decoded at different resolution and/or quality levels. A data stream generated by a scalable coding device is thus divided into several layers, a base layer and one or more enhancement layers, also called high layers. These devices allow a unique data stream to be adapted to variable transmission conditions (bandwidth, error rate . . . ) and also to the capacities of reception devices (CPU, characteristics of the reproduction device . . . ). A spatially scalable hierarchical encoding method encodes (or decodes) a first part of data, called base layer, relating to low resolution images, and from this base layer encodes (or decodes) at least another data part, called enhancement layer, relating to high resolution images. The coding information relating to the enhancement layer is possibly inherited (i.e. derived) from the coding information relating to the base layer by a method called inter-layer prediction method. The derived coding information may comprise: a partitioning pattern associated with a block of pixels of the high resolution image (for splitting said block into several sub-blocks), coding modes associated with said blocks, and possibly motion vectors and one or more reference indices associated with some blocks, identifying the image used to predict said block. A reference image is an image of the sequence used to predict another image of the sequence. Thus, if not explicitly coded in the data stream, the coding information relating to the enhancement layer has to be derived from the coding information relating to low resolution images. State-of-the-art methods for deriving coding information cannot be used for high resolution images whose format is not linked to the format of low resolution images by a dyadic transform.
The invention relates to a method for deriving coding information for at least one image part of a high resolution image from coding information of at least one image part of a low resolution image, each image being divided into non-overlapping macroblocks, themselves divided into non-overlapping blocks of a first size. Non-overlapping sets of three lines of three macroblocks define hyper-macroblocks, and coding information comprises at least macroblock coding modes and block coding modes. According to the invention, at least one macroblock of the at least one low resolution image part, called low resolution macroblock, is associated with each macroblock of the high resolution image part, called high resolution macroblock, so that the associated low resolution macroblock covers at least partly the high resolution macroblock when the low resolution image part, upsampled by a predefined ratio multiple of 1.5 in both the horizontal and vertical directions, is superposed with the high resolution image part. The method comprises the following steps:
According to a preferred embodiment, a macroblock coding mode of a macroblock is called INTER if the macroblock is predicted temporally for coding or is called INTRA if the macroblock is not predicted temporally for coding. A macroblock coding mode is thus derived for a high resolution macroblock from the macroblock coding modes of the low resolution macroblocks associated with the high resolution macroblock as follows:
Each high resolution macroblock of the high resolution image part is divided into four non-overlapping blocks of a first size arranged in two lines of two blocks: one block located top left, called block B1, one block located top right, called block B2, one block located bottom left, called block B3, and one block located bottom right, called block B4. According to a preferred embodiment, a block coding mode of a block is called INTER if the block is predicted temporally for coding or is called INTRA if the block is not predicted temporally for coding. Advantageously, a block coding mode is derived for each high resolution block of a first size which belongs to a center macroblock of a hyper-macroblock from the macroblock coding modes of the four low resolution macroblocks associated with the center macroblock, one low resolution macroblock located top left, called macroblock cMB1, one low resolution macroblock located top right, called macroblock cMB2, one low resolution macroblock located bottom left, called macroblock cMB3, and one low resolution macroblock located bottom right, called macroblock cMB4, as follows:
A block coding mode is derived for each high resolution block of a first size which belongs to a corner macroblock of a hyper-macroblock from the macroblock coding mode of the low resolution macroblock, called macroblock cMB, associated with the corner macroblock as follows:
A block coding mode is derived for each high resolution block of a first size which belongs to a vertical macroblock of a hyper-macroblock from the macroblock coding modes of the two low resolution macroblocks associated with the vertical macroblock, one low resolution macroblock located left, called macroblock cMBl, and one low resolution macroblock located right, called macroblock cMBr, as follows:
A block coding mode is derived for each high resolution block of a first size which belongs to a horizontal macroblock of a hyper-macroblock from the macroblock coding modes of the two low resolution macroblocks associated with the horizontal macroblock, one low resolution macroblock located top, called macroblock cMBu, and one low resolution macroblock located bottom, called macroblock cMBd, as follows:
Preferentially, the method further comprises a step for homogenizing block coding modes of blocks of a first size within each high resolution macroblock when the high resolution macroblock contains at least one block of a first size whose block coding mode is INTRA.
Advantageously, coding information further comprises motion information and the method further comprises a step for deriving motion information for each high resolution macroblock from motion information of the low resolution macroblocks associated with the high resolution macroblock.
The step for deriving motion information for a high resolution macroblock comprises the following steps:
associating with each block of a second size in the high resolution macroblock, called high resolution block of a second size, a block of a second size in the low resolution macroblocks associated with the high resolution macroblock, called low resolution block of a second size, on the basis of the class of the high resolution macroblock and on the basis of the position of the high resolution block of a second size within the high resolution macroblock; and
deriving motion information for each block of a second size in the high resolution macroblock from motion information of the low resolution block of a second size associated with the high resolution block of a second size.
Preferentially, the motion information of one block or one macroblock comprises at least one motion vector having a first and a second component and at least one reference index associated with the motion vector selected among a first or a second list of reference indices, the indices identifying reference images.
Advantageously, after the step for deriving motion information, the method further comprises a step for homogenizing, for each high layer macroblock, motion information between sub-blocks of the same block of a first size. This step consists, for each list of reference indices, in:
identifying, for each high resolution block of a first size of the high layer macroblock, the lowest reference index of its sub-blocks in said list of reference indices;
associating the lowest reference index with each of the sub-blocks whose current reference index is not equal to the lowest reference index, the current reference index becoming a previous reference index; and
associating, with each of the sub-blocks whose previous reference index is not equal to the lowest reference index, the motion vector of one of its neighboring sub-blocks whose previous reference index is equal to the lowest reference index.
Preferentially, the associated motion vector is the motion vector of the first neighboring sub-block encountered when checking first the horizontal neighboring sub-block, secondly the vertical neighboring sub-block and thirdly the diagonal neighboring sub-block.
Preferentially, the components of the motion vectors of each high resolution macroblock in the high resolution image part, and of each block in high resolution macroblocks if any, are scaled by the following equations:
According to a specific embodiment, the predefined ratio equals three divided by two, the blocks of a first size have a size of 8 by 8 pixels, the macroblocks have a size of 16 by 16 pixels, and the blocks of a second size have a size of 4 by 4 pixels.
Preferentially, the method is part of a process for coding video signals and/or is part of a process for decoding video signals.
The invention also relates to a device for coding at least a sequence of high resolution images and a sequence of low resolution images, each image being divided into non-overlapping macroblocks themselves divided into non-overlapping blocks of a first size. It comprises:
Moreover, the invention relates to a device for decoding at least a sequence of high resolution images and a sequence of low resolution images coded with the coding device defined previously, the coded images being represented by a data stream and each image being divided into non-overlapping macroblocks themselves divided into non-overlapping blocks of a first size. It comprises:
According to an important feature of the invention, non-overlapping sets of three lines of three macroblocks in said at least one image part of said high resolution image defining hyper-macroblocks and said coding information comprising at least macroblock coding modes and block coding modes, the inheriting means of the coding and decoding devices comprise:
Advantageously, the coding device further comprises a module for combining said base layer data stream and said enhancement layer data stream into a single data stream.
Advantageously, the decoding device further comprises extracting means for extracting said first part of said data stream and said second part of said data stream from said data stream.
Other features and advantages of the invention will appear with the following description of some of its embodiments, this description being made in connection with the drawings in which:
The invention relates to a method for deriving coding information of at least a part of a high resolution image from coding information of at least a part of a low resolution image when the ratio between the high resolution image part dimensions and the low resolution image part dimensions is a specific ratio, called inter-layer ratio, equal to 3/2, which corresponds to a non dyadic transform. The method can be extended to inter-layer ratios that are multiples of 3/2. Each image is divided into macroblocks. A macroblock of a low resolution image is called low resolution macroblock or base layer macroblock and is denoted BL MB. A macroblock of a high resolution image is called high resolution macroblock or high layer macroblock and is denoted HL MB. The preferred embodiment describes the invention in the context of spatially scalable coding and decoding, and more particularly in the context of spatially scalable coding and decoding in accordance with the standard MPEG4 AVC described in the document ISO/IEC 14496-10 entitled "Information technology—Coding of audio-visual objects—Part 10: Advanced Video Coding". In this case, the low resolution images are coded, and thus decoded, according to the coding/decoding processes described in said document. When coding low resolution images, coding information is associated with each macroblock in said low resolution image. This coding information comprises for example the partitioning and sub-partitioning of the macroblock into blocks, the coding mode (e.g. inter coding mode, intra coding mode . . . ), motion vectors and reference indices. A reference index associated with a current block of pixels identifies the image in which the block used to predict the current block is located. According to MPEG4-AVC, two reference index lists L0 and L1 are used. The method according to the invention thus makes it possible to derive such coding information for the high resolution images, more precisely for at least some macroblocks comprised in these images. The high resolution images are then possibly coded using this derived coding information. In this case, the number of bits required to encode the high resolution images is decreased since no coding information is encoded in the data stream for each macroblock whose coding information is derived from low resolution images. Indeed, since the decoding process uses the same method for deriving coding information for the high resolution images, there is no need to transmit it.
In the sequel, two spatial layers are considered, a low layer (called base layer) corresponding to the images of low resolution and a high layer (called enhancement layer) corresponding to the images of high resolution. The high and low resolution images may be linked by the geometrical relations depicted on the
Width and height of base layer images (i.e. low resolution images) are defined respectively by wbase and hbase. Low resolution images may be a downsampled version of sub-images of enhancement layer images, of dimensions wextract and hextract, positioned at coordinates (xorig, yorig) in the enhancement layer image coordinate system. Low and high resolution images may also be provided by different cameras. In this case, the low resolution images are not obtained by downsampling high resolution images, and the geometrical parameters may be provided by external means (e.g. by the cameras themselves). The values xorig and yorig are aligned on the macroblock structure of the high resolution image (i.e. for a macroblock of size 16 by 16 pixels, xorig and yorig have to be multiples of 16). On
In the context of a spatially scalable coding process such as described in [JSVM1], high resolution macroblocks may be coded using classical coding modes (i.e. intra prediction and inter prediction) as those used to encode low resolution images. Besides, some specific macroblocks of high resolution images may use a new mode called inter-layer prediction mode (i.e. inter-layer motion and texture prediction). This latter mode is notably authorized for enhancement layer macroblocks fully covered by the scaled base layer, that is, whose coordinates (MBx, MBy) verify the following conditions (i.e. grey-colored area in
MBx>=scaled_base_column_in_mbs and
MBx<scaled_base_column_in_mbs+scaled_base_width/16
And
MBy>=scaled_base_line_in_mbs and
MBy<scaled_base_line_in_mbs+scaled_base_height/16
Macroblocks that do not fulfil these conditions may only use classical modes, i.e. intra prediction and inter prediction modes, while macroblocks fulfilling these conditions may use either intra prediction, inter prediction or inter-layer prediction modes. Such enhancement layer macroblocks can exploit inter-layer prediction using scaled base layer motion information, using either "BASE_LAYER_MODE" or "QPEL_REFINEMENT_MODE", as in the case of the macroblock aligned dyadic spatial scalability described in [JSVM1]. When using the "QPEL_REFINEMENT_MODE" mode, a quarter-sample motion vector refinement is achieved. Afterward, the encoding process will have to decide, for each macroblock fully included in the cropping window, which coding mode to select among intra prediction, inter prediction and inter-layer prediction. Before deciding which mode to finally select, it is required to derive, for each macroblock in the grey-colored area, the coding information that will be used to predict this macroblock if the inter-layer coding mode is finally selected by the encoding process.
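A minimal Python sketch of this eligibility test follows, reusing the [JSVM1] variable names quoted above; the function itself is purely illustrative and not part of the standard.

# Minimal sketch of the eligibility test: a high layer macroblock may use
# inter-layer prediction only if it is fully covered by the scaled base layer.
def may_use_inter_layer_prediction(mb_x, mb_y,
                                   scaled_base_column_in_mbs,
                                   scaled_base_line_in_mbs,
                                   scaled_base_width,
                                   scaled_base_height):
    """True if macroblock (mb_x, mb_y) lies in the grey-colored area."""
    in_columns = (scaled_base_column_in_mbs <= mb_x <
                  scaled_base_column_in_mbs + scaled_base_width // 16)
    in_lines = (scaled_base_line_in_mbs <= mb_y <
                scaled_base_line_in_mbs + scaled_base_height // 16)
    return in_columns and in_lines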
The
The method for deriving coding information, also called inter-layer prediction, is described in the sequel for a group of nine macroblocks referenced MHR on
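The following Python sketch illustrates one possible numbering of the nine macroblocks of a hyper-macroblock and their classes (Corner_0..3, Vert_0..1, Hori_0..1 and C, used throughout the sequel). The exact numbering is defined by the figures, which are not reproduced here; the layout below is an assumption consistent with the text (a corner macroblock has one associated base layer macroblock, a vertical macroblock two left/right ones, a horizontal macroblock two top/bottom ones, and the center macroblock four).

# Hypothetical layout of the nine macroblocks MB0..MB8 of a hyper-macroblock,
# scanned line by line; the class names are those used in the sequel.
HYPER_MB_CLASSES = [
    ["Corner_0", "Vert_0", "Corner_1"],   # top line
    ["Hori_0",   "C",      "Hori_1"],     # middle line
    ["Corner_2", "Vert_1", "Corner_3"],   # bottom line
]

def mb_class(i):
    """Class of macroblock MBi, with i = 0..8."""
    return HYPER_MB_CLASSES[i // 3][i % 3]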
According to a preferred embodiment, a prediction macroblock MBi_pred, also called inter-layer motion predictor, is associated with each macroblock MBi of a hyper-macroblock. According to another embodiment, a macroblock MBi inherits directly from base layer macroblocks without using such a prediction macroblock. In this case, MBi_pred is identified with MBi in the method described below.
The method for deriving MBi_pred coding information is depicted on
A macroblock coding mode or macroblock label contains information on the type of macroblock prediction, i.e. temporal prediction (INTER) or spatial prediction (INTRA), and for INTER macroblock coding modes it may further contain information on how a macroblock is partitioned (i.e. divided into sub-blocks). The macroblock coding mode INTRA means that the macroblock will be intra coded, while the macroblock coding mode defined as MODE_X_Y means that the macroblock will be predicted and that it is furthermore partitioned into blocks of size X by Y as depicted on
With each macroblock MBi of a hyper-macroblock is associated a set containing the base layer associated macroblocks, as depicted on
As depicted on
IF mode[cMB]==INTRA, i.e. the macroblock coding mode associated with cMB is the INTRA mode, THEN all 8×8 blocks are labeled as INTRA blocks
ELSE the 8×8 block labels are given by the following table:
Thus for example, if mode[cMB]==MODE_8×16 and if the MBi under consideration is the macroblock referenced Corner_0 on
IF mode[cMB]==INTRA THEN, MBi_pred is labeled INTRA;
ELSE IF mode[cMB]==MODE_16×16 THEN MBi_pred is labeled MODE_16×16;
ELSE MBi_pred is labeled MODE_8×8.
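A minimal Python sketch of this corner rule follows. The table mapping mode[cMB] to the four 8×8 block labels is given by the figures and is not reproduced here, so only the MBi_pred label derivation is shown.

def corner_mb_pred_mode(cmb_mode):
    """MBi_pred label of a corner macroblock, derived from mode[cMB]."""
    if cmb_mode == "INTRA":
        return "INTRA"
    if cmb_mode == "MODE_16x16":
        return "MODE_16x16"
    return "MODE_8x8"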
As depicted on
IF mode[cMBl]==INTRA, THEN B1 and B3 are labeled as INTRA blocks
ELSE the B1 and B3 labels are directly given by the following table:
IF mode[cMBr]==INTRA, THEN B2 and B4 are labeled as INTRA blocks
ELSE B2 and B4 labels are directly given by the following table:
Thus for example, if mode[cMBl]==MODE_8×16, if mode[cMBr]==MODE_8×8 and if the MBi under consideration is the macroblock referenced Vert_0 on
IF mode[cMBl]==INTRA and mode[cMBr]==INTRA THEN, MBi_pred is labeled INTRA;
ELSE IF at least one 8×8 block coding mode is equal to BLK_8×4 THEN MBi_pred is labeled MODE_8×8;
ELSE IF mode[cMBl]==INTRA or mode[cMBr]==INTRA, THEN MBi_pred is labeled MODE_16×16;
ELSE MBi_pred is labeled MODE_8×16.
As depicted on
IF mode[cMBu]==INTRA, THEN B1 and B2 are labeled as INTRA blocks
ELSE the B1 and B2 labels are directly given by the following table:
IF mode[cMBd]==INTRA, THEN B3 and B4 are labeled as INTRA blocks
ELSE B3 and B4 labels are directly given by the following table:
IF mode[cMBu]==INTRA and mode[cMBd]==INTRA THEN, MBi_pred is labeled INTRA;
ELSE IF at least one 8×8 block coding mode is equal to BLK_4×8 THEN MBi_pred is labeled MODE_8×8;
ELSE IF mode[cMBu]==INTRA or mode[cMBd]==INTRA, THEN MBi_pred is labeled MODE_16×16;
ELSE MBi_pred is labeled MODE_16×8.
As depicted on
For each Bj
IF all mode[cMBj] are equal to INTRA THEN, MBi_pred is labeled INTRA;
ELSE MBi_pred is labeled MODE_8×8.
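The vertical, horizontal and center rules above can be gathered in a single illustrative Python sketch; cmb_modes holds the labels of the associated base layer macroblocks and b_modes the four derived 8×8 block labels B1..B4. This is a sketch of the rules as stated, not a normative implementation.

def mb_pred_mode(mb_class, cmb_modes, b_modes):
    """MBi_pred label for Vert_X, Hori_X and C macroblocks."""
    if all(m == "INTRA" for m in cmb_modes):
        return "INTRA"
    if mb_class.startswith("Vert"):
        if "BLK_8x4" in b_modes:
            return "MODE_8x8"
        if "INTRA" in cmb_modes:       # exactly one of cMBl, cMBr is INTRA
            return "MODE_16x16"
        return "MODE_8x16"
    if mb_class.startswith("Hori"):
        if "BLK_4x8" in b_modes:
            return "MODE_8x8"
        if "INTRA" in cmb_modes:       # exactly one of cMBu, cMBd is INTRA
            return "MODE_16x16"
        return "MODE_16x8"
    return "MODE_8x8"                  # center macroblock class C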
The step 12 consists in deriving, for each macroblock MBi_pred, motion information from the motion information of its associated base layer macroblocks.
To this aim, a first step 120 consists in associating with each 4×4 block of the macroblock MBi_pred a base layer 4×4 block, also called low resolution 4×4 block (from the base layer associated macroblocks). In the following, the 4×4 block locations within a macroblock are identified by their numbers as indicated on
The second table defined below gives the number of the associated macroblock (among the four macroblocks referenced 1, 2, 3, and 4 on
A second step 121 consists in inheriting (i.e. deriving) motion information of MBi_pred from the base layer associated macroblocks. For each list Lx (x=0 or 1), each 4×4 block of MBi_pred gets the reference index and motion vector from the associated base layer 4×4 block which has been identified previously by its number. More precisely, the enhancement layer 4×4 block gets the reference index and motion vectors from the base layer block (i.e. partition or sub-partition) to which the associated base layer 4×4 block belongs. For example, if the associated base layer 4×4 block belongs to a base layer macroblock whose coding mode is MODE_8×16, then the 4×4 block of MBi_pred gets the reference index and motion vectors from the base layer 8×16 block to which the associated base layer 4×4 block belongs.
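A hedged Python sketch of step 121 follows. The association of step 120 and the lookup of the enclosing base layer partition are passed in as parameters, since the corresponding tables are given by the figures; the data layout is purely illustrative.

def inherit_motion(hl_blocks_4x4, association, base_partition_motion):
    """hl_blocks_4x4: dict block number -> {'ref': {}, 'mv': {}}.
    association: dict block number -> associated base layer 4x4 block
    (the tables of step 120). base_partition_motion(base_blk, lx) returns
    the (reference index, motion vector) pair of the base layer partition
    or sub-partition containing base_blk, for list Lx."""
    for blk_no, blk in hl_blocks_4x4.items():
        base_blk = association[blk_no]
        for lx in (0, 1):              # lists L0 and L1
            blk['ref'][lx], blk['mv'][lx] = base_partition_motion(base_blk, lx)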
According to a specific embodiment, if the MBi_pred coding mode is not sub-partitioned (e.g. labeled MODE_16×8), then it is not required to check each 4×4 block belonging to it. Indeed, the motion information inherited by one of the 4×4 blocks belonging to one of the macroblock partitions (e.g. a 16×8 block) may be associated with the whole partition.
According to a preferred embodiment, the step 13 consists in cleaning each MBi_pred in order to remove configurations that are not compatible with a given coding standard, in this case MPEG4 AVC. This step may be avoided if the inheriting method is used by a scalable coding process that does not require generating a data stream in accordance with MPEG4 AVC.
To this aim, a step 130 consists in homogenizing the 8×8 blocks of macroblocks MBi_pred with configurations not compatible with the MPEG4-AVC standard by removing these 8×8 block configurations. For example, according to MPEG4-AVC, for each list, 4×4 blocks belonging to the same 8×8 block should have the same reference indices. The reference index for a given list Lx, referenced as rbi(Lx), and the motion vector, referenced as mvbi(Lx), associated with a 4×4 block bi within an 8×8 block are thus possibly merged. In the following, each 4×4 block bi of an 8×8 block B is identified as indicated in
IF (MBi class is equal to Corner_X (with X=0...3) or MBi class is equal to Hori_X (with X=0...1)) THEN,
ELSE, IF (MBi class is equal to Vert_X (with X=0...1))
OTHERWISE nothing is done.
For each 8×8 block B (i.e. B1, B2, B3, B4 as depicted on
IF no 4×4 block uses this list (i.e. none has a reference index in this list), THEN no reference index and no motion vector of this list are set for B
ELSE, reference index rB(Lx) for B is computed as follows
IF the B block coding mode is equal to BLK_8×4 or BLK_4×8 THEN,
ELSE IF the B block coding mode is equal to BLK_4×4
IF (rb1(Lx)!=rB(Lx)) THEN,
    rb1(Lx)=rB(Lx)
    IF (rb2(Lx)==rB(Lx)) THEN, mvb1(Lx)=mvb2(Lx)
    ELSE IF (rb3(Lx)==rB(Lx)) THEN, mvb1(Lx)=mvb3(Lx)
    ELSE IF (rb4(Lx)==rB(Lx)) THEN, mvb1(Lx)=mvb4(Lx)
IF (rb2(Lx)!=rB(Lx)) THEN,
    rb2(Lx)=rB(Lx)
    IF (rb1(Lx)==rB(Lx)) THEN, mvb2(Lx)=mvb1(Lx)
    ELSE IF (rb4(Lx)==rB(Lx)) THEN, mvb2(Lx)=mvb4(Lx)
    ELSE IF (rb3(Lx)==rB(Lx)) THEN, mvb2(Lx)=mvb3(Lx)
IF (rb3(Lx)!=rB(Lx)) THEN,
    rb3(Lx)=rB(Lx)
    IF (rb4(Lx)==rB(Lx)) THEN, mvb3(Lx)=mvb4(Lx)
    ELSE IF (rb1(Lx)==rB(Lx)) THEN, mvb3(Lx)=mvb1(Lx)
    ELSE IF (rb2(Lx)==rB(Lx)) THEN, mvb3(Lx)=mvb2(Lx)
IF (rb4(Lx)!=rB(Lx)) THEN,
    rb4(Lx)=rB(Lx)
    IF (rb3(Lx)==rB(Lx)) THEN, mvb4(Lx)=mvb3(Lx)
    ELSE IF (rb2(Lx)==rB(Lx)) THEN, mvb4(Lx)=mvb2(Lx)
    ELSE IF (rb1(Lx)==rB(Lx)) THEN, mvb4(Lx)=mvb1(Lx)
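The merge above can be condensed into the following Python sketch for one 8×8 block B and one list Lx; r and mv hold the reference indices and motion vectors of b1..b4 (indexed 0..3 here) and rB is the reference index computed for B. The neighbour order matches the pseudocode: horizontal first, then vertical, then diagonal.

# 4x4 neighbour order inside an 8x8 block: horizontal, vertical, diagonal.
NEIGHBOURS = {0: (1, 2, 3),   # b1 checks b2, b3, b4
              1: (0, 3, 2),   # b2 checks b1, b4, b3
              2: (3, 0, 1),   # b3 checks b4, b1, b2
              3: (2, 1, 0)}   # b4 checks b3, b2, b1

def homogenize_8x8(r, mv, rB):
    """Align the four 4x4 blocks of B on the reference index rB."""
    for bi in range(4):
        if r[bi] != rB:
            r[bi] = rB
            for bj in NEIGHBOURS[bi]:
                if r[bj] == rB:
                    mv[bi] = mv[bj]    # borrow the neighbour's motion vector
                    break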
A step 131 consists in cleaning (i.e. homogenizing) the macroblocks MBi_pred with configurations not compatible with MPEG4-AVC by removing within these macroblocks the remaining (i.e. isolated) INTRA 8×8 blocks, enforcing them to be INTER 8×8 blocks. Indeed, MPEG4 AVC does not allow a macroblock to contain both INTRA 8×8 blocks and INTER 8×8 blocks. Step 131 may be applied before step 130. This step is applied to the MBi_pred associated with the macroblocks MBi whose class is Vert_0, Vert_1, Hori_0, Hori_1, or C. In the sequel, Vertical_predictor[B] and Horizontal_predictor[B] represent respectively the vertical and horizontal 8×8 block neighbours of the 8×8 block B.
IF mode[MBi]==MODE_8×8 THEN,
For each 8×8 block BINTRA classified as INTRA
IF Horizontal_predictor[BINTRA] is not classified as INTRA THEN,
ELSE, IF Vertical_predictor[BINTRA] is not classified as INTRA THEN,
ELSE,
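The bodies of the branches above are not spelled out in the text; the Python sketch below assumes that an isolated INTRA 8×8 block takes over the label and motion information of its first non-INTRA neighbour, horizontal first, then vertical, which is consistent with the enforcement described at the beginning of step 131. The final ELSE branch is left open in the text and is therefore not implemented.

# Assumed 8x8 neighbour indices inside a macroblock (B1..B4 as 0..3).
HORIZONTAL_PREDICTOR = {0: 1, 1: 0, 2: 3, 3: 2}
VERTICAL_PREDICTOR   = {0: 2, 1: 3, 2: 0, 3: 1}

def clean_intra_blocks(labels, motion):
    """labels, motion: per-8x8-block lists of length 4 (sketch only)."""
    for b in range(4):
        if labels[b] != "INTRA":
            continue
        for nb in (HORIZONTAL_PREDICTOR[b], VERTICAL_PREDICTOR[b]):
            if labels[nb] != "INTRA":
                labels[b] = labels[nb]   # enforce an INTER 8x8 block
                motion[b] = motion[nb]   # assumed motion inheritance
                break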
The step 14 consists in scaling the derived motion vectors. To this aim, a motion vector scaling is applied to every existing motion vector of the prediction macroblock MBi_pred. A motion vector mv=(dx, dy) is scaled using the following equations:
where sign[x] is equal to 1 when x is positive and −1 when x is negative.
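The scaling equations themselves are not reproduced above. The Python sketch below therefore assumes the usual extended-spatial-scalability form: each component is multiplied by the ratio between the enhancement and base dimensions, with rounding away from zero, which is where the sign[x] definition given above comes into play. The formula is an assumption, not a quotation of the original equations.

def sign(x):
    """sign[x] as defined above: 1 when x is positive, -1 when negative."""
    return 1 if x >= 0 else -1

def scale_mv(dx, dy, w_extract, h_extract, w_base, h_base):
    """Scale a derived motion vector mv=(dx, dy) to the high layer grid."""
    dsx = sign(dx) * ((abs(dx) * w_extract + w_base // 2) // w_base)
    dsy = sign(dy) * ((abs(dy) * h_extract + h_base // 2) // h_base)
    return dsx, dsy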
Steps 10 to 14 make it possible to derive coding information for each MBi (or for each corresponding intermediate structure MBi_pred) fully included in the cropping window from the coding information of the associated macroblocks and blocks of the base layer.
The following optional step consists in predicting texture based on the same principles as inter-layer motion prediction. This step may also be referenced as inter-layer texture prediction step. It can possibly be used for macroblocks fully embedded in the scaled base layer cropping window (grey-colored area in
The process in a decoding device works as follows. Let MBi be an enhancement layer texture macroblock to be interpolated. Texture samples of MBi are derived as follows:
Let (xP, yP) be the position of the upper left pixel of the macroblock in the enhancement layer coordinates reference. A base layer prediction array is first derived as follows:
the corresponding quarter-pel position (x4, y4) of (xP, yP) in the base layer is computed as:
the integer-pel position (xB, yB) is then derived as:
the quarter-pel phase is then derived as:
The base layer prediction array corresponds to the samples contained in the area between (xB−8, yB−8) and (xB+16, yB+16). The same filling process as used in the dyadic case and described in [JSVM1] is applied to fill sample areas corresponding to non-existing or non-available samples (for instance, in case of intra texture prediction, samples that do not belong to intra blocks). The base layer prediction array is then upsampled. The upsampling is applied in two steps: first, the texture is upsampled using the AVC half-pixel 6-tap filter defined in the document JVT-N021 from the Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG, entitled "Draft ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC)" and written by T. Wiegand, G. Sullivan and A. Luthra; then a bilinear interpolation is achieved to build the quarter-pel samples, which results in a quarter-pel interpolation array. For intra texture, this interpolation crosses block boundaries. For residual texture, interpolation does not cross transform block boundaries.
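The explicit formulas for (x4, y4), (xB, yB) and the quarter-pel phase are not reproduced above; the Python sketch below assumes the natural mapping, namely projecting the enhancement layer position into the base layer with quarter-pel precision and splitting it into an integer-pel position and a phase. The variable names are taken from the text; the formulas are assumptions.

def base_layer_position(xP, yP, x_orig, y_orig,
                        w_base, h_base, w_extract, h_extract):
    """Map the upper left pixel (xP, yP) of a macroblock into the base layer."""
    # assumed quarter-pel position (x4, y4) in the base layer
    x4 = ((xP - x_orig) * 4 * w_base) // w_extract
    y4 = ((yP - y_orig) * 4 * h_base) // h_extract
    # integer-pel position (xB, yB)
    xB, yB = x4 >> 2, y4 >> 2
    # quarter-pel phase
    return (xB, yB), (x4 & 3, y4 & 3)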
The prediction sample pred[x, y] at each position (x, y), x=0...N−1, y=0...N−1, of the enhancement layer block is computed as:
pred[x, y]=interp[xl, yl]
where interp[xl, yl] is the quarter-pel interpolated base layer sample at position (xl, yl).
A given macroblock MB of the current layer can exploit inter-layer intra texture prediction only if the co-located macroblocks of the base layer exist and are intra macroblocks. For generating the intra prediction signal for high-pass macroblocks coded in I_BL mode, the corresponding 8×8 blocks of the base layer high-pass signal are directly de-blocked and interpolated, as in the case of 'standard' dyadic spatial scalability. The same padding process is applied for deblocking.
A given macroblock MB of the current layer can exploit inter-layer residual prediction only if the co-located macroblocks of the base layer exist and are not intra macroblocks. At the encoder, the upsampling process consists in upsampling each elementary transform block, without crossing the block boundaries. For instance, if a MB is coded into four 8×8 blocks, four upsampling processes will be applied, each on exactly 8×8 pixels as input. The interpolation process is achieved in two steps: first, the base layer texture is upsampled using the AVC half-pixel 6-tap filter; then a bilinear interpolation is achieved to build the quarter-pel samples. For the interpolated enhancement layer samples, the nearest quarter-pel position is chosen as the interpolated pixel.
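The block-wise constraint above can be illustrated by the following Python sketch: each elementary transform block is extracted and upsampled independently, so the interpolation filter never crosses block boundaries. The upsample_block parameter stands for the two-step half-pel 6-tap plus bilinear quarter-pel interpolation and is assumed, not defined here.

def upsample_residual(mb, block_size, upsample_block):
    """mb: 2D list of residual samples; block_size: e.g. 8 for 8x8 blocks."""
    out = []
    for y0 in range(0, len(mb), block_size):
        for x0 in range(0, len(mb[0]), block_size):
            block = [row[x0:x0 + block_size]
                     for row in mb[y0:y0 + block_size]]
            out.append(upsample_block(block))   # one independent process per block
    return out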
The invention concerns a coding device 8 depicted on
The invention also concerns a decoding device 9 depicted on
According to another embodiment, the decoding device receives two data streams: a base layer data stream and an enhancement layer data stream. In this case, the device 9 does not comprise an extracting module 90.
The invention is not limited to the embodiments described. Particularly, the invention described for two sequences of images, i.e. two spatial layers, may be used to encode more than two sequences of images.
Number | Date | Country | Kind |
---|---|---|---|
05101224.3 | Feb 2005 | EP | regional |
0550477 | Feb 2005 | FR | national |
05102465.1 | Mar 2005 | EP | regional |
05290819.1 | Apr 2005 | EP | regional |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP06/50897 | 2/13/2006 | WO | 00 | 8/16/2007 |