This application claims priority to Korean Patent Application No. 2012-0075666 filed on Jul. 11, 2012 in the Korean Intellectual Property Office (KIPO), the entire contents of which are hereby incorporated by reference.
1. Technical Field
Example embodiments of the present invention relate in general to a technique of generating a block for video retrieval and processing a query, and more specifically, to a method of generating a block based on space information of a video and a method of processing a query based on the generated block.
2. Related Art
Along with the rapid popularization of video recording devices (for example, digital cameras, smartphones, and the like), not only experts but also amateurs can easily produce videos. In addition, owing to the development of communication technology, multimedia content such as video can be easily uploaded or downloaded over the Internet.
To download a desired video, a user retrieves the desired video using a search engine. The search engine retrieves the video based on text information such as a title of the video, subtitles included in the video, and the like. Since such a search engine retrieves a video based on only text information of the video, the user cannot accurately retrieve the desired video.
In particular, when a user desires to retrieve a video containing information on a specific region, retrieval based only on text information, without using the space information of the video (for example, the place where the video was photographed), cannot accurately return the desired video.
Accordingly, example embodiments of the present invention are provided to substantially obviate one or more problems due to limitations and disadvantages of the related art.
Example embodiments of the present invention provide a method of generating a block for video retrieval based on space information of frames constituting a video.
Example embodiments of the present invention also provide an apparatus for generating a block for video retrieval based on space information of frames constituting a video.
Example embodiments of the present invention also provide a method of processing a query on the basis of a block that is generated based on space information of frames.
Example embodiments of the present invention also provide an apparatus for processing a query on the basis of a block that is generated based on space information of frames.
In some example embodiments, a method of generating a block for video retrieval, performed by an apparatus for generating the block for the video retrieval, includes detecting a reference frame having at least one of position information and direction information that changes nonlinearly from among frames constituting a video, the position information and the direction information being space information of the frame, and generating a tilt block including a plurality of frames based on the reference frame.
The detecting of the reference frame may include generating a regression line based on a start frame and an end frame among the frames constituting the video, selecting, on the regression line, any point having the same time information as any frame constituting the video, calculating a distance between the point on the regression line and the frame, and determining the frame as the reference frame when the calculated distance is greater than a predefined reference distance.
The detecting of the reference frame may include calculating a median value of direction information based on direction information of the frames constituting the video and determining any frame among the frames constituting the video as the reference frame when a difference between the median value and the direction information of that frame is greater than a predefined reference value.
The generating of the tilt block may include classifying the frames constituting the video into at least two groups based on the reference frame and generating a tilt block that includes the frames constituting a group and is parallel with a line formed by a start frame and an end frame among the frames constituting the group.
In other example embodiments, an apparatus for generating a block for video retrieval, the apparatus includes a detection unit configured to detect a reference frame having at least one of position information and direction information that changes nonlinearly from among frames constituting a video, the position information and direction information being space information of the frame, and a generation unit configured to generate a tilt block including a plurality of frames based on the reference frame, in which the generation unit generates the tilt block in parallel with a line formed by a start frame and an end frame among the plurality of frames.
In still other example embodiments, a method of processing a query by a query processing apparatus includes extracting a tilt block corresponding to the query from among tilt blocks including a plurality of frames constituting a video, extracting two unit blocks corresponding to the query, based on a distance between the query and a start frame constituting the extracted tilt block, from among the unit blocks including the frames constituting the extracted tilt block, and extracting a unit block including the frame corresponding to the query based on position information of a frame included in any unit block from among the extracted two unit blocks and a unit block positioned between the two unit blocks, in which the tilt block is generated in parallel with a line formed by a start frame and an end frame constituting the tilt block.
The extracting of the tilt block corresponding to the query may include, when the query is a range query, detecting critical points at which the range query and the tilt blocks overlap each other and detecting a tilt block including the critical points from among the tilt blocks.
The extracting of the two unit blocks may include extracting a first unit block corresponding to the critical point closest to the start frame and a second unit block corresponding to the critical point farthest from the start frame, from among the unit blocks including the frames constituting the extracted tilt block.
The extracting of the unit block including the frame corresponding to the query may include extracting a unit block including a frame corresponding to the range query based on position information of a frame included in any unit block among the first unit block, the second unit block, and a unit block positioned between the first unit block and the second unit block.
In yet still other example embodiments, a query processing apparatus includes a first extraction unit configured to extract a tilt block corresponding to a query from among tilt blocks including a plurality of frames constituting a video, a second extraction unit configured to extract two unit blocks corresponding to the query, based on a distance between the query and a start frame constituting the extracted tilt block, from among unit blocks including the frames constituting the extracted tilt block, and a third extraction unit configured to extract a unit block including a frame corresponding to the query based on position information of a frame included in any unit block from among the extracted two unit blocks and a unit block positioned between the two unit blocks, in which the tilt block is generated in parallel with a line formed by a start frame and an end frame constituting the tilt block.
Example embodiments of the present invention will become more apparent by describing in detail example embodiments of the present invention with reference to the accompanying drawings, in which:
Since the present invention may be variously modified and have several exemplary embodiments, specific exemplary embodiments will be shown in the accompanying drawings and be described in detail in a detailed description.
However, it should be understood that the particular embodiments are not intended to limit the present disclosure to specific forms; rather, the present disclosure is meant to cover all modifications, equivalents, and alternatives included in the spirit and scope of the present disclosure.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first component may be designated as a second component, and similarly, the second component may be designated as the first component. The term ‘and/or’ includes any combination of a plurality of related listed items or any one of the plurality of related listed items.
When it is mentioned that a certain component is “coupled with” or “connected with” another component, it should be understood that the component may be directly coupled or connected with the other component, but intervening components may also exist between them. In contrast, when it is mentioned that a certain component is “directly coupled with” or “directly connected with” another component, it has to be understood that no component exists between the two components.
In the following description, the technical terms are used only for explaining a specific exemplary embodiment while not limiting the present disclosure. Singular forms used herein are intended to include plural forms unless explicitly indicated otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or a combination thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries should be construed as having meanings consistent with their contextual meanings in the relevant art and, unless clearly defined in this description, are not to be construed in an idealized or overly formal sense.
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. To facilitate overall understanding of the invention, like reference numbers refer to like elements throughout the description of the figures, and repetitive descriptions thereof will be omitted.
Throughout the specification, a video includes a plurality of frames, a start frame denotes a frame that is positioned at a start point among frames constituting the video (hereinafter referred to as video frames), an end frame denotes a frame that is positioned at an end point among the video frames, and a frame is represented as a sector having space information in two dimensions. A unit block denotes a rectangular block including one frame, and a tilt block denotes a rectangular block including a plurality of frames and may be represented as a tilted rectangular block. In addition, the unit block may denote an expected minimum bounding rectangle (MBR), and the tilt block may denote a minimum bounding tilted rectangle (MBTR).
Referring to the accompanying drawings, a frame has space information including position information P, direction information {right arrow over (d)}, viewing angle information θ, and viewing distance information R of the camera that photographs the frame.
Here, the position information P of the camera may be acquired through a global positioning system (GPS) sensor included in the camera and may be represented as latitude and longitude. The direction information {right arrow over (d)} of the camera may be acquired through a compass included in the camera. The viewing angle information θ of the camera denotes the angle of the camera's field of view.
Here, the frame 50 may be represented as a sector. The vertex of the sector denotes a position of a camera that photographs the frame 50. The angle between the two sides of the vertex denotes a viewing angle of the camera. The direction of the line extending from the vertex to the center of the arc of the sector denotes a direction of the camera. The length of each side of the sector denotes a viewing distance of the camera.
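As a concrete illustration of this sector model, the following is a minimal Python sketch. The field names follow the symbols used in this description; the class itself is an assumption for illustration and is not part of the original disclosure:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Frame:
    """One video frame with its space information, modeled as a 2-D sector."""
    t: float                # time information of the frame
    P: Tuple[float, float]  # camera position (e.g., latitude and longitude from GPS)
    d: Tuple[float, float]  # camera direction as a unit vector (e.g., from a compass)
    theta: float            # viewing angle of the camera (angle at the sector's vertex)
    R: float                # viewing distance (length of each side of the sector)
```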
Referring to the accompanying drawings, the method of generating a block for video retrieval includes detecting a reference frame in which at least one of position information and direction information, which are space information of the frame, changes nonlinearly from among the video frames, and generating a tilt block including a plurality of frames based on the detected reference frame.
Process of Detecting Reference Frame Based on Position Information of Frame
Referring to the accompanying drawings, the block generation apparatus may detect a reference frame in which position information changes nonlinearly, based on the position information of the video frames (S100).
(Ps, {right arrow over (d)}s, θs, Rs) denotes position information, direction information, viewing angle information, and viewing distance information of the start frame Fs. (Pe, {right arrow over (d)}e, θe, Re) denotes position information, direction information, viewing angle information, and viewing distance information of the end frame Fe. (Pi, {right arrow over (d)}i, θi, Ri) denotes position information, direction information, viewing angle information, and viewing distance information of any frame Fi. (Pi′, {right arrow over (d)}i′, θi′, Ri′) denotes position information, direction information, viewing angle information, and viewing distance information of any point Fi′.
The block generation apparatus may generate a regression line based on the start frame Fs and the end frame Fe among the video frames.
After generating the regression line, the block generation apparatus may select any point Fi′ on the regression line having the same time information as any frame Fi through Equation 1 below:
where ts is time information of the start frame Fs, te is time information of the end frame Fe, ti is time information of any frame Fi, Ps is position information of the start frame Fs, Pe is position information of the end frame Fe, and Pi′ is position information of any point Fi′ on the regression line.
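Equation 1 itself is not reproduced in this text. A plausible reconstruction from the definitions above, assuming the point Fi′ is obtained by linearly interpolating between the start and end frames according to time, is:

$$P_i' = P_s + \frac{t_i - t_s}{t_e - t_s}\left(P_e - P_s\right)$$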
An algorithm for selecting any point Fi′ on the regression line may be represented as Table 1 below:
where F, FOVstream is a frame group including a plurality of frames, Fs is a start frame among the plurality of frames, ts is time information of the start frame, Ps is position information of the start frame, Fe is an end frame of the plurality of frames, te is time information of the end frame, Pe is position information of the end frame, Fi is any frame of the plurality of frames, and ti is time information of any frame. Lines 5 to 8 of the algorithm shown in Table 1 indicate Equation 1 above.
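Since Table 1 is not reproduced in this text, the following Python sketch illustrates the described selection of interpolated points. It assumes the Frame class sketched above and is an illustration of the described logic, not the original algorithm:

```python
def interpolate_points(frames):
    """For each frame Fi, select the point Fi' on the regression line that
    shares Fi's time information (the linear interpolation of Equation 1)."""
    Fs, Fe = frames[0], frames[-1]  # start frame and end frame
    points = []
    for Fi in frames:
        a = (Fi.t - Fs.t) / (Fe.t - Fs.t)
        points.append((Fs.P[0] + a * (Fe.P[0] - Fs.P[0]),
                       Fs.P[1] + a * (Fe.P[1] - Fs.P[1])))
    return points
```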
After selecting any point Fi′ on the regression line, the block generation apparatus may calculate the distance between the point Fi′ and the corresponding frame Fi.
After calculating the distance between any point Fi′ on the regression line and the corresponding frame Fi, the block generation apparatus may determine the frame Fi as the reference frame when the calculated distance is greater than a predefined reference distance.
An algorithm for detecting a reference frame in which position information changes nonlinearly based on the position information of the video frames may be represented as Table 2 below:
where F, FOVstream is a frame group including a plurality of frames, s is an index of a start frame among the plurality of frames, e is an index of an end frame of the plurality of frames, εP is a predefined reference distance, and MarkupFOVScene is a reference frame.
Lines 6 and 7 of the algorithm that is shown in Table 2 indicate that the distance between each point Fi′ on the regression line and the corresponding frame Fi is calculated.
Lines 8 to 10 of the algorithm that is shown in Table 2 indicate that any frame is determined as the reference frame when the calculated distance is greater than the predefined reference distance εP.
Lines 13 to 19 of the algorithm that is shown in Table 2 indicate that the detection is repeated based on the detected reference frame.
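Table 2 is likewise not reproduced here. The following Python sketch shows one way the described detection could work; it reuses interpolate_points from the sketch above, and the recursive repetition around the detected reference frame is an assumption based on the description of Lines 13 to 19:

```python
import math

def detect_reference_frames(frames, eps_P):
    """Detect reference frames whose position deviates from the regression
    line by more than the predefined reference distance eps_P."""
    if len(frames) < 3:
        return []
    points = interpolate_points(frames)
    dists = [math.dist(F.P, p) for F, p in zip(frames, points)]
    k = max(range(1, len(frames) - 1), key=lambda i: dists[i])
    if dists[k] <= eps_P:
        return []  # position changes (approximately) linearly in this group
    # frames[k] is a reference frame; repeat on the two resulting groups
    return (detect_reference_frames(frames[:k + 1], eps_P)
            + [frames[k]]
            + detect_reference_frames(frames[k:], eps_P))
```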
Process of Detecting Reference Frame Based on Direction Information of Frame
The block generation apparatus may calculate a median value of direction information based on direction information of the video frames (S210). The block generation apparatus may calculate a median value based on frames positioned in a certain time range. In this case, an average of minimum direction information (that is, direction information in which an angle with respect to a certain axis (for example, an X axis) is minimum) and maximum direction information (that is, direction information in which an angle with respect to a certain axis (for example, an X axis) is maximum) among direction information of the frames positioned in the certain time range may be calculated as the median value.
After calculating the median value, the block generation apparatus may determine any frame as the reference frame when a difference between the median value and the direction information of any frame among the video frames is greater than a predefined reference value (S220). On the other hand, when the difference between the median value and the direction information of any frame is equal to or less than the predefined reference value, all operations may be completed or operation S220 may be performed based on another frame.
An algorithm for detecting a reference frame in which direction information changes nonlinearly based on the direction information of the video frames may be represented as Table 3 below:
where F, FOVstream is a frame group including a plurality of frames, s is an index of a start frame among the plurality of frames, e is an index of an end frame among the plurality of frames, ε{right arrow over (d)} is a predefined reference value, and {right arrow over (d)}′i is a median value.
Lines 5 to 7 of the algorithm that is shown in Table 3 indicate that the median value is calculated, and Lines 8 to 15 of the algorithm that is shown in Table 3 indicate that any frame is determined as the reference frame among the frames according to a difference between the median value and the direction information of the frame.
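Table 3 is not reproduced either. A minimal Python sketch of the described direction-based detection follows; treating directions as angles in degrees is a simplifying assumption, and the median value is computed as the average of the minimum and maximum direction, as described above:

```python
import math

def detect_direction_reference_frames(frames, eps_d):
    """Detect reference frames whose direction deviates from the median
    value by more than the predefined reference value eps_d (degrees)."""
    angles = [math.degrees(math.atan2(F.d[1], F.d[0])) for F in frames]
    median = (min(angles) + max(angles)) / 2  # average of min and max direction
    return [F for F, a in zip(frames, angles) if abs(a - median) > eps_d]
```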
The block generation apparatus may generate a tilt block using the reference frame that is detected through operation S100, using the reference frame that is detected through operation S200, using a common reference frame among the reference frames detected through operations S100 and S200, or using both the reference frame detected through operation S100 and the reference frame detected through operation S200, as expressed in Table 4 below:
where F, FOVstream is a frame group including a plurality of frames, εP is a predefined reference distance, ε{right arrow over (d)} is a predefined reference value, S1 is a group of reference frames that are detected through operation S100, and S2 is a group of reference frames that are detected through operation S200.
Process of Generating Tilt Block Based on Reference Frame
After detecting a reference frame, the block generation apparatus may generate a tilt block including a plurality of frames based on the detected reference frame (S300).
First, the block generation apparatus may classify the video frames into at least two groups based on the reference frame (S310).
After classifying the frames into at least two groups based on the reference frame, the block generation apparatus may generate, for each group, a tilt block that includes the frames constituting the group and is parallel with a line formed by a start frame and an end frame among the frames constituting the group (S320).
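As an illustration of this grouping and generation step, the following Python sketch splits the frames at each reference frame and computes, for each group, the boundary distances of a tilt block parallel with the group's start-end line. The helper sector_points, which would sample points covering a frame's sector, is hypothetical, and the exact geometry of the boundary computation is an assumption:

```python
import math

def generate_tilt_blocks(frames, refs):
    """Classify the frames into groups at each reference frame (S310) and
    build one tilt block per group (S320)."""
    cuts = sorted(frames.index(r) for r in refs)
    edges = [0] + cuts + [len(frames) - 1]
    groups = [frames[a:b + 1] for a, b in zip(edges, edges[1:]) if b > a]
    return [make_tilt_block(g) for g in groups]

def make_tilt_block(group):
    """Compute the six index values described in Table 5 below; the block's
    sides are parallel with the line from the group's start to end frame."""
    Ps, Pe = group[0].P, group[-1].P
    vx, vy = Pe[0] - Ps[0], Pe[1] - Ps[1]
    l = math.hypot(vx, vy)
    ux, uy = vx / l, vy / l      # unit vector along the start-end line
    nx, ny = -uy, ux             # unit normal to that line
    r_left = r_right = r_forward = r_back = 0.0
    for F in group:
        a0 = (F.P[0] - Ps[0]) * ux + (F.P[1] - Ps[1]) * uy
        for p in sector_points(F):  # hypothetical: points covering F's sector
            wx, wy = p[0] - Ps[0], p[1] - Ps[1]
            along = wx * ux + wy * uy
            across = wx * nx + wy * ny
            r_left = max(r_left, across)
            r_right = max(r_right, -across)
            r_forward = max(r_forward, along - a0)
            r_back = max(r_back, a0 - along)
    return (Ps, Pe, r_left, r_right, r_forward, r_back)
```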
The angle of the frame 50 shown in the accompanying drawings denotes the viewing angle of the camera, as described above. The unit block 71 for the one frame 50 may be provided as shown in the accompanying drawings.
The index of the tilt block may be represented as Table 5 below:
where Ps is position information of a start frame of a tilt block, Pe is position information of an end frame of the tilt block, and each of rleft, rright, rforward, and rback is a distance from a position of the start frame to a boundary of the tilt block.
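Rendered as a record, the index fields of Table 5 might look as follows in Python; the class name and types are illustrative only. The tuple returned by make_tilt_block in the sketch above maps onto these fields in order:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TiltBlockIndex:
    """Index entry for one tilt block (the fields described for Table 5)."""
    Ps: Tuple[float, float]  # position information of the start frame
    Pe: Tuple[float, float]  # position information of the end frame
    r_left: float            # distance from the start position to the left boundary
    r_right: float           # distance from the start position to the right boundary
    r_forward: float         # distance from the start position to the forward boundary
    r_back: float            # distance from the start position to the back boundary
```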
Referring to the accompanying drawings, the apparatus for generating a block for video retrieval may include a reference frame detection unit 11 and a tilt block generation unit 12.
The reference frame detection unit 11 may detect a reference frame in which at least one of the position information and the direction information, which are space information of the frame, changes nonlinearly among video frames.
Specifically, the reference frame detection unit 11 may generate a regression line based on a start frame and an end frame among the video frames, select any point on the regression line having the same time information as any video frame, calculate a distance between the point on the regression line and the frame, and determine the frame as the reference frame when the calculated distance is greater than a predefined reference distance. Here, a detailed method of the reference frame detection unit 11 determining the reference frame is the same as described in operation S100.
In addition, the reference frame detection unit 11 may calculate a median value of direction information based on direction information of the video frames and determine any frame among the video frames as the reference frame when a difference between the direction information of that frame and the median value is greater than a predefined reference value. Here, a detailed method of the reference frame detection unit 11 determining the reference frame is the same as described in operation S200.
The tilt block generation unit 12 may generate a tilt block including a plurality of frames based on the reference frame that is detected by the reference frame detection unit 11. Specifically, the tilt block generation unit 12 may classify the video frames into at least two groups based on the reference frame and generate a tilt block that includes frames constituting a group and is parallel with a line that is formed by a start frame and an end frame among the frames constituting the group. Here, a detailed method of the tilt block generation unit 12 generating the tilt block is the same as described in operation S300.
Functions performed by the reference frame detection unit 11 and the tilt block generation unit 12 may also be performed by any processor (for example, a central processing unit (CPU), a graphics processing unit (GPU), etc.). The operations described above may be performed by such a processor.
In addition, the reference frame detection unit 11 and the tilt block generation unit 12 may be implemented in a single form, as one physical device, or as one module. Moreover, the reference frame detection unit 11 and the tilt block generation unit 12 may be implemented as a plurality of physical devices or groups instead of one physical device or group.
Referring to the accompanying drawings, the query processing apparatus may extract a tilt block corresponding to a query from among tilt blocks including a plurality of frames constituting a video (S400).
Since one video has one or more tilt blocks, the query processing apparatus may extract a tilt block corresponding to a query from among the tilt blocks of the video. In this case, since the position information of the tilt block may be found through the index shown in Table 5 above, the query processing apparatus may extract the tilt block corresponding to the query based on the index. Here, since the tilt block is generated through the above-described block generation method for video retrieval, the tilt block is generated in parallel with a line formed by a start frame and an end frame that constitute the tilt block.
After extracting the tilt block corresponding to the query, the query processing apparatus may extract two unit blocks corresponding to the query based on a distance between the query and the start frame constituting the extracted tilt block from among unit blocks including frames constituting the extracted tilt block (S500).
After extracting the two unit blocks, the query processing apparatus may extract a unit block including a frame corresponding to a query based on position information of frames included in any unit block from among the extracted two unit blocks and unit blocks positioned between the two unit blocks (S600).
In this case, the query processing apparatus may extract a tilt block corresponding to the query by applying different methods depending on the type of the query (for example, a point query, a range query, etc.), extract two unit blocks corresponding to the query from among unit blocks including frames constituting the extracted tilt block, and extract a unit block including a frame corresponding to the query.
Method of Extracting Frame According to Point Query
Since the point query corresponds to the tilt block extracted in operation S400 and the tilt block includes a plurality of frames, it is inefficient to scan all frames included in the tilt block in order to extract a frame corresponding to the point query. To solve this problem, a query processing method according to an embodiment of the present invention scans only some of the frames included in the tilt block to extract the frame corresponding to the point query.
The tilt block 70 may represent a group in which the unit blocks 71 are cascaded, and there may be a plurality of unit blocks 71 corresponding to the point query 80. Among the plurality of unit blocks 71 corresponding to the point query 80, a first frame and a last frame on a time axis may be calculated through Equations 2 and 3 below (S500):
where i and j are numbers of frames included in the tilt block 70, and the frame numbers are marked sequentially from a start frame. For example, when the tilt block 70 includes 10 frames, a frame number of the start frame is 1 and a frame number of the end frame is 10. In addition, n is the total number of frames included in the tilt block 70, l is a length of a line formed by the start frame and the end frame of the tilt block 70, D is a distance from the start frame of the tilt block 70 to a point query 80 that is projected on the line formed by the start frame and the end frame of the tilt block 70, and rforward and rback are indices of the tilt block that are described in Table 5 above.
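Equations 2 and 3 themselves are not reproduced in this text. Assuming that the n frames are evenly spaced along the line of length l, so that the k-th frame lies at distance (k−1)l/(n−1) from the start frame, a plausible reconstruction is:

$$i = \max\left(1,\ \left\lceil \frac{(D - r_{forward})(n-1)}{l} \right\rceil + 1\right), \qquad j = \min\left(n,\ \left\lfloor \frac{(D + r_{back})(n-1)}{l} \right\rfloor + 1\right)$$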
Since there is a frame corresponding to the point query 80 between an i-th frame calculated through Equation 2 and a j-th frame calculated through Equation 3, it is possible to extract a frame corresponding to the point query 80 by scanning a frame positioned between the i-th frame and the j-th frame instead of scanning all the frames included in the tilt block 70.
When the point query 80 corresponds to the k-th frame, the point query 80 may be positioned forward within rforward and rearward within rback from Pk′ of the unit block 71 including the k-th frame.
Equation 4 below may be defined based on the above description, and it can be seen that the k-th frame satisfying Equation 4 corresponds to the point query 80:
where n is a total number of frames included in the tilt block, l is a length of a line formed by the start frame and the end frame of the tilt block, D is a distance from the start frame of the tilt block to a point query that is projected on the line formed by the start frame and the end frame of the tilt block, and rforward and rback are indices of the tilt block that are described in Table 5 above.
In addition, (k−1)l/(n−1) indicates the distance from the start frame of the tilt block to the k-th frame.
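Equation 4 is likewise not reproduced. Under the same even-spacing assumption as above, the condition that the k-th frame corresponds to the point query can be reconstructed as:

$$D - r_{forward} \;\le\; \frac{(k-1)\,l}{n-1} \;\le\; D + r_{back}$$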
That is, the query processing apparatus may extract the frame corresponding to the point query using Equation 4 (S600).
An algorithm for extracting the frame corresponding to the point query is as expressed in Table 6 below:
where Ps is position information of the start frame, Pe is position information of the end frame, q is the point query, n is the number of frames included in the tilt block, L is a list of frames corresponding to the point query, l is the distance between the position information of the start frame and the position information of the end frame, rleft, rright, rforward, and rback are indices of the tilt block, and B is a boundary of the tilt block.
Lines 8 to 20 of the algorithm shown in Table 6 indicate that the unit block including the frame corresponding to the point query that is described with reference to Equations 2, 3, and 4 is extracted.
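Since Table 6 is not reproduced, the following Python sketch shows the candidate-range computation of Equations 2 to 4 as reconstructed above. Here projected_distance and point_in_frame are stand-ins for the projectedDistance and pointFOVIntersect subroutines of Table 8 below, and block is a TiltBlockIndex from the earlier sketch:

```python
import math

def projected_distance(Ps, Pe, q):
    """Distance from Ps to the projection of q on the line Ps-Pe."""
    vx, vy = Pe[0] - Ps[0], Pe[1] - Ps[1]
    wx, wy = q[0] - Ps[0], q[1] - Ps[1]
    return (vx * wx + vy * wy) / math.hypot(vx, vy)

def point_in_frame(F, q):
    """True when the point q lies inside frame F's sector."""
    dx, dy = q[0] - F.P[0], q[1] - F.P[1]
    if math.hypot(dx, dy) > F.R:
        return False
    ang = math.atan2(dy, dx) - math.atan2(F.d[1], F.d[0])
    ang = (ang + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return abs(ang) <= math.radians(F.theta) / 2

def point_query(block, frames, q):
    """Scan only the frames between the i-th and j-th unit blocks."""
    n, l = len(frames), math.dist(block.Ps, block.Pe)
    D = projected_distance(block.Ps, block.Pe, q)
    i = max(1, math.ceil((D - block.r_forward) * (n - 1) / l) + 1)  # Equation 2
    j = min(n, math.floor((D + block.r_back) * (n - 1) / l) + 1)    # Equation 3
    return [F for F in frames[i - 1:j] if point_in_frame(F, q)]     # Equation 4 check
```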
Method of Extracting Frame According to Range Query
The range query is a request to provide a frame having specific position information and may include a plurality of pieces of position information of a desired frame. The range query has the plurality of pieces of position information and thus may be represented as a convex polygon.
The query processing apparatus may extract critical points at which the tilt block and the range query overlap each other. A critical point is defined as a crossing point of the boundary of the tilt block and an edge of the range query, an apex of the range query positioned inside the tilt block, or an apex of the tilt block positioned inside the range query.
The query processing apparatus may extract a tilt block having the critical point as the tilt block corresponding to the range query from among tilt blocks (S400).
After extracting the tilt block corresponding to the range query, the query processing apparatus may calculate a unit block including a first frame on a time axis among a plurality of unit blocks corresponding to the range query through Equation 5 below and may calculate a unit block including a last frame on the time axis among the plurality of unit blocks corresponding to the range query through Equation 6 below (S500):
where i and j are numbers of frames included in the tilt block, and the frame numbers are marked sequentially from a start frame. For example, when the tilt block includes 10 frames, a frame number of the start frame is 1 and a frame number of the end frame is 10. n is a total number of frames included in the tilt block, l is a length of a line formed by the start frame and the end frame of the tilt block, Dmin is a length between the start frame and a critical point that is present at a position closest to the start frame among critical points, Dmax is a length between the start frame and a critical point that is present at a position farthest from the start frame among the critical points, and rforward and rback are indices of the tilt block that are described in Table 5 above.
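Equations 5 and 6 are not reproduced in this text either. Under the same even-spacing assumption as Equations 2 and 3, a plausible reconstruction replaces D with Dmin and Dmax:

$$i = \max\left(1,\ \left\lceil \frac{(D_{min} - r_{forward})(n-1)}{l} \right\rceil + 1\right), \qquad j = \min\left(n,\ \left\lfloor \frac{(D_{max} + r_{back})(n-1)}{l} \right\rfloor + 1\right)$$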
That is, a unit block including an i-th frame calculated through Equation 5 is a unit block corresponding to a critical point that is present closest to the start frame, and a unit block including a j-th frame calculated through Equation 6 is a unit block corresponding to a critical point that is present farthest from the start frame.
Since there is a frame corresponding to the range query between the i-th frame calculated through Equation 5 and the j-th frame calculated through Equation 6, it is possible to extract a frame corresponding to the range query by scanning a frame positioned between the i-th frame and the j-th frame instead of scanning all the frames included in the tilt block.
After extracting two unit blocks corresponding to the range query, the query processing apparatus may extract the frame corresponding to the range query using Equation 4 above.
An algorithm for extracting the frame corresponding to the range query is as expressed in Table 7 below:
where Ps is position information of the start frame, Pe is position information of the end frame, Q is the range query, n is the number of frames included in the tilt block, L is a list of frames corresponding to the range query, l is the distance between the position information of the start frame and the position information of the end frame, and B is a boundary of the tilt block.
Lines 8 to 34 of the algorithm shown in Table 7 indicate that the unit block including the frame corresponding to the range query that is described with reference to Equations 4, 5, and 6 is extracted.
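Table 7 is not reproduced; a Python sketch of the described range query processing follows. The helpers critical_points and frame_overlaps stand in for the getIntersections and polygonFOVIntersect subroutines of Table 8 below and are assumed rather than taken from the original; projected_distance is the helper from the point query sketch above:

```python
import math

def range_query(block, frames, Q):
    """Extract the frames of one tilt block corresponding to a range query Q
    by scanning only the unit blocks between the two extreme critical points."""
    cps = critical_points(block, Q)  # crossing points and enclosed apexes
    if not cps:
        return []
    ds = [projected_distance(block.Ps, block.Pe, c) for c in cps]
    d_min, d_max = min(ds), max(ds)
    n, l = len(frames), math.dist(block.Ps, block.Pe)
    i = max(1, math.ceil((d_min - block.r_forward) * (n - 1) / l) + 1)  # Equation 5
    j = min(n, math.floor((d_max + block.r_back) * (n - 1) / l) + 1)    # Equation 6
    return [F for F in frames[i - 1:j] if frame_overlaps(F, Q)]
```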
Referring to the accompanying drawings, the query processing apparatus may include a tilt block extraction unit 21, a unit block extraction unit 22, and a frame extraction unit 23.
The tilt block extraction unit 21 may extract a tilt block corresponding to the query from among tilt blocks including a plurality of frames constituting a video. Here, a detailed method of the tilt block extraction unit 21 extracting a tilt block corresponding to the query is the same as described in operation S400.
The unit block extraction unit 22 may extract two unit blocks corresponding to the query based on a distance between the query and the start frame constituting the extracted tilt block from among unit blocks including frames constituting the extracted tilt block. Here, a detailed method of the unit block extraction unit 22 extracting a unit block corresponding to the query is the same as described in operation S500.
The frame extraction unit 23 may extract a unit block including a frame corresponding to a query based on position information of frames included in any unit block from among the extracted two unit blocks and unit blocks positioned between the two unit blocks. Here, a detailed method of the frame extraction unit 23 extracting a unit block including the frame corresponding to the query is the same as described in operation S600.
Functions performed by the tilt block extraction unit 21, the unit block extraction unit 22, and the frame extraction unit 23 may also be performed by any processor (for example, a central processing unit (CPU), a graphics processing unit (GPU), etc.). The operations described above may be performed by such a processor.
In addition, the tilt block extraction unit 21, the unit block extraction unit 22, and the frame extraction unit 23 may be implemented in a single form, as one physical device, or as one module. Moreover, the tilt block extraction unit 21, the unit block extraction unit 22, and the frame extraction unit 23 may be implemented as a plurality of physical devices or groups instead of one physical device or group.
Table 8 below represents the subroutines shown in Tables 1, 2, 3, 4, 6, and 7.
Here, pointPolygonIntersect(q, P) indicates “true” when a point query q and a polygon P (for example, a tilt block, a unit block, etc.) overlap each other, pointFOVIntersect(q, F) indicates “true” when the point query q and a frame F overlap each other, polygonFOVIntersect(P, F) indicates “true” when the polygon P (for example, a tilt block, a unit block, etc.) and the frame F overlap each other, getIntersections(P1, P2) indicates all crossing points between the polygon P1 (for example, a tilt block, a unit block, etc.) and the polygon P2 (for example, a range query, etc.), addList(L, Fk) indicates that the frame Fk is added to the list L of frames corresponding to the query, and projectedDistance(Ps, Pe, q) indicates the distance between the start frame and the query q projected on the line formed by the start frame and the end frame.
Result of Experiment
Table 9 below compares the query processing time and memory usage of the query processing method according to an embodiment of the present invention with those of conventional query processing methods.
Here, GeoTree is the query processing method according to an embodiment of the present invention, and MBR-Filtering and R-Tree are conventional query processing methods.
With respect to the range query, 10,000 queries were generated at random to conduct the experiment. As a result, it can be seen that GeoTree, the query processing method according to an embodiment of the present invention, processed the range query most quickly. Here, the numerical values in parentheses are standard deviations.
With respect to the point query, 100,000 queries were generated at random to conduct the experiment. As a result, it can be seen that GeoTree, the query processing method according to an embodiment of the present invention, processed the point query most quickly. Here, the numerical values in parentheses are standard deviations.
With respect to memory usage, it can be seen that GeoTree, the query processing method according to an embodiment of the present invention, used the smallest amount of memory.
Here, εP is the predefined reference distance described in operation S140, and ε{right arrow over (d)} is the predefined reference value described in operation S220.
It can be seen from the accompanying drawings how the query processing performance varies according to εP and ε{right arrow over (d)}.
According to an embodiment of the present invention, it is possible to process the same number of queries in less time and with less memory than conventional techniques by generating a tilt block based on position information and direction information of frames that change linearly and processing a query based on the tilt block.
While the example embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions, and alterations may be made herein without departing from the scope of the invention.
Number | Date | Country | Kind
---|---|---|---
10-2012-0075666 | Jul 2012 | KR | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/KR2013/001946 | 3/11/2013 | WO | 00