The present disclosure relates to an image processing device and an image processing method, an imaging apparatus, and a program that are associated with processing of image data shot with use of an optical lowpass filter.
A typically well-known single-plate digital camera performs shooting through Bayer coding in converting light into an electrical signal, and performs demosaicing processing on RAW data obtained by the shooting to restore lost pixel-value information.
PTL 1: Japanese Unexamined Patent Application Publication No. 2008-271060
However, true values of the pixel values are not obtainable by typical demosaicing processing, which makes it difficult to avoid degradation in resolution and occurrence of an artifact. A three-plate camera is available as a camera that allows for avoidance of such issues; however, the three-plate camera has a disadvantage of an increase in size of an imaging system and poor portability. Further, it has also been considered to improve resolution by shooting a plurality of images with use of an image stabilizer and synthesizing the plurality of images. However, in such a case, a mechanical mechanism is necessary; therefore, high mechanical accuracy may be necessary.
Further, there is proposed a method in which light incoming through a slit is separated with use of a lowpass filter, and thereafter, information of pixels is restored. However, this is a technique for a line sensor, and it is difficult to apply the method to a two-dimensional imager.
It is desirable to provide an image processing device, an image processing method, an imaging apparatus, and a program that allow a high-resolution image to be obtained.
An image processing device according to an embodiment of the present disclosure includes an image processor that generates image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
An image processing method according to an embodiment of the present disclosure includes: generating image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
An imaging apparatus according to an embodiment of the present disclosure includes: an image sensor; an optical lowpass filter disposed on a light entrance side with respect to the image sensor; and an image processor that generates image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of the optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
A program according to an embodiment of the present disclosure causes a computer to serve as an image processor that generates image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
In the image processing device, the image processing method, the imaging apparatus, or the program according to the embodiment of the present disclosure, on the basis of the plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data is generated.
According to the image processing device, the image processing method, the imaging apparatus, or the program of the embodiment of the present disclosure, on the basis of the plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data is generated, which makes it possible to obtain a high-resolution image.
It is to be noted that effects described here are not necessarily limited and may include any of effects described in the present disclosure.
Hereinafter, embodiments of the present disclosure are described in detail with reference to the drawings. It is to be noted that description is given in the following order.
1. First Embodiment (image processing with use of two pieces of raw image data)
The imaging apparatus according to the present embodiment includes a lens unit 1, a lowpass filter (LPF) 2, an imager (an image sensor) 3, an image processor 4, a memory 5, a display 6, an external memory 7, an operation unit 8, a main controller 40, a lowpass filter controller 41, and a lens controller 42.
As the lowpass filter 2, it is possible to use a variable optical lowpass filter 30 of which lowpass characteristics are varied by controlling a degree of separation of light for incoming light L1, as illustrated in
In the imaging apparatus, light is captured from the lens unit 1, and an image from which the light is separated by the lowpass filter 2 is formed on the imager 3. The imager 3 performs photoelectric conversion and A/D (Analog-to-Digital) conversion of an optical image, and transfers raw image data (RAW data) to the image processor 4.
The image processor 4 performs development processing while using the memory 5, and displays a shooting result on the display 6. Further, the image processor 4 stores the shooting result in the external memory 7. Each of the image processor 4 and the main controller 40 incorporates a CPU (Central Processing Unit) that configures a computer.
The main controller 40 receives an instruction from the operation unit 8, and controls the lens unit 1 through the lens controller 42. Further, the main controller 40 controls a degree of separation of the lowpass filter 2 through the lowpass filter controller 41.
Pixels 10 of the imager 3 typically have a coding pattern called a Bayer pattern as illustrated in
Therefore, in the technology of the present disclosure, a plurality of pieces of raw image data shot with mutually different lowpass characteristics of the lowpass filter 2 are synthesized to generate image data that is higher in resolution than each of the plurality of pieces of raw image data. The image processor 4 calculates a pixel value of at least one predetermined color of a plurality of colors at a position different from positions of pixels of the predetermined color. The “pixel value at the position different from the positions of the pixels of the predetermined color” here refers to a pixel value of the predetermined color (for example, G) at a position different from positions where the pixels of the predetermined color are present in the imager 3, for example. Alternatively, the “pixel value at the position different from the positions of the pixels of the predetermined color” also refers to a pixel value of the predetermined color (for example, G) at a position different from positions where the pixels of the predetermined color are present in the raw image in a case where the shooting is performed with the lowpass filter 2 turned off.
Here, in the present embodiment, “high-resolution image data” refers to image data having more true values or values close to true values than raw image data. For example, as described above, in a case of the Bayer pattern as illustrated in
In the present embodiment, the plurality of pieces of raw image data to be synthesized include two pieces of raw image data. The image processor 4 calculates a true value of one predetermined color of the three colors R, G, and B in a pixel at a position different from positions of pixels of the predetermined color. This makes it possible to calculate a true value of the predetermined color at a position of a pixel of a color other than the predetermined color, for example. More specifically, as the two pieces of raw image data, two kinds of raw image data including an image shot with the lowpass filter 2 turned off and an image shot with the lowpass filter 2 turned on are obtained. A higher-resolution and artifact-free image is obtained by using information obtained from these raw image data.
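The two-shot acquisition described above can be sketched numerically (Python/NumPy; the 4×4 scene, the simplified color layout, and the coefficient values are all assumptions for illustration, not part of the disclosure):

```python
import numpy as np

# Hypothetical full-resolution scene: R, G, B planes of made-up data.
rng = np.random.default_rng(1)
H = W = 4
scene = rng.uniform(0.0, 1.0, (3, H, W))        # planes [R, G, B]

# Simplified color layout: G where (y + x) is even, R on the remaining
# even-row sites, B elsewhere (a stand-in for the Bayer pattern).
ys, xs = np.indices((H, W))
color = np.where((ys + xs) % 2 == 0, 1, np.where(ys % 2 == 0, 0, 2))

def capture(planes):
    """Mosaic the planes the way the imager 3 samples one color per pixel."""
    return np.choose(color, planes)

raw_off = capture(scene)                         # shot with the LPF turned off

# Shot with the LPF turned on: light of every color is optically mixed
# over a 3x3 neighborhood before sampling (wraparound borders keep the
# sketch short; the coefficients are assumptions and sum to 1).
alpha, beta, gamma = 0.36, 0.12, 0.04
kernel = [(0, 0, alpha),
          (-1, 0, beta), (1, 0, beta), (0, -1, beta), (0, 1, beta),
          (-1, -1, gamma), (-1, 1, gamma), (1, -1, gamma), (1, 1, gamma)]
mixed = np.zeros_like(scene)
for dy, dx, c in kernel:
    mixed += c * np.roll(np.roll(scene, dy, axis=1), dx, axis=2)
raw_on = capture(mixed)

print(raw_off.shape, raw_on.shape)               # two Bayer-like mosaics
```

The two mosaics `raw_off` and `raw_on` correspond to the two kinds of raw image data used in this embodiment; the mixing step models the degree of separation of the lowpass filter 2.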
As the lowpass filter 2, it is possible to use a liquid crystal optical lowpass filter (the variable optical lowpass filter 30) as illustrated in
The first birefringent plate 31 is disposed on a light incoming side of the variable optical lowpass filter 30, and, for example, an outer surface of the first birefringent plate 31 serves as a light incoming surface. The incoming light L1 is light that enters the light incoming surface from a subject side. The second birefringent plate 32 is disposed on a light outgoing side of the variable optical lowpass filter 30, and, for example, an outer surface of the second birefringent plate 32 serves as a light outgoing surface. Transmitted light L2 of the variable optical lowpass filter 30 is light that is outputted from the light outgoing surface to outside.
Each of the first birefringent plate 31 and the second birefringent plate 32 has birefringent property, and has a uniaxial crystal structure. Each of the first birefringent plate 31 and the second birefringent plate 32 has a function of performing ps separation of circularly-polarized light utilizing the birefringent property. Each of the first birefringent plate 31 and the second birefringent plate 32 includes, for example, crystal, calcite, or lithium niobate.
The liquid crystal layer 33 includes, for example, a TN (Twisted Nematic) liquid crystal. The TN liquid crystal has optical rotation property that rotates a polarization direction of passing light along rotation of the nematic liquid crystal.
A basic configuration in
The principle of the variable optical lowpass filter 30 is described with reference to
The variable optical lowpass filter 30 is allowed to control a polarization state of light, and to vary the lowpass characteristics continuously. In the variable optical lowpass filter 30, the lowpass characteristics are controllable by changing an electrical field to be applied to the liquid crystal layer 33 (a voltage to be applied across the first electrode 34 and the second electrode 35). For example, as illustrated in
The variable optical lowpass filter 30 allows the lowpass effect to be put in an intermediate state by changing the applied voltage between the voltage Va and the voltage Vb. The characteristics achieved in a case where the lowpass effect becomes maximum are determined by characteristics of the first birefringent plate 31 and the second birefringent plate 32.
In each of the states in
In the state illustrated in
In the state illustrated in
In the variable optical lowpass filter 30, the lowpass characteristics are controllable by changing the applied voltage to control the separation width d. A magnitude of the separation width d corresponds to a degree of separation of light by the variable optical lowpass filter 30. In the present embodiment, the “lowpass characteristics” refer to the separation width d, or the degree of separation of light. As described above, the variable optical lowpass filter 30 allows the lowpass effect to be put in an intermediate state between 0% and 100% by changing the applied voltage between the voltage Va and the voltage Vb. In such a case, the optical rotation in the liquid crystal layer 33 may become an angle between 0 degrees and 90 degrees in the intermediate state. Further, the separation width d in the intermediate state may become smaller than the value dmax of the separation width d in a case where the lowpass effect is 100%. The value of the separation width d in the intermediate state may take any value between 0 and the value dmax depending on the applied voltage. Here, the value of the separation width d may be set to an optimum value corresponding to a pixel pitch of the imager 3. The optimum value of the separation width d may be, for example, a value that allows a light beam entering a specific pixel in a case where the lowpass effect is 0% to be separated so as to enter another pixel adjacent to the specific pixel in a vertical direction, a horizontal direction, a left oblique direction, or a right oblique direction.
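As a hedged illustration of this control, a requested lowpass strength might be mapped to a drive voltage; the linear mapping and the function name are assumptions, since the disclosure does not specify the voltage-to-separation-width curve:

```python
def drive_voltage(strength, va, vb):
    """Map a requested lowpass strength in [0, 1] to an applied voltage.

    A linear mapping between Va (0% effect) and Vb (100% effect) is
    assumed purely for illustration; the real relation between voltage,
    optical rotation in the liquid crystal layer 33, and separation
    width d is nonlinear and must be calibrated per device.
    """
    strength = min(max(strength, 0.0), 1.0)   # clamp to the valid range
    return va + strength * (vb - va)

print(drive_voltage(0.5, 0.0, 5.0))  # 2.5 (midway between Va and Vb)
```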
The image processor 4 converts raw image data (RAW data) outputted from the imager 3 into JPEG (Joint Photographic Experts Group) data, for example.
Next, the image processor 4 carries out γ processing (step S202) and color reproduction processing (step S203) on data of the RGB plane. In the γ processing and the color reproduction processing, a γ curve and a matrix for color reproduction corresponding to spectroscopic characteristics of the imager 3 are applied to the data of the RGB plane, and RGB values are converted into a standard color space such as Rec. 709, for example.
Thereafter, the image processor 4 performs JPEG conversion processing on the data of the RGB plane (step S204). In the JPEG conversion processing, the RGB plane is converted into a YCbCr color space for transmission, and a Cb/Cr component is thinned out to a half in a horizontal direction, and thereafter JPEG compression is performed.
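The JPEG conversion processing in the step S204 can be sketched as follows (Python/NumPy; a Rec. 601-style matrix and mean-based thinning are assumed here for illustration, as the disclosure does not fix the exact constants):

```python
import numpy as np

def rgb_to_ycbcr422(rgb):
    """Convert an (H, W, 3) RGB image in [0, 1] to Y plus horizontally
    halved Cb/Cr, as done before JPEG compression."""
    # Rec. 601-style coefficients (assumed; offsets put chroma around 0.5).
    m = np.array([[ 0.299,   0.587,   0.114 ],
                  [-0.1687, -0.3313,  0.5   ],
                  [ 0.5,    -0.4187, -0.0813]])
    ycc = rgb @ m.T + np.array([0.0, 0.5, 0.5])
    y = ycc[..., 0]
    # Thin out the Cb/Cr components to a half in the horizontal direction
    # by averaging each pair of adjacent samples.
    cb = ycc[..., 1].reshape(ycc.shape[0], -1, 2).mean(axis=2)
    cr = ycc[..., 2].reshape(ycc.shape[0], -1, 2).mean(axis=2)
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr422(np.full((4, 8, 3), 0.5))   # mid-gray test image
print(y.shape, cb.shape, cr.shape)                     # (4, 8) (4, 4) (4, 4)
```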
Next, the image processor 4 carries out storage processing (step S205). In the storage processing, a suitable JPEG header is assigned to the JPEG data, and the data is stored as a file in the external memory 7 or the like.
In essence, the technology of the present disclosure replaces a portion of the demosaicing processing in the step S201 of the typical processing illustrated in
It is to be noted that, even in a case where the technology of the present disclosure is used, as illustrated in steps S105(A) and S105(B) in
As the new image processing, the present embodiment describes the processing of calculating true values of a specific color of the three colors of R, G, and B in the pixels at all the pixel positions. Hereinafter, as an example, description is provided on a case where a pixel array of raw image data is the Bayer pattern as illustrated in
As described above, in the case of the Bayer pattern as illustrated in
G′xy = Gxy (Expression 1)
In terms of pixel positions other than the G pixel positions, upon shooting with the lowpass filter 2 turned on, light of G that has entered a B pixel position encircled as illustrated in
At this time, where GLxy is a value at the G pixel position in a case where shooting is performed with the lowpass filter 2 turned on, it is possible to represent GLxy by the following (Expression 2).
GLxy = αG′xy + βG′x−1y + βG′x+1y + βG′xy−1 + βG′xy+1 + γG′x−1y−1 + γG′x−1y+1 + γG′x+1y−1 + γG′x+1y+1 (Expression 2)
Here, each of G′x−1y, G′x+1y, G′xy−1, and G′xy+1 means a true value for G at the R pixel position or the B pixel position on the top, bottom, left, or right of the G pixel position of GLxy, and is an unknown quantity at the present moment.
α, β, and γ are coefficients determined by the degree of separation of the lowpass filter 2, and are known values controlled by the image processor 4 in accordance with the characteristics of the lowpass filter 2.
G′xy is the value Gxy obtained in a case where shooting is performed with the lowpass filter 2 turned off, according to the (Expression 1). Further, the values G′x−1y−1, G′x−1y+1, G′x+1y−1, and G′x+1y+1 are true values for G at the G pixel positions; therefore, it is possible to replace each of these values with a value obtained in a case where shooting is performed with the lowpass filter 2 turned off.
Accordingly, for the G pixel positions, it is possible to represent the (Expression 2) by the following (Expression 3).
GLxy = αGxy + βG′x−1y + βG′x+1y + βG′xy−1 + βG′xy+1 + γGx−1y−1 + γGx−1y+1 + γGx+1y−1 + γGx+1y+1 (Expression 3)
Viewing the whole image, it is possible to establish expressions of the form of the (Expression 3) equal in number to the G pixels at the respective positions, and the number of unknown quantities is equal to the number of R pixel positions and B pixel positions, resulting in a match between the number of expressions and the number of unknown quantities. Therefore, solving the series of simultaneous equations obtained from the whole image makes it possible to determine all of the unknown quantities. Here, the unknown quantities refer to the true values for G at the R pixel positions and the B pixel positions. Consequently, using data of two images, including an image shot with the lowpass filter 2 turned on and an image shot with the lowpass filter 2 turned off, as the raw image data makes it possible to obtain the true values for G at all of the pixel positions.
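The assembly and solving of these simultaneous equations can be sketched numerically (Python/NumPy; the 8×8 sensor size, wraparound borders, and the coefficient values are assumptions for illustration, not part of the disclosure):

```python
import numpy as np

# Hypothetical 8x8 sensor; G pixels sit where (y + x) is even (Bayer).
rng = np.random.default_rng(0)
H = W = 8
G_true = rng.uniform(0.0, 1.0, (H, W))           # ground-truth G plane
ys, xs = np.indices((H, W))
g_mask = (ys + xs) % 2 == 0                      # True at G pixel positions

# Assumed separation coefficients (known from the controlled degree of
# separation of the lowpass filter); alpha + 4*beta + 4*gamma = 1.
alpha, beta, gamma = 0.36, 0.12, 0.04
kernel = [(0, 0, alpha),
          (-1, 0, beta), (1, 0, beta), (0, -1, beta), (0, 1, beta),
          (-1, -1, gamma), (-1, 1, gamma), (1, -1, gamma), (1, 1, gamma)]

# Simulate the LPF-on shot: each sensel collects a 3x3 mix of true G
# (wraparound borders keep the sketch short).
GL = np.zeros((H, W))
for dy, dx, c in kernel:
    GL += c * np.roll(np.roll(G_true, dy, axis=0), dx, axis=1)

# One (Expression 3) per G pixel; unknowns are true G at non-G positions.
unknowns = {pos: i for i, pos in enumerate(zip(*np.where(~g_mask)))}
g_pixels = list(zip(*np.where(g_mask)))
A = np.zeros((len(g_pixels), len(unknowns)))
b = np.zeros(len(g_pixels))
for r, (y, x) in enumerate(g_pixels):
    rhs = GL[y, x]
    for dy, dx, c in kernel:
        ny, nx = (y + dy) % H, (x + dx) % W
        if g_mask[ny, nx]:
            rhs -= c * G_true[ny, nx]    # known from the LPF-off shot
        else:
            A[r, unknowns[(ny, nx)]] += c
    b[r] = rhs

# Least-squares solve of the stacked (Expression 3) equations.
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
print(float(np.abs(A @ sol - b).max()))          # ~0: equations satisfied
```

In this toy setup with periodic borders the coefficient matrix can be rank-deficient, so a least-squares solve is used and what is checked is that the recovered values satisfy every measurement equation; on an actual sensor, the image boundaries and the choice of coefficients affect the conditioning of the system.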
As described above, in the present embodiment, it is possible for the image processor 4 to calculate true values for one color (G) of the three colors R, G, and B at all the pixel positions. However, it is not possible to calculate true values for the other two colors (R and B) at all the pixel positions.
As described above, in the case of the Bayer pattern as illustrated in
It is to be noted that in a technique according to the present embodiment, it is not possible to calculate true values at all the pixel positions upon generation of R plane data and B plane data; however, it is possible to infer the values at all the pixel positions through interpolation. For example, as illustrated in the step S105(B) in
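The interpolation mentioned above can be sketched as follows (Python/NumPy; the neighborhood-averaging scheme, the wraparound borders, and the 4×4 example are assumptions for illustration, as the disclosure leaves the interpolation method open):

```python
import numpy as np

def interpolate_plane(raw, mask):
    """Infer values at unsampled positions by averaging the known samples
    in each 3x3 neighborhood (wraparound borders; a simple stand-in for
    the demosaicing in step S105(B), which may be more elaborate).

    raw: mosaic values; mask: True where this color was actually sampled.
    """
    vals = np.where(mask, raw, 0.0)
    num = np.zeros_like(vals)
    den = np.zeros_like(vals)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            num += np.roll(np.roll(vals, dy, axis=0), dx, axis=1)
            den += np.roll(np.roll(mask.astype(float), dy, axis=0), dx, axis=1)
    filled = num / np.maximum(den, 1e-9)
    return np.where(mask, raw, filled)       # keep sampled values untouched

# R samples on even-row, even-column sites of a hypothetical 4x4 mosaic.
ys, xs = np.indices((4, 4))
r_mask = (ys % 2 == 0) & (xs % 2 == 0)
raw_r = np.where(r_mask, 1.0, 0.0)           # constant true R of 1.0
full_r = interpolate_plane(raw_r, r_mask)
print(full_r)                                # a full R plane
```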
As described above, according to the present embodiment, the image data that is higher in resolution than each of a plurality of pieces of raw image data is generated on the basis of two pieces of raw image data shot with mutually different lowpass characteristics of the lowpass filter 2, which makes it possible to obtain a high-resolution and high-definition image.
According to the present embodiment, it is possible to obtain shot images with higher resolution and fewer artifacts with a currently-available single-plate digital camera. Further, according to the present embodiment, it is possible to configure an imaging system in a smaller size as compared with a three-plate digital camera. Moreover, according to the present embodiment, a mechanical mechanism is unnecessary, which makes it easier to secure control accuracy as compared with a system using an image stabilizer.
It is to be noted that, in the above description, the true values for G at the X/2 unknown pixel positions are determined on the basis of two pieces of raw image data, which makes it possible to properly restore the pixel values for G at positions that are essentially not sampled in the Bayer pattern. Meanwhile, using three pieces of raw image data with different lowpass characteristics of the lowpass filter 2 makes it possible to obtain the true values for G at more than X/2 unknown pixel positions. Specifically, it is possible to obtain the true values for G at virtual pixel positions that are present midway between actual pixel positions in the Bayer pattern. This makes it possible to further improve resolution.
It is to be noted that the effects described in the description are merely illustrative and non-limiting, and other effects may be included. This applies to effects achieved by the following other embodiments.
Next, description is provided on a second embodiment of the present disclosure. In the following, portions having configurations and workings that are substantially similar to those in the above-described first embodiment are not described as appropriate.
In the above-described first embodiment, description is provided on the processing with use of data of two images in total including one image shot with the lowpass filter 2 turned off, and one image shot with the lowpass filter 2 turned on as raw image data; however, the number of the images to be used is not limited to two. An increase in the number of the images makes it possible to obtain a higher-resolution image.
For example, in a case where one image is shot with the lowpass filter 2 turned off and three images having different degrees of separation are shot with the lowpass filter 2 turned on, that is, in a case where four images in total are shot, it is possible to establish a sufficient number of expressions for the unknown quantities for R and B as well.
In essence, the technology of the present disclosure replaces a portion of the demosaicing processing in the step S201 of the typical processing illustrated in
Even in a case where the technology of the present disclosure is used, as illustrated in step S105 in
In such a manner, in the present embodiment, the plurality of pieces of raw image data to be synthesized include four pieces of raw image data. It is possible for the image processor 4 to calculate the true values R′ for R at positions different from the R pixel positions, the true values G′ for G at positions different from the G pixel positions, and the true values B′ for B at positions different from the B pixel positions with use of four pieces of raw image data shot with mutually different lowpass characteristics of the lowpass filter 2. This makes it possible for the image processor 4 to calculate the true values for each of the three colors R, G, and B at all the pixel positions.
As with the case where the values for G are calculated in the above-described first embodiment, the following (Expression 4), (Expression 5), and (Expression 6) are established for the R pixel positions.
RL1xy = α1Rxy + β1R′x−1y + β1R′x+1y + β1R′xy−1 + β1R′xy+1 + γ1R′x−1y−1 + γ1R′x−1y+1 + γ1R′x+1y−1 + γ1R′x+1y+1 (Expression 4)
RL2xy = α2Rxy + β2R′x−1y + β2R′x+1y + β2R′xy−1 + β2R′xy+1 + γ2R′x−1y−1 + γ2R′x−1y+1 + γ2R′x+1y−1 + γ2R′x+1y+1 (Expression 5)
RL3xy = α3Rxy + β3R′x−1y + β3R′x+1y + β3R′xy−1 + β3R′xy+1 + γ3R′x−1y−1 + γ3R′x−1y+1 + γ3R′x+1y−1 + γ3R′x+1y+1 (Expression 6)
Each of the (Expression 4), the (Expression 5), and the (Expression 6) is an expression for R at the R pixel positions with the lowpass filter 2 controlled to a given degree of separation. Suffixes 1, 2, and 3 in the expressions indicate pixel values or coefficients upon each shooting. If the total number of pixels is X, the number of R pixels is X/4, and the number of unknown quantities is 3X/4. Therefore, increasing the number of expressions by increasing the number of shot images as described above ensures a balance between the number of unknown quantities and the number of expressions, allowing them to be solved as simultaneous equations.
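The balance between the number of expressions and the number of unknown quantities in this four-shot case can be checked with a small counting sketch (Python; the pixel total X is an arbitrary example value, not from the disclosure):

```python
# Counting check for the four-shot case described above (numbers follow
# the text: X total pixels in a Bayer layout, R pixels one quarter).
X = 1024                       # arbitrary example total (multiple of 4)
r_pixels = X // 4              # R pixels -> one expression per LPF-on shot
unknown_r = 3 * X // 4         # true R values sought at non-R positions
shots_on = 3                   # LPF-on shots with distinct coefficients
print(shots_on * r_pixels, unknown_r)   # expressions match unknowns
```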
In the above description, the R pixel is taken as an example; however, the same is true for the B pixel.
Any other configurations, operation, and effects may be substantially similar to those in the above-described first embodiment.
The technology of the present disclosure is not limited to the description of the above-described respective embodiments, and may be modified in a variety of ways.
For example, in the above-described respective embodiments, the Bayer pattern is exemplified as a pixel structure; however, the pixel structure may be a pattern other than the Bayer pattern. For example, a pattern including a portion in which pixels of a same color are adjacent by consecutively disposing two or more pixels of any of R, G, and B may be adopted. For example, a structure including infrared (IR) pixels may be adopted, as illustrated in
Alternatively, a structure including phase-difference pixels for phase-difference system AF (autofocus) may be adopted. In such a case, light-shielding films 21 may be provided on some of pixels as illustrated in
It is also possible to infer, in a similar manner, true values at positions of the above-described phase-difference pixels or of defective pixels produced in manufacturing of the imager 3, treating them as unknown quantities. In such a case, however, it is necessary to increase the number of the expressions because of the increase in the unknown quantities. Accordingly, it is necessary to increase the number of shot images. For example, in a case where some of the G pixels of the R, G, and B pixels are replaced with the phase-difference pixels, the number of unknown G pixels becomes greater than X/2, but the number of unknown R and B pixels remains 3X/4, which makes it possible to obtain true values for R, G, and B at all the pixel positions by taking four images. In other words, in a case where a half or less of the G pixels are replaced with the phase-difference pixels, it is unnecessary to increase the number of images to be shot to more than four. Meanwhile, in a case where more than half of the G pixels are replaced, or where R and B pixels are missing, it is necessary to take more than four images. In a case where a configuration is adopted in which true values for R, G, and B are determined at pixel positions other than the R, G, and B pixels, such as the phase-difference pixels or the IR pixels, it is possible to restore the true values at the positions of the missing R, G, and B pixels. This allows pixels having a function other than R, G, and B to be disposed at higher density. Determining the true values for R, G, and B at positions of the phase-difference pixels or the IR pixels differs from reproduction of defective pixels in that the positions to be determined are known in advance.
Further, in the above-described respective embodiments, the pixel value includes color information; however, the pixel value may include only luminance information.
Furthermore, in the above-described respective embodiments, an image shot with the lowpass filter 2 turned off is always used; however, only images shot with the lowpass filter 2 turned on with different degrees of separation may be used. For example, in the above-described first embodiment, one image shot with the lowpass filter 2 turned off and one image shot with the lowpass filter 2 turned on are used. However, it is only necessary to ensure a balance between the number of unknown quantities and the number of expressions. Therefore, it is possible to obtain all the true values for G with use of two images shot with the lowpass filter 2 turned on with different degrees of separation, as in the above-described first embodiment.
Moreover, in the above-described respective embodiments, description is provided on an example of the variable optical lowpass filter 30 as the lowpass filter 2; however, a configuration in which the lowpass filter is turned on or off by mechanically taking the lowpass filter in and out may be adopted.
Further, in the above-described respective embodiments, synthesis processing of a plurality of images (the step S105 in
Moreover, various forms are conceivable as variations of a camera to which the imaging apparatus illustrated in
Further, the technology of the present disclosure is also applicable to an in-vehicle camera, a surveillance camera, etc.
Furthermore, in the imaging apparatus illustrated in
It is to be noted that processing by the image processor 4 may be executed as a program by a computer. A program of the present disclosure is, for example, a program provided from a storage medium to an information processing device or a computer system that is allowed to execute various program codes. Executing such a program by the information processing device or by a program execution unit in the computer system makes it possible to achieve processing corresponding to the program.
Moreover, a series of image processing by the present technology may be executed by hardware, software, or a combination thereof. In a case where processing by software is executed, it is possible to install a program holding a processing sequence in a memory in a computer that is built in dedicated hardware, and cause the computer to execute the program, or it is possible to install the program in a general-purpose computer that is allowed to execute various kinds of processing, and cause the general-purpose computer to execute the program. For example, it is possible to store the program in the storage medium in advance. In addition to installing the program from the storage medium to the computer, it is possible to receive the program through a network such as LAN (Local Area Network) and the Internet and install the program in a storage medium such as a built-in hard disk.
Further, the present technology may have the following configurations, for example.
(1)
An image processing device, including:
an image processor that generates image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
(2)
The image processing device according to (1), in which the optical lowpass filter is a variable optical lowpass filter that is allowed to change a degree of separation of light for incoming light.
(3)
The image processing device according to (2), in which the optical lowpass filter is a liquid crystal optical lowpass filter.
(4)
The image processing device according to (2) or (3), in which the plurality of pieces of raw image data include raw image data shot with the degree of separation set to zero.
(5)
The image processing device according to any one of (1) to (4), in which
each of the plurality of pieces of raw image data includes data of pixels of a plurality of colors located at pixel positions different for each color, and
the image processor calculates a pixel value for at least one predetermined color of the plurality of colors at a position different from a position of a pixel of the predetermined color.
(6)
The image processing device according to any one of (1) to (5), in which the plurality of pieces of raw image data include two pieces of raw image data.
(7)
The image processing device according to (6), in which
each of the two pieces of raw image data includes data of pixels of three colors, and
the image processor calculates a pixel value for one predetermined color of the three colors at a position different from a position of a pixel of the predetermined color.
(8)
The image processing device according to (7), in which the three colors are red, green, and blue, and the predetermined color is green.
(9)
The image processing device according to any one of (1) to (4), in which
the plurality of pieces of raw image data include four pieces of raw image data,
each of the four pieces of raw image data includes data of pixels of a first color, a second color, and a third color, and
the image processor calculates a pixel value for the first color at a position different from a position of the pixel of the first color, a pixel value for the second color at a position different from a position of the pixel of the second color, and a pixel value for the third color at a position different from a position of the pixel of the third color.
(10)
An image processing method, including:
generating image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
(11)
An imaging apparatus, including:
an image sensor;
an optical lowpass filter disposed on a light entrance side with respect to the image sensor; and
an image processor that generates image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of the optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
(12)
A program causing a computer to serve as an image processor that generates image data on the basis of a plurality of pieces of raw image data shot with mutually different lowpass characteristics of an optical lowpass filter, the image data that is higher in resolution than each of the plurality of pieces of raw image data.
This application claims the benefit of Japanese Priority Patent Application No. 2016-045504 filed with the Japan Patent Office on Mar. 9, 2016, the entire contents of which are incorporated herein by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.
Number | Date | Country | Kind
---|---|---|---
2016-045504 | Mar 2016 | JP | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2017/001693 | 1/19/2017 | WO | 00