This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2012-084143, filed on Apr. 2, 2012; the entire contents of which are incorporated herein by reference.
Embodiments of the present invention generally relate to an image processing apparatus, a stereoscopic image display apparatus, an image processing method, and a computer program product.
In recent years, a device capable of generating three-dimensional images (volume data) has been practically used in the field of medical diagnostic imaging devices such as an X-ray computed tomography (CT) device, a magnetic resonance imaging (MRI) device, or an ultrasonic diagnostic device. Moreover, a technique of rendering volume data from an arbitrary viewpoint has been practically used. In recent years, a technique of rendering volume data from multiple viewpoints to generate parallax images and displaying the parallax images stereoscopically on a stereoscopic image display apparatus has been investigated.
In order to display volume data on the stereoscopic image display apparatus effectively, it is important to control the amount of pop-out of the volume data so that it falls within an appropriate range. The amount of pop-out can be controlled by changing the amount of parallax. When a display target object is drawn as computer graphics (CG), as in rendering of volume data, the amount of parallax may be changed by changing the camera interval. When the camera interval is widened, the amount of parallax increases, and when the camera interval is narrowed, the amount of parallax decreases. However, since the relationship between the camera interval and the amount of pop-out varies depending on the hardware specification of the stereoscopic image display apparatus, controlling the amount of pop-out via the camera interval is neither versatile nor intuitive.
A conventional technique of intuitively controlling the amount of pop-out via an interface called a boundary box is known. The boundary box is a region which is to be reproduced on the stereoscopic image display apparatus in a virtual space of the CG. When the boundary box is disposed in the virtual space, an appropriate number of cameras are automatically disposed at an appropriate interval so that a region inside the boundary box is reproduced on the stereoscopic image display apparatus. When the depth range of the boundary box is widened, the camera interval is narrowed, and the amount of pop-out decreases. Conversely, when the depth range of the boundary box is narrowed, the camera interval increases, and the amount of pop-out increases. In this manner, it is possible to control the amount of pop-out of a display target object by changing the depth range of the boundary box.
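The inverse relationship described above can be illustrated with a short sketch (the function name and the proportionality constant k are assumptions for illustration only, not a mapping defined by any particular display apparatus):

```python
def camera_interval(depth_range, k=1.0):
    """Toy model: the camera interval chosen to reproduce the boundary
    box is inversely proportional to the boundary box's depth range
    (k is an assumed proportionality constant)."""
    return k / depth_range

# Widening the depth range narrows the camera interval, so the amount of
# pop-out decreases; narrowing the depth range has the opposite effect.
assert camera_interval(depth_range=2.0) < camera_interval(depth_range=1.0)
```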
In the conventional technique, it can be understood that the cross-section at the center of the boundary box has the highest resolution (density of light beams emitted from the pixels of the display panel) and that the near-side surface and the far-side surface correspond to the lower limit of the resolution. However, since the resolution changes non-linearly in the depth direction from the near-side surface to the far-side surface, it is difficult to understand the display resolution at an arbitrary position inside the boundary box (in other words, at any position in the depth direction of the boundary box).
According to an embodiment, an image processing apparatus includes an acquiring unit configured to acquire volume data of a three-dimensional image; and a superimposed image generating unit configured to generate a superimposed image that is made by superimposing light information on a depth image when a parallax image obtained by rendering the volume data from multiple viewpoints is displayed as a stereoscopic image. The light information represents a relationship between a position in a depth direction of the stereoscopic image and resolution of the stereoscopic image. The depth image is obtained by rendering the volume data from a depth viewpoint at which the entire volume data in the depth direction is viewable.
Hereinafter, embodiments of an image processing apparatus, a stereoscopic image display apparatus, an image processing method, and a computer program product according to the present invention will be described in detail with reference to the accompanying drawings.
The image display system 1 generates a stereoscopic image from the volume data of a three-dimensional image generated by the medical diagnostic imaging device 10. Moreover, the image display system 1 displays the generated stereoscopic image on a display unit, thereby providing a stereoscopically viewable three-dimensional image to a physician or an examination technician who works in a hospital. A stereoscopic image is an image including multiple parallax images having different parallaxes. Hereinafter, the respective devices will be described in order.
The medical diagnostic imaging device 10 is a device that can generate volume data of a three-dimensional image. Examples of the medical diagnostic imaging device 10 include an X-ray diagnostic device, an X-ray computed tomography (CT) device, a magnetic resonance imaging (MRI) device, an ultrasonic diagnostic device, a single photon emission computed tomography (SPECT) device, a positron emission tomography (PET) device, a SPECT-CT device in which a SPECT device and an X-ray CT device are integrated, a PET-CT device in which a PET device and an X-ray CT device are integrated, or a group of these devices.
The medical diagnostic imaging device 10 generates volume data by imaging a subject. For example, the medical diagnostic imaging device 10 collects projection data or data of an MR signal by imaging a subject and reconstructs multiple (for example, 300 to 500 pieces of) slice images (cross-sectional images) taken along the body-axis direction of the subject to thereby generate volume data. Specifically, as illustrated in
The image storage device 20 is a database that stores three-dimensional images. Specifically, the image storage device 20 stores the volume data and the position information transmitted from the medical diagnostic imaging device 10 and archives the volume data and the position information.
The stereoscopic image display apparatus 30 is a device that displays multiple parallax images having different parallaxes so that a viewer can observe stereoscopic images. The stereoscopic image display apparatus 30 may be one that employs a 3D display method such as, for example, an integral imaging method (II method) or a multi-view method. Examples of the stereoscopic image display apparatus 30 include a TV, a PC, or the like that enables viewers to view stereoscopic images with the naked eye. The stereoscopic image display apparatus 30 of the present embodiment performs a volume rendering process on the volume data acquired from the image storage device 20 and generates and displays a group of parallax images. The group of parallax images is a group of images generated by performing a volume rendering process on the volume data while moving the viewpoint position by a predetermined parallax angle, and is made up of multiple parallax images having different viewpoint positions.
The display unit 50 displays the stereoscopic images generated by the image processing unit 40. As illustrated in
A direct-view two-dimensional display, for example, an organic electroluminescence (EL), a liquid crystal display (LCD), a plasma display panel (PDP), or a projection display is used as the display panel 52. Moreover, the display panel 52 may include a backlight.
The light beam control unit 54 is disposed to face the display panel 52 with a gap interposed therebetween. The light beam control unit 54 controls the emission direction of the light beams from the respective pixels of the display panel 52. The light beam control unit 54 has a configuration in which multiple optical apertures for emitting light beams, each extending linearly, are arranged in the first direction. For example, a lenticular sheet in which multiple cylindrical lenses are arranged, a parallax barrier in which multiple slits are arranged, or the like is used as the light beam control unit 54. The optical apertures are disposed so as to correspond to the respective elemental images of the display panel 52.
In the present embodiment, although the stereoscopic image display apparatus 30 has a vertical stripe arrangement in which the sub-image elements of the same color component are arranged in the second direction, and each color component is repeatedly arranged in the first direction, the present invention is not limited to this. Moreover, in the present embodiment, although the light beam control unit 54 is disposed so that the extension direction of the optical aperture is identical to the second direction of the display panel 52, the present invention is not limited to this. For example, the light beam control unit 54 may be disposed so that the extension direction of the optical aperture is inclined with respect to the second direction of the display panel 52.
As illustrated in
In the respective elemental images 24, light beams emitted from the pixels (pixels 241 to 243) of the respective parallax images reach the light beam control unit 54. Moreover, the moving direction and the spreading of the light beams are controlled by the light beam control unit 54, and the light beams are emitted toward the entire surface of the display unit 50. For example, in the respective elemental images 24, the light beams emitted from the pixel 241 of the parallax image #1 are emitted in the direction indicated by arrow Z1. Moreover, in the respective elemental images 24, the light beams emitted from the pixel 242 of the parallax image #2 are emitted in the direction indicated by arrow Z2. Moreover, in the respective elemental images 24, the light beams emitted from the pixel 243 of the parallax image #3 are emitted in the direction indicated by arrow Z3. In this manner, in the display unit 50, the emission direction of the light beams emitted from the respective pixels of the respective elemental images 24 is adjusted by the light beam control unit 54.
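The way each pixel of an elemental image is sent in its own direction can be sketched with a simplified aperture geometry (the function, the numeric values of psp and g, and the flat-lens model are assumptions for illustration, not the actual optics of the display unit 50):

```python
import math

def emission_angle(pixel_index, num_parallax, psp, g):
    """Toy aperture geometry: a subpixel's horizontal offset from the
    center of its optical aperture, together with the gap g between the
    display panel and the light beam control unit, fixes the direction
    in which its light beam leaves the display."""
    offset = (pixel_index - (num_parallax - 1) / 2.0) * psp
    return math.degrees(math.atan2(offset, g))

# A three-parallax elemental image: pixels #1 to #3 leave in three
# distinct directions (the arrows Z1 to Z3 in the description above).
angles = [emission_angle(i, num_parallax=3, psp=0.065, g=2.0) for i in range(3)]
```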
Next, a detailed content of the image processing unit 40 will be described.
The acquiring unit 41 accesses the image storage device 20 to acquire the volume data generated by the medical diagnostic imaging device 10. The volume data may include position information for specifying the positions of respective organs such as a bone, a blood vessel, a nerve, or a tumor. The format of the position information is optional. For example, the position information may have a format in which identification information for identifying the type of an organ is managed in correlation with a group of voxels that constitute the organ, or a format in which identification information for identifying the type of the organ to which a voxel belongs is added to each of the voxels included in the volume data. The volume data may also include information on the coloring and transparency used when the respective organs are rendered.
The parallax image generating unit 42 generates parallax images (a group of parallax images) of the volume data by rendering the volume data acquired by the acquiring unit 41 from multiple viewpoints. In rendering of the volume data, various existing volume rendering techniques can be used.
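As one possible sketch of this rendering step (the linear camera track, the fixed spacing, and all names are illustrative assumptions; the embodiment only requires multiple viewpoints with a set interval):

```python
def rendering_viewpoints(center_x, interval, count):
    """Place `count` viewpoints spaced `interval` apart on a horizontal
    line centered on center_x; rendering the volume data once per
    viewpoint yields a group of parallax images with different
    viewpoint positions."""
    start = center_x - interval * (count - 1) / 2.0
    return [start + i * interval for i in range(count)]

# Nine viewpoints at unit interval; widening `interval` increases the
# amount of parallax between adjacent parallax images.
viewpoints = rendering_viewpoints(0.0, interval=1.0, count=9)
```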
The description is continued by returning to
The superimposed image generating unit 43 generates a depth image obtained by rendering the volume data from a depth viewpoint, which is a viewpoint different from the multiple viewpoints used when the parallax image generating unit 42 performs rendering and from which the entire volume data in the depth direction can be observed. Moreover, the superimposed image generating unit 43 generates a superimposed image by superimposing, on the depth image, light information that represents the relationship between the position in the depth direction (the normal direction of the screen surface) of the stereoscopic image and the resolution (density of light beams emitted from the pixels of the display panel 52) of the stereoscopic image when the parallax image generated by the parallax image generating unit 42 is displayed on the display unit 50 as a stereoscopic image. The detailed configuration and operation of the superimposed image generating unit 43 will be described below.
The first setting unit 61 sets the depth viewpoint described above. More specifically, the first setting unit 61 selects one of the multiple viewpoints used when the parallax image generating unit 42 performs rendering, and sets a viewpoint on a plane whose normal line corresponds to a straight line perpendicular to a vector that extends in the sight direction from the selected viewpoint (referred to as a first viewpoint) as a depth viewpoint. In the present embodiment, as illustrated in (a) of
The description is continued by returning to
The description is continued by returning to
Zn=L/(2×((L+g)/L)×psp/g×β+1)
Zf=−L/(2×((L+g)/L)×psp/g×β−1) (1)
In Equation (1), Zn represents the distance in the depth direction from the screen surface to the position at which the resolution on the front side is β, and Zf represents the distance in the depth direction from the screen surface to the position at which the resolution on the inner side is β. Moreover, L represents an observation distance representing the distance from the screen surface to the position at which the viewer observes the stereoscopic image. Further, g represents a focal distance in air. Further, psp represents a horizontal width of a subpixel (sub-image element). Each of L, g, and psp is a constant that is determined by the specification (hardware specification) of the display unit 50.
In this example, the values L, g, and psp described above and Equation (1) are stored in a memory (not illustrated). The first superimposing unit 63 reads the values L, g, and psp described above and Equation (1) from the memory (not illustrated) and substitutes the value of an arbitrary resolution β into Equation (1). In this way, the first superimposing unit 63 can calculate how far, in the depth direction, the position at which the resolution β is obtained is separated from the screen surface. For example, in order to calculate the position at which the resolution β0 on the screen surface decreases to 90%, the value β0×0.9 may be substituted for β in Equation (1).
Moreover, the values Zn and Zf are corrected according to the value of the interval of the multiple viewpoints set by the parallax amount setting unit 46. More specifically, when the value of the interval of the multiple viewpoints set by the parallax amount setting unit 46 is equal to a predetermined default value, the values Zn and Zf are the same as the values obtained by Equation (1). However, for example, when the value of the interval of the multiple viewpoints set by the parallax amount setting unit 46 is twice the default value, the values Zn and Zf are corrected to half the values obtained by Equation (1). Moreover, for example, when the value of the interval of the multiple viewpoints set by the parallax amount setting unit 46 is half the default value, the values Zn and Zf are corrected to twice the values obtained by Equation (1).
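Equation (1) and the interval correction just described can be combined in a short sketch (the numeric values of L, g, psp, and β below are illustrative placeholders, not constants of any actual display unit):

```python
def depth_limits(beta, L, g, psp, interval_ratio=1.0):
    """Zn and Zf from Equation (1), corrected by the ratio of the set
    viewpoint interval to the default interval: doubling the interval
    halves Zn and Zf, and halving the interval doubles them."""
    c = 2.0 * ((L + g) / L) * (psp / g) * beta
    zn = L / (c + 1.0)    # front-side distance at which the resolution is beta
    zf = -L / (c - 1.0)   # far-side distance at which the resolution is beta
    return zn / interval_ratio, zf / interval_ratio

# Position at which the on-screen resolution beta0 has dropped to 90%:
# substitute beta0 * 0.9 for beta (beta0 = 300.0 here is an assumed value).
zn, zf = depth_limits(beta=300.0 * 0.9, L=1000.0, g=2.0, psp=0.05)
```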
As above, the first superimposing unit 63 calculates the isosurfaces based on the interval of the multiple viewpoints set in advance by the parallax amount setting unit 46 and the information (in this example, the values L, g, and psp described above and Equation (1)) representing the characteristics of the light beams emitted from the display unit 50. However, the method of calculating the isosurfaces is not limited to this. Moreover, the first superimposing unit 63 draws isolines that represent the respective isosurfaces as viewed from the depth viewpoint. The drawn isolines can be understood as light information that represents the relationship between the position in the depth direction of the stereoscopic image and the resolution of the stereoscopic image when the parallax image is displayed on the display unit 50 as a stereoscopic image. The first superimposing unit 63 superimposes the drawn isolines (light information) on the depth image and generates a superimposed image which represents the display resolution at an arbitrary position in the depth direction of the volume data.
Moreover, the positions of the volume data displayed on the screen surface and the entire depth amount (which includes the amount of pop-out toward the near side from the screen surface plus the amount of sinking into the far side from the screen surface) when the volume data is displayed stereoscopically are determined in advance according to the interval of the multiple viewpoints set by the parallax amount setting unit 46. In the example of
The description is continued by returning to FIG. V. As illustrated in
The output unit 60 outputs (displays) the image combined by the image combining unit 45 on the display unit 50. The present invention is not limited to this; for example, the image combining unit 45 may be omitted, and the output unit 60 may output only the superimposed image generated by the superimposed image generating unit 43 to the display unit 50. Moreover, for example, the output unit 60 may selectively output the superimposed image generated by the superimposed image generating unit 43 and any one of the respective parallax images generated by the parallax image generating unit 42 to the display unit 50. Further, for example, the output unit 60 may output the superimposed image generated by the superimposed image generating unit 43 and the respective parallax images generated by the parallax image generating unit 42 to another monitor (display unit).
Next, an operation example of the stereoscopic image display apparatus 30 according to the present embodiment will be described with reference to
In step S1002, the first setting unit 61 sets a depth viewpoint. In step S1003, the depth image generating unit 62 generates a depth image by rendering the volume data from the depth viewpoint. In step S1004, the first superimposing unit 63 calculates isosurfaces based on the interval of the multiple viewpoints used when the parallax image generating unit 42 performs rendering and the information representing the characteristics of the light beams emitted from the display unit 50 and draws isolines that represent the isosurfaces as viewed from the depth viewpoint, respectively. Moreover, the first superimposing unit 63 generates a superimposed image by superimposing the drawn isolines on the depth image.
In step S1005, the image combining unit 45 combines the respective parallax images of the volume data generated by the parallax image generating unit 42 and the superimposed image generated by the superimposed image generating unit 43. In step S1006, the output unit 60 displays the image combined by the image combining unit 45 on the display unit 50.
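The steps above can be condensed into a placeholder pipeline (every callable here is a stand-in for the corresponding unit; none of these names come from the embodiment itself):

```python
def display_pipeline(volume, viewpoints, depth_viewpoint,
                     render, draw_isolines, combine, show):
    """Steps S1003 to S1006 as a sketch: render the depth image, draw
    the isolines on it, render the parallax images, then combine and
    display the result."""
    depth_image = render(volume, [depth_viewpoint])[0]    # step S1003
    superimposed = draw_isolines(depth_image)             # step S1004
    parallax_images = render(volume, viewpoints)          # parallax rendering
    show(combine(parallax_images, superimposed))          # steps S1005-S1006
```

In the embodiment, the roles of render, draw_isolines, combine, and show would fall to the parallax image generating unit 42, the first superimposing unit 63, the image combining unit 45, and the output unit 60, respectively.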
As described above, in the present embodiment, when a parallax image obtained by rendering volume data from multiple viewpoints is displayed as a stereoscopic image, isolines (light information) representing the relationship between the position in the depth direction of the stereoscopic image and the resolution of the stereoscopic image are superimposed on a depth image obtained by rendering the volume data from a depth viewpoint, and the resulting superimposed image is displayed. In this way, the viewer can understand the exact resolution at an arbitrary position in the depth direction of the volume data.
For example, the positional relationship between the depth image and the isolines or the interval of the isolines (the light information) may be changed according to the input by the viewer.
As illustrated in
For example, the viewer can perform an input operation of changing the positional relationship between the depth image and the isolines or the interval of the isolines by operating a mouse while viewing the default image displayed on the display unit 50 to designate a depth image or an isoline using a mouse cursor and moving the designated depth image or isoline in the vertical direction (the depth direction in
The superimposed image generating unit 43 changes (regenerates) the superimposed image according to the content of the setting of the second setting unit 44. Moreover, the parallax amount setting unit 46 changes the interval of the multiple viewpoints used when the parallax image generating unit 42 performs rendering according to the content of the setting of the second setting unit 44. The parallax image generating unit 42 then changes (regenerates) the parallax images by rendering the volume data from the multiple viewpoints whose interval has been changed by the parallax amount setting unit 46. Further, the image combining unit 45 combines the changed superimposed image with the respective changed parallax images of the volume data, and the output unit 60 displays the image combined by the image combining unit 45 on the display unit 50.
Next, an operation example of the stereoscopic image display apparatus when the viewer performs an input operation of changing the positional relationship between the depth image and the isolines or the interval of the isolines while viewing the default image displayed on the display unit 50 will be described.
In step S1104, the image combining unit 45 combines the respective changed parallax images of the volume data with the changed superimposed image. In step S1105, the output unit 60 displays the image combined by the image combining unit 45 on the display unit 50.
As described above, in this example, the second setting unit 44 changeably sets the positional relationship between the depth image and the isolines or the interval of the isolines according to the input by the viewer. Moreover, since the amount of pop-out (amount of parallax) of the volume data is changed according to the content of the setting of the second setting unit 44, the viewer can control the display resolution at an arbitrary position of the volume data.
Next, a second embodiment will be described. The second embodiment is different from the first embodiment in that it includes a function (hereinafter referred to as an allowable line display function) of drawing, as viewed from the depth viewpoint, allowable lines that represent a surface (allowable value surface) on which the resolution obtained when a parallax image of the volume data is displayed as a stereoscopic image is equal to a predetermined allowable value, and of displaying the drawn allowable lines superimposed on the superimposed image. This will be described in detail below. The same portions as those of the first embodiment will be denoted by the same reference numerals, and description thereof will not be provided.
The description is continued by returning to
Next, an operation example of a stereoscopic image display apparatus when the allowable line display function is set to on will be described with reference to
The processes of steps S2002 to S2004 are the same as the processes of steps S1002 to S1004 of
As described above, in the present embodiment, since the image in which the allowable lines are superimposed on the superimposed image is displayed on the display unit 50, the viewer can recognize the display limit of the volume data easily. Moreover, since the region of the volume data in which the resolution when the parallax image is displayed on the display unit 50 as a stereoscopic image is smaller than the allowable value is not displayed, it is possible to improve the visibility of the image within the display limit of the volume data.
For example, the third setting unit 48 may changeably set (change) the allowable value according to the input by the viewer. A method of allowing the viewer to input the allowable value is optional. For example, the allowable value may be input by the viewer operating an operating device such as a mouse or a keyboard, or the allowable value may be input by the viewer performing a touch operation on the screen displayed on the display unit 50. Moreover, the second superimposing unit 65 changes the allowable lines according to the allowable value set by the third setting unit 48 and superimposes the changed allowable lines on the superimposed image. Moreover, the parallax image generating unit 420 changes the parallax image according to the allowable value set by the third setting unit 48. In this example, the allowable lines displayed on the display unit 50 in a state where the setting of the third setting unit 48 is not reflected will be referred to as default allowable lines, and the stereoscopic image of the volume data will be referred to as a default stereoscopic image.
Next, an operation example of a stereoscopic image display apparatus when a viewer performs an input operation of changing the allowable value while viewing an image in which the default allowable lines are superimposed on the superimposed image or the default stereoscopic image will be described.
In step S2104, the image combining unit 45 combines the respective changed parallax images of the volume data with the image in which the changed allowable lines are superimposed on the superimposed image. In step S2105, the output unit 60 displays the image combined by the image combining unit 45 on the display unit 50.
Next, a third embodiment will be described. The third embodiment is different from the respective embodiments in that the isolines are superimposed on a depth image in which a cross-section-of-interest including at least a part of a region of interest of the volume data that the viewer wants to focus on is exposed to obtain and display a superimposed image. This will be described in detail below. The same portions as those of the respective embodiments described above will be denoted by the same reference numerals, and description thereof will not be provided.
In this example, the stereoscopic image and the superimposed image of the volume data displayed on the display unit 50 in a state where the region of interest is not set will be referred to as a default stereoscopic image and a default superimposed image, respectively, and both images will be referred to as default images when both images are not distinguished.
In this example, as illustrated in
In this example, the viewer designates the cross-sectional positions of the three cross-sectional images 70 to 72 using a mouse cursor or the like by operating a mouse or the like, and the cross-sectional positions are changeably input according to a dragging operation of the mouse or the scroll value of a mouse wheel. Moreover, the cross-sectional images 70 to 72 corresponding to the input cross-sectional positions are displayed on the display unit 50. In this way, the viewer can change the cross-sectional images 70 to 72 displayed on the display unit 50. The present invention is not limited to this, and a method of changing the cross-sectional images 70 to 72 displayed on the display unit 50 is optional.
The viewer designates a predetermined position on a certain cross-sectional image as a point of interest while switching the three cross-sectional images 70 to 72. A method of designating the point of interest is optional, and for example, a predetermined position on a certain cross-sectional image may be designated using a mouse cursor by operating a mouse. In this example, the point of interest designated by the viewer is expressed as a 3-dimensional coordinate value within the volume data.
In the present embodiment, the fourth setting unit 47 sets the point of interest designated by the viewer as a region of interest. In this example, although the region of interest is a point present within the volume data, the present invention is not limited to this; for example, the region of interest set by the fourth setting unit 47 may be a surface having a certain size. For example, the fourth setting unit 47 may set a region of an arbitrary size including the point of interest designated by the viewer as the region of interest. Moreover, for example, the fourth setting unit 47 may set the region of interest using the volume data acquired by the acquiring unit 41 and the point of interest designated by the viewer. More specifically, for example, the fourth setting unit 47 may calculate the central positions of the respective objects included in the volume data acquired by the acquiring unit 41 and the distances between those central positions and the three-dimensional coordinate value of the point of interest designated by the viewer, and set the object having the smallest distance as the region of interest. Further, for example, the fourth setting unit 47 may set, as the region of interest, the object having the largest number of voxels included in a region of a certain size around the point of interest among the objects included in the volume data. Furthermore, when an object is present within a threshold distance from the point of interest, the fourth setting unit 47 may set that object as the region of interest; when no object is present within the threshold distance from the point of interest, the fourth setting unit 47 may set a region of an arbitrary size around the point of interest as the region of interest.
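One of the selection rules above (nearest object within a threshold distance, with a fallback region around the point) can be sketched as follows; the data layout and all names are assumptions for illustration:

```python
import math

def set_region_of_interest(point, object_centers, threshold):
    """Return the nearest object if its center lies within `threshold`
    of the designated point of interest; otherwise fall back to a
    region around the point itself.  `object_centers` maps an object
    name (bone, tumor, ...) to its central (x, y, z) coordinate."""
    best_name, best_dist = None, float("inf")
    for name, center in object_centers.items():
        d = math.dist(point, center)
        if d < best_dist:
            best_name, best_dist = name, d
    if best_dist <= threshold:
        return best_name
    return ("region_around", point)

centers = {"bone": (10.0, 0.0, 0.0), "tumor": (1.0, 1.0, 0.0)}
roi = set_region_of_interest((0.0, 0.0, 0.0), centers, threshold=5.0)
# the tumor center (distance about 1.41) is the nearest object in range
```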
Moreover, for example, when the viewer designates (points) a predetermined position in a three-dimensional space on the display unit 50 using an input unit such as a pen while viewing a default stereoscopic image, the fourth setting unit 47 may set the region of interest according to the designation. In any case, the fourth setting unit 47 may have a function of setting a region of interest that represents a region of the volume data that the viewer wants to focus on according to the input by the viewer.
The description is continued by returning to
Next, a detailed configuration of the superimposed image generating unit 431 of
The fifth setting unit 64 sets a cross-section of interest that represents a cross-section of the volume data including at least a part of the region of interest set by the fourth setting unit 47. In this example, the fifth setting unit 64 sets a cross-section of the volume data along the XZ plane including the point of interest (region of interest) set by the fourth setting unit 47 as the cross-section of interest.
A depth image generating unit 620 generates the depth image so that the cross-section of interest is exposed. More specifically, the depth image generating unit 620 generates the depth image so that a region of the volume data present between the depth viewpoint and the cross-section of interest is not displayed. That is, the depth image generating unit 620 does not perform sampling, along each line of sight (ray), of the region of the volume data present between the depth viewpoint and the cross-section of interest.
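The skipping rule can be sketched for a single ray as follows (compositing is reduced to taking the first non-empty sample for brevity; a real implementation would accumulate color and opacity along the ray):

```python
def depth_ray_sample(ray_samples, cross_section_index):
    """`ray_samples` holds voxel values along one line of sight, ordered
    from the depth viewpoint; `cross_section_index` marks where the
    cross-section of interest intersects the ray.  Samples in front of
    the cross-section are skipped, so the cross-section is exposed."""
    for value in ray_samples[cross_section_index:]:
        if value > 0.0:           # first non-empty voxel at or behind the cut
            return value
    return 0.0                    # background: nothing behind the cut

# Voxels at indices 0 and 1 lie between the depth viewpoint and the
# cross-section of interest and are therefore not sampled.
assert depth_ray_sample([0.8, 0.9, 0.0, 0.5], cross_section_index=2) == 0.5
```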
The first superimposing unit 63 generates a superimposed image by superimposing the isolines described above on the depth image generated by the depth image generating unit 620. The image combining unit 45 combines the respective parallax images with the superimposed image. Moreover, the output unit 60 displays the image combined by the image combining unit 45 on the display unit 50.
As illustrated in
As described above, in the present embodiment, since isolines are superimposed on a depth image in which a cross-section of interest of the volume data including the point of interest that the viewer wants to focus on is exposed to obtain and display a superimposed image, the viewer can understand the resolution near the point of interest more easily.
For example, when the region of interest set by the fourth setting unit 47 is a surface having a certain size, the depth image generating unit 620 may generate a depth image so that the region of interest set by the fourth setting unit 47 is exposed. For example, when the point of interest designated by the viewer belongs to any one of the objects included in the volume data, such as a bone, a blood vessel, a nerve, or a tumor, the fourth setting unit 47 may set the object to which the point of interest belongs as the region of interest, and the depth image generating unit 620 may generate the depth image so that the region of interest (the object to which the point of interest belongs) set by the fourth setting unit 47 is exposed. In this case, the fifth setting unit 64 need not be provided.
The respective embodiments and the respective modification examples described above may be combined with each other. Moreover, the image processing unit (40, 400, 410, 411) of the respective embodiments described above corresponds to an image processing apparatus of the present invention.
The image processing unit (40, 400, 410, 411) of the respective embodiments described above has a hardware configuration which includes a central processing unit (CPU), a ROM, a RAM, a communication I/F device, and the like. The functions of the respective units are realized when the CPU deploys the programs stored in the ROM onto the RAM and executes the programs. Moreover, the present invention is not limited to this, and at least a part of the functions of the respective units may be realized by an individual circuit (hardware).
Moreover, the programs executed by the image processing unit of the respective embodiments may be stored on a computer connected to a network such as the Internet and may be provided by being downloaded via the network. Further, the programs executed by the image processing unit of the respective embodiments may be provided or distributed via a network such as the Internet. Further, the programs executed by the image processing unit of the respective embodiments may be provided by being incorporated in advance in a nonvolatile recording medium such as a ROM.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Number | Date | Country | Kind |
---|---|---|---
2012-084143 | Apr 2012 | JP | national |