1. Field of the Invention
Embodiments of the present invention relate to a medical device configured to be inserted into a lumen of a subject, and more particularly to a medical device that performs highly accurate inspection/treatment based on three-dimensional image data of a subject.
2. Description of the Related Art
In recent years, diagnosis using three-dimensional images has become widespread. For example, three-dimensional image data of the inside of a subject is acquired by picking up tomographic images of the subject with an X-ray CT (Computed Tomography) apparatus, and a target site is diagnosed using the three-dimensional image data.
A CT apparatus continuously scans a subject in a helical mode (helical scan) by continuously moving the subject while rotating the X-ray irradiation position and the detection position. A three-dimensional image is then created from a large number of continuous two-dimensional tomographic images of the subject.
As one of three-dimensional images used for diagnosis, a three-dimensional image of the bronchus of the lung is known. A three-dimensional image of the bronchus is used, for example, to figure out three-dimensionally the position of an abnormal portion at which lung cancer or the like is suspected to exist. Then, in order to examine the abnormal portion by biopsy, a bronchoscope is inserted into the bronchus and a biopsy needle, biopsy forceps, or the like is protruded from a distal end portion of an insertion portion to take a tissue sample.
In body tracts which have multilevel bifurcations, such as the bronchus, if an abnormal portion exists at the periphery of the bronchus, it is difficult to make the distal end portion reach the vicinity of a target site precisely in a short time. Therefore, Japanese Patent Application Laid-Open Publication Nos. 2004-180940 and 2005-131042, for example, disclose an insertion navigation system which creates a three-dimensional image of a tract in the subject based on three-dimensional image data of the subject, calculates a route to a target point along the tract on the three-dimensional image, and creates a virtual endoscopic image of the tract along the route based on the image data, to display the created virtual endoscopic image.
In addition, Japanese Patent Application Laid-Open Publication No. 2003-265408 discloses an endoscope guiding apparatus which displays a position of the endoscope distal end portion in a superimposed manner on a tomographic image.
A medical device according to one aspect of the present invention includes: a storing section configured to store previously acquired three-dimensional image data of a subject; a position calculation section configured to calculate a position and a direction of a distal end portion of an insertion portion inserted into a lumen in a body of the subject; a route generation section configured to generate a three-dimensional insertion route for inserting the distal end portion to a target position through the lumen in the body of the subject, based on the three-dimensional image data; a tomographic image generation section configured to generate a two-dimensional tomographic image based on the position and the direction of the distal end portion, from the three-dimensional image data; and a superimposed image generation section configured to generate, based on image data in which the three-dimensional insertion route is superimposed on the two-dimensional tomographic image on a three-dimensional space, the three-dimensional space in a displayable manner, as a three-dimensional model image viewed along a desired line of sight.
Hereinafter, a medical device 1 according to a first embodiment of the present invention will be described with reference to drawings. As shown in
As described later, during an insertion operation the medical device 1 displays, on a display section 4, a superimposed image PW1: a three-dimensional model image showing a three-dimensional space on which a tomographic image (oblique image) PO and a three-dimensional insertion route R are displayed in a superimposed manner, the tomographic image PO being of a plane which includes the position of the distal end portion 2C at that time and which is perpendicular to the direction of the distal end portion 2C (See
As the position of the distal end portion 2C changes, that is, as the insertion operation advances, the tomographic image PO to be displayed is automatically updated. Note that a position display mark P2C which shows the position of the distal end portion 2C of the insertion portion 2A is displayed on the tomographic image PO in a superimposed manner.
Next, a configuration of the medical device 1 will be described with reference to
The endoscope apparatus 2 is a bronchoscope including the insertion portion 2A which is insertion means having an image pickup section 2B as image pickup means disposed at the distal end portion 2C, and an endoscope control section 2D which controls the insertion portion 2A and the like. The insertion portion 2A includes inside thereof the channel 8 through which the treatment instrument 6 is insertable. When the distal end portion 2C is inserted close to the target site 9G, the treatment instrument 6 is protruded from a channel opening 8E of the distal end portion 2C and biopsy is performed.
The main body section 3 includes: an endoscopic image processing section 11; a superimposed image generation section 12 as superimposed image generation means; a position calculation section 20 as position calculation means; a virtual endoscopic image (Virtual Bronchus Scope image: hereinafter also referred to as “VBS image”) generation section 13; a tomographic image generation section 14 as tomographic image generation means; a CT image data storing section 15 as storing means; a core line calculation section 16 as core line calculation means; a route generation section 18 as route generation means; and a control section 10 as control means.
The control section 10 controls the whole navigation. The endoscopic image processing section 11 processes an image picked up by the image pickup section 2B and outputs an endoscopic image (hereinafter also referred to as a “real image”). The CT image data storing section 15 stores three-dimensional image data of the subject 7 which was previously acquired by using a CT apparatus. The VBS image generation section 13 generates, based on the three-dimensional image data, a VBS image which uses the position, the direction, and the roll angle (hereinafter, also referred to as “position and the like”) of the distal end portion 2C as line-of-sight parameters.
The position calculation section 20 calculates the position and the like of the distal end portion 2C of the insertion portion 2A inserted into the bronchus 9. The core line calculation section 16 calculates a core line S of the bronchus 9 based on the three-dimensional image data. Here, the core line S is information on a line connecting the centers of gravity of cross sections perpendicular to the tract direction of the bronchus 9, that is, to the longitudinal direction of the lumen. As the core line S, information on a center line connecting the center points of the cross sections perpendicular to the tract direction of the lumen, or the like, may be used instead.
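As a rough illustration of this definition (not part of the original disclosure), the sketch below approximates a core line by taking the center of gravity of the lumen in each cross section. It assumes a pre-segmented binary airway volume and, for simplicity, uses axial slices rather than planes truly perpendicular to the local tract direction; the function name and array layout are illustrative.

```python
import numpy as np

def core_line_centroids(segmentation: np.ndarray) -> np.ndarray:
    """Approximate a core line as the centers of gravity of the luminal
    cross sections, taken here slice by slice along the z axis.
    `segmentation` is a binary (z, y, x) volume of the airway lumen."""
    points = []
    for z in range(segmentation.shape[0]):
        ys, xs = np.nonzero(segmentation[z])
        if len(xs) == 0:
            continue  # no lumen in this slice
        points.append((z, ys.mean(), xs.mean()))
    return np.asarray(points)
```

A full implementation would instead skeletonize the segmented lumen and evaluate centroids on planes perpendicular to the local tract direction.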
The route generation section 18 generates, based on the three-dimensional image data, the insertion route R along the core line S to the target site 9G which is a target position set by the operator using the input section 5.
The tomographic image generation section 14 generates, based on the three-dimensional image data, the tomographic image PO of a plane which includes the three-dimensional position of the distal end portion 2C calculated by the position calculation section 20 and which is perpendicular to the direction of the distal end portion 2C.
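A minimal sketch of such an oblique reformat follows, assuming the CT data is available as a 3D numpy array indexed in (z, y, x) order and using trilinear interpolation via scipy. The plane size, sample spacing, and the choice of in-plane axes are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, position, direction, size=256, spacing=1.0):
    """Sample a 2D tomographic image of the plane that contains `position`
    (voxel coordinates, z/y/x order) and is perpendicular to `direction`."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    # Build two in-plane axes orthogonal to the viewing direction.
    helper = np.array([1.0, 0.0, 0.0]) if abs(d[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(d, helper); u /= np.linalg.norm(u)
    v = np.cross(d, u)
    # Grid of sample points centred on the distal-end position.
    r = (np.arange(size) - size / 2) * spacing
    grid_u, grid_v = np.meshgrid(r, r, indexing="ij")
    pts = (np.asarray(position, dtype=float)[:, None, None]
           + u[:, None, None] * grid_u + v[:, None, None] * grid_v)
    return map_coordinates(volume, pts, order=1, mode="nearest")
```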
The superimposed image generation section 12 generates, as a superimposed image PW1, a three-dimensional model image, observed along a predetermined line of sight LA, of a three-dimensional space in which the three-dimensional insertion route R is superimposed on the tomographic image PO generated by the tomographic image generation section 14.
The display section 4 displays a navigation image including at least one of the real image and the VBS image, and the superimposed image PW1, during insertion operation.
Note that each of the above-described constituent elements of the main body section 3 is not necessarily separate hardware, but may be, for example, a program read and executed by a CPU.
Hereinafter, description will be made on the tomographic image generated by the tomographic image generation section 14 with reference to
An axial image PA shown in
Furthermore, the composite tomographic image shown in
Next, description will be made on a flow of processing steps performed by the medical device 1 with reference to the flowchart in
First, a target position setting screen shown in
Three-dimensional coordinates representing the target site 9G have to be set using the display section 4, which displays a two-dimensional image. Therefore, first, three kinds of tomographic images, i.e., the axial image PA, the coronal image PC, and the sagittal image PS, are generated from the three-dimensional image data of the subject. The tomographic images for target position setting are created with the body axis set as the Z axis, for example.
As shown in
When the operator moves the target position mark P9G displayed in a superimposed manner on any of the tomographic images, using a mouse or the like as input means, the target position marks P9G displayed on the other tomographic images move accordingly.
Note that the insertion start position may also be settable by a moving operation of the start position mark P7A. In addition, the target position does not have to be a point, but may be a target region having a predetermined volume. Furthermore, in order to set the target position more precisely, the tomographic images may be displayed in an enlarged manner.
When the target site 9G is set, the route generation section 18 generates the insertion route R from the pharynx portion 7A as the insertion start position to the target site 9G as the target position, based on the three-dimensional image data stored in the CT image data storing section 15. The insertion route R is the part of the core line S that leads to the target site 9G, the core line S connecting the centers of gravity or the center points of the luminal cross sections in the three-dimensional image data.
The route generation section 18 may generate a plurality of insertion routes, and the selection of a route may be left to the operator. That is, when the target site 9G is located between a plurality of lumens, or the target site 9G is a site having a volume equal to or larger than a predetermined volume, for example, a plurality of insertion routes are calculated.
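One conventional way to realize such route generation is sketched below, under the assumption (not stated in the disclosure) that the core line S has been reduced to a weighted graph of branch points and segment lengths; a shortest-path search then yields the route, and multiple candidate routes could be obtained by repeating the search toward different end points of a large target region.

```python
import heapq

def shortest_route(graph, start, target):
    """Dijkstra over a core-line graph.  `graph` maps a node id to a list of
    (neighbour id, edge length) pairs; returns the node sequence from the
    insertion start (e.g. the pharynx portion) to the target position."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Walk back from the target to recover the route.
    route, node = [], target
    while node != start:
        route.append(node)
        node = prev[node]
    route.append(start)
    return route[::-1]
```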
As shown in
On the other hand, the VBS image generation section 13 generates VBS images of the bifurcation portions J1 to J4 on the insertion route R and thumbnail images as reduced images of the respective VBS images.
As shown in
The position calculation section 20 calculates the position and the like of the distal end portion 2C on a real-time basis, or at a predetermined time interval.
Then, the position calculation section 20 controls the VBS image generation section 13 to generate a VBS image similar to the real image photographed by the CCD (2B). That is, the VBS image generation section 13 generates a VBS image which uses the position, the direction and the roll angle (X1, Y1, Z1, a1, e1, r1) as line-of-sight parameters. In this embodiment, (X, Y, Z) represent three-dimensional coordinate values, (a) represents an azimuth angle, (e) represents an elevation angle, and (r) represents a roll angle.
Then, the position calculation section 20 compares the VBS image and the real image to calculate a similarity therebetween. The calculation of the similarity between the images is performed by publicly known processing, and may use either matching at the pixel-data level or matching at the level of features extracted from the images.
The matching processing between the real image and the VBS image is performed in units of frames of the real image. Therefore, the actual comparison is performed using the similarity between a still endoscopic frame and the VBS image as a reference.
When an error e between the images, calculated from the similarity obtained by comparing the real image with the VBS image B, is larger than a predetermined admissible error e0, the position calculation section 20 outputs line-of-sight parameters whose values have been changed to the VBS image generation section 13. The VBS image generation section 13 then generates the next VBS image according to the new line-of-sight parameters.
By performing the above-described processing repeatedly, that is, by changing the line-of-sight parameters, the VBS image B generated by the VBS image generation section 13 gradually becomes an image similar to the real image, and after the processing is repeated several times, the error e between the images becomes equal to or smaller than the admissible error e0.
Then, the position calculation section 20 calculates the information on the position and the like (X, Y, Z, a, e, r) of the distal end portion 2C based on the line-of-sight parameters of the VBS image similar to the real image. That is, the position, the direction, and the roll angle of the distal end portion 2C calculated by the position calculation section 20 are, more precisely, the line-of-sight position, the line-of-sight direction, and the roll angle of the image pickup section 2B disposed at the distal end portion 2C.
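The loop described above can be viewed as an image-registration problem. The example below is only an illustration: it assumes a VBS renderer is available as a callable, substitutes a simple mean-squared-error measure for the unspecified similarity, and lets a derivative-free optimizer play the role of the repeated parameter adjustment until the residual error falls to roughly the admissible error e0.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_pose(real_frame, render_vbs, initial_pose, e0=1e-3):
    """Adjust the line-of-sight parameters (X, Y, Z, azimuth a, elevation e,
    roll r) until the rendered VBS image matches the real endoscopic frame.
    `render_vbs(pose)` is assumed to return a grayscale image of the same
    shape as `real_frame`; mean-squared error stands in for the similarity."""
    real = real_frame.astype(float) / 255.0

    def error(pose):
        vbs = render_vbs(pose).astype(float) / 255.0
        return float(np.mean((vbs - real) ** 2))

    result = minimize(error, np.asarray(initial_pose, dtype=float),
                      method="Nelder-Mead",
                      options={"xatol": 1e-2, "fatol": e0})
    return result.x, result.fun  # estimated (X, Y, Z, a, e, r) and residual error
```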
The tomographic image generation section 14 generates a tomographic image of the plane P including the three-dimensional position (X, Y, Z) of the distal end portion 2C calculated by the position calculation section 20. Note that the operator can select a desired image from the cross-sectional images shown in
The superimposed image generation section 12 generates the superimposed image PW1 in which the insertion route R is superimposed on the tomographic image PO.
The three-dimensional model image, which is viewed along a desired line of sight LA, of the three-dimensional space on which the two-dimensional tomographic image PO and the three-dimensional insertion route R are arranged as shown in
The superimposed image PW1 shown in
In contrast, the superimposed image PW1 is a three-dimensional model image and can be changed to a desired state by the operator arbitrarily changing the line of sight LA. For example, if the line of sight LA is set on the extension of the plane of the tomographic image PO, the tomographic image PO in the superimposed image PW1 is displayed as a line. In addition, in the superimposed image PW1, the tomographic image PO includes the distal end portion 2C. Therefore, the operator can acquire information on the tissue around the distal end portion 2C.
Note that, in the superimposed image PW1, the point of the intersection between the route image PR showing the insertion route R and the tomographic image PO is the position of the distal end portion 2C at which the position display mark P2C is displayed.
In addition, the superimposed image generation section 12 shows the route image PR1 from the start position mark P7A showing the position of the pharynx portion 7A as the insertion start position to the position display mark P2C indicating the position of the distal end portion 2C with a distinguishable line which is different from the line representing the route image PR2 from the position display mark P2C to the target position mark P9G indicating the target position. That is, the route image PR1 is displayed with a dotted line, and the route image PR2 is mainly displayed with a solid line. Furthermore, the superimposed image generation section 12 displays a part of the route image PR2 with a dashed line, the part being located on the rear side of the tomographic image PO when viewed along the line of sight LA.
Note that the superimposed image generation section 12 may display the route image PR1 and the route image PR2 with different colors or different thicknesses in order to distinguish the route images from each other. In addition, the superimposed image generation section 12 does not have to display the route image PR1.
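The line-style selection described above can be sketched as follows (illustrative only): the route points are split at the distal-end position into PR1 and PR2, and the points of PR2 lying beyond the plane of the tomographic image PO along the line of sight LA are flagged for dashed drawing. The sign conventions and argument names are assumptions.

```python
import numpy as np

def classify_route(route_pts, tip_index, plane_point, plane_normal, view_dir):
    """Split the route at the distal-end position (index `tip_index`) into the
    already-travelled part PR1 and the remaining part PR2, and flag which
    remaining points lie on the far side of the tomographic plane when viewed
    along the line of sight (those would be drawn with a dashed line)."""
    pts = np.asarray(route_pts, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    signed = (pts - plane_point) @ n             # signed distance to the plane
    far_side = signed * np.dot(n, view_dir) > 0  # beyond the plane along the view
    pr1 = pts[: tip_index + 1]                   # start -> distal end (dotted)
    pr2 = pts[tip_index:]                        # distal end -> target (solid/dashed)
    return pr1, pr2, far_side[tip_index:]
```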
Furthermore, as shown in
Furthermore, as shown in
Furthermore, as shown in
Furthermore, as shown in
Note that, in
In addition, the tomographic image generation section 14 may generate the coronal image PC or the sagittal image PS, which includes the position of the distal end portion 2C.
That is, the tomographic image generation section 14 generates a tomographic image based on the position and the direction of the distal end portion 2C, but is capable of generating a tomographic image based only on the position of the distal end portion 2C.
The superimposed image PW1 generated by the superimposed image generation section 12 is displayed on the display section 4 together with the real image and the VBS image.
Note that the superimposed image PW1 may be constantly displayed on the navigation screen, may be brought temporarily into a non-display state by a setting by the operator, or may be brought automatically into the non-display state under the control by the control section 10. In addition, the kind of the tomographic images displayed in the superimposed image PW1 may be changed by the setting by the operator or under the control by the control section 10.
The image to be displayed on the navigation screen may be selected based on the position of the distal end portion 2C. For example, when the distal end portion 2C is brought near to a bifurcation portion J, a display mode for displaying the navigation screen including the superimposed image PW1 may be set, and after the distal end portion 2C has passed through the bifurcation portion J, the display mode may be switched to one in which the superimposed image PW1 is not displayed.
The switching of the display mode is controlled by the control section 10 depending on the presence or absence of a trigger setting, similarly to the switching of the navigation mode (See
The processing steps from step S13 onward are repeated until the distal end portion 2C is inserted close to the target site 9G (S17: Yes).
When the distal end portion 2C is inserted close to the target site 9G, the insertion navigation mode is terminated, the treatment instrument 6 is protruded from the distal end portion 2C, and biopsy or the like is performed on the target site 9G.
As described above, in the medical device 1, the operator can easily recognize the position of the distal end portion 2C based on the superimposed image displayed on the display section 4. Furthermore, the operator can recognize the state of the tissues in the vicinity of the distal end portion 2C based on the tomographic image PO. Therefore, the medical device 1 facilitates the insertion of the distal end portion 2C of the insertion portion 2A to the target site 9G.
Hereinafter, a medical device 1A according to the second embodiment of the present invention will be described with reference to the drawings. The medical device 1A is similar to the medical device 1. Therefore, the same constituent elements are denoted by the same reference numerals and description thereof will be omitted.
As shown in
In order to insert the second route image PR2 in the real image, the second route image PR2 to be superimposed on the VBS image corresponding to the real image is generated, and the generated second route image PR2 is superimposed on the real image.
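A hedged sketch of such a superimposition: because the second route image is generated for the VBS image that matches the real image, the route points can be projected with the same line-of-sight parameters using an ordinary pinhole camera model. The intrinsic parameters (focal length in pixels, principal point at the image center) are illustrative assumptions, and a real endoscope would also require distortion correction.

```python
import numpy as np

def project_route(route_pts, camera_R, camera_pos, focal_px, image_size):
    """Project 3D route points into the endoscope image with the same
    line-of-sight parameters as the VBS rendering (pinhole model assumed).
    `camera_R` is the 3x3 world-to-camera rotation, `camera_pos` the
    distal-end position, `focal_px` the focal length in pixels, and
    `image_size` = (width, height)."""
    pts = np.asarray(route_pts, dtype=float)
    cam = (pts - camera_pos) @ camera_R.T      # world -> camera coordinates
    in_front = cam[:, 2] > 1e-6                # keep points in front of the lens
    cam = cam[in_front]
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    u = focal_px * cam[:, 0] / cam[:, 2] + cx
    v = focal_px * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)            # pixel coordinates of the route
```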
The operator can perform insertion operation while checking the insertion route R with reference to the second route image PR2 which is displayed on the real image in a superimposed manner and recognizing the position and the like of the distal end portion 2C with reference to the superimposed image PW1.
The medical device 1A has the same effects as those of the medical device 1, and further has the advantage of a simple navigation screen with improved visibility. Note that the various configurations described for the medical device 1 can also be used in the medical device 1A, and the configuration of the medical device 1A can also be used in the medical device 1.
Hereinafter, a medical device 1B according to modified example 1 of the second embodiment of the present invention and a medical device 1C according to modified example 2 of the second embodiment will be described. The medical devices 1B and 1C are similar to the medical device 1A. Therefore, the same constituent elements are denoted by the same reference numerals and description thereof will be omitted.
As shown in
The insertion route R is calculated along the core line S, which is the center line of the bronchus 9 having a predetermined thickness. Therefore, as shown in
However, as shown in
For example, the display area calculation section 30 counts the number K of pixels of the second route image PR2 in the VBS image composed of 500×500 pixels. Then, when the number K of pixels is equal to or smaller than a first predetermined value K1, the superimposed image generation section 12 displays the line representing the route image PR more thickly, for example such that the number of pixels becomes K1. That is, the shorter the route displayed in a superimposed manner, the thicker the displayed route image PR.
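A small sketch of this adjustment, assuming the superimposed route has already been rasterized into a boolean mask of the 500×500 frame; the widening rule and the numeric values are illustrative, not the thresholds used by the device.

```python
import numpy as np

def route_line_width(route_mask, k1=400, base_width=1, max_width=7):
    """Count the pixels K occupied by the second route image in the VBS frame
    and widen the drawn line when the visible route is short.
    `route_mask` is a boolean image of the rasterized route."""
    k = int(np.count_nonzero(route_mask))
    if k >= k1:
        return base_width
    # The shorter the superimposed route, the thicker it is drawn.
    scale = (k1 - k) / float(k1)
    return min(max_width, base_width + int(round(scale * (max_width - base_width))))
```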
In addition, when halation or the like occurs in the real image, the real image partly becomes stark white in some cases, and in other cases the color inside the lumen and the color of the second route image PR2 are hard to distinguish from each other. Therefore, as a method of highlighting the second route image PR2, the color or the type of the line may be changed, or the second route image may be displayed in a blinking manner.
Alternatively, the display area calculation section 30 may calculate the average luminance not for the pixels in the whole of the real image RBS but for the pixels within a range of a predetermined region of interest (ROI), and may change the display method so as to improve the visibility of the second route image PR2 depending on the change of the average luminance.
It is preferable to set the ROI within a range surrounding the second route image PR2, and the shape of the ROI may be any of a circle, an ellipse, a rectangle, a square, and the like. In addition, the shape is not limited to a preset shape; a figure which surrounds the second route image PR2 with the minimum area may be selected for each processing, or a previously selected figure may be enlarged or reduced so as to cover the range surrounding the second route image PR2.
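A sketch of the ROI-based measurement, assuming the real image is available as a numpy array and the ROI as a boolean mask surrounding the second route image PR2; the RGB-to-luminance weights are the usual ITU-R BT.601 coefficients, chosen here only for illustration.

```python
import numpy as np

def roi_mean_luminance(real_frame, roi_mask):
    """Average luminance of the real image inside the region of interest.
    A change in this value (e.g. due to halation) can trigger a more visible
    display style for the second route image."""
    gray = real_frame.astype(float)
    if gray.ndim == 3:                                   # RGB -> luminance
        gray = gray @ np.array([0.299, 0.587, 0.114])
    return float(gray[roi_mask].mean())
```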
On the other hand, as shown in
As shown in
However, as shown in
The auxiliary insertion route generation section 31 uses not only core line information but also volume information as three-dimensional shape information of the lumen. As already described above, the core line S is a line connecting the centers of gravity of cross sections perpendicular to the tract direction of the lumen, and the volume information is information indicating the position of the luminal wall.
That is, as shown in
Note that the auxiliary insertion route generation section 31 may generate more than four, for example, eight auxiliary insertion routes SR.
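The disclosure leaves the exact construction of the auxiliary insertion routes SR to the figures, so the following is only one plausible sketch: each core-line point is offset toward the luminal wall in several evenly spaced directions perpendicular to the local tract direction, with the local lumen radius (volume information) bounding the offset so the auxiliary routes stay inside the lumen. All names and the offset fraction are assumptions.

```python
import numpy as np

def auxiliary_routes(core_pts, radii, n_routes=4, fraction=0.5):
    """Generate `n_routes` auxiliary routes offset from the core line toward
    the luminal wall.  `radii` gives the local lumen radius at each core-line
    point (derived from the volume information)."""
    core = np.asarray(core_pts, dtype=float)
    tangents = np.gradient(core, axis=0)
    norms = np.linalg.norm(tangents, axis=1, keepdims=True)
    tangents /= np.maximum(norms, 1e-9)
    routes = []
    for k in range(n_routes):
        angle = 2.0 * np.pi * k / n_routes
        offset_pts = []
        for p, t, r in zip(core, tangents, radii):
            helper = (np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9
                      else np.array([0.0, 1.0, 0.0]))
            u = np.cross(t, helper); u /= np.linalg.norm(u)
            v = np.cross(t, u)
            offset_pts.append(p + fraction * r * (np.cos(angle) * u + np.sin(angle) * v))
        routes.append(np.asarray(offset_pts))
    return routes
```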
The medical devices 1B and 1C have the same effects as those of the medical devices 1 and 1A, and further have the advantage of excellent visibility of the insertion route R on the navigation screen. Note that the various configurations described for the medical devices 1 and 1A can also be used in the medical devices 1B and 1C, and the configurations of the medical devices 1B and 1C can also be used in the medical devices 1 and 1A.
Hereinafter, a medical device 1D according to the third embodiment of the present invention will be described with reference to the drawings. The medical device 1D is similar to the medical device 1. Therefore, the same constituent elements are denoted by the same reference numerals and description thereof will be omitted.
As shown in
The magnetic field sensor 21 detects the magnetic field generated by a plurality of magnetic field generation antennae 22 disposed outside the subject 7, and thereby the position calculation section 20D detects the position and the like of the distal end portion 2C. That is, the positions at which the magnetic field sensor 21 and the image pickup section 2B are disposed at the distal end portion 2C are known. Therefore, the position calculation section 20D detects the line-of-sight position, the line-of-sight direction, and the roll angle of the image pickup section 2B. Note that, as the magnetic field detection sensor, an MR sensor, a Hall element, a coil, or the like can be used.
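Because the relative arrangement of the magnetic field sensor 21 and the image pickup section 2B is fixed and known, the camera pose follows from the measured sensor pose by composing rigid transforms. The sketch below assumes rotation matrices and an antenna-fixed world frame, with argument names chosen purely for illustration.

```python
import numpy as np

def camera_pose_from_sensor(sensor_R, sensor_t, offset_R, offset_t):
    """Compose the measured sensor pose (rotation `sensor_R`, position
    `sensor_t` in the antenna coordinate system) with the fixed, known
    sensor-to-camera offset (`offset_R`, `offset_t`) to obtain the
    line-of-sight pose of the image pickup section at the distal end."""
    camera_R = sensor_R @ offset_R              # world <- sensor <- camera
    camera_t = sensor_t + sensor_R @ offset_t   # camera origin in world coordinates
    return camera_R, camera_t
```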
The medical device 1D has the same effects as those of the medical device 1. Note that the various configurations described for the medical devices 1 and 1A to 1C can also be used in the medical device 1D, and the configuration of the medical device 1D can also be used in the medical devices 1 and 1A to 1C.
Hereinafter, a medical device 1DA according to a modified example of the third embodiment of the present invention will be described with reference to the drawings. The medical device 1DA is similar to the medical device 1D. Therefore, the same constituent elements are denoted by the same reference numerals and description thereof will be omitted.
As shown in
Note that, in order to support insertion into the bifurcations of the bronchus, when a guiding instrument, such as a forceps whose tapered, jointed distal end portion can be bent by manual operation, is used as the treatment instrument 6, the magnetic field sensor 21D may be disposed at the distal end portion of the guiding instrument.
The medical device 1DA has the same effects as those of the medical device 1D, and moreover can acquire position information of the treatment instrument distal end portion 6A protruded from the channel opening 8E.
Hereinafter, a medical device 1E according to the fourth embodiment of the present invention will be described with reference to the drawings. The medical device 1E is similar to the medical device 1. Therefore, the same constituent elements are denoted by the same reference numerals and description thereof will be omitted.
In the medical device 1E, when the distal end portion 2C of the insertion portion 2A reaches the vicinity of the target site 9G, the tomographic image displayed on the navigation screen is changed. In other words, the image displayed by the display section 4 is selected based on the position of the distal end portion 2C by the control section 10. More specifically, when the distance between the position of the distal end portion 2C and the position of the target site 9G is equal to or smaller than a predetermined value, or when the distal end portion 2C has passed through the last bifurcation portion, the navigation mode is switched from an insertion portion insertion-supporting mode to a treatment instrument operation supporting mode. It is needless to say that the operator may select the navigation mode.
In this embodiment, as shown in
Then, the tomographic image generation section 14 generates a tomographic image PPE of a plane including the position of the channel opening 8E and parallel to the axis direction of the channel 8, that is, the plane parallel to the direction of the distal end portion 2C. Furthermore, as shown in
The extended line P8S may be provided with graduations (scale marks), or the color of the line may be changed depending on the length. Furthermore, the direction of the extended line P8S may have a predetermined angle with respect to the direction of the distal end portion 2C, and the angle may be arbitrarily changed by the operator.
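A sketch of how the extended line P8S could be sampled, assuming the position of the channel opening 8E and the direction of the distal end portion 2C are known in CT coordinates. The length, step, and optional fixed tilt angle are illustrative parameters, and the returned distances could drive graduation marks or a length-dependent color.

```python
import numpy as np

def extended_line(opening_pos, tip_dir, length=30.0, step=1.0,
                  tilt_deg=0.0, tilt_axis=None):
    """Sample points of the extended line P8S from the channel opening along
    the distal-end direction, optionally tilted by a fixed angle."""
    d = np.asarray(tip_dir, dtype=float)
    d /= np.linalg.norm(d)
    if tilt_deg and tilt_axis is not None:
        # Rotate d about tilt_axis by tilt_deg (Rodrigues' rotation formula).
        k = np.asarray(tilt_axis, dtype=float); k /= np.linalg.norm(k)
        th = np.deg2rad(tilt_deg)
        d = (d * np.cos(th) + np.cross(k, d) * np.sin(th)
             + k * np.dot(k, d) * (1 - np.cos(th)))
    dist = np.arange(0.0, length + step, step)
    pts = np.asarray(opening_pos, dtype=float) + dist[:, None] * d
    return pts, dist   # sampled points and their distances from the opening
```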
Hereinafter, with reference to the flowchart in
The processing in these steps is similar to that in steps S10 to S13 of the medical device 1 according to the first embodiment, described with reference to
In the medical device 1E, a trigger is set by the control section 10 depending on the position of the distal end portion 2C calculated in step S21. For example, the trigger is set when the distance between the position of the distal end portion 2C and the target site 9G is equal to or smaller than the predetermined value. In this case, the distance between the position of the distal end portion 2C and the target site 9G may be a direct distance, or a distance along the insertion route via the core line S.
In addition, the trigger is set when the inner diameter of the portion of the bronchus 9 at which the distal end portion 2C is positioned is equal to or smaller than a predetermined value, or when the difference between the inner diameter of the portion of the bronchus 9 at which the distal end portion 2C is positioned and the outer diameter of the insertion portion 2A is equal to or smaller than a predetermined value, for example.
Furthermore, the trigger may be set not only automatically by the control section 10 but also by a setting operation by the operator using the input section 5. Alternatively, the trigger may be set upon detecting that the treatment instrument 6 appears in the real image, that is, that the operator has started biopsy by protruding the treatment instrument 6 from the channel opening 8E.
For example, when the treatment instrument 6 is protruded from the channel opening 8E, the luminance of the pixels within a predetermined region of interest (ROI) in the real image increases. Therefore, the average luminance in the ROI may be calculated and the trigger set depending on the change in the average luminance.
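The trigger conditions enumerated above can be summarized in a sketch like the following, with all thresholds purely illustrative; the distance test could equally be evaluated along the core line S rather than as a straight-line distance.

```python
import numpy as np

def should_switch_mode(tip_pos, target_pos, lumen_diameter, insertion_diameter,
                       roi_mean_now, roi_mean_prev,
                       dist_thresh=10.0, diam_margin=1.0, luminance_jump=30.0):
    """Evaluate the trigger for switching from the insertion supporting mode
    to the treatment instrument operation supporting mode.  Any one of the
    conditions described above sets the trigger."""
    close_to_target = (np.linalg.norm(np.asarray(tip_pos, dtype=float)
                                      - np.asarray(target_pos, dtype=float))
                       <= dist_thresh)
    narrow_lumen = (lumen_diameter - insertion_diameter) <= diam_margin
    instrument_visible = (roi_mean_now - roi_mean_prev) >= luminance_jump
    return close_to_target or narrow_lumen or instrument_visible
```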
When the trigger is ON (YES), the navigation mode is switched in step S26. In contrast, when the trigger is OFF (NO), the current navigation mode is continued.
The processing in these steps is similar to that in steps S14 to S17 of the medical device 1 according to the first embodiment, described with reference to
The medical device 1E has the same effects as those of the medical device 1, and further provides treatment instrument operation support after the distal end portion 2C is inserted close to the target site 9G. Note that the various configurations described for the medical devices 1 and 1A to 1D can also be used in the medical device 1E, and the configuration of the medical device 1E can also be used in the medical devices 1 and 1A to 1D.
Hereinafter, a medical device 1F according to the fifth embodiment of the present invention will be described with reference to the drawings. The medical device 1F is similar to the medical device 1. Therefore, the same constituent elements are denoted by the same reference numerals and description thereof will be omitted.
As shown in
Then, when the distal end portion 2C is inserted close to the target site 9G, the navigation mode is switched to the treatment instrument operation supporting mode, and the tomographic image generation section 14 generates a tomographic image PPF (see
Furthermore, as shown in
The operator can recognize the three-dimensional relation between the scanning range 41 and the treatable range 6E, by changing the position of the line of sight when viewing the superimposed image PW1F which is a three-dimensional model image.
Note that, also in the medical device 1F, the switching of the navigation mode is performed by detecting the setting of the trigger, similarly as in the medical device 1E according to the fourth embodiment.
The medical device 1F has the same effects as those of the medical device 1 and the like, and further provides treatment instrument operation support after the distal end portion 2C is inserted close to the target site 9G. Note that the various configurations described for the medical devices 1 and 1A to 1E can also be used in the medical device 1F, and the configuration of the medical device 1F can also be used in the medical devices 1 and 1A to 1E.
Note that the medical devices in the above-described embodiments can also be used when observing the whole of the lumen, that is, when performing a screening without determining a target site, for example. In such a case, a trajectory of the endoscope distal end is displayed instead of the insertion route. The points constituting the trajectory may be positions calculated by the position calculation means, or may be points on the center line of the luminal organ located in the vicinity of the calculated positions. In addition, the trajectory to be displayed may be a history of movement which shows the whole previous movement of the endoscope distal end, or may be just a trajectory during a predetermined time period or within a predetermined space. Furthermore, the center line of the luminal organ may be displayed on the trajectory in a superimposed manner, thereby allowing the operator to easily determine which part of the luminal organ has been observed.
In a case where screening is performed, the trajectory representing the whole previous movement of the endoscope distal end may also be displayed when, for example, the endoscope distal end is inserted to a predetermined site and the endoscope is thereafter withdrawn to the carina. At that time, it is preferable to make the portion of the trajectory on the deeper side than the carina distinguishable, for example by displaying it in a different color or with a dotted line.
Having described the preferred embodiments of the invention referring to the accompanying drawings, it should be understood that the present invention is not limited to those precise embodiments and various changes and modifications thereof could be made by one skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.
This application is a continuation application of PCT/JP2011/075686 filed on Nov. 8, 2011 and claims benefit of Japanese Application No. 2011-012103 filed in Japan on Jan. 24, 2011, the entire contents of which are incorporated herein by this reference.
Parent application: PCT/JP2011/075686, filed November 2011. Child application: U.S. Ser. No. 13/556,732.