The present invention relates to a biological light measurement device and to a position display method for a light irradiation position and a light detection position or a measurement channel. In particular, it relates to a biological light measurement device that measures blood circulation, the state of blood motion, and changes in the amount of hemoglobin inside the body by irradiating the body with near-infrared light and measuring the light transmitted through or reflected inside the body, and to a position display method for a light irradiation position and a light detection position or a measurement channel.
The biological light measurement device can easily measure blood circulation, the state of blood motion, and changes in the amount of hemoglobin inside the body with little restraint and without harming the subject body. In recent years, imaging of the measurement data using multi-channel devices has been realized, and its clinical application is expected.
As a principle of the biological light measurement device, a device is known that irradiates the body with light of visible to near-infrared wavelengths and measures the inside of the body from the light reflected at a position about 10 to 50 mm away from the irradiation position. Moreover, among multi-channel biological light measurement devices, a device is also known that has a function of easily displaying and setting, in two dimensions, the positional relationship between a measured part of an object to be measured and the light irradiation and light detection positions.
Patent Document 1 discloses a biological light measurement device with a function of measuring a light irradiation position and a light detection position and displaying, so as to overlap each other, a body transmitted light intensity image corrected on the basis of the measured light irradiation and light detection positions and a shape image of the object to be measured as an indication of the measured part, in order to clarify the three-dimensional positional relationship among the measured parts in the object to be measured, the light irradiation and light detection positions, and the body transmitted light intensity image. Examples of the shape image of the object to be measured include a wireframe image, a tomographic image, a CT image, and an MRI image of the object to be measured.
The related art described above has a problem in that the light irradiation position and the light detection position must actually be measured.
Measuring the light irradiation position and the light detection position necessarily increases cost, because the measuring person has to use a device equipped with measurement means.
In addition, the work itself of measuring the light irradiation position and the light detection position at the time of biological light measurement requires time and effort, placing a burden on both the measuring person and the subject body. In particular, since a large number of channels are used in normal biological light measurement, the number of light irradiation positions and light detection positions to be measured is correspondingly large, making the burden considerable.
On the other hand, there are also cases in which it is sufficient to detect the light irradiation position and the light detection position only roughly and display them overlapping a shape image of the object to be measured. For example, this is the case when, from experience with past biological light measurements, the measured part is almost always fixed at specific positions such as the frontal and occipital regions, so detailed positional information is not needed.
For such cases, an accurately measured three-dimensional light irradiation position and light detection position or measurement channel is superfluous information, and the cost and burden described above are excessive relative to the purpose.
It is an object of the present invention to provide a biological light measurement device that can display, on a three-dimensional head image, a light irradiation position and a light detection position or a calculated position of a measurement channel without actually measuring the three-dimensional position of the light irradiation position, the light detection position, or the measurement channel.
In order to solve the above-described problems, the present invention is configured as follows. A biological light measurement device of the present invention is a biological light measurement device including a light source unit that irradiates near-infrared light, a two-dimensional probe that measures a transmitted light intensity of the near-infrared light at two-dimensional measurement points of a subject body and outputs a signal corresponding to the transmitted light intensity at each measurement point as measurement data for every measurement channel, a signal processing unit that processes the measurement data of the two-dimensional probe to be imaged, and a display unit that displays the imaged measurement data, and is characterized in that it includes: a storage unit that stores data regarding the head shape for display; and a control unit having a coordinate transformation section which performs coordinate transformation of positional information of the two-dimensional probe, which is set on a two-dimensional head image selected from the data regarding the head shape, in order to calculate a light irradiation position and a light detection position on a three-dimensional head image or the position of the measurement channel.
As described above, according to the present invention, it becomes possible to display a light irradiation position and a light detection position or a position of a measurement channel on a three-dimensional head image without measuring the three-dimensional positions of the light irradiation position and the light detection position by a measuring person.
According to a representative embodiment of the present invention, a biological light measurement device includes a probe position easy-input unit. A measuring person sets a light irradiation position and a light detection position on a head image displayed in a two-dimensional manner, and a control unit calculates the three-dimensional light irradiation position and light detection position from the set information. Accordingly, a body transmitted light intensity image and a shape image of an object to be measured, as an indication of a measured part, can be displayed so as to overlap each other on a three-dimensional image without measuring the three-dimensional positions of the light irradiation position and the light detection position.
Hereinafter, an overall device configuration, a specific configuration example, and the like of the biological light measurement device of the present invention will be described with reference to the drawings.
(Device Configuration)
The biological light measurement device of the present invention is a device that irradiates near-infrared light into the body, detects light reflected from the vicinity of the surface of the body or transmitted through the body (hereinafter, simply referred to as transmitted light), and generates an electric signal corresponding to the intensity of the light. As shown in
The light source unit 101 includes a semiconductor laser 104, which emits light with a predetermined wavelength, and a plurality of optical modules 105, each having a modulator for modulating the light generated by the semiconductor laser 104 to a plurality of different frequencies. The output light of each optical module 105 is irradiated onto a predetermined measurement region of a subject body 107, for example, a plurality of places on the head, through an optical fiber 106. In addition, a probe holder 108 is fixed to the subject body 107, and the optical fiber 106 and an optical fiber for detection 109 are fixed to the probe holder 108. In the present invention, the approximate midpoint between the light irradiation position of the optical fiber 106 and the light detection position of the optical fiber for detection 109 in the probe holder 108 is defined as a measurement channel.
In the probe holder 108, light irradiation positions and light detection positions are arrayed in a square matrix with predetermined distances therebetween according to the head shape of the subject body 107. At each measurement position (measurement channel), which is the middle position between a light irradiation position and a light detection position adjacent to each other, the device calculates the concentration change of oxygenated hemoglobin, the concentration change of deoxygenated hemoglobin, and the change in the total hemoglobin concentration between when the brain of the subject body 107 is not stimulated and when it is stimulated.
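The hemoglobin concentration changes mentioned above are commonly derived from the modified Beer-Lambert law using measurements at two wavelengths. The following is a minimal sketch of that calculation; the extinction coefficients, the effective path length, and the function name are illustrative assumptions, not calibrated constants of the present device:

```python
import numpy as np

# Illustrative (not calibrated) extinction coefficients.
# Rows: wavelengths 780 nm and 830 nm; columns: HbO2, Hb.
E = np.array([[0.74, 1.10],
              [1.00, 0.78]])
PATH_LENGTH = 30.0  # assumed effective optical path length [mm]

def hb_changes(delta_od):
    """Solve (E * L) @ [dHbO2, dHb] = delta_od for the concentration
    changes of oxygenated and deoxygenated hemoglobin."""
    return np.linalg.solve(E * PATH_LENGTH, np.asarray(delta_od, dtype=float))

d_hbo2, d_hb = hb_changes([0.05, 0.04])
d_total = d_hbo2 + d_hb  # change in total hemoglobin concentration
```

The change in total hemoglobin concentration is then simply the sum of the two solved changes, one value per measurement channel.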
The light measurement unit 102 includes: a photoelectric conversion element 110, such as a photodiode, which converts each transmitted light beam guided from a plurality of measurement places of the measurement region through the optical fiber for detection 109 into an amount of electricity corresponding to the amount of light; a lock-in amplifier 111, to which the electric signal from the photoelectric conversion element 110 is input and which selectively detects the modulation signal corresponding to the light irradiation position; and an A/D converter 112, which converts the output signal of the lock-in amplifier 111 into a digital signal.
The light source unit 101 includes "n" optical modules (n is a natural number). Although the wavelength of the light depends on the spectral characteristics of the observed matter inside the body, one or more wavelengths are selected and used from the wavelength range of 600 nm to 1400 nm when measuring the oxygen saturation or the amount of blood from the concentrations of Hb and HbO2.
The light source unit 101 is configured to generate light beams of two wavelengths, for example, 780 nm and 830 nm, corresponding to the two objects to be measured, oxygenated hemoglobin and deoxygenated hemoglobin, and the light beams of these two wavelengths are mixed and irradiated from one irradiation position. The lock-in amplifier 111 selectively detects the modulation signals corresponding to the light irradiation position and these two wavelengths. Hemoglobin amount change signals are thereby acquired for a number of channels equal to twice the number of measurement points between the light irradiation positions and the detection positions. In addition, a signal processing unit 113, a storage unit 115, and an input/output unit 116 including a display unit 114 and a mouse 117 are connected to the control unit 103. Furthermore, a stimulus presenting unit 118 is provided near the subject body 107 so that a predetermined stimulus generated by the control unit 103 or the like is presented (displayed) to the subject body 107.
The storage unit 115 stores data necessary for the processing of the signal processing unit 113 and its processing results. As the data regarding the head shape for display, for example, a plurality of "standard patterns of the head shape" corresponding to the age or sex of the subject body 107 are stored in the storage unit 115 as a table in which data of a two-dimensional plane figure and data of a three-dimensional shape (wireframe image) form a pair. A standard pattern of the head shape or the shape of a probe is prepared beforehand, on the basis of past examples of biological light measurement, as positional information sufficient for a measuring person (user) to understand the specific measured part of a subject body, for example, measurement positions such as the frontal and occipital regions, according to the measurement purpose for the subject body 107. Moreover, as the data regarding the head shape for display, data on the shape of each of the probe holders provided in the biological light measurement device and data on the positional relationship of the optical fibers and optical fibers for detection in each probe holder are stored in the storage unit 115 as a table of "probe" data having predetermined patterns.
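As a sketch of how such a paired table might be held, the following uses hypothetical field names and sample values; the actual storage format of the storage unit 115 is not specified here:

```python
from dataclasses import dataclass

@dataclass
class HeadShapePattern:
    """One 'standard pattern of the head shape': a two-dimensional plane
    figure paired with a three-dimensional wireframe, selectable by age
    group and sex of the subject body."""
    age_group: str
    sex: str
    outline_2d: list   # 2-D plane-figure contour points (x, y)
    wireframe_3d: list # 3-D wireframe vertices (x, y, z)

# Table of standard patterns keyed by (age group, sex).
head_shape_table = {
    ("adult", "female"): HeadShapePattern(
        "adult", "female",
        outline_2d=[(0, 90), (70, 0), (0, -90), (-70, 0)],
        wireframe_3d=[(0, 0, 90), (70, 0, 0), (0, 90, 0)]),
}

def select_pattern(age_group, sex):
    """Look up the standard head-shape pattern for a subject body."""
    return head_shape_table[(age_group, sex)]
```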
The control unit 103 or the signal processing unit 113 has various kinds of arithmetic processing functions or image processing display functions realized by a CPU, a memory, software, and the like. By the arithmetic processing function, for example, a change signal of the amount of hemoglobin converted into a digital signal is processed. On the basis of the processed information, the signal processing unit 113 generates a graph showing a concentration change of oxygenated hemoglobin, a concentration change of deoxygenated hemoglobin, a change in the total hemoglobin concentration, and the like for every channel or an image obtained by plotting them on the two-dimensional image of the subject body. A processing result of the signal processing unit 113 is displayed on the display unit 114 of the input/output unit 116.
The input/output unit 116 has functions for inputting various instructions required to operate the device and for outputting measurement and processing results to the measuring person. The display screen of the display unit 114 has a graphical user interface (GUI) function and serves as an input unit with which the user inputs instructions or information, such as coordinate information, required for various measurements and processing by operating icons, a "probe" position, and the like on the display screen with the mouse 117 or the like. For example, instead of actually measuring the position of the probe holder 108 on the subject body 107, that is, the light irradiation position, the light detection position, or the three-dimensional position of a measurement channel, the measuring person selects the two-dimensional "head shape" with a mouse operation and sets the two-dimensional position of the "probe" with respect to this "head shape", thereby inputting the information on the position coordinates of the probe on the "head shape". In addition, it is needless to say that the display unit 114 for inputting the two-dimensional position coordinates of the "probe" and a display unit that displays the processing result of the signal processing unit 113 overlapping the probe position on the head shape may be provided separately when necessary.
The control unit 103 of the biological light measurement device 100 includes: a coordinate transformation section 203 that calculates the light irradiation position and the light detection position or the position of a measurement channel on a three-dimensional head image by performing coordinate transformation processing on the input probe position; a stimulus presenting section 206 that presents a stimulus to the subject body; a topographic image generating section 207 that generates a topographic image on the basis of the light irradiation position; and an overlap processing section 208 that, on the basis of the input probe position, displays the generated topographic image so as to overlap the light irradiation position and the light detection position or the calculated position of a measurement channel on the head shape. The coordinate transformation section 203 has a function of calculating the light irradiation position and the light detection position or the position of a measurement channel on the three-dimensional head image from the two-dimensional position information set by the probe position easy-input section 202.
In addition, the coordinate transformation section 203 includes a two-dimensional coordinate to three-dimensional coordinate transformation section 204, which performs transformation from two-dimensional coordinates to three-dimensional coordinates, and a three-dimensional coordinate to two-dimensional coordinate transformation section 205, which performs transformation from three-dimensional coordinates to two-dimensional coordinates. For example, the coordinate transformation section 203 calculates the light irradiation position and the light detection position or the calculated position of a measurement channel by sequentially repeating coordinate transformation processing between shapes approximated along the way from the two-dimensional head image to the three-dimensional head image. As will be described in detail later, the coordinate transformation section 203 can perform both two-dimensional and three-dimensional display of a probe position so that user information can be input and operation results can be output and displayed. In addition, the main functions described above that the biological light measurement device 100 of the present invention has are all realized by the control unit 103, the display unit 114, and the like using software, and it is needless to say that they are not limited to the block configuration shown in
In other words, the biological light measurement device 100 of the present invention has a function of easily displaying and setting the light irradiation position and the light detection position or the three-dimensional position of a measurement channel using a display unit with a GUI function. That is, after the measuring person mounts the probe holder 108 on the head of the subject body 107, the light irradiation position and the light detection position of the subject body can easily be set on the two-dimensional head image of the display screen; the light irradiation position and the light detection position on the three-dimensional head image are then estimated from the set information by arithmetic processing such as coordinate transformation, and the actually measured topographic image is displayed overlapping the calculated position on the head image, all without measuring the actual light irradiation position and light detection position set on the subject body. Accordingly, since the time and effort of measuring the actual light irradiation position and light detection position set on the subject body at the time of biological light measurement can be omitted, the burden on the measuring person and the subject body can be significantly reduced.
(1.0 Specific Example of the Overall Configuration)
Next, a more specific configuration for realizing each function of the biological light measurement device 100 of the present invention will be described using the flow chart of
First, a user selects a “standard pattern of the head shape” or a “probe”, which is optimal for a subject body, using a display screen by the head shape data selecting section 201.
In the present embodiment, an operation screen 2011 that gives priority to ease of operation is assumed. That is, the operation screen 2011 shown in
The two-dimensional head image 2012 shown in
In addition, as a two-dimensional head image, it is also possible to form a database so that images when the head is viewed from various directions according to measurement purposes, for example, two-dimensional head images of left and right temporal parts, the front, or the back of the head can be selected as two-dimensional images for positioning by the user.
Then, the user sets the position of the probe 2013 on the selected two-dimensional head image 2012 on the software (S2010).
Then, a light irradiation position and a light detection position on the three-dimensional head image are calculated using the probe position information set on the two-dimensional head image by the measuring person's simple operation (S2110).
First, coordinate transformation processing is performed on the basis of the position of the probe 2013 set on the two-dimensional head image on the software. In this coordinate transformation processing, coordinate transformation between the shapes approximated along the way from the two-dimensional head image to the three-dimensional head image is executed repeatedly in sequence, and the light irradiation position and the light detection position or the calculated position of a measurement channel is obtained. That is, by performing coordinate transformation processing A (S2020), coordinate transformation processing B (S2040), and coordinate transformation processing C (S2060) on the position of the probe center, the position of the probe center on the three-dimensional semi-ellipsoidal surface is calculated (S2060).
Moreover, processing of calculating the length of an ellipse arc and processing of calculating a light irradiation position and a light detection position from the coordinate position of the probe center on the three-dimensional semi-ellipsoidal surface are executed (S2080). On the basis of the results of this calculation processing, a light irradiation position and a light detection position on the three-dimensional head image, that is, a measurement channel, are further calculated (S2110). In addition, the types of shapes approximated along the way and the number of coordinate transformation steps are not limited to the example described above. For example, the intermediate coordinate transformation processing C (S2060) may be omitted. Alternatively, it is also possible to set a three-dimensional hemispherical image, instead of the three-dimensional head image, as the final image by performing coordinate transformation processing B and C on the two-dimensional head image, and to calculate a light irradiation position, a light detection position, or a measurement channel from the coordinate position of the probe center on the three-dimensional hemispherical image.
Then, after the light irradiation position and the light detection position on the three-dimensional head image or the measurement channel (the light irradiation position and the light detection position) are calculated (S2110), a body transmitted light intensity image and a shape image of the object to be measured, as an indication of the measured part, are displayed so as to overlap each other in three dimensions (S2120).
Moreover, as shown in
Here, the processing S2010 in which a user sets the probe position 2010 on a two-dimensional head image on the software will be described.
First, a user sets the probe position 2010 on the two-dimensional head image 2012, which is displayed on the operation screen 2011 of
Under the conditions assumed above, the center coordinate position 2014 of the probe on the two-dimensional head image is used as the setting position information of the user on the two-dimensional head image 2012.
Next, processing in a forward direction in the flow of
In the processing in a forward direction, first, coordinate transformation processing A (S2020) is performed to calculate a coordinate position 2031 of the probe center on a two-dimensional circle image 2032, which is shown in
Then, coordinate transformation processing B (S2040) is performed to calculate a coordinate position 2051 of the probe center on a three-dimensional hemispherical surface 2052, which is shown in
Then, coordinate transformation processing C (S2060) is performed to calculate a coordinate position 2071 of the probe center on a three-dimensional semi-ellipsoidal surface 2072, which is shown in
Then, by the calculation processing A (S2080), a light irradiation position and light detection position 2091 on the three-dimensional semi-ellipsoidal surface 2072 shown in
Then, using the coordinate transformation processing D (S2100), a light irradiation position and light detection position 2111 on the three-dimensional head image 3100 shown in
In addition, it is also possible to calculate a light irradiation position and a light detection position on a three-dimensional head image by performing the same coordinate transformation processing as in the present embodiment using the positional information regarding a light irradiation position and a light detection position on a two-dimensional head image as information regarding the setting position by the user on the two-dimensional head image (S2110).
After the light irradiation position and light detection position 2111 on the three-dimensional head image are calculated, a body transmitted light intensity image 2121 and a shape image of the object to be measured, as an indication of the measured part, are displayed so as to overlap each other in three dimensions, as shown in
Next, a supplementary explanation of the processing (S2120) for overlapping display with a shape image will be given. From the read light irradiation position and light detection position 2111, the topographic image generating section 207 calculates, for every measurement position, the distance d in three-dimensional space between a light irradiation position and a light detection position that are adjacent to each other. In the present embodiment, the number of measurement positions is 16; accordingly, the number of distances d between light irradiation positions and light detection positions calculated in three-dimensional space is 24. Then, the topographic image generating section 207 corrects the topographic image Hb data (original), which is based on the two-dimensional light irradiation and light detection positions (the topographic image before correction), according to the following expression (1), and generates the topographic image Hb data (new), which is based on the three-dimensional light irradiation and light detection positions (the topographic image after correction). Specifically, the pixel value of each pixel of the topographic image obtained by cubic spline interpolation is corrected by the distance d of the measurement position closest to that pixel, according to expression (1).
Hb data (new) = Hb data (original) × Distance … (1)
where Distance = d/30
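Expression (1), applied per pixel with the distance d of the nearest measurement position, can be sketched as follows. The array layout and the helper name are assumptions for illustration, and the cubic spline interpolation itself is taken as already done:

```python
import numpy as np

def correct_topographic_image(hb_original, pixel_xy, channel_xy, channel_d):
    """Scale each pixel of the interpolated topographic image by
    Distance = d / 30, where d is the three-dimensional source-detector
    distance of the measurement position closest to that pixel."""
    hb_original = np.asarray(hb_original, dtype=float)  # (n_pixels,)
    pixel_xy = np.asarray(pixel_xy, dtype=float)        # (n_pixels, 2)
    channel_xy = np.asarray(channel_xy, dtype=float)    # (n_channels, 2)
    channel_d = np.asarray(channel_d, dtype=float)      # (n_channels,)

    # Index of the measurement position closest to each pixel.
    diff = pixel_xy[:, None, :] - channel_xy[None, :, :]
    nearest = np.argmin((diff ** 2).sum(axis=2), axis=1)

    distance = channel_d[nearest] / 30.0                # expression (1)
    return hb_original * distance
```

A pixel whose nearest channel has d exactly 30 mm is left unchanged; channels with shorter three-dimensional separation scale their pixels down proportionally.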
Then, the topographic image generating section 207 converts the topographic image after correction into a topographic image according to the wireframe image generated previously. Specifically, the topographic image generating section 207 converts the topographic image after correction into a three-dimensional topographic image by performing three-dimensional mapping interpolation of the pixel value of each pixel, which forms a topographic image, so as to match head wireframe coordinates.
Thereafter, the topographic image generating section 207 generates a three-dimensional image in which a topographic image overlaps a wireframe image, converts the three-dimensional image into a two-dimensional image (“three-dimensional topographic image”) seen from the viewing position instructed from an input means, and displays it on the display surface of the display means.
Next, processing in a backward direction in the flow chart of
As previously described, the coordinate transformation processing A (S2020), the coordinate transformation processing B (S2040), the coordinate transformation processing C (S2060), and the coordinate transformation processing D (S2100) are reversible coordinate transformations. Therefore, it is also possible for the user to set the probe position on a three-dimensional head image and to calculate and display the corresponding probe position on the two-dimensional head image.
First, for example, on the three-dimensional head image 3100 in a state shown in
Then, using the coordinate transformation processing D (S2100), the coordinate position 2071 of the probe center on the three-dimensional semi-ellipsoidal surface 2072 shown in
Then, using the coordinate transformation processing C (S2060), the coordinate position 2051 of the probe center on the three-dimensional hemispherical surface 2052 shown in
As previously described, the light irradiation position and light detection position 2091 on the three-dimensional semi-ellipsoidal surface 2072 and the light irradiation position and light detection position 2111 on the three-dimensional head image 3100 are calculated from the coordinate position 2071 of the probe center on the three-dimensional semi-ellipsoidal surface 2072.
As described above, according to the present invention, by realizing a function in which a measuring person sets a light irradiation position and a light detection position on a head image displayed in two dimensions and the three-dimensional light irradiation position and light detection position are calculated from the set information, a body transmitted light intensity image and a shape image of the object to be measured, as an indication of the measured part, can be displayed so as to overlap each other in three dimensions without measuring the three-dimensional coordinates of the light irradiation position and the light detection position. That is, with no need for the measuring person to measure the three-dimensional coordinates of the light irradiation position and the light detection position on the head of the subject body, the body transmitted light intensity image of the subject body and the shape image of the object to be measured can be displayed overlapping each other on the three-dimensional image. Since an accurate measurement of the light irradiation position and the light detection position is not needed, the total time of biological light measurement can be shortened by omitting the position measurement, and the cost and the burden on the measuring person can be reduced.
Next, the coordinate transformation processing in the embodiment of the present invention will be described in more detail.
(2.1. Two-Dimensional Head Image ⇔ Two-Dimensional Circle Image)
First, the coordinate transformation processing A (S2020) will be described using
(2.1.1. Coordinate Transformation Processing A Two-Dimensional Head Image→Two-Dimensional Circle Image)
The calculation procedure of making a point on a two-dimensional head image uniquely correspond to a point on a two-dimensional circle image by the coordinate transformation processing A (S2020) will be described. The two-dimensional circle image is assumed to be a circle with a radius Rc and the central point (0 0).
Each coordinate point and each angle of deflection are expressed as follows.
Point Pe(Xe Ye): coordinate point on a two-dimensional head image to be subjected to the coordinate transformation processing A (2020)
Point Oe(Xeo Yeo): central point of a two-dimensional head image
Point Pet (Xet Yet): point of intersection of a straight line OePe and a two-dimensional head image
Angle of deflection θe: angle of deflection on a two-dimensional head image
Angle of deflection θc: angle of deflection on a two-dimensional circle image
Point Pc(Xc Yc): coordinates on a two-dimensional circle image after coordinate transformation of the point Pe by the coordinate transformation processing A
The calculation of coordinate transformation is performed as follows.
(I) In the case of (Xe Ye) = (Xeo Yeo):
(Xc Yc) = (0 0)
(II) In the case of (Xe Ye)≠(Xeo Yeo)
(i) The angle of deflection θe is calculated as θe = arccos((Xe−Xeo)/√((Xe−Xeo)²+(Ye−Yeo)²)).
The sign is kept if Ye−Yeo ≧ 0 (0 ≦ θe ≦ π).
θe = −θe if Ye−Yeo < 0 (−π ≦ θe ≦ 0)
(ii) The point of intersection Pet(Xet Yet) of the straight line OePe and a two-dimensional head image is calculated.
(iii) The point Pc(Xc Yc) is calculated. The angle of deflection on the two-dimensional circle image is θc = θe, and the radius is normalized by the head contour so that the distance from the center satisfies |Pc|/Rc = |OePe|/|OePet|. That is,
Xc = Rc·(|OePe|/|OePet|)·cos θc
Yc = Rc·(|OePe|/|OePet|)·sin θc
are calculated.
(2.1.2. Coordinate Transformation Processing A Two-Dimensional Circle Image→Two-Dimensional Head Image)
The calculation procedure of making a point on a two-dimensional circle image uniquely correspond to a point on a two-dimensional head image by the coordinate transformation processing A (S2020) will be described. It is assumed that the two-dimensional circle image is a circle with a radius Rc and a central point (0 0).
Each coordinate point and each angle of deflection are expressed as follows.
Point Pc(Xc Yc): coordinates on a two-dimensional circle image to be subjected to the coordinate transformation processing A
θc: angle of deflection on a two-dimensional circle image
θe: angle of deflection on a two-dimensional head image
Point Oe (Xeo Yeo): central point Oe of a two-dimensional head image
Point Pet(Xet Yet): point of intersection of a straight line OePe and a two-dimensional head image
Point Pe(Xe Ye): coordinates on a two-dimensional head image after coordinate transformation of the point Pc by the coordinate transformation processing A
The calculation of coordinate transformation is performed as follows.
(I) In the case of (Xc Yc)=(0 0)
(Xe Ye)=(Xeo Yeo)
(II) In the case of (Xc Yc)≠(0 0)
(i) The angle of deflection θc is calculated.
The value is kept the same if Yc≧0 (0≦θc≦π)
θc=−θc if Yc<0 (−π≦θc≦0)
(ii) The point Pet(Xet Yet) is calculated.
(ii-1) The angle of deflection is calculated as θe=θc.
(ii-2) A straight line OePet is calculated.
(1) In the case of
(2) In the case of
(ii-3) The point Pet(Xet Yet) is calculated by calculating the point of intersection of the head image and the straight line.
(iii) The point Pe(Xe Ye) is calculated.
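The round trip of the coordinate transformation processing A described above can be sketched as follows. This is a minimal illustration, not the exact procedure of the embodiment: it assumes the head contour is approximated by an ellipse with hypothetical semi-axes Ae and Be centered at Oe (so that the intersection point Pet can be computed in closed form), and that the radial fraction |OePe|/|OePet| is preserved on the circle image.

```python
import math

# Minimal sketch of coordinate transformation processing A. Assumption: the
# head contour is approximated by an ellipse with semi-axes Ae, Be centered at
# Oe, and the radial fraction |OePe| / |OePet| is preserved on the circle.
def head_to_circle(Pe, Oe, Ae, Be, Rc):
    """Two-dimensional head image -> two-dimensional circle image."""
    dx, dy = Pe[0] - Oe[0], Pe[1] - Oe[1]
    if dx == 0.0 and dy == 0.0:          # case (I): center maps to center
        return (0.0, 0.0)
    theta_e = math.atan2(dy, dx)         # angle of deflection (-pi..pi)
    # |OePet|: distance from Oe to the elliptical contour along theta_e
    r_et = 1.0 / math.sqrt((math.cos(theta_e) / Ae) ** 2
                           + (math.sin(theta_e) / Be) ** 2)
    frac = math.hypot(dx, dy) / r_et     # radial fraction |OePe| / |OePet|
    return (Rc * frac * math.cos(theta_e), Rc * frac * math.sin(theta_e))

def circle_to_head(Pc, Oe, Ae, Be, Rc):
    """Two-dimensional circle image -> two-dimensional head image."""
    if Pc[0] == 0.0 and Pc[1] == 0.0:    # case (I)
        return (Oe[0], Oe[1])
    theta_c = math.atan2(Pc[1], Pc[0])   # theta_e = theta_c
    r_et = 1.0 / math.sqrt((math.cos(theta_c) / Ae) ** 2
                           + (math.sin(theta_c) / Be) ** 2)
    frac = math.hypot(Pc[0], Pc[1]) / Rc
    return (Oe[0] + frac * r_et * math.cos(theta_c),
            Oe[1] + frac * r_et * math.sin(theta_c))
```

Applying one function and then the other returns the original point, which reflects the unique correspondence required by the processing A.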
(2.2. Two-Dimensional Circle Image ↔ Three-Dimensional Hemispherical Surface)
The coordinate transformation processing B (S2040) will be described using
(2.2.1. Coordinate Transformation Processing B Two-Dimensional Circle Image→Three-Dimensional Hemispherical Surface)
The calculation procedure of making a point on a two-dimensional circle image uniquely correspond to a point on a three-dimensional hemispherical surface by the coordinate transformation processing B (S2040) will be described.
It is assumed that the two-dimensional circle image is a circle with a radius Rc and a central point (0 0) and the three-dimensional hemispherical surface is a hemispherical surface with a radius Rs and a central point (0 0 0).
Each coordinate point is expressed as follows.
Point Pc (Xc Yc): point on a two-dimensional circle image to be subjected to the coordinate transformation processing B
Point Ps(Xs Ys Zs): point on a three-dimensional hemispherical surface after coordinate transformation of the point Pc by the coordinate transformation processing B (Zs≧0)
The calculation of coordinate transformation is performed as follows.
(I) Coordinates of the point Pc are transformed into polar coordinates.
Using the polar coordinates, the coordinates of the point Pc are expressed as (Xc Yc)=Rc·(cos θc sin θc)
The value is kept the same if Yc≧0 (0≦θc≦π)
θc=−θc if Yc<0 (−π≦θc≦0)
(II) Polar coordinates of the point Ps are calculated, and the point Ps(Xs Ys Zs) is calculated.
Angles of deflection θs and φs used for polar coordinate expression of the point Ps are calculated as follows.
In addition, the coordinates of the point Ps are calculated as
(2.2.2. Coordinate Transformation Processing B Three-Dimensional Hemispherical Surface→Two-Dimensional Circle Image)
The calculation procedure of making a point on a three-dimensional hemispherical surface uniquely correspond to a point on a two-dimensional circle image by the coordinate transformation processing B (S2040) will be described.
It is assumed that the two-dimensional circle image is a circle with a radius Rc and a central point (0 0) and the three-dimensional hemispherical surface is a hemispherical surface with a radius Rs and a central point (0 0 0).
Each coordinate point is expressed as follows.
Point Ps(Xs Ys Zs): point on a three-dimensional hemispherical surface to be subjected to coordinate transformation processing B (Zs≧0)
Point Pc (Xc Yc): point on a two-dimensional circle image after coordinate transformation of the point Ps by the coordinate transformation processing B
The calculation of coordinate transformation is performed as follows.
(I) Coordinates of the point Ps are transformed into polar coordinates.
Using the polar coordinates, the coordinates of the point Ps are expressed as
Angles of deflection θs and φs are calculated as follows.
The same if
(II) Polar coordinates of the point Pc are calculated, and the point Pc(Xc Yc) is calculated.
The angle of deflection θc and a moving radius r used for polar coordinate expression of the point Pc are calculated as follows.
In addition, the coordinates of the point Pc are calculated as (Xc Yc)=r·(cos θc sin θc)
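The formulas of the coordinate transformation processing B are not reproduced in the text above, so the following is only a plausible sketch under a stated assumption: the moving radius r on the circle image is mapped linearly to the angle of deflection φs on the hemisphere (r=0 to the apex, r=Rc to the equator), and the angle of deflection θc is carried over unchanged.

```python
import math

# Plausible sketch of coordinate transformation processing B. Assumption: the
# moving radius r on the circle image maps linearly to the angle of deflection
# phi_s on the hemisphere (r = 0 -> apex, r = Rc -> equator); theta is kept.
def circle_to_hemisphere(Pc, Rc, Rs):
    """Two-dimensional circle image -> three-dimensional hemispherical surface."""
    r = math.hypot(Pc[0], Pc[1])
    theta_c = math.atan2(Pc[1], Pc[0]) if r > 0.0 else 0.0
    phi_s = (math.pi / 2.0) * (1.0 - r / Rc)      # assumed linear mapping
    return (Rs * math.cos(theta_c) * math.cos(phi_s),
            Rs * math.sin(theta_c) * math.cos(phi_s),
            Rs * math.sin(phi_s))

def hemisphere_to_circle(Ps, Rc, Rs):
    """Three-dimensional hemispherical surface -> two-dimensional circle image."""
    theta_c = math.atan2(Ps[1], Ps[0])
    phi_s = math.asin(max(-1.0, min(1.0, Ps[2] / Rs)))
    r = Rc * (1.0 - 2.0 * phi_s / math.pi)
    return (r * math.cos(theta_c), r * math.sin(theta_c))
```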
(2.3. Three-Dimensional Hemispherical Surface ↔ Three-Dimensional Semi-Ellipsoidal Surface)
The coordinate transformation processing C (S2060) will be described using
(2.3.1. Coordinate Transformation Processing C Three-Dimensional Hemispherical Surface→Three-Dimensional Semi-Ellipsoidal Surface)
The calculation procedure of making a point on a three-dimensional hemispherical surface uniquely correspond to a point on a three-dimensional semi-ellipsoidal surface by the coordinate transformation processing C (S2060) will be described.
It is assumed that the three-dimensional hemispherical surface is a hemispherical surface with a radius Rs and a central point (0 0 0). It is assumed that the three-dimensional semi-ellipsoidal surface is a semi-ellipsoidal surface with radii Rvx, Rvy, and Rvz and a central point (0 0 0).
Each coordinate point is expressed as follows.
Point Ps(Xs Ys Zs): point on a three-dimensional hemispherical surface to be subjected to coordinate transformation processing C (Zs≧0)
Point Pv(Xv Yv Zv): point on a three-dimensional semi-ellipsoidal surface after coordinate transformation of the point Ps by the coordinate transformation processing C
The calculation of coordinate transformation is performed as follows.
(2.3.2. Coordinate Transformation Processing C Three-Dimensional Semi-Ellipsoidal Surface→Three-Dimensional Hemispherical Surface)
The calculation procedure of making a point on a three-dimensional semi-ellipsoidal surface uniquely correspond to a point on a three-dimensional hemispherical surface by the coordinate transformation processing C (S2060) will be described.
It is assumed that the three-dimensional semi-ellipsoidal surface is a semi-ellipsoidal surface with radii Rvx, Rvy, and Rvz and a central point (0 0 0). It is assumed that the three-dimensional hemispherical surface is a hemispherical surface with a radius Rs and a central point (0 0 0).
Each coordinate point is expressed as follows.
Point Pv(Xv Yv Zv): point on a three-dimensional semi-ellipsoidal surface to be subjected to coordinate transformation processing C
Point Ps(Xs Ys Zs): point on a three-dimensional hemispherical surface after coordinate transformation of the point Pv by the coordinate transformation processing C
The calculation of coordinate transformation is performed as follows.
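One natural realization of the coordinate transformation processing C, offered here only as an assumption since the embodiment's own formulas are not reproduced in the text above, is a per-axis scaling between the hemisphere of radius Rs and the semi-ellipsoid with radii Rvx, Rvy, and Rvz:

```python
# Sketch of coordinate transformation processing C as a per-axis scaling
# between the hemisphere (radius Rs) and the semi-ellipsoid (Rvx, Rvy, Rvz).
# This is an assumed realization, not the embodiment's stated formula.
def hemisphere_to_ellipsoid(Ps, Rs, Rvx, Rvy, Rvz):
    """Point Ps on the hemisphere -> point Pv on the semi-ellipsoidal surface."""
    return (Rvx * Ps[0] / Rs, Rvy * Ps[1] / Rs, Rvz * Ps[2] / Rs)

def ellipsoid_to_hemisphere(Pv, Rs, Rvx, Rvy, Rvz):
    """Point Pv on the semi-ellipsoidal surface -> point Ps on the hemisphere."""
    return (Rs * Pv[0] / Rvx, Rs * Pv[1] / Rvy, Rs * Pv[2] / Rvz)
```

The scaling maps a point satisfying Xs²+Ys²+Zs²=Rs² to a point satisfying (Xv/Rvx)²+(Yv/Rvy)²+(Zv/Rvz)²=1, and the inverse undoes it exactly.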
(2.4. Three-Dimensional Semi-Ellipsoidal Surface ↔ Three-Dimensional Head Image)
The coordinate transformation processing D (S2100) will be described using
(2.4.1. Three-Dimensional Semi-Ellipsoidal Surface→Three-Dimensional Head Image)
A point on a three-dimensional semi-ellipsoidal surface is made to uniquely correspond to a point on a three-dimensional head image by the coordinate transformation processing D (S2100).
It is assumed that the three-dimensional semi-ellipsoidal surface is a semi-ellipsoidal surface with radii Rvx, Rvy, and Rvz and a central point (0 0 0).
It is assumed that the three-dimensional head image is formed by N polygons and its central point is (0 0 0).
Although this calculation processing does not depend on the shape of a polygon, it is assumed that the three-dimensional head image is formed by triangle polygons in the present embodiment.
Three points which form each triangle polygon are expressed as (Ri.1, Ri.2, Ri.3), . . . (Rj.1, Rj.2, Rj.3), . . . (RN.1, RN.2, RN.3). Each coordinate point is expressed as follows.
Point Ov(0 0 0): Central point on a three-dimensional semi-ellipsoid
Point Pv(Xv Yv Zv): point on a three-dimensional semi-ellipsoidal surface to be subjected to coordinate transformation processing D
Point Ph(Xh Yh Zh): point on a three-dimensional head image after coordinate transformation of the point Pv by the coordinate transformation processing D
The point of intersection of the straight line OvPv and a three-dimensional head image is defined as a point Ph, and it is calculated as follows.
(I) The following calculation is performed for each triangle polygon.
kj.1, kj.2, kj.3
In addition,
* OR1×OR2, OR1×OR3, and OR2×OR3 (where OR1, OR2, and OR3 are position vectors) indicate vector cross products.
(II) Find a triangle polygon satisfying 0≦kj.2+kj.3≦1, 0≦kj.2, and 0≦kj.3.
(A Triangle Polygon Crossing the Straight Line OvPv is Searched for)
(IIa) When there is a triangle polygon satisfying 0≦kj.2+kj.3≦1, 0≦kj.2, and 0≦kj.3, the triangle polygon is expressed as (Rj.1, Rj.2, Rj.3) and a calculation result of (I) in the triangle polygon is expressed as kj.1, kj.2, kj.3.
(IIb) When there is no triangle polygon satisfying 0≦kj.2+kj.3≦1, 0≦kj.2, and 0≦kj.3, a triangle polygon is found which satisfies 0<kj.1 and for which the sum of the distances from the three points, |OQ−OR1|+|OQ−OR2|+|OQ−OR3|, becomes minimum. The triangle polygon is expressed as (Rj.1, Rj.2, Rj.3), and the calculation result of (I) in the triangle polygon is expressed as kj.1, kj.2, kj.3.
(III) Using kj.1, the coordinates of the point Ph are calculated.
(Xh Yh Zh)=kj.1·(Xv Yv Zv)
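Steps (I) to (III) amount to a ray-triangle intersection: for each polygon, coefficients kj.1, kj.2, kj.3 are found such that kj.1·OPv equals OR1+kj.2·(OR2−OR1)+kj.3·(OR3−OR1), and the polygon whose conditions in (II) hold is selected. A self-contained sketch (plain Python, Cramer's rule; the helper names are ours, not the embodiment's):

```python
# Sketch of steps (I)-(III): solve k1*Pv = R1 + k2*(R2 - R1) + k3*(R3 - R1)
# for each triangle polygon by Cramer's rule and keep the polygon crossed by
# the ray Ov -> Pv.
def _det3(a, b, c):
    # determinant of the 3x3 matrix whose columns are a, b, c
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - b[0] * (a[1] * c[2] - a[2] * c[1])
            + c[0] * (a[1] * b[2] - a[2] * b[1]))

def triangle_coeffs(Pv, R1, R2, R3):
    """Return (k1, k2, k3), or None when the ray is parallel to the polygon."""
    m2 = [R1[i] - R2[i] for i in range(3)]   # -(R2 - R1)
    m3 = [R1[i] - R3[i] for i in range(3)]   # -(R3 - R1)
    d = _det3(Pv, m2, m3)
    if abs(d) < 1e-12:
        return None
    return (_det3(R1, m2, m3) / d,           # Cramer's rule, column 1
            _det3(Pv, R1, m3) / d,           # column 2
            _det3(Pv, m2, R1) / d)           # column 3

def head_point(Pv, triangles):
    """Step (III): Ph = k1 * Pv for the triangle satisfying the conditions."""
    for R1, R2, R3 in triangles:
        k = triangle_coeffs(Pv, R1, R2, R3)
        if k and k[0] > 0.0 and k[1] >= 0.0 and k[2] >= 0.0 and k[1] + k[2] <= 1.0:
            return (k[0] * Pv[0], k[0] * Pv[1], k[0] * Pv[2])
    return None          # would fall back to the nearest polygon, case (IIb)
```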
(2.4.2 Three-Dimensional Head Image→Three-Dimensional Semi-Ellipsoidal Surface)
A point on a three-dimensional head image is made to uniquely correspond to a point on a three-dimensional semi-ellipsoidal surface by the coordinate transformation processing D (S2100). It is assumed that the three-dimensional semi-ellipsoidal surface is a semi-ellipsoidal surface with radii Rvx, Rvy, and Rvz and a central point (0 0 0). It is assumed that the central point of a three-dimensional head image is (0 0 0).
Each coordinate point is expressed as follows.
Point Ov(0 0 0): Central point on a three-dimensional semi-ellipsoid
Point Ph(Xh Yh Zh): point on a three-dimensional head image to be subjected to coordinate transformation processing D
Point Pv(Xv Yv Zv): point on a three-dimensional semi-ellipsoidal surface after coordinate transformation of the point Ph by the coordinate transformation processing D
The point of intersection of the straight line OvPh and a three-dimensional semi-ellipsoidal surface is defined as a point Pv, and it is calculated as follows.
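The intersection of the straight line OvPh with the semi-ellipsoidal surface can be computed in closed form: writing Pv=t·Ph and substituting into the ellipsoid equation (Xv/Rvx)2+(Yv/Rvy)2+(Zv/Rvz)2=1 gives t directly. A short sketch:

```python
import math

# Sketch: intersection of the straight line OvPh with the semi-ellipsoidal
# surface. Writing Pv = t * Ph and substituting into the ellipsoid equation
# (Xv/Rvx)^2 + (Yv/Rvy)^2 + (Zv/Rvz)^2 = 1 gives t in closed form.
def ellipsoid_point(Ph, Rvx, Rvy, Rvz):
    t = 1.0 / math.sqrt((Ph[0] / Rvx) ** 2 + (Ph[1] / Rvy) ** 2
                        + (Ph[2] / Rvz) ** 2)
    return (t * Ph[0], t * Ph[1], t * Ph[2])
```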
(3. Calculation of Approximation of a Three-Dimensional Head Image to Three-Dimensional Semi-Ellipsoid)
A three-dimensional head image is approximated using a three-dimensional semi-ellipsoid. Although some approximation methods may be considered, three radii Rvx, Rvy, and Rvz of a three-dimensional semi-ellipsoid are simply calculated from a three-dimensional head image in the present embodiment.
The procedure of calculating three radii of a three-dimensional semi-ellipsoid using four points of a right ear Ph.R, a left ear Ph.L, the Nasion Ph.N, and the crown Ph.T in a three-dimensional head image will be described using
The midpoint Oh of the right ear Ph.R and the left ear Ph.L is found, and |OhPh.R| is set as Rvx. A perpendicular is drawn from the Nasion Ph.N to the straight line Ph.RPh.L, and the distance between the foot of the perpendicular and Ph.N is set as Rvy. A perpendicular is drawn from the crown Ph.T to the plane including the midpoint Oh of the right ear Ph.R and the left ear Ph.L, and the distance between the foot of the perpendicular and Ph.T is set as Rvz.
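The procedure of this section can be sketched as follows. The base plane used for Rvz is assumed here to be the plane through both ears and the Nasion, since the text does not fully specify it; the helper names are ours.

```python
import math

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(v):
    return math.sqrt(_dot(v, v))

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def semi_ellipsoid_radii(ear_r, ear_l, nasion, crown):
    """Rvx, Rvy, Rvz from the four fiducial points of section 3."""
    Oh = tuple((a + b) / 2.0 for a, b in zip(ear_r, ear_l))
    Rvx = _norm(_sub(ear_r, Oh))                    # |Oh -> right ear|
    # Rvy: distance from the Nasion to the straight line through both ears
    axis = _sub(ear_l, ear_r)
    w = _sub(nasion, ear_r)
    t = _dot(w, axis) / _dot(axis, axis)
    foot = tuple(ear_r[i] + t * axis[i] for i in range(3))
    Rvy = _norm(_sub(nasion, foot))
    # Rvz: distance from the crown to the base plane (assumed here: the plane
    # through both ears and the Nasion)
    n = _cross(axis, w)
    Rvz = abs(_dot(_sub(crown, Oh), n)) / _norm(n)
    return (Rvx, Rvy, Rvz)
```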
(4. Calculation of the Length of an Ellipse Arc)
A method of calculating the length of an ellipse arc will be described on the basis of
The problem of finding the length of an ellipse arc is known as the problem of calculating an elliptic integral. Since it cannot be expressed by elementary functions, it is calculated approximately by numerical integration. Although several methods of numerical integration have been proposed, the calculation is performed by mensuration by parts using the extended midpoint rule in the present embodiment.
The radii of an ellipse are set to a and b, its center is set to (0, 0), and the length L of the arc between two points P1(X1 Y1) and P2(X2 Y2) on the ellipse is calculated.
The two points P1 and P2 are expressed as follows using the angles of deflection α and β
P1(X1 Y1)=(a cos α b sin α)
P2(X2 Y2)=(a cos β b sin β)
Here, α<β is assumed, since generality is not lost by this assumption.
The length L of the arc of the ellipse is expressed by the following definite integral.
L=∫αβ√(a2 sin2 θ+b2 cos2 θ)dθ
(I) In the case of
Approximate calculation of this definite integral is performed as follows by mensuration by parts using the extended midpoint rule.
N: Division number in mensuration by parts
(II) Other cases
As shown in the following table, definite integral is decomposed, and the decomposed definite integral is calculated according to approximation calculation of (I).
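The definite integral of the arc length can be sketched with the extended midpoint rule as follows; the integrand √(a2 sin2 θ+b2 cos2 θ) follows from the parameterization (a cos θ, b sin θ).

```python
import math

# Sketch of section 4: arc length of the ellipse (a*cos t, b*sin t) between
# deflection angles alpha < beta by the extended midpoint rule.
def ellipse_arc_length(a, b, alpha, beta, N=1000):
    h = (beta - alpha) / N                    # width of each sub-interval
    total = 0.0
    for i in range(N):
        t = alpha + (i + 0.5) * h             # midpoint of the i-th interval
        total += math.sqrt(a * a * math.sin(t) ** 2 + b * b * math.cos(t) ** 2)
    return total * h
```

For a circle (a=b) the integrand is constant and the rule is exact; for a general ellipse the midpoint rule converges quickly because the integrand is smooth.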
(5. Calculation of the Point Coordinates at which the Length of an Ellipse Arc Becomes a Designated Distance)
A method of calculating the point coordinates which are away from a certain point on an ellipse by a designated distance will be described on the basis of
The coordinates of the point P2 at which the distance on the ellipse to the point P1 is L are calculated.
The point P1 is expressed as follows using α.
P1(X1 Y1)=(a cos α b sin α)
On the other hand, the point P2 is expressed as follows using the angle of deflection β.
P2(X2 Y2)=(a cos β b sin β)
In addition, the angle of deflection β is calculated.
Here, α<β and a<b are assumed, since generality is not lost by these assumptions.
As previously described, since the length of an ellipse arc cannot be calculated in a deductive way, it is calculated by approximation calculation. Therefore, the length of an ellipse arc is calculated for the candidate value of the angle of deflection β in a possible range and the angle of deflection which is closest to the designated distance is adopted. Specifically, the calculation is performed as follows.
(i) The range of a value that the angle of deflection β can have is as follows.
(ii) The candidate value of β is generated by dividing the range of the possible value by an appropriate division number N.
Candidate value of β
(iii) The length of the ellipse arc is calculated for each candidate value of β, and the value closest to the designated distance L is set as β.
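Steps (i) to (iii) above can be sketched as follows; the division numbers are arbitrary illustration values, and the scanned range of β is an assumption.

```python
import math

def _arc(a, b, alpha, beta, N=200):
    # extended midpoint rule for the ellipse arc length (see section 4)
    h = (beta - alpha) / N
    return h * sum(math.sqrt(a * a * math.sin(alpha + (i + 0.5) * h) ** 2
                             + b * b * math.cos(alpha + (i + 0.5) * h) ** 2)
                   for i in range(N))

def angle_at_distance(a, b, alpha, L, N=720, beta_max=2.0 * math.pi):
    """Scan N candidate values of beta and keep the one whose arc length from
    alpha is closest to the designated distance L (steps (i)-(iii))."""
    best_beta, best_err = alpha, float("inf")
    for i in range(1, N + 1):
        beta = alpha + (beta_max - alpha) * i / N
        err = abs(_arc(a, b, alpha, beta) - L)
        if err < best_err:
            best_beta, best_err = beta, err
    return best_beta
```

The resolution of the answer is bounded by the candidate spacing, so N trades accuracy against computation, exactly as the division number N in the text does.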
(6. Actual Calculation Processing)
Hereinafter, the procedure of calculating a light irradiation position and a light detection position (S2110) on a three-dimensional head image on the basis of the probe position (S2010) on a two-dimensional head image set on the software by the user will be described.
(Calculation Processing A of Calculating a Light Irradiation Position and a Light Detection Position from the Coordinate Position of the Probe Center (S2080))
The calculation processing A (S2080) will be described below using
A light irradiation position and a light detection position on a three-dimensional semi-ellipsoidal surface are calculated from the coordinate position of the probe center on the three-dimensional semi-ellipsoidal surface.
In the present embodiment, it is assumed that a probe on a two-dimensional head image is disposed as shown in
The inclination of a reference line (901) which connects the central point of a probe and the central point of a two-dimensional head image is set to αu, and the inclination of a centerline (902) of the probe with respect to the reference line (901) is set to θu.
In order to calculate a light irradiation position and a light detection position on the three-dimensional semi-ellipsoidal surface from the coordinate position of the probe center on the three-dimensional semi-ellipsoidal surface, the information regarding the relationship between the probe center and the light irradiation position and the light detection position in the probe is used as known probe shape information.
As the probe shape information, it is usually preferable to assume a sheet-like probe, in which a certain fixed number of light irradiation positions and light detection positions are mixed together, and to use the shape information.
It is very difficult to make a probe whose shape completely matches the head shape, due to individual differences in the head shapes of subject bodies or shape differences depending on the fixing position. Usually, a probe with a shape that matches the head shapes of as many subject bodies as possible is used.
In the present embodiment, the calculation processing A (S2080) will be described for the two cases where the probe shape information is assumed to be a shape on the two-dimensional plane and a shape disposed on the three-dimensional spherical surface.
(6.1. Case where a Shape on a Two-Dimensional Plane is Used)
First, the case is assumed in which each light irradiation position and each light detection position are arrayed on the sheet shape on the two-dimensional plane as the probe shape information. For example, it is assumed that the shape information shown in
The coordinates of the central position of the probe on the two-dimensional plane are set to (0 0).
The coordinate points of N light irradiation positions and N light detection positions on the two-dimensional plane are set to M1(x1 y1), . . . , Mi(xi yi) . . . , MN(xN yN).
Then, rotation of the probe set on the two-dimensional head image is reflected in the probe shape information on the two-dimensional plane. The coordinate points of the N light irradiation positions and the N light detection positions on the two-dimensional plane are as follows, as shown in
M1(x1 cos θu−y1 sin θu x1 sin θu+y1 cos θu)
Mi(xi cos θu−yi sin θu xi sin θu+yi cos θu)
MN(xN cos θu−yN sin θu xN sin θu+yN cos θu)
Perpendiculars are drawn from each light irradiation position and each light detection position to the reference line passing through the center of the probe, and each foot of the perpendiculars is set as a reference point.
K1(0 x1 sin θu+y1 cos θu), . . . , Ki(0 xi sin θu+yi cos θu), . . . , KN(0 xN sin θu+yN cos θu)
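The rotation of the probe shape by θu and the projection onto the reference line (taken as the y-axis of the head-image coordinates, matching the Ki coordinates above) can be sketched as:

```python
import math

def rotate_probe(points, theta_u):
    """Rotate the probe shape coordinates Mi(xi yi) by the inclination theta_u."""
    c, s = math.cos(theta_u), math.sin(theta_u)
    return [(x * c - y * s, x * s + y * c) for x, y in points]

def reference_points(points, theta_u):
    """Feet Ki of the perpendiculars from the rotated positions onto the
    reference line, i.e. Ki(0, xi*sin(theta_u) + yi*cos(theta_u))."""
    c, s = math.cos(theta_u), math.sin(theta_u)
    return [(0.0, x * s + y * c) for x, y in points]
```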
When calculating each light irradiation position and each light detection position on the three-dimensional semi-ellipsoid, a distance (expression 2) from the center of a probe to a reference point and a distance (expression 3) from the reference point to each light irradiation position and each light detection position are used from the shape information of the probe.
In order to calculate the position of Mi on the three-dimensional semi-ellipsoid, the following two steps of calculation are performed.
(I) Along the reference line on the three-dimensional semi-ellipsoid, the coordinates of the reference point Ki on the three-dimensional semi-ellipsoid, at which the distance on the three-dimensional semi-ellipsoid from the central point of the probe on the three-dimensional semi-ellipsoid becomes (expression 4), are calculated.
(II) A coordinate point at which the distance on the three-dimensional semi-ellipsoid from the reference point Ki on the three-dimensional semi-ellipsoid becomes (expression 5) is calculated, and this is set as Mi on the three-dimensional semi-ellipsoid.
Hereinafter, details of calculation processing of (I) and (II) will be described.
(I) The coordinates of the reference point Ki on the three-dimensional semi-ellipsoid are calculated.
The procedure of calculating the reference point Ki as a point on the reference line passing through the central point Pv of a probe on a three-dimensional semi-ellipsoid and the apex of a semi-ellipsoid will be described using
In
Pv(Xv Yv Zv)=(Rvx·cos θv·cos φv Rvy·sin θv·cos φv Rvz·sin φv)
Here, θv and φv are angles of deflection in polar coordinate expression and are calculated as follows.
As a point on the reference line, the reference point Ki can be expressed using the angle of deflection φKi as Ki(Xi Yi Zi)=(Rvx·cos θv·cos φKi Rvy·sin θv·cos φKi Rvz·sin φKi).
On an ellipse formed on a cut plane including the reference line passing through the apex of a semi-ellipsoid and the central point Pv of the probe on the three-dimensional semi-ellipsoid, a point at which the distance from the central point Pv of the probe becomes (expression 4) is calculated. (refer to
On the cut plane S1, the points Pv and Ki are expressed as follows.
On the other hand, the reference point Ki at which the distance from the central point Pv of the probe becomes (expression 4) is calculated by calculating the angle of deflection φKi using the "calculation of the point coordinates at which the length of an ellipse arc becomes a designated distance" method described previously.
Moreover, here, the angle α between the Xv axis and a line segment, which is obtained by projecting the reference line to the XvYv coordinate plane, is calculated as follows.
(1) In the case of (Xv Yv)≠(0 0)
is calculated if Yv≧0, and
is calculated if Yv<0
(2) In the case of (Xv Yv)=(0 0), α=θu is calculated.
(II) The coordinates of Mi on the three-dimensional semi-ellipsoid are calculated.
A method of calculating the coordinates of Mi on the three-dimensional semi-ellipsoid will be described below.
The point Mi is calculated as a point, at which the distance from the reference point Ki becomes (expression 5), on the curve (curve L2) formed as a line of intersection of an ellipsoid and a cross section (cross section S2) which is perpendicular to the “tangential plane at the point Ki on the ellipsoid” and which is perpendicular to the “cross section (cross section S1) perpendicular to the bottom surface of the ellipsoid passing through the point Ki”. The calculation procedure is as follows.
(i) Find the expression of the tangential line L1 at the point Ki, which is included in the cross section S1, to the ellipsoid.
(ii) Find the expression of the cross section S2.
(iii) Find the expression of the curve L2.
(iv) Find a point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2.
Details of each calculation procedure differ depending on the coordinate value (expression 6) of the reference point Ki.
(XKi YKi ZKi) (expression 6)
The coordinate value of the reference point Ki is related to each of the following six conditions, and details of the calculation procedures (i) to (iv) will be described below.
(1) In the case of condition 1
(XKi≠0,YKi≠0,ZKi≠0)
(i) Find the expression of the tangential line L1 at the point Ki, which is included in the cross section S1, to the ellipsoid.
As shown in
In the XsYs coordinate system on the cross section S1, the point Ki is expressed as follows.
Point Ki(XKi·√(1+tan2 α) ZKi)
The straight line L1 in the XsYs coordinate system on the cross section S1 is expressed as follows.
In addition,
The following relational expressions are satisfied between the coordinate system on the cross section S1 shown in
Xs=X·√(1+tan2 α)
Ys=Z
Y=X·tan α
Taking the above into consideration, the following is obtained.
(ii) Find the expression of the cross section S2.
As shown in
Cross section S2: (X−XKi)+h2·(Y−YKi)−h3·(Z−ZKi)=0
Here, h2=tan α
(iii) Find the expression of the curve L2.
The expression of the ellipsoid becomes as follows.
Here, the expression of the cross section S2 found previously is changed.
This expression of the cross section S2 is substituted into the expression of the ellipsoid to find a cross-sectional line formed by the line of intersection of the cross section S2 and the ellipsoid.
After changing the above expression, the following expression is obtained. The following expression indicates a curve L3 which is defined by projecting the curve L2 on the XY plane, and is shown in
In the case of h2≧0,
In the case of h2<0,
Using the above, the arbitrary point Q on the curve L2 is expressed as follows.
Here, the parameter θQ is defined on the curve L3 obtained by projecting the curve L2 on the XY plane, as shown in
(iv) Find a point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2.
(1) A parameter θs in the expression of the coordinates of the point Ki on the curve L2 is calculated.
In the case of tKj≧0,
In the case of tKj<0,
(2) Coordinates of a point O2 obtained by projecting the central point of the curve L3 to the cross section S2 and coordinates of two points J1 and J2 on the curve L2 are calculated, and |O2J1|, |O2J2|, and O2J1·O2J2 are calculated.
(3) The minimum value |{right arrow over (O2Q)}|min and the maximum value |{right arrow over (O2Q)}|max of |{right arrow over (O2Q)}| are calculated.
(i) In the case of |O2J1|≧|O2J2|
|O2Q|min=max(√(|O2J2|2−|O2J1·O2J2|), 0)
|O2Q|max=√2·|O2J1|
(ii) In the case of |O2J1|<|O2J2|
|O2Q|min=max(√(|O2J1|2−|O2J1·O2J2|), 0)
|O2Q|max=√2·|O2J2|
(4) θQ obtained when the length Ld of an arc on the curve L2 between the point Ki and the point Q becomes a designated distance
(5) The length Ld of the arc on the curve L2 between the point Ki and the point Q can be calculated according to the following expression.
A solution of this definite integral cannot be acquired in a deductive way. Therefore, the solution is approximately calculated by mensuration by parts using the extended midpoint rule. In the extended midpoint rule, the calculation is performed as follows using an appropriate division number N.
(6) In order to calculate the coordinates of a point at which Ld becomes the designated distance
Candidate value of θQ
As described above, the coordinates of the point Mi, at which the distance from the reference point Ki becomes (expression 5), on the curve L2 are calculated.
(2) In the case of condition 2 (in the case of expression 7)
(XKi≠0,YKi=0,ZKi≠0) (expression 7)
(i) Find the expression of the tangential line L1 at the point Ki, which is included in the cross section S1, to the ellipsoid.
The point Ki is expressed as Ki(XKi 0 ZKi)=(Rvx·cos φKi 0 Rvz·sin φKi) using the polar coordinates.
As shown in
In the coordinate system on the cross section S1, the point Ki is expressed as (XKi ZKi).
The straight line L1 is calculated as follows.
(ii) Find the expression of the cross section S2.
As shown in
Cross section S2: (X−XKi)−h3·(Z−ZKi)=0
Here,
(iii) Find the expression of the curve L2.
The expression of the ellipsoid is as follows.
The following expression is obtained by changing the expression of the cross section S2.
The expression of the cross section S2 is substituted into the expression of the ellipsoid to find a cross-sectional line formed by the line of intersection of the cross section S2 and the ellipsoid.
After changing the above expression, the following expression is obtained. The following expression indicates a curve L3 which is defined by projecting the curve L2 on the XY plane, and is shown in
Using the above, the arbitrary point Q on the curve L2 is expressed as follows using the parameter θQ.
(iv) Find a point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2.
(1) A parameter θs in the expression of the coordinates of the point Ki on the curve L2 is calculated.
In the case of XKi−x0≧0, θs=0
In the case of XKi−x0<0, θs=π
(2) Coordinates of a point O2 obtained by projecting the central point of the curve L3 to the cross section S2 and coordinates of two points J1 and J2 on the curve L2 are calculated, and |O2J1|, |O2J2|, and O2J1·O2J2 are calculated.
(3) The minimum value |{right arrow over (O2Q)}|min and the maximum value |{right arrow over (O2Q)}|max of |{right arrow over (O2Q)}| are calculated.
|O2Q|min=min(|O2J1|, |O2J2|)
|O2Q|max=max(|O2J1|, |O2J2|)
(4) θQ obtained when the length Ld of an arc on the curve L2 between the point Ki and the point Q becomes a designated distance
(5) The length Ld of the arc on the curve L2 between the point Ki and the point Q can be calculated according to the following expression.
A solution of this definite integral cannot be acquired in a deductive way. Therefore, the solution is approximately calculated by mensuration by parts using the extended midpoint rule. In the extended midpoint rule, the calculation is performed as follows using an appropriate division number N.
(6) In order to calculate the coordinates of a point at which Ld becomes the designated distance
Candidate value of θQ
As described above, the coordinates of the point Mi, at which the distance from the reference point Ki becomes (expression 5), on the curve L2 are calculated.
(3) In the case of condition 3 (ZKi=0)
(i) Find the expression of the tangential line L1 at the point Ki, which is included in the cross section S1, to the ellipsoid.
Tangential line L1: X=XKi, Y=YKi
(ii) Find the expression of the cross section S2. Since the bottom surface of the semi-ellipsoid is the cross section S2, the expression of the cross section S2 is Z=0.
(iii) Find the expression of the curve L2.
Since the ellipse of the bottom surface of the semi-ellipsoid is the curve L2, the expression of the curve L2 is (expression 8).
(iv) Find a point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2.
Since the curve L2 is an ellipse, the coordinate value of the point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2 is calculated using the “calculation of the point coordinates at which the length of an ellipse arc becomes a designated distance” method described previously.
(4) In the case of condition 4
(XKi=0,YKi≠0,ZKi≠0)
(i) Find the expression of the tangential line L1 at the point Ki, which is included in the cross section S1, to the ellipsoid.
The point Ki is expressed as Ki(0 YKi ZKi)=(0 Rvy·cos φKi Rvz·sin φKi) using the polar coordinates.
As shown in
In the coordinate system on the cross section S1, the point Ki is expressed as (YKi ZKi).
The straight line L1 is calculated as follows.
(ii) Find the expression of the cross section S2.
As shown in
Cross section S2: (Y−YKi)−h3·(Z−ZKi)=0
Here,
(iii) Find the expression of the curve L2.
The expression of the ellipsoid is as follows.
The following expression is obtained by changing the expression of the cross section S2.
By substituting the expression of the cross section S2 into the expression of the ellipsoid, a cross-sectional line formed as a line of intersection of the cross section S2 and the ellipsoid is calculated.
After changing the above expression, the following expression is obtained. The following expression indicates a curve L3 which is defined by projecting the curve L2 on the XY plane, and is shown in
Using the above, the arbitrary point Q on the curve L2 is expressed as follows using the parameter θQ.
(iv) Find a point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2.
(1) A parameter θs in the expression of the coordinates of the point Ki on the curve L2 is calculated.
In the case of
In the case of
(2) Coordinates of a point O2 obtained by projecting the central point of the curve L3 to the cross section S2 and coordinates of two points J1 and J2 on the curve L2 are calculated, and |{right arrow over (O2J1)}|, |{right arrow over (O2J2)}|, and {right arrow over (O2J1)}·{right arrow over (O2J2)} are calculated.
(3) The minimum value |{right arrow over (O2Q)}|min and the maximum value |{right arrow over (O2Q)}|max of |{right arrow over (O2Q)}| are calculated.
|{right arrow over (O2Q)}|min=min(|{right arrow over (O2J1)}|, |{right arrow over (O2J2)}|)
|{right arrow over (O2Q)}|max=max(|{right arrow over (O2J1)}|, |{right arrow over (O2J2)}|)
(4) θQ obtained when the length Ld of an arc on the curve L2 between the point Ki and the point Q becomes a designated distance
(5) The length Ld of the arc on the curve L2 between the point Ki and the point Q can be calculated according to the following expression.
This definite integral cannot be solved in closed form. Therefore, the solution is calculated approximately by numerical integration using the extended midpoint rule. In the extended midpoint rule, the calculation is performed as follows using an appropriate division number N.
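The extended midpoint rule described above can be written generically: the interval is split into N equal subintervals and the integrand is sampled at each subinterval midpoint. The function name is an illustrative assumption.

```python
def extended_midpoint(f, a, b, n):
    # Approximate the integral of f over [a, b] with n equal subintervals,
    # sampling f at the midpoint of each subinterval.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))
```

Because the midpoint rule is second-order accurate, a moderate division number N already yields an arc-length approximation well within the precision needed to place a probe point.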
(6) In order to calculate the coordinates of a point at which Ld becomes the designated distance
Candidate value of θQ
As described above, the coordinates of the point Mi, at which the distance from the reference point Ki becomes (expression 5), on the curve L2 are calculated
(5) In the case of condition 5
(i) Find the coordinates of the point Q2.
The coordinates of the point Q2 shown in
Here, θ2 is calculated as follows.
In the case of
In the case of
In the case of
(ii) Find the expression of the curve L2.
As shown in
In the XcYc coordinate system shown in
Accordingly, the coordinates of the point Q on the curve L2 are expressed as follows using the angle of deflection θc.
Point Q on the curve L2: (√(Rvx² cos² θ2 + Rvy² sin² θ2)·cos θc, Rvz·sin θc)
(iii) The coordinates of the point Q on the curve L2 in the ellipsoid are as follows.
(iv) Find a point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2.
By calculating the point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2 using the “calculation of the point coordinates at which the length of an ellipse arc becomes a designated distance” method described previously and calculating the angle of deflection θc of the point, the coordinate value of the point on the three-dimensional semi-ellipsoid is calculated.
(6) In the case of condition 6
As shown in
Since the curve L2 is an ellipse on the XZ coordinate plane, it is calculated as follows.
On the other hand, using the “calculation of the point coordinates at which the length of an ellipse arc becomes a designated distance” method described previously, the point, at which the distance from the reference point Ki becomes (expression 5), on the curve L2 is calculated.
(6.2. In the Case where the Shape Disposed on a Three-Dimensional Spherical Surface is Used)
Next, the case is assumed in which a shape disposed on a three-dimensional spherical surface is used as the probe shape information.
The radius of the three-dimensional sphere is set to Rs. The central point of the probe is set to P(Rs, 0, 0). For example, it is assumed that the shape information shown in
Then, rotation of the probe set on the two-dimensional head image is reflected in the probe shape information on the three-dimensional spherical surface.
The rotation is assumed to be performed about the x axis in three dimensions (refer to
M1(x1, y1 cos θu − z1 sin θu, y1 sin θu + z1 cos θu)
Mi(xi, yi cos θu − zi sin θu, yi sin θu + zi cos θu)
MN(xN, yN cos θu − zN sin θu, yN sin θu + zN cos θu)
At each point of the light irradiation position and the light detection position, the point which becomes its foot onto the reference of the probe is calculated. These are set as reference points K1, . . . , Ki, . . . , KN with respect to the respective points of the light irradiation position and the light detection position. For example, for Mi, the reference point Ki is calculated as follows.
Ki(√(Rs² − (yi sin θu + zi cos θu)²), 0, yi sin θu + zi cos θu)
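The rotation of the probe points about the x axis and the construction of each reference point Ki (same z coordinate as the rotated point, y = 0, lying on the sphere of radius Rs) can be sketched as follows. The function names are illustrative assumptions.

```python
import math

def rotate_about_x(point, theta):
    # Rotate (x, y, z) about the x axis by theta, as in the probe rotation.
    x, y, z = point
    return (x,
            y * math.cos(theta) - z * math.sin(theta),
            y * math.sin(theta) + z * math.cos(theta))

def reference_point(m, rs):
    # Reference point K for a rotated probe point m on a sphere of radius rs:
    # K keeps the z coordinate of m, has y = 0, and lies on the sphere.
    z = m[2]
    return (math.sqrt(rs * rs - z * z), 0.0, z)
```

Applying `rotate_about_x` to every probe point M1, . . . , MN with the same angle θu reproduces the rotated coordinates listed above.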
Moreover, on the three-dimensional spherical surface, an arc length (expression 2) between the central point of the probe and each reference point and an arc length (expression 3) between each reference point and each point of the light irradiation position and the light detection position are calculated. For example, (expression 4) and (expression 5) found for Mi and the reference point Ki are as follows.
Since a cross section cut by the plane, which passes through Mi and is parallel to the yz plane, is as shown in
Since a cross section cut by the plane, which passes through Mi and is parallel to the xy plane, is as shown in
Using the arc length (expression 2) from the central point of the probe to the reference point and the arc length (expression 3) from the reference point to each point of the light irradiation position and the light detection position as probe shape information, the coordinate positions of the light irradiation position and the light detection position on the three-dimensional semi-ellipsoid can be calculated, similar to the case where the shape on the two-dimensional plane is used.
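Under one plausible reading of the two circular cross sections described above, the two arc lengths can be computed directly: the arc from the probe centre P(Rs, 0, 0) to the reference point K runs in the plane y = 0, and the arc from K to the probe point M runs along the circle of constant z through M. The function name and this decomposition are assumptions for illustration, not the document's exact derivation.

```python
import math

def probe_arc_lengths(m, rs):
    # Arc lengths used as probe shape information on a sphere of radius rs.
    # arc_pk: from P(rs, 0, 0) to the reference point K, within the plane y = 0.
    # arc_km: from K(r_z, 0, z) to m(x, y, z), along the circle of constant z.
    x, y, z = m
    r_z = math.sqrt(rs * rs - z * z)   # radius of the constant-z circle
    arc_pk = rs * math.asin(z / rs)    # angle of K measured from z = 0
    arc_km = r_z * math.atan2(y, x)    # angle of m around the z axis
    return arc_pk, arc_km
```

For a point on the equator a quarter turn away from P, for instance, the first arc vanishes and the second is a quarter circumference, matching the geometric picture.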
(7. Display Function of an Electroencephalographic Electrode Position)
In the present embodiment, using the coordinate transformation processing A, B, C, and D (S2020, S2040, S2060, S2080), it is possible to display an electroencephalographic electrode position on a two-dimensional head image, a two-dimensional circle image, a three-dimensional hemispherical surface, a three-dimensional semi-ellipsoidal surface, and a three-dimensional head image. Hereinafter, the display function of the electroencephalographic electrode position will be described using
As the electroencephalographic electrode position, for example, the 10-20 electrode arrangement method is generally known. Details of the 10-20 electrode arrangement method are disclosed, for example, in Non-patent Document 1 (New Clinical Examination Technician Lecture 7, Clinical Physiology (third edition), Igaku-Shoin Ltd., pp. 174-176) and the like. In the present embodiment, this 10-20 electrode arrangement method is used. According to the defined arrangement method, the coordinate position (301) of each electrode on the three-dimensional hemisphere is calculated. The calculated result is shown as 302 of
The coordinate position (303) of each electrode on a two-dimensional circle image can be obtained by performing the coordinate transformation processing B (S2040) on the coordinate position (301) of each electrode on the three-dimensional hemisphere.
In addition, the coordinate position (305) of each electrode on a two-dimensional head image can be obtained by performing the coordinate transformation processing A (S2020) on the coordinate position (303) of each electrode on the two-dimensional circle image.
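The document defines transformation B elsewhere, so as a minimal stand-in, a common way to map an electrode position on a hemisphere to a two-dimensional circle image is an azimuthal equidistant projection, in which the polar angle from the vertex of the head maps linearly to the 2-D radius. The function name and the choice of projection are assumptions for illustration.

```python
import math

def hemisphere_to_circle(p, radius_2d=1.0):
    # Map a point on the unit upper hemisphere to a 2-D circle image with an
    # azimuthal equidistant projection: polar angle from the vertex maps
    # linearly to the 2-D radius (an assumed stand-in for processing B).
    x, y, z = p
    polar = math.acos(max(-1.0, min(1.0, z)))   # angle from the vertex
    r2d = radius_2d * polar / (math.pi / 2.0)   # equidistant in polar angle
    az = math.atan2(y, x)                       # azimuth is preserved
    return (r2d * math.cos(az), r2d * math.sin(az))
```

With this choice the vertex electrode (Cz in the 10-20 arrangement) lands at the centre of the circle image and electrodes on the equatorial ring land on its rim.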
Using the coordinate position (305) of each electrode on the two-dimensional head image, an overlap image (306) with the two-dimensional head image 2012 can be obtained and presented as shown in
In addition, the coordinate position (307) of each electrode on a three-dimensional semi-ellipsoid can be obtained by performing the coordinate transformation processing C (S2060) on the coordinate position (301) of each electrode on the three-dimensional hemisphere. In addition, the coordinate position (308) of each electrode on a three-dimensional head image 3100 can be obtained by performing the coordinate transformation processing D (S2080) on the coordinate position (307) of each electrode on the three-dimensional semi-ellipsoid.
Using the coordinate position (308) of each electrode on the obtained three-dimensional head image, an overlap image (310) with the head shape can be obtained and presented as shown in
In addition, using the calculation result on each of the above-described images (the three-dimensional head image, the three-dimensional semi-ellipsoidal image, the three-dimensional hemispherical image, the two-dimensional circle image, and the two-dimensional head image), it is also possible to display the electroencephalographic electrode position and the biological light measurement result so as to overlap each other.
Number | Date | Country | Kind |
---|---|---|---|
2008-256510 | Oct 2008 | JP | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/JP2009/067023 | 9/30/2009 | WO | 00 | 3/28/2011 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2010/038774 | 4/8/2010 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
5803909 | Maki et al. | Sep 1998 | A |
6128517 | Maki et al. | Oct 2000 | A |
6240309 | Yamashita et al. | May 2001 | B1 |
6282438 | Maki et al. | Aug 2001 | B1 |
6542763 | Yamashita et al. | Apr 2003 | B1 |
6901284 | Maki et al. | May 2005 | B1 |
7047149 | Maki et al. | May 2006 | B1 |
7228166 | Kawasaki et al. | Jun 2007 | B1 |
7233819 | Eda et al. | Jun 2007 | B2 |
7613502 | Yamamoto et al. | Nov 2009 | B2 |
8180426 | Dan et al. | May 2012 | B2 |
8235894 | Nakagawa | Aug 2012 | B2 |
8406838 | Kato | Mar 2013 | B2 |
20010018554 | Yamashita et al. | Aug 2001 | A1 |
20010047131 | Maki et al. | Nov 2001 | A1 |
20040127784 | Yamashita et al. | Jul 2004 | A1 |
20050131303 | Maki et al. | Jun 2005 | A1 |
20050148857 | Maki et al. | Jul 2005 | A1 |
20050171440 | Maki et al. | Aug 2005 | A1 |
20060184045 | Yamashita et al. | Aug 2006 | A1 |
20060184046 | Yamashita et al. | Aug 2006 | A1 |
20060184047 | Yamashita et al. | Aug 2006 | A1 |
20070073179 | Afonso et al. | Mar 2007 | A1 |
Number | Date | Country |
---|---|---|
09-019408 | Jan 1997 | JP |
2001-079008 | Mar 2001 | JP |
2003-088528 | Mar 2003 | JP |
Entry |
---|
Obrig et al., “Beyond the Visible—Imaging the Human Brain With Light”, 2003, J Cereb Blood Flow Metab, 23(1), 1-18. |
Machine Translation of JP 2003-088528. |
Number | Date | Country
---|---|---
20110176713 A1 | Jul 2011 | US |