The present application claims priority from Japanese Patent Applications No. 2007-312399 filed on Dec. 3, 2007, No. 2007-313838 filed on Dec. 4, 2007, and No. 2007-313839 filed on Dec. 4, 2007, the contents of which are incorporated herein by reference.
1. Technical Field
The present invention relates to a position identifying system, a position identifying method, and a computer readable medium. In particular, the present invention relates to a position identifying system, a position identifying method, and a computer readable medium used by the position identifying system for identifying a position of an object existing inside a body.
2. Related Art
A measurement apparatus for collecting information from a living organism is known that measures detailed information concerning the organism's metabolism by propagating light of a certain wavelength inside the organism, as in, for example, Japanese Patent Application Publication No. 2006-218013. An optical measurement apparatus is also known that obtains an absorption coefficient distribution in a direction of depth in a subject by measuring the amount of light absorbed at different distances between the point where the light enters the subject and the point where it exits, as in, for example, Japanese Patent Application Publication No. 8-322821.
These apparatuses, however, irradiate light at one point and detect it at a different point, which makes it difficult to incorporate them into an observation system.
Therefore, it is an object of an aspect of the innovations herein to provide a position identifying system, a position identifying method, and a computer readable medium, which are capable of overcoming the above drawbacks accompanying the related art. The above and other objects can be achieved by combinations described in the independent claims. The dependent claims define further advantageous and exemplary combinations of the innovations herein.
According to a first aspect related to the innovations herein, one exemplary position identifying system may include a position identifying system that identifies a position of an object existing inside a body, comprising a vibrating section that vibrates each of a plurality of different positions inside the body at a different timing; an image capturing section that captures a frame image of the object at each of the different timings; and a position identifying section that identifies the position of the object based on a blur amount of an image of the object in each frame image captured by the image capturing section.
According to a second aspect related to the innovations herein, one exemplary position identifying method may include a position identifying method for identifying a position of an object existing inside a body, comprising vibrating each of a plurality of different positions inside the body at a different timing; capturing a frame image of the object at each of the different timings; and identifying the position of the object based on a blur amount of an image of the object in each frame image captured during the image capturing.
According to a third aspect related to the innovations herein, one exemplary computer readable medium may include a computer readable medium storing thereon a program causing a position identifying system that identifies a position of an object existing inside a body to function as a vibrating section that vibrates each of a plurality of different positions inside the body at a different timing; an image capturing section that captures a frame image of the object at each of the different timings; and a position identifying section that identifies the position of the object based on a blur amount of an image of the object in each frame image captured by the image capturing section.
According to a fourth aspect related to the innovations herein, one exemplary position identifying system may include a position identifying system that identifies a position of an object existing inside a body, comprising a vibrating section that vibrates the body; an image capturing section that captures a frame image of the object after the body is vibrated; and a position identifying section that identifies the position of the object inside the body based on a blur amount of an image of the object in the frame image captured by the image capturing section.
According to a fifth aspect related to the innovations herein, one exemplary position identifying method may include a position identifying method for identifying a position of an object existing inside a body, comprising vibrating the body; capturing a frame image of the object after the body is vibrated; and identifying the position of the object inside the body based on a blur amount of an image of the object in the frame image captured during the image capturing.
According to a sixth aspect related to the innovations herein, one exemplary computer readable medium may include a computer readable medium storing thereon a program causing a position identifying system that identifies a position of an object existing inside a body to function as a vibrating section that vibrates the body; an image capturing section that captures a frame image of the object after the body is vibrated; and a position identifying section that identifies the position of the object inside the body based on a blur amount of an image of the object in the frame image captured by the image capturing section.
According to a seventh aspect related to the innovations herein, one exemplary position identifying system may include a position identifying system that identifies a position of an object existing inside a body, comprising a vibrating section that vibrates the body; an image capturing section that captures a frame image of the object when the body is vibrated and also when the body is not vibrated; and a position identifying section that identifies the position of the object inside the body based on a blur amount of an image of the object in the frame image captured by the image capturing section.
According to an eighth aspect related to the innovations herein, one exemplary position identifying method may include a method for identifying a position of an object existing inside a body, comprising vibrating the body; capturing a frame image of the object when the body is vibrated and also when the body is not vibrated; and identifying the position of the object inside the body based on a blur amount of the image of the object in each frame image captured during the image capturing.
According to a ninth aspect related to the innovations herein, one exemplary computer readable medium may include a computer readable medium storing thereon a program causing a position identifying system that identifies a position of an object existing inside a body to function as a vibrating section that vibrates the body; an image capturing section that captures a frame image of the object when the body is vibrated and also when the body is not vibrated; and a position identifying section that identifies the position of the object inside the body based on a blur amount of the image of the object in each frame image captured by the image capturing section.
The summary clause does not necessarily describe all necessary features of the embodiments of the present invention. The present invention may also be a sub-combination of the features described above. The above and other features and advantages of the present invention will become more apparent from the following description of the embodiments taken in conjunction with the accompanying drawings.
Hereinafter, some embodiments of the present invention will be described. The embodiments do not limit the invention according to the claims, and all the combinations of the features described in the embodiments are not necessarily essential to means provided by aspects of the invention.
The ICG injecting section 190 injects indocyanine green (ICG), which is a luminescent substance, into the subject 20, which is an example of the body in the present invention. The ICG is an example of the luminescent substance in the present embodiment, but the luminescent substance may instead be a different fluorescent substance. The ICG is excited by infra-red rays with a wavelength of 750 nm, for example, to emit broad spectrum fluorescence centered at 810 nm.
If the subject 20 is a living organism, the ICG injecting section 190 injects the ICG into the blood vessels of the organism through intravenous injection. The position identifying system 10 captures images of the blood vessels in the organism from the luminescent light of the ICG. This luminescent light includes fluorescent light and phosphorescent light. The luminescent light, which is an example of the light from the body, includes chemical luminescence, frictional luminescence, and thermal luminescence, in addition to the luminescence from the excitation light or the like. The blood vessels are examples of the objects in the present invention.
The ICG injecting section 190 is controlled by the control section 105, for example, to inject the subject 20 with ICG such that the ICG density in the organism is held substantially constant. The subject 20 may be a living organism such as a person. Objects such as blood vessels exist inside the subject 20. The position identifying system 10 of the present embodiment detects the position, i.e. depth, of objects existing below the surface of the subject 20, where the surface may be the inner surface of an organ. The position identifying system 10 corrects the focus of the frame image of the object according to the detected position. The body in this invention may be an internal organ of a living organism, such as the stomach or intestines, or may be a non-living object, including natural bodies such as ruins and artificial bodies such as industrial products.
The endoscope 100 includes an image capturing section 110, a light guide 120, a vibrating section 133, and a clamp port 130. The tip 102 of the endoscope 100 includes an objective lens 112, which is a portion of the image capturing section 110, an irradiation aperture 124, which is a portion of the light guide 120, and a nozzle 138, which is a portion of the vibrating section 133.
A clamp 135 is inserted into the clamp port 130, and the clamp port 130 guides the clamp 135 to the tip 102. The tip of the clamp 135 may have any shape. Instead of the clamp, various types of instruments for treating the organism can be inserted into the clamp port 130. The nozzle 138 ejects water or air.
The light irradiating section 150 generates the light to be radiated from the tip 102 of the endoscope 100. The light generated by the light irradiating section 150 includes irradiation light that irradiates the subject 20 and excitation light, such as infra-red light, that excites the luminescent substance inside the subject 20 such that the luminescent substance emits luminescent light. The irradiation light may include a red component, a green component, and a blue component.
The image capturing section 110 captures a frame image based on the reflected light, which is the irradiation light reflected by the object, and the luminescent light emitted by the luminescent substance. The image capturing section 110 may include a two-dimensional image capturing device such as a CCD and an optical system, of which the objective lens 112 may form a part. If the luminescent substance emits infra-red light, the image capturing section 110 can capture an infra-red light frame image. If the light irradiating the object contains red, green, and blue components, i.e. if the irradiation light is white light, the image capturing section 110 can capture a visible light frame image.
The light from the object may be luminescent light such as fluorescent light or phosphorescent light emitted by the luminescent substance in the object, or may be the irradiation light that reflects from the object or that passes through the object. In other words, the image capturing section 110 captures a frame image of the object using the light emitted by the luminescent substance inside of the object, the light reflected by the object, or the light passing through the object.
The image capturing section 110 can capture a frame image of the object using various techniques that do not involve receiving light from the object. For example, the image capturing section 110 can capture a frame image of the object using electromagnetic radiation such as X-rays or γ-rays, radiation including particle beams such as alpha rays, or the like. The image capturing section 110 may capture the frame image of the object using sound waves, electrical waves, or electromagnetic waves having various wavelengths.
The light guide 120 may be formed of optical fiber. The light guide 120 guides the light emitted by the light irradiating section 150 to the tip 102 of the endoscope 100. The light guide 120 can have the irradiation aperture 124 provided in the tip 102. The light emitted by the light irradiating section 150 passes through the irradiation aperture 124 to irradiate the subject 20.
The image processing section 140 processes the image data acquired from the image capturing section 110. The output section 180 outputs the image data processed by the image processing section 140. The image capturing control section 160 controls the image capturing by the image capturing section 110. The light emission control section 170 is controlled by the image capturing control section 160 to control the light irradiating section 150. For example, when the image capturing section 110 performs image capturing alternately with infra-red light and irradiation light, the light emission control section 170 controls the light irradiating section 150 to synchronize the emission timings of the infra-red light and the irradiation light with the timing of the image capturing.
The vibrating section 133 causes the body to vibrate. For example, the vibrating section 133 causes the surface of the subject 20 to vibrate by discharging air from the tip of the nozzle 138. As another example, the vibrating section 133 can cause the surface of the subject 20 to vibrate using sound waves or ultrasonic waves. During vibration, the image processing section 140 identifies the depth of the blood vessels from the surface of the subject 20 based on the amount of blur in portions of the frame image captured by the image capturing section 110. The vibrating section 133 desirably vibrates the surface of the body such that the movement includes a component perpendicular to the image capturing direction of the image capturing section 110.
The object frame image acquiring section 210 acquires an object frame image, which is a frame image based on the light from the object, i.e. the blood vessel, inside the subject 20. More specifically, the frame image captured by the image capturing section 110 based on the light from the object is acquired as the object frame image. The image capturing section 110 captures the frame image of the object after the body is caused to vibrate. The object frame image acquiring section 210 acquires the object frame image captured by the image capturing section 110.
If the light from the object is luminescent light emitted by the luminescent substance, the object frame image acquired by the object frame image acquiring section 210 includes an image of an object in a range extending as deep from the surface as the excitation light exciting the luminescent substance can penetrate. For example, if the luminescent substance excitation light radiated from the tip 102 of the endoscope 100 has a wavelength of 750 nm, the excitation light can penetrate relatively deeply into the subject 20, i.e. to a depth of several centimeters. Therefore, the object frame image acquired by the object frame image acquiring section 210 can include the image of a blood vessel that is relatively deep in the subject 20. The blood vessel image is an example of the images of the object in the object frame image of the present invention.
The luminescent substance existing within the depth to which the excitation light can penetrate is excited by the excitation light, so that the object frame image acquired by the object frame image acquiring section 210 includes the image of any blood vessel existing within that depth. The image of a deeper blood vessel is more blurred, because the fluorescent light from the blood vessel is scattered by the subject 20 over a longer path before reaching the surface.
The surface image acquiring section 214 acquires a surface image of the body. That is, the surface image acquiring section 214 acquires an image equivalent to what can be seen by the eye. For example, the surface image acquiring section 214 acquires, as the surface image, an image captured by the image capturing section 110 based on the irradiation light reflected from the surface of the body.
The position identifying section 230 identifies the position of the objects in the body based on the amount of blurring of the object image in the object frame image acquired by the object frame image acquiring section 210. More specifically, the blur amount calculating section 232 calculates the blur amount of the object image in the object frame image.
The transmission time calculating section 234 calculates a transmission time that indicates the length of the period from when the body begins to vibrate to when the vibration reaches the object, based on the blur amount of the object image in the object frame image as calculated by the blur amount calculating section 232. For example, the transmission time calculating section 234 calculates the transmission time to be the length of the period from when the body begins to vibrate to when the blur amount caused by the vibration exceeds a predetermined value.
The distance calculating section 236 calculates a distance from the position of the vibration in the body caused by the vibrating section 133 to the position of the object, based on the transmission time calculated by the transmission time calculating section 234. For example, the distance calculating section 236 can calculate longer distances for longer transmission times calculated by the transmission time calculating section 234. The distance calculating section 236 can calculate a distance from the position of the body that is vibrated by the vibrating section 133 based on the transmission time and a transmission speed that indicates the distance that the vibration travels per unit time.
When the vibrating section 133 vibrates the body from the surface, the transmission time calculating section 234 may calculate the transmission time to be the period from when the vibrating section 133 vibrates the surface to when the blur amount caused by the vibration becomes greater than a preset value. In this case, the distance calculating section 236 may calculate the depth of the object in relation to the surface based on the transmission time calculated by the transmission time calculating section 234.
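By way of non-limiting illustration, the logic described for the transmission time calculating section 234 and the distance calculating section 236 can be sketched as follows in Python. The threshold value, the transmission speed, and all names in the sketch are assumptions introduced for illustration, not elements of the disclosed apparatus.

```python
# A minimal sketch of the transmission-time / depth logic described above.
# The threshold and the transmission speed are illustrative assumptions.

def transmission_time(samples, vibration_start, threshold):
    """samples: list of (capture_time, blur_amount) pairs in capture order.

    Returns the elapsed time from the start of vibration to the first
    frame whose blur amount exceeds the threshold, or None if the
    vibration never visibly reaches the object.
    """
    for capture_time, blur in samples:
        if blur > threshold:
            return capture_time - vibration_start
    return None

def distance_from_time(t, transmission_speed):
    """The depth is the distance the vibration travels in the transmission time."""
    return transmission_speed * t

# Example: frames captured every 1 ms after the vibration begins at t1 = 0.
samples = [(0.001, 0.2), (0.002, 1.4), (0.003, 0.9)]
t = transmission_time(samples, vibration_start=0.0, threshold=1.0)  # -> 0.002 s
print(distance_from_time(t, transmission_speed=5.0))  # 5 m/s * 0.002 s = 0.01 m
```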
The image correcting section 220 corrects the spread of the object image in the object frame image based on the depth identified by the position identifying section 230. As described above, the images of the objects are blurred due to scattering caused by the body between the object and the surface. The image correcting section 220 corrects the blur according to the depth of the object from the surface identified by the position identifying section 230.
More specifically, the correction table 222 stores correction values for correcting the spread of the object image in the object frame image, in association with the depth of the object. The image correcting section 220 corrects the spread of the object image in the object frame image based on the correction values stored in the correction table 222 and the depth of the object calculated by the position identifying section 230.
The display control section 226 controls the display of the frame image corrected by the image correcting section 220 according to the depth of the objects. For example, the display control section 226 changes the color or brightness of the object image in the object frame image corrected by the image correcting section 220, according to the depth of the object.
The position identifying section 230 may identify the depth of each of a plurality of objects from the surface. More specifically, the transmission time calculating section 234 may calculate a transmission time for each of the plurality of objects. The distance calculating section 236 may calculate the depth of each object from the surface based on the transmission time calculated by the transmission time calculating section 234. The image correcting section 220 may correct the spread of the object images in the object frame image based on the depth of each object.
The frame image corrected by the image correcting section 220 is provided to the output section 180 to be displayed. The display control section 226 controls the display of the frame image corrected by the image correcting section 220 according to the depth of each object. For example, the display control section 226 may change the color or brightness of each object in the object frame image corrected by the image correcting section 220, based on the depth of each object. The display control section 226 may instead display characters or the like indicating the depth of each object in association with the corrected frame image.
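The depth-dependent display control described above might, as one non-limiting sketch, map depth to color as follows; the red-to-blue ramp and the depth range are invented for illustration.

```python
# Illustrative depth-to-color mapping for a display control section:
# the color ramp and the maximum depth are assumptions, not disclosed values.

def depth_to_color(depth_mm, max_depth_mm=20.0):
    """Shallow vessels render red, deep vessels render blue."""
    t = min(max(depth_mm / max_depth_mm, 0.0), 1.0)
    return (int(255 * (1 - t)), 0, int(255 * t))  # (R, G, B)

print(depth_to_color(2.0))   # near the surface: mostly red
print(depth_to_color(18.0))  # deep: mostly blue
```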
At the time t1, which is not within the period during which the image capturing section 110 captures the frame images 401, 403, and 405, the surface of the subject 20 is vibrated. The image capturing section 110 then captures frame images of the object at intervals of 2Δt, beginning at the time t1+2Δt.
By capturing the series of frame images described above several times, the image capturing section 110 can capture frame images of the object at intervals of Δt, beginning when the vibrating section 133 begins the vibration. The object frame image acquiring section 210 acquires the frame images 401 to 405 of the object captured by the image capturing section 110.
The frame image 401 includes the blood vessel image 411 and the blood vessel image 421, the frame image 402 includes the blood vessel image 412 and the blood vessel image 422, the frame image 403 includes the blood vessel image 413 and the blood vessel image 423, the frame image 404 includes the blood vessel image 414 and the blood vessel image 424, and the frame image 405 includes the blood vessel image 415 and the blood vessel image 425.
The blur amount calculating section 232 calculates the blur amount of each of the blood vessel images 411 to 415 and 421 to 425 in the frame images 401 to 405. More specifically, the blur amount calculating section 232 calculates the blur amount in a border region between the object and another region. The blur amount may be the amount that the object image expands in the border region. The spread of the object image can be evaluated by the amount of spatial change in the brightness value of a specified color included in the object. The amount of spatial change in the brightness value may be a half-value width or a spatial derivative value of the spatial distribution.
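To make the blur amount concrete, the following non-limiting sketch estimates the spread of a border region from a one-dimensional brightness profile taken across the border, using a half-value-width style estimator. The profile values and the 25%/75% cutoffs are assumptions for illustration; a spatial derivative could equally be used.

```python
import numpy as np

def blur_amount(profile):
    """Estimate blur from a 1-D brightness profile across the border
    between an object image and the surrounding region.

    A sharp edge has an abrupt brightness transition; a blurred edge
    has a wide one. Here the blur amount is the number of pixels whose
    normalized brightness lies between 25% and 75% of the edge height,
    an illustrative half-value-width style estimator.
    """
    profile = np.asarray(profile, dtype=float)
    lo, hi = profile.min(), profile.max()
    if hi == lo:
        return 0.0
    normalized = (profile - lo) / (hi - lo)
    return float(np.count_nonzero((normalized > 0.25) & (normalized < 0.75)))

sharp = [0, 0, 0, 1, 1, 1]            # abrupt border: blur amount 0
blurred = [0, 0.2, 0.4, 0.6, 0.8, 1]  # gradual border: larger blur amount
print(blur_amount(sharp), blur_amount(blurred))
```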
The transmission time calculating section 234 identifies the blood vessel image 412 as having the greatest blur amount from among the blood vessel images 411 to 415, and also as having a blur amount greater than a preset value, based on the blur amounts calculated by the blur amount calculating section 232. The transmission time calculating section 234 identifies the time t1+2Δt as the time at which the frame image 402 including the blood vessel image 412 is captured. The transmission time calculating section 234 then detects the transmission time from the surface to the blood vessel shown by the blood vessel images 411 to 415 to be the time difference of 2Δt between the time t1 at which the vibrating section 133 vibrated the surface of the subject 20 and the time t1+2Δt at which the frame image 402 is captured.
The blood vessel image 423 has the greatest amount of blur from among the blood vessel images 421 to 425. Accordingly, in the same way as described for the blood vessel images 411 to 415, the transmission time calculating section 234 calculates the transmission time from the surface to the blood vessel shown by the blood vessel images 421 to 425 to be the time difference of 3Δt, based on the amount of blur in the blood vessel images 421 to 425 detected by the blur amount calculating section 232.
The above example describes the operation of each element when the image capturing section 110 captures frame images of the object in two separate series, a division determined by the image capture rate of the image capturing section 110, the speed at which the vibration moves through the subject 20, and the desired depth resolution. If the depth resolution determined by the speed at which the vibration moves through the subject 20 and the image capture rate of the image capturing section 110 already satisfies the required depth resolution, the image capturing section 110 may instead perform a single series of image capturing.
The distance calculating section 236 calculates the distance from the surface to each blood vessel based on the transmission time calculated by the transmission time calculating section 234 and the information stored in the distance calculating table. More specifically, the distance calculating section 236 calculates the distance from the surface to each blood vessel to be the distance stored in association with the corresponding transmission time calculated by the transmission time calculating section 234.
The distance calculating section 236 may calculate the distance from the surface to each blood vessel further based on the difference between the maximum blur amount and the blur amount of the blood vessel image when there is no vibration, in addition to the transmission time. Using the blood vessel shown by the blood vessel images 411 to 415 as an example, the distance calculating section 236 may calculate the distance from the surface to the blood vessel to be the distance stored in association with the time difference Δt and the difference between the blur amount of the blood vessel image 411 and the blur amount of the blood vessel image 412. The distance calculating section 236 can increase the depth resolution by calculating the distance based on the time difference and the blur amount difference.
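The distance calculating table might be realized along the lines of the following non-limiting sketch, keyed by both the transmission time and the blur amount difference. Every table entry and the quantization step are invented example values, not data from the disclosure.

```python
# Illustrative distance calculating table keyed by the transmission time
# (in units of dt) and a quantized blur amount difference. All entries
# are invented example values.
distance_table = {
    # (transmission_time_in_dt, quantized_blur_difference): depth in mm
    (2, 0): 4.0,
    (2, 1): 3.5,
    (3, 0): 6.0,
    (3, 1): 5.5,
}

def lookup_depth(time_in_dt, blur_difference, step=0.5):
    """Quantize the blur difference and read the stored depth.

    Using the blur difference in addition to the transmission time
    distinguishes depths that fall between two frame capture timings,
    which is how the depth resolution can be increased.
    """
    key = (time_in_dt, int(blur_difference // step))
    return distance_table.get(key)

print(lookup_depth(2, 0.7))  # -> 3.5
```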
If the image capturing section 110 captures frame images of the objects both when the body is vibrating and when the body is not vibrating, the position identifying section 230 can identify the position of the objects inside the body based on the blur amounts of the object images in each object frame image captured by the image capturing section 110. More specifically, the position identifying section 230 identifies the position of the objects inside the body based on the difference between the blur amount of the object images when the body is vibrating and the blur amount of the object images when the body is not vibrating. The position identifying section 230 can identify the position of the objects inside the body based on this blur amount difference and the information stored in the distance calculation table described above. The position identifying section 230 can identify the position of the objects to be further away from the position on the body vibrated by the vibrating section 133 when the blur amount difference is smaller.
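A brief sketch of this vibrating/non-vibrating comparison follows, assuming that a larger increase in blur under vibration indicates an object nearer the vibrated position; the vessel names and sample values are illustrative only.

```python
# Sketch of the vibrating/non-vibrating comparison: the smaller the
# increase in blur when the body is vibrated, the farther the object
# is taken to be from the vibrated position. Values are illustrative.

def blur_increase(blur_vibrating, blur_still):
    return blur_vibrating - blur_still

vessels = {"shallow": (3.0, 1.0), "deep": (1.4, 1.0)}
ranked = sorted(vessels, key=lambda v: blur_increase(*vessels[v]), reverse=True)
print(ranked)  # ['shallow', 'deep'] -- larger increase means nearer the surface
```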
For example, the image correcting section 220 obtains the blood vessel image 620 by applying an image conversion to the blood vessel image 421 to correct the spread. More specifically, the image correcting section 220 stores a point-spread function having the depth of the blood vessel as a parameter. The point-spread function indicates the spread experienced by light from a point source traveling from that depth to the surface. The image correcting section 220 obtains the blood vessel image 620, in which the spread of the blood vessel image is corrected, by applying a filtering process to the blood vessel image 421. This filtering process uses an inverse filter of the point-spread function determined according to the depth of the blood vessel. The correction table 222 may store the inverse filter, which is an example of a correction value, in association with the depth of the object.
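The inverse filtering step could be sketched as a Wiener-style frequency-domain deconvolution, assuming for illustration a Gaussian point-spread function whose width grows with depth. The Gaussian model and the parameters sigma_per_mm and eps are assumptions, not values from the disclosure; the regularization term is a named substitute for a pure inverse filter, guarding against division by near-zero frequencies.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Point-spread function parameterized by depth: deeper objects get a
    wider Gaussian. The Gaussian is an illustrative stand-in for the
    stored point-spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def correct_spread(image, depth, sigma_per_mm=0.8, eps=1e-3):
    """Apply the inverse filter of the depth-dependent PSF in the
    frequency domain, with Wiener-style regularization eps."""
    psf = gaussian_psf(image.shape[0], sigma=depth * sigma_per_mm)
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.fft.fft2(image)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

blurred = np.random.rand(64, 64)      # stand-in for a blood vessel image
restored = correct_spread(blurred, depth=3.0)
```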
Since the blood vessel images in the frame image captured by the image capturing section 110 are corrected by the position identifying system 10 of the present embodiment in this way, a frame image containing clear blood vessel images 610 and 620 can be obtained. The display control section 226 causes the output section 180 to display the depth from the surface by changing the color or the shading of the blood vessel image 610 and the blood vessel image 620 in the frame image 600 according to the depth of each blood vessel. The display control section 226 may cause the output section 180 to display a combination of the frame image corrected by the image correcting section 220 and the surface image acquired by the surface image acquiring section 214. More specifically, the display control section 226 may overlay the surface image on the frame image corrected by the image correcting section 220, and cause the output section 180 to display this combination.
The position identifying system 10 of the present embodiment thus enables a doctor who is watching the output section 180 while performing surgery, for example, to clearly view the images 610 and 620 of the internal blood vessels, and also enables the doctor to see information concerning the depth of the blood vessels.
The image capturing section 110 captures the frame image of the object at each of the different timings. The position identifying section 230 identifies the position of the object to be near the position of the body vibrated by the vibrating section 133 at the timing at which a frame image including an object image with a blur amount greater than the preset value is captured.
For example, the blur amount calculating section 232 calculates this blur amount from the blood vessel image indicating the blood vessel 710 included in each of the frame images captured by the image capturing section 110 while each of the positions 751, 752, 753, and 754 is vibrated by the vibrating section 133. The distance calculating section 236 identifies the frame image that includes the blood vessel image calculated as having the greatest blur amount by the blur amount calculating section 232. The distance calculating section 236 then determines that a blood vessel exists near the position that is vibrated by the vibrating section 133 when the identified frame image is captured.
In addition to calculating the depth of the blood vessel 710, the distance calculating section 236 may calculate the certainty of the calculated depth. For example, the distance calculating section 236 determines that the blood vessel 710 exists between (i) the midpoint between the position 751 and the position 752 and (ii) the midpoint between the position 752 and the position 753. Within this region between the two midpoints, the distance calculating section 236 sets the certainty of the distance to be greatest near the position 752. The image correcting section 220 may use the certainty distribution calculated by the distance calculating section 236 to correct the spread of the blood vessel image.
The image processing section 140 detects a plurality of blood vessels in the frame images by analyzing the frame images captured by the image capturing section 110. The position identifying section 230 identifies the position of each blood vessel in the target area of the image capturing by the image capturing section 110. The vibrating section 133 causes vibrations at different depths from the surface 730 at each identified position of a blood vessel. In this way, the position identifying section 230 can calculate the depth of each of the plurality of blood vessels.
As described above, the vibrating section 133 causes vibrations at a plurality of different positions in the body at different timings. The position identifying section 230 identifies the positions of the objects based on the blur amount of the object images in each frame image captured by the image capturing section 110.
The image capturing section 110 captures a frame image of the objects (i) when the first position 861 is vibrated without vibrating the second position 862 and (ii) when the second position 862 is vibrated without vibrating the first position 861. The position identifying section 230 identifies the position of the objects inside the body based on the difference between (i) the blur amount of the object images when the first position 861 is vibrated without vibrating the second position 862 and (ii) the blur amount of the object images when the second position 862 is vibrated without vibrating the first position 861.
The blur amount of the portion of the blood vessel image 911 near the position 861 is greater than the blur amount of the portion of the blood vessel image 911 further from the position 861. Likewise, the blur amount of the portion of the blood vessel image 921 near the position 862 is greater than the blur amount of the portion of the blood vessel image 921 further from the position 862.
The difference between the blur amounts of the portions of the blood vessel image 912 and the blood vessel image 922 near the position 861 and the position 862 is less than the corresponding difference for the blood vessel image 911 and the blood vessel image 921. The distance calculating section 236 therefore identifies a blood vessel to be at a shallower position when the difference between the blur amounts of its blood vessel images at the different positions is greater. In this way, the position identifying section 230 identifies the position of the objects to be further from the first position 861 and the second position 862 when the blur amount difference is smaller. The image correcting section 220 applies a stronger correction to the blood vessel image of the blood vessel 820, which the position identifying section 230 calculates to be deeper, than to the blood vessel image of the blood vessel 810, which the position identifying section 230 calculates to be shallower.
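As a compact, non-limiting sketch of this two-position comparison, the following orders the vessels by the absolute difference between the blur amounts near the two vibrated positions, with a larger difference indicating a shallower vessel; the vessel names and values are illustrative only.

```python
# Sketch of the two-position comparison: a smaller difference between the
# blur amounts near the two vibrated positions places the vessel deeper
# (farther from both positions). All values are illustrative.

def depth_order(vessels):
    """vessels: {name: (blur_near_position_1, blur_near_position_2)}.
    Returns names sorted from shallowest to deepest."""
    return sorted(vessels,
                  key=lambda v: abs(vessels[v][0] - vessels[v][1]),
                  reverse=True)

shallow_and_deep = {"vessel_810": (3.0, 1.0), "vessel_820": (1.2, 1.1)}
print(depth_order(shallow_and_deep))  # ['vessel_810', 'vessel_820']
```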
The host controller 1582 is connected to the RAM 1520, and is also connected to the CPU 1505 and the graphic controller 1575, which access the RAM 1520 at a high transfer rate. The CPU 1505 operates to control each section based on programs stored in the ROM 1510 and the RAM 1520. The graphic controller 1575 acquires frame image data generated by the CPU 1505 or the like on a frame buffer disposed inside the RAM 1520 and displays the frame image data on the display apparatus 1580. Alternatively, the graphic controller 1575 may internally include the frame buffer storing the frame image data generated by the CPU 1505 or the like.
The input/output controller 1584 connects the hard disk drive 1540, the communication interface 1530 serving as a relatively high speed input/output apparatus, and the CD-ROM drive 1560 to the host controller 1582. The communication interface 1530 communicates with other apparatuses via the network. The hard disk drive 1540 stores the programs used by the CPU 1505 in the position identifying system 10. The CD-ROM drive 1560 reads the programs and data from a CD-ROM 1595 and provides the read information to the hard disk drive 1540 via the RAM 1520.
Furthermore, the input/output controller 1584 is connected to the ROM 1510, and is also connected to the flexible disk drive 1550 and the input/output chip 1570, which serve as relatively low speed input/output apparatuses. The ROM 1510 stores a boot program executed when the position identifying system 10 starts up, programs relying on the hardware of the position identifying system 10, and the like. The flexible disk drive 1550 reads programs or data from a flexible disk 1590 and supplies the read information to the hard disk drive 1540 via the RAM 1520. The input/output chip 1570 connects the flexible disk drive 1550 to each of the input/output apparatuses via, for example, a parallel port, a serial port, a keyboard port, a mouse port, or the like.
The programs provided to the hard disk drive 1540 via the RAM 1520 are stored on a recording medium such as the flexible disk 1590, the CD-ROM 1595, or an IC card, and are provided by the user. The programs are read from the recording medium, installed on the hard disk drive 1540 in the position identifying system 10 via the RAM 1520, and executed by the CPU 1505. The programs installed in and executed by the position identifying system 10 act on the CPU 1505 to cause the position identifying system 10 to function as the components of the position identifying system 10 described above.
While the embodiments of the present invention have been described, the technical scope of the invention is not limited to the above described embodiments. It is apparent to persons skilled in the art that various alterations and improvements can be added to the above-described embodiments. It is also apparent from the scope of the claims that the embodiments added with such alterations or improvements can be included in the technical scope of the invention.
Number | Date | Country | Kind |
---|---|---|---
2007-312399 | Dec 2007 | JP | national |
2007-313838 | Dec 2007 | JP | national |
2007-313839 | Dec 2007 | JP | national |