1. Field of the Invention
The present invention relates to an image processing apparatus and an image processing method for processing images acquired by a capsule endoscope in examinations in which the capsule endoscope is introduced into a subject and captures images of the inside of the subject.
2. Description of the Related Art
In recent years, examinations using capsule endoscopes (hereinafter referred to as capsule endoscopic examinations or simply as examinations), which are introduced into subjects such as patients and capture images of the insides of the subjects, have become known in the field of endoscopes. A capsule endoscope is an apparatus that has a built-in imaging function, a built-in wireless communication function, and the like provided in a capsule-shaped casing formed in a size that can be introduced into the digestive tract of a subject.
In general, a capsule endoscopic examination is performed as described below. First, a medical worker such as a nurse attaches an antenna unit to the outside surface of the body of a patient who is a subject, and connects, to the antenna unit, a receiving device capable of wireless communication with the capsule endoscope. Then, the imaging function of the capsule endoscope is turned on and the capsule endoscope is swallowed by the patient. The capsule endoscope is thereby introduced into the subject, captures images while moving through the digestive tract by peristaltic movement or the like, and wirelessly transmits image data of in-vivo images. The image data are received by the receiving device and accumulated in its built-in memory. Thereafter, the patient is allowed to act freely, for example, to leave the hospital, until the time designated by the medical worker, as long as the patient carries the receiving device.
When the patient comes back to the hospital at the designated time, the examination is temporarily suspended, and the medical worker removes the receiving device from the patient and connects it to an image processing apparatus configured with a workstation or the like. The image data accumulated in the receiving device are then downloaded (transferred) to the image processing apparatus, which performs predetermined image processing to form images. The medical worker observes the in-vivo images displayed on a screen of the image processing apparatus, confirms that the capsule endoscope has reached the large intestine, has captured the necessary region inside the subject, and has generated image data without a communication failure (such as an antenna failure) or battery depletion after being swallowed, and then allows the patient to go home. Thereafter, the medical worker clears away the devices, such as the receiving device, and finishes his/her work.
As a technique related to confirmation of the end of a capsule endoscopic examination, Japanese Laid-open Patent Publication No. 2009-297497 discloses a technique for analyzing the last images or a plurality of images received from an imaging apparatus, and determining whether the imaging apparatus is located inside a living body.
An image processing apparatus according to one aspect of the present invention processes image data acquired from a receiving device that receives and accumulates a series of image data wirelessly transmitted from a capsule endoscope, and includes an image data acquisition unit that sequentially acquires the image data from the receiving device in order from a latest imaging time, an image processing unit that performs predetermined image processing on the image data acquired by the image data acquisition unit, in order in which the image data are acquired, and a display controller that displays a screen containing a result obtained through the predetermined image processing.
An image processing method according to another aspect of the present invention processes image data acquired from a receiving device that receives and accumulates a series of image data wirelessly transmitted from a capsule endoscope, and includes acquiring the image data sequentially from the receiving device in order from a latest imaging time, performing predetermined image processing on the image data acquired at the acquiring, in order in which the image data are acquired, and displaying a screen containing a result obtained through the predetermined image processing.
The above and other features, advantages and technical and industrial significance of this invention will be better understood by reading the following detailed description of presently preferred embodiments of the invention, when considered in connection with the accompanying drawings.
Exemplary embodiments of an image processing apparatus and an image processing method according to the present invention will be described below with reference to the drawings. The present invention is not limited by the embodiments below. Furthermore, in describing the drawings, the same components are denoted by the same reference signs.
The imaging unit 21 includes, for example: an imaging element, such as a CCD or a CMOS, that generates image data of an image representing the inside of the subject 10 based on an optical image formed on a light receiving surface; and an optical system, such as an objective lens, that is arranged on a light receiving surface side of the imaging element.
The illumination unit 22 is realized by a semiconductor light-emitting element (for example, a light emitting diode (LED)) or the like that emits light toward the inside of the subject 10 when an image is captured. The capsule endoscope 2 has a built-in circuit board (not illustrated) in which a driving circuit or the like that drives each of the imaging unit 21 and the illumination unit 22 is formed. The imaging unit 21 and the illumination unit 22 are fixed on the circuit board such that respective fields of view are directed outward from one end portion of the capsule endoscope 2.
The signal processing unit 23 controls each unit in the capsule endoscope 2, performs A/D conversion on an imaging signal output from the imaging unit 21 to generate digital image data, and further performs predetermined signal processing on the digital image data.
The memory 24 temporarily stores information used in various operations executed by the signal processing unit 23 and the image data subjected to the signal processing in the signal processing unit 23.
The transmitting unit 25 and the antenna 26 superimpose the image data stored in the memory 24, together with related information, on a wireless signal and transmit the superimposed signal to the outside.
The battery 27 supplies electric power to each unit in the capsule endoscope 2. The battery 27 includes a power supply circuit that performs boosting or the like of electric power supplied from a primary battery or a secondary battery, such as a button battery.
After being swallowed by the subject 10, the capsule endoscope 2 sequentially captures images of living body sites (an esophagus, a stomach, a small intestine, a large intestine, and the like) at predetermined time intervals (for example, intervals of 0.5 seconds) while moving inside the digestive tract of the subject 10 by peristaltic movement or the like of organs. The image data and related information generated from acquired imaging signals are sequentially and wirelessly transmitted to the receiving device 3. The related information includes identification information (for example, a serial number) or the like assigned in order to individually identify the capsule endoscope 2.
The receiving device 3 receives the image data and the related information wirelessly transmitted from the capsule endoscope 2 via the receiving antenna unit 4 including a plurality of receiving antennas 4a to 4h (eight receiving antennas in
As illustrated in
The receiving unit 31 receives the image data wirelessly transmitted from the capsule endoscope 2 via the receiving antennas 4a to 4h.
The signal processing unit 32 performs predetermined signal processing on the image data received by the receiving unit 31.
The memory 33 stores therein the image data subjected to the signal processing by the signal processing unit 32 and related information.
The data transmitting unit 34 is an interface connectable to a USB or a communication line, such as a wired LAN or a wireless LAN, and transmits the image data and the related information stored in the memory 33 to the image processing apparatus 5 under control of the control unit 37.
The operating unit 35 is used by a user to input various setting information or the like.
The display unit 36 displays registration information on an examination (examination information, patient information, or the like), various setting information input by the user, and the like.
The control unit 37 controls operations of each unit in the receiving device 3.
The battery 38 supplies electric power to each unit in the receiving device 3.
The receiving device 3 is connected to the receiving antenna unit 4 attached to the subject 10 and is carried by the subject 10 while the capsule endoscope 2 is capturing images (a predetermined time after the capsule endoscope 2 is swallowed). During this period, the receiving device 3 stores, in the memory 33, the image data received via the receiving antenna unit 4 together with related information such as receiving intensity information and receiving time information in each of the receiving antennas 4a to 4h. After the capsule endoscope 2 completes the imaging, the receiving device 3 is removed from the subject 10, is then connected to the image processing apparatus 5, and transfers the image data and the related information stored in the memory 33 to the image processing apparatus 5. In
The input unit 51 is realized by an input device, such as a keyboard, a mouse, a touch panel, or various switches, and receives input of information and an instruction according to a user operation.
The image data acquisition unit 52 is an interface connectable to a USB or a communication line, such as a wired LAN or a wireless LAN, and includes a USB port, a LAN port, or the like. The image data acquisition unit 52 acquires the image data and the related information from the receiving device 3 via an external device such as the cradle 3a connected to the USB port or via various communication lines.
The storage unit 53 is realized by a semiconductor memory, such as a flash memory, a RAM, or a ROM, a recording medium, such as an HDD, an MO, a CD-R, or a DVD-R, and a read/write device or the like that reads and writes information from and to the recording medium. The storage unit 53 stores programs and various information for causing the image processing apparatus 5 to operate and execute various functions, image data acquired by capsule endoscopic examinations, and the like.
The image processing unit 54 is realized by hardware, such as a CPU, and by reading a predetermined program stored in the storage unit 53, performs predetermined image processing on image data acquired via the image data acquisition unit 52, generates an in-vivo image, and performs a process of generating an observation screen that contains the in-vivo image and that is in a predetermined format.
More specifically, the image processing unit 54 performs image processing for image generation (image processing that converts the stored image data into a format that can be displayed as an image), such as a white balance process, demosaicing, color conversion, density conversion (gamma conversion or the like), smoothing (noise elimination or the like), or sharpening (edge enhancement or the like), on the image data stored in the storage unit 53. Depending on purposes, the image processing unit 54 further performs image processing on the generated images, such as a position detection process, an average color calculation process, a lesion detection process, a red detection process, an organ detection process, or a predetermined feature detection process.
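For illustration only, the image-generation chain above can be sketched per pixel as follows; this is a minimal sketch, not the embodiment's implementation, and the gain and gamma values are arbitrary assumptions (demosaicing, smoothing, and sharpening are omitted):

```python
def white_balance(pixel, gains=(1.1, 1.0, 0.9)):
    """Scale each color channel by a per-channel gain and clip to [0, 1]."""
    return tuple(min(max(c * g, 0.0), 1.0) for c, g in zip(pixel, gains))

def gamma_convert(pixel, gamma=2.2):
    """Density (gamma) conversion for display."""
    return tuple(c ** (1.0 / gamma) for c in pixel)

def generate_image(raw_pixels, gains=(1.1, 1.0, 0.9), gamma=2.2):
    """Minimal per-pixel image-generation chain: white balance, then gamma."""
    return [gamma_convert(white_balance(p, gains), gamma) for p in raw_pixels]
```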
The display controller 55 causes the display device 5a to display, in a predetermined format, the in-vivo image generated by the image processing unit 54.
The control unit 56 is realized by hardware, such as a CPU, and by reading various programs stored in the storage unit 53, transfers instructions or data to each unit of the image processing apparatus 5 based on signals input via the input unit 51, image data acquired via the image data acquisition unit 52, or the like, and integrally controls the entire operations of the image processing apparatus 5.
Next, operations of the image processing apparatus 5 will be described.
An examination using the capsule endoscope 2 is started when the subject 10 swallows the capsule endoscope 2 in a state in which the receiving antenna unit 4 is attached to the subject 10 and the receiving antenna unit 4 is connected to the receiving device 3. Thereafter, when a predetermined time has elapsed, the user (a medical worker) removes the receiving device 3 from the receiving antenna unit 4 and sets the receiving device 3 in the cradle 3a. The above described predetermined time is set to a time (for example, about 8 hours) long enough for the capsule endoscope 2 to move inside the subject 10 by peristaltic movement and pass through an examination target region such as a small intestine. Furthermore, the user asks the subject 10 to wait because it is uncertain at this stage whether the examination needs to be resumed.
In response to this, at Step S10, the image data acquisition unit 52 starts to acquire a series of image data accumulated in the receiving device 3. In this case, as illustrated in
At subsequent Step S11, the image processing unit 54 starts to perform the image processing for image generation, such as a white balance process or demosaicing, on the image data acquired by the image data acquisition unit 52, in the order in which the image data are acquired. Even when the image processing unit 54 starts the image processing, the image data acquisition unit 52 continuously performs the process of acquiring image data in reverse order of the order in which the images are captured.
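The reverse-order acquisition and the pipelined image generation of Steps S10 and S11 can be sketched as follows. This is only an illustrative sketch: the list standing in for the receiving device's memory and the `process` callback are hypothetical stand-ins for the actual transfer interface and the image processing for image generation.

```python
def acquire_reverse(image_store):
    """Yield stored frames in reverse order of capture (latest first),
    modeling acquisition in order from the latest imaging time."""
    for frame in reversed(image_store):
        yield frame

def preview_pipeline(image_store, process):
    """Process frames in the order in which they are acquired
    (newest to oldest), as in Steps S10 and S11."""
    return [process(f) for f in acquire_reverse(image_store)]
```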
At Step S12, the display controller 55 displays images generated by the image processing unit 54 on the display device 5a in the order in which the images are generated.
The user observes the in-vivo images displayed on the image display area d1, and determines whether an image needed for the diagnosis has been obtained. For example, when an examination target region is the entire small intestine, and if the images displayed on the image display area d1 start with a large intestine, it is determined that a necessary image of the entire small intestine has been obtained. In this case, the user performs a predetermined pointer operation (for example, a click operation) on the OK button d2 by using a mouse or the like. In response to this, a signal (OK signal) indicating that the image is confirmed is input to the control unit 56.
At Step S13, if the OK signal is input to the control unit 56 (Step S13: Yes), the control unit 56 causes the display controller 55 to end display of the preview screen D1 (Step S14). Thereafter, the image data acquisition unit 52 continues to acquire image data in the background. The image processing for image generation may be temporarily suspended, and may be resumed after all of image data are acquired.
When the image is confirmed, the user may allow the subject 10 to go home.
At Step S15, when acquisition of all of the image data is completed, the image processing unit 54 performs the image processing for image generation and image processing for an individual predetermined purpose on the acquired image data in the order in which the images are captured (Step S16). However, it is sufficient to perform only the image processing for an individual purpose on image data for which images have already been generated before the end of the image display (Step S14). As the image processing for an individual purpose, a process is performed which is set in advance from among a position detection process, an average color calculation process, a lesion detection process, a red detection process, an organ detection process, a predetermined feature detection process, and the like. The reason why the image processing at Step S16 is performed in the same order as the order in which the images are captured is that the image processing for an individual purpose includes a process, such as a position detection process or a similarity detection process, in which the order of images is important because information on adjacent images is used.
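The order sensitivity noted above (processes that use information on adjacent images) can be illustrated with a toy computation; a real position detection or similarity detection process would compare image features rather than scalar values, so this is only a sketch of why capture order matters:

```python
def adjacent_similarity(frames):
    """Process frames in capture order; each result depends on the
    previous frame, so reordering the input changes the results."""
    results = []
    prev = None
    for f in frames:
        # No predecessor exists for the first frame in capture order.
        results.append(None if prev is None else abs(f - prev))
        prev = f
    return results
```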
When all of the image data are acquired by the image processing apparatus 5, the user may remove the receiving device 3 from the cradle 3a, clear away the devices or the like, and finish his/her work.
Subsequently, upon completion of the image processing for image generation and the image processing for an individual purpose on all of the image data acquired from the receiving device 3, the operations of the image processing apparatus 5 end. Thereafter, the image processing apparatus 5 may further generate and display an observation screen containing an in-vivo image according to a signal that is input from the input unit 51 by a user operation.
In contrast, when determining that an image needed for the diagnosis has not been obtained through the observation of the preview screen D1 illustrated in
At Step S13, if the NG signal is input to the control unit 56 (Step S13: No), the control unit 56 causes the image data acquisition unit 52 to stop acquisition of image data (Step S17).
At this time, if the capsule endoscope 2 seems to be inside the subject 10, the user is able to resume the examination.
If the examination is to be resumed (Step S18: Yes), and when the user reconnects the receiving device 3 to the receiving antenna unit 4 and attaches the receiving antenna unit 4 to the subject 10, the examination is resumed (Step S19). In response to this, the receiving device 3 receives image data wirelessly transmitted from the capsule endoscope 2 via the receiving antenna unit 4. After an adequate time has elapsed since the resumption of the examination, and when the receiving device 3 is removed from the receiving antenna unit 4 again, the examination ends (Step S20). Thereafter, when the receiving device 3 is set in the cradle 3a again, the process returns to Step S10.
In contrast, if the examination is not to be resumed (Step S18: No), the image processing apparatus 5 causes the image data acquisition unit 52 to resume acquisition of image data (Step S21).
As described above, according to the first embodiment, in-vivo images generated in reverse chronological order from the end of the imaging are displayed while image data are being transferred from the receiving device 3 to the image processing apparatus 5. Therefore, the user is able to determine the necessity of a reexamination on the subject 10 at an earlier stage after the examination. Consequently, it becomes possible to reduce the wait time of the subject 10. Furthermore, if by any chance a necessary image has not been obtained, it is possible to immediately resume the examination; therefore, it becomes possible to reduce a burden on the subject 10, such as a reexamination on another day. Furthermore, the user is able to clear away the receiving device 3 upon completion of the transfer of image data, so that it becomes possible to improve the efficiency of work related to examinations.
A first modified example of the first embodiment according to the present invention will be described.
In the above described first embodiment, the image processing apparatus 5 starts acquisition of image data accumulated in the receiving device 3 at the end of the imaging and continues the acquisition until the OK signal is input by a user operation on the preview screen D1. However, as indicated by diagonal lines in
In this case, if the user does not confirm images (if the OK signal or the NG signal is not input) even after acquisition of the preview image data is completed, the image processing apparatus 5 waits for input of the OK signal or the NG signal while displaying, on the display device 5a, a still image of the last in-vivo image generated for preview.
Furthermore, if the user confirms the images and the display of the preview screen is ended (see Step S14), it is preferable to acquire image data from the receiving device 3 to the image processing apparatus 5 in the order in which the images are captured, from the start of the imaging to the end of the imaging. As described above, by setting the order of acquisition of the remaining image data to the same as the order of image processing at Step S16, it becomes possible to start the image processing before the acquisition of the image data is completed (see Step S15), making it possible to perform the acquisition of the image data and the image processing in parallel.
A second embodiment of the present invention will be described.
A feature of an image processing apparatus according to the second embodiment lies in that it automatically determines whether a series of in-vivo images captured by the capsule endoscope 2 contains an in-vivo image needed for a diagnosis. A configuration of the image processing apparatus according to the second embodiment is the same as that of the image processing apparatus 5 illustrated in
Operations of the image processing apparatus according to the second embodiment will be described.
At Step S31 subsequent to Step S11, the image processing unit 54 starts to perform image processing of determining regions that appear in in-vivo images (a region determination process) on the image data that have been subjected to the image processing for image generation, in the order in which the images are generated (namely, the reverse order of the order in which the images are captured). As the region determination process, any well-known method may be used. For example, it may be possible to determine that, based on color feature data of the in-vivo images, a brownish in-vivo image corresponds to a large intestine and a yellowish in-vivo image corresponds to a small intestine.
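The color-feature heuristic mentioned above might be sketched as follows. The thresholds and the mean-RGB representation are illustrative assumptions for this sketch, not values from the embodiment; any well-known region determination method may be substituted.

```python
def classify_region(mean_rgb):
    """Rough color-feature heuristic (illustrative only): yellowish
    frames map to the small intestine, brownish frames to the large
    intestine. Channel values are assumed to be in [0, 1]."""
    r, g, b = mean_rgb
    if r > g > b and g > 0.6 * r:   # yellowish: green close to red
        return "small_intestine"
    if r > g and r > b:             # brownish: red clearly dominant
        return "large_intestine"
    return "unknown"
```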
As a result of the region determination by the image processing unit 54, if a large intestine is contained in the in-vivo images that are generated in reverse chronological order from the end of the imaging (Step S32: Yes), the display controller 55 displays, on the display device 5a, a screen for notifying that the large intestine is confirmed (Step S33).
The notification to the user may be made not by the display of a text message but by, for example, a notification sound, a voice message, or the like. Alternatively, it may be possible to display an in-vivo image in which a region is confirmed, instead of or together with the display of a text message.
Thereafter, the image data acquisition unit 52 continues to acquire image data in the background. In this case, the image data may continuously be acquired in reverse order of the order in which the images are captured, or in the same order as the order in which the images are captured (namely, the same order as the subsequent image processing) similarly to the first modified example. Furthermore, the image processing for image generation may be temporarily suspended, and may be resumed after all of the image data are acquired.
At Step S34, when acquisition of all of the image data is completed, the image processing unit 54 performs the image processing for image generation and the image processing for an individual predetermined purpose on the acquired image data in the order in which the images are captured (Step S35). Incidentally, if the order of acquisition of the image data is changed to the order in which the images are captured, it may be possible to start the image processing before the acquisition of the image data is completed. Furthermore, it is sufficient to perform only necessary image processing on image data that have been subjected to the region determination process (Step S31).
When all of the image data are acquired by the image processing apparatus 5, the user may remove the receiving device 3 from the cradle 3a, clear away the devices or the like, and finish his/her work.
Thereafter, upon completion of the image processing for image generation and the image processing for an individual purpose on all of the image data acquired from the receiving device 3, the operations of the image processing apparatus 5 end. Incidentally, the image processing apparatus 5 may further generate and display an observation screen containing an in-vivo image according to a signal that is input from the input unit 51 by a user operation.
In contrast, when an image containing the large intestine is not detected even by performing the region determination process on a preset predetermined number of images at Step S31 (Step S32: No), the display controller 55 displays, on the display device 5a, a screen for notifying that the large intestine is not confirmed (Step S36).
Furthermore, the notification screen D3 contains a YES button d6 and a NO button d7 to be used by the user to determine whether to resume the examination. When determining to resume the examination, the user performs a predetermined pointer operation (for example, a click operation) on the YES button d6 by using a mouse or the like. In response to this, a signal indicating that the examination is to be resumed is input to the control unit 56. In contrast, when determining not to resume the examination, the user performs a predetermined pointer operation on the NO button d7 by using a mouse or the like. In response to this, a signal indicating that the examination is not to be resumed is input to the control unit 56.
If the signal indicating that the examination is to be resumed is input (Step S37: Yes), the control unit 56 causes the image data acquisition unit 52 to stop acquisition of image data from the receiving device 3 (Step S38).
When the user reconnects the receiving device 3 to the receiving antenna unit 4 and attaches the receiving antenna unit 4 to the subject 10, the examination is resumed (Step S39). In response to this, the receiving device 3 receives image data wirelessly transmitted from the capsule endoscope 2 via the receiving antenna unit 4. After an adequate time has elapsed since the resumption of the examination, and when the receiving device 3 is removed from the receiving antenna unit 4 again, the examination ends (Step S40). Thereafter, when the receiving device 3 is set to the cradle 3a again, the process returns to Step S10.
In contrast, if the signal indicating that the examination is not to be resumed is input to the control unit 56 (Step S37: No), the display controller 55 displays, on the display device 5a, an input screen for aiding the user to input an instruction on whether to cause the image processing apparatus 5 to continue image processing on image data that have already been accumulated in the receiving device 3 (Step S41).
Furthermore, the input screen D4 contains a YES button d9 and a NO button d10 to be used by the user to determine whether to cause the image processing apparatus 5 to continue the image processing. When determining to continue the image processing, the user performs a predetermined pointer operation (for example, a click operation) on the YES button d9 by using a mouse or the like. In response to this, an instruction signal indicating that the image processing is to be continued is input to the control unit 56. In contrast, when determining not to continue the image processing, the user performs a predetermined pointer operation on the NO button d10 by using a mouse or the like. In response to this, an instruction signal indicating that the image processing is not to be continued is input to the control unit 56.
If the instruction signal indicating that the image processing is to be continued is input to the control unit 56 (Step S42: Yes), the operation of the image processing apparatus 5 proceeds to Step S34. In this case, the image processing apparatus 5 continues to acquire image data accumulated in the receiving device 3, and subsequently performs the image processing.
In contrast, if the instruction signal indicating that the image processing is not to be continued is input to the control unit 56 (Step S42: No), the control unit 56 causes the image data acquisition unit 52 to stop acquisition of image data from the receiving device 3 (Step S43).
As described above, according to the second embodiment, the region determination process is performed on in-vivo images generated in reverse chronological order from the end of the imaging while image data are being transferred from the receiving device 3 to the image processing apparatus 5. Therefore, the user is able to easily determine, at an earlier stage, whether the examination on the subject 10 needs to be resumed. Furthermore, the user is also able to determine, by himself/herself, whether to continue the image processing according to contents of individual examinations.
A second modified example of the second embodiment according to the present invention will be described.
If the capsule endoscope 2 continues imaging after being excreted from the subject 10, images obtained near the end of the imaging show the outside of the subject 10. Therefore, if the region determination process is performed on all of the images generated in reverse chronological order from the end of the imaging, it takes a longer time to reach the images of the inside of the subject 10.
Therefore, when the region determination process is performed at Step S31 in
Furthermore, it may be possible to omit the region determination process on images that can obviously not be used for a diagnosis, such as images in which objects can hardly be distinguished due to halation, in addition to the images of outside of the subject 10. The halation images can be determined based on, for example, average luminance values of the images or the like.
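A halation check based on average luminance, as suggested above, could look like the minimal sketch below; the threshold value is an assumption for illustration, and luminance values are assumed to be normalized to [0, 1].

```python
def is_halation(luminances, threshold=0.9):
    """Flag a frame as halation when its average luminance exceeds a
    threshold; such frames can be skipped in the region determination."""
    if not luminances:
        return False
    return sum(luminances) / len(luminances) > threshold
```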
A third embodiment of the present invention will be described below.
A feature of an image processing apparatus according to the third embodiment lies in that a series of image data accumulated in the receiving device 3 are divided into a plurality of blocks, image data are acquired from each of the blocks, and image processing is performed on the acquired image data. A configuration of the image processing apparatus according to the third embodiment is the same as that of the image processing apparatus 5 illustrated in
If, as described above, the capsule endoscope 2 continues imaging after being excreted from the subject 10, an image obtained at the end of the imaging contains outside of the subject 10. Therefore, if all of images generated in reverse chronological order from the end of the imaging are sequentially displayed on the preview screen, it takes a longer time to reach images of inside of the subject 10. Furthermore, in some cases, the user may want to confirm whether an image of a specific region inside the subject 10 has been obtained or whether there is a region whose image has not been obtained due to a failure in wireless transmission of image data caused by a failure of an antenna or the like.
Therefore, in the third embodiment, to enable the user to roughly grasp the entire series of in-vivo images obtained by an examination, the series of image data accumulated in the receiving device 3 is divided into a plurality of blocks, and images of a plurality of portions inside the subject 10 are simultaneously displayed as a preview.
More specifically, at Step S10 in
Furthermore, at subsequent Step S11, the image processing unit 54 performs the image processing for image generation on the image data acquired from each of the blocks 1 to 4 by the image data acquisition unit 52, in the order in which the image data are acquired.
Incidentally, the image data may be transferred serially or in parallel from each of the blocks 1 to 4. If the image data are transferred serially, the image data acquisition unit 52 moves between the blocks in order of, for example, the block 4→the block 3→the block 2→the block 1→the block 4→ . . . , and acquires a predetermined amount of image data (for example, one image for each). In this case, in the preview screen D5, in-vivo images displayed on the image display areas d11 to d14 are switched one by one in reverse chronological order of the imaging time.
Furthermore, if the image data are transferred in parallel, the image data acquisition unit 52 simultaneously acquires predetermined amounts of image data from the blocks 1 to 4. In this case, the image processing unit 54 performs, in parallel, the image processing on the image data acquired from the respective blocks 1 to 4. Furthermore, in the preview screen D5, in-vivo images displayed on the image display areas d11 to d14 are simultaneously switched in reverse chronological order of the imaging time.
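The block division and the serial (round-robin) transfer order described above can be sketched as follows; this is a sketch under the assumption of four equal blocks and scalar stand-ins for frames, and the parallel case would simply drain all block iterators concurrently instead of one frame per visit.

```python
def split_blocks(frames, n_blocks):
    """Divide the series into n contiguous blocks (last may be shorter)."""
    size = -(-len(frames) // n_blocks)   # ceiling division
    return [frames[i:i + size] for i in range(0, len(frames), size)]

def round_robin_reverse(blocks):
    """Serial transfer: visit blocks in order block N -> block 1, taking
    one frame per visit in reverse chronological order of capture."""
    iters = [iter(reversed(b)) for b in reversed(blocks)]
    out = []
    while iters:
        alive = []
        for it in iters:
            try:
                out.append(next(it))
                alive.append(it)
            except StopIteration:
                pass
        iters = alive
    return out
```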
As described above, according to the third embodiment, the user is able to roughly grasp the entire series of in-vivo images obtained by an examination. Therefore, it becomes possible to easily and accurately determine whether an image needed for a diagnosis has been obtained.
In the above described third embodiment, it may be possible to perform the region determination process (see Step S31 in
As described above, according to the first to third embodiments and modified examples thereof, image data accumulated in the receiving device are acquired in order from the latest imaging time, image processing is performed in the order in which the image data are acquired, and results of the image processing are displayed on a screen. Therefore, it becomes possible to reduce a time for the user to perform necessary determinations, as compared with a conventional technology.
The above described present invention is not limited to the first to third embodiments and the modified examples thereof, and various inventions may be formed by appropriately combining a plurality of structural elements disclosed in the respective embodiments and modified examples. For example, formation by excluding some of the structural elements from the whole structural elements illustrated in the respective embodiments and modified examples may be made, or formation by appropriately combining the structural elements illustrated in the different embodiments and modified examples may be made.
Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.
Number | Date | Country | Kind
---|---|---|---
2012-231183 | Oct 2012 | JP | national
This application is a continuation of PCT international application Ser. No. PCT/JP2013/077613 filed on Oct. 10, 2013, which designates the United States and is incorporated herein by reference, and which claims the benefit of priority from Japanese Patent Application No. 2012-231183, filed on Oct. 18, 2012, incorporated herein by reference.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/JP2013/077613 | Oct 2013 | US
Child | 14276247 | | US