The present invention relates to a medical imaging device, such as an ultrasound imaging device, an MRI device, or a CT device. More particularly, the present invention relates to techniques for selecting a specified cross section to be displayed from a three-dimensional (3D) image, or from two-dimensional (2D) or three-dimensional (3D) time-series images, acquired by the medical imaging device.
Medical imaging devices are used to acquire and then display a morphological image of a target region. In addition, medical imaging devices can also be used to acquire morphological information and functional information quantitatively. One example of such usage is measurement of the estimated weight of an unborn baby (fetus), for observing its growth, by the use of an ultrasound imaging device. This type of measurement is performed according to a process roughly divided into three steps: acquiring images, selecting an image for measurement (measurement image), and performing the measurement. In the step of acquiring images, a target region and its surroundings are imaged sequentially, thereby acquiring a plurality of two-dimensional cross-sectional images or volume data thereof. In the step of selecting the measurement image, a cross-sectional image optimum for measurement is selected from the acquired data. In the step of performing the measurement, for the case of measuring the estimated fetal weight, a head region, an abdominal region, and a leg region are measured, and calculations are performed on the measured values according to a predetermined calculation formula, thereby obtaining a weight value. Measuring the head region or the abdominal region requires surface traces and has conventionally been time-consuming. In recent years, however, automatic measurement techniques have been proposed that perform the traces automatically, followed by specific calculations (see Patent Literature 1 and other similar documents). Such techniques improve the measurement workflow.
In the examination, however, the step of selecting the measurement image after acquiring images takes the most time and effort. For the case of a fetus, in particular, it is difficult to estimate and visualize the position of a measurement cross section of the fetus within the abdomen of the examinee, and thus it takes time to acquire the cross section. In order to solve this difficulty in acquiring the cross sections necessary for fetal examination, Patent Literature 2 discloses that a high-echo area is extracted from three-dimensional data, and a cross section is selected on the basis of three-dimensional features of the extracted high-echo area. Specifically, in selecting the cross section, matching is performed with a prepared template representing the three-dimensional features, and a cross section that matches the template is determined as the cross section to be selected.
Typically, an ultrasound image has characteristics such that image data may differ depending on the imaging operator at every imaging session (operator dependence), and that image data may differ depending on the constitutional predisposition and disease of the imaging target (imaging target dependence). The operator dependence arises for the following reason: at every imaging session, applying ultrasound waves and searching the body for a region to be acquired as a cross-sectional image or as volume data are performed manually, and thus it is difficult to acquire completely identical data even when the same operator performs the examination on the same patient. The imaging target dependence arises for the following reason: the sound-wave propagation velocity and attenuation rate within a body differ depending on the constitutional predisposition of the patient, and the shape of an organ is not perfectly identical between different patients due to the type of disease and individual variations. In other words, because of the influences of the operator dependence and the imaging target dependence, it is difficult to obtain an image ideal for measurement irrespective of the imaging session or the patient.
The data thus acquired tends to suffer from problems such as deviation from the ideal position, unclear images, and differences in characteristic forms.
The technique disclosed by Patent Literature 2 determines a cross section by matching with templates prepared in advance, thus failing to address the aforementioned operator dependence and imaging target dependence.
MRI devices and CT devices have less operator dependence than ultrasound imaging devices. However, it is still difficult to determine a cross section by matching with a template, due to variations among individuals, or due to changes in the shape of organs such as the heart and lungs in time-series images even of an identical person. In recent years, attempts have been made to apply deep learning (DL) techniques to improve image quality or to identify specific diseases. In order to achieve discriminability with a high degree of precision in DL techniques, hardware with high processing power is required, together with a long processing time. It is therefore difficult to install such techniques on a conventionally used medical imaging device, or on a medical imaging device that needs high-speed processing.
In view of the situation above, an objective of the present invention is to avoid the problems of operator dependence and imaging target dependence, and to provide a technique for automatically extracting, with high precision and at high speed, a cross section used for diagnosis and measurement, from 3D volume data, or from temporally sequential 2D or 3D images, acquired by a medical imaging device.
In order to solve the problems above, the present invention provides a learning model trained to output, as a discrimination score, the spatial or temporal distance between a cross section to be extracted (target cross section) and each of a plurality of cross sections selected from processing target data, where the trained model is suitable for extracting the target cross section and is easily implementable in a medical imaging device. Aptitude scores of the cross-sectional images of the processing target are then calculated by using the model obtained by machine learning, thereby achieving extraction of an image of the target cross section with a high degree of precision.
The medical imaging device of the present invention includes an imager configured to collect image data of a subject, and an image processor configured to extract a specified cross section from the image data collected by the imager, wherein the image processor is provided with a model introducer configured to introduce a learning model trained in advance to output discrimination scores for the image data of a plurality of cross sections, the discrimination score representing spatial or temporal proximity to the specified cross section, and a cross section extractor configured to select a plurality of cross-sectional images from the image data and to extract the specified cross section on the basis of a result of applying the learning model to the selected cross-sectional images. The learning model is provided by integrating a feature extraction layer of a trained model with a discrimination layer of an untrained model, and is thereby reduced in size. Thus, this learning model has a layer structure simpler than that of the trained model prior to the integration.
An image processing method of the present invention determines a target cross section as a processing target from imaged data and presents the determined cross section, the method including a step of preparing a learning model trained in advance to output discrimination scores for the image data of a plurality of cross sections, the discrimination score representing spatial or temporal proximity to the target cross section, and a step of obtaining, by using the learning model, a distribution of discrimination scores of a plurality of cross-sectional images selected from the imaged data, and determining the target cross section on the basis of the distribution of the discrimination scores. This learning model is a downsized model obtained by integrating a feature extraction layer of a trained model, trained in advance using as learning data the plurality of cross-sectional images and the image of the target cross section constituting the imaged data, with a discrimination layer of an untrained model, followed by retraining.
According to the present invention, the learning model is applied to extraction of the cross section, thereby reducing manual-operation dependence and examination time in the automatic extraction of the cross-sectional image optimum for measurement. In addition, a small and simple model, downsized while maintaining a high degree of precision, is employed as the precise and complex learning model. Accordingly, the learning model can be installed on the medical imaging device while keeping the image processor within the device at a standard scale, as well as achieving high-speed processing.
There will now be described embodiments of the present invention, with reference to the accompanying drawings.
As shown
The imager 100 may be structured variously depending on the modality. For the case of an MRI device, there is provided, for example, a magnetic field generation means for collecting magnetic resonance signals from the subject placed in a static magnetic field. For the case of a CT device, there are provided an X-ray source for applying X-rays to the subject, an X-ray detector for detecting X-rays passing through the subject, and a mechanism for rotating the X-ray source and the X-ray detector around the subject. For the case of an ultrasound imaging device, there is provided a means for transmitting ultrasound waves to the subject and receiving the ultrasound waves reflected from the subject, so as to generate an ultrasound image. The method of generating image data in the imager may also vary depending on the modality, but the data finally obtained may be volume data (3D image data), 2D time-series image data, or time-series volume data. Such data will be collectively referred to as “volume data” in the following description.
The image processor 200 is provided with a cross section extractor 230 configured to extract a specified cross section (referred to as “target cross section”) from the 3D volume data delivered from the imager 100, and a model introducer 250 configured to introduce a learning model (discriminator) into the cross section extractor 230, the learning model receiving information of a plurality of cross sections included in the 3D volume data and outputting a score representing the proximity between each cross section and the target cross section, according to a feature of each cross section. The target cross section may differ depending on the diagnostic purpose or the objective of image processing on the cross section. In this example, the target cross section is assumed to be suitable for measuring the size (such as width, length, diameter, and circumferential length) of a structure, e.g., a specified organ or region included in the cross section. The image processor 200 may further be provided with an operation part 210 for performing further measurement and other operations on image data of the cross section extracted by the cross section extractor 230, and a display controller 270 for displaying on the monitor 310 the cross section extracted by the cross section extractor 230, as well as results and other information from the operation part.
The learning model used by the cross section extractor 230 is a machine learning model that has been trained to output scores representing similarity between a correct image and a large number of cross-sectional images included in 3D volume data in which the target cross section is already known, with an image of the target cross section regarded as the correct image; for example, the learning model may comprise a CNN (convolutional neural network). A highly trained model (the first trained model) is integrated with an untrained model having fewer layers than the first trained model, and the learning model of the present embodiment is thereby created as a downsized model (the second trained model). After the integration, the downsized model is trained in the same manner as a trained CNN. The first trained model includes many layers and a large number of iterations is required for its learning, but its learning precision is high. The downsized model is obtained by combining a part of the layers of the model trained with high precision, that is, a particularly well-trained layer including a feature extraction layer, for instance, with a layer of relatively low learning contribution in the untrained model, e.g., a discrimination layer among the lower-level layers of the CNN. Thus, the downsized model has a simple configuration with fewer layers than the first trained model. Employing such a downsized learning model allows installation of the learning model on the medical imaging device while reducing the processing time of the image processor 200. A specific structure and learning process of the learning model will be described in detail in the following embodiments.
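The integration described above can be illustrated by a minimal sketch. The layer counts, the names, and the plain-Python `Layer` placeholder class below are purely hypothetical simplifications; the actual layers would be convolutional and fully connected layers of a CNN.

```python
# Sketch: building a downsized model from a highly trained model and an
# untrained model with fewer layers. Layer counts and names are illustrative.

class Layer:
    """A placeholder layer; real layers would be convolutions etc."""
    def __init__(self, name, trained=False):
        self.name = name
        self.trained = trained  # True if the weights come from the trained model

def build_trained_model():
    # First trained model: many feature-extraction and discrimination layers.
    features = [Layer(f"conv{i}", trained=True) for i in range(8)]
    head = [Layer(f"fc{i}", trained=True) for i in range(3)]
    return features, head

def build_untrained_model():
    # Untrained model: fewer layers; only its discrimination head is kept.
    features = [Layer(f"small_conv{i}") for i in range(2)]
    head = [Layer("small_fc")]
    return features, head

def build_downsized_model():
    trained_features, _ = build_trained_model()
    _, untrained_head = build_untrained_model()
    # Integrate the trained feature-extraction layers with the untrained
    # discrimination layer; the result is then retrained (not shown here).
    return trained_features + untrained_head

downsized = build_downsized_model()
full = sum(build_trained_model(), [])
# The downsized model has fewer layers than the first trained model.
```

The point of the sketch is only the composition: the well-trained feature extraction layers are reused as-is, and only a shallow discrimination head remains to be retrained.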
The learning model (downsized model) is created in advance in the medical imaging device 10, or, for instance, by a computer independent of the medical imaging device 10, and is stored in the memory unit 350. Depending on variations of discrimination tasks, more than one downsized model may be stored. For example, when there is a plurality of cross sections as measurement targets, downsized models may be created for the respective measurement targets, e.g., the head, the chest, and the legs. When there is more than one type of target cross section, a downsized model may be created for each type of target cross section. When there is a plurality of downsized models, the model introducer 250 calls a model necessary for the discrimination task and passes the model to the cross section extractor 230.
As shown in
A part of or all of the functions of the image processor 200 can be implemented by software executed by a CPU. A part of the imager for generating image data and a part of the image processor may be implemented by hardware such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
With the configuration described above, the operation of the medical imaging device of the present embodiment, mainly the processing steps of the image processor 200, will be described with reference to
As a precondition, a user may select the type of the target cross section via the operation input unit 330, for example. Types of the target cross section may include a type depending on the difference in purpose, for example, a cross section for measurement or a cross section for confirming the direction in which a structure extends, and a type depending on the difference in measurement targets (such as a region, an organ, and a fetus). Such information may be entered at the time of setting imaging conditions, or may be set as a default when the imaging conditions are provided.
Upon receipt of 3D image data obtained by imaging with the imager 100, the cross section selector 231 selects a plurality of cross sections from the 3D image data (S301). In the case where the orientation of the target cross section in the image space is known, the cross section selector selects a plurality of cross sections along that orientation and passes them to the cross section identifier 233. For example, when the Z-axis is set as the body axis direction and the cross section is known to be parallel to the XY plane, XY planes at specific intervals are selected. Since the orientation of the target cross section may vary depending on the structures (tissues or regions) included in the volume data, cross sections at various orientations may be selected in such a case. Preferably, the cross sections may be selected according to a so-called “coarse to fine” approach. In this approach, selection by the cross section selector 231 and identification by the cross section identifier 233 are repeated, and the area searched for selecting the cross sections (referred to as the “search area”) is narrowed down at each iteration, starting from a relatively large area. As the search area becomes narrower, the intervals between the cross sections to be selected are made narrower, and further, the number of angles of the cross sections may be increased.
On the other hand, the model introducer 250 reads out a learning model from the memory unit 350, in response to the type of the preset target cross section, and stores the learning model in the model storage unit 251. When the cross sections selected by the cross section selector 231 are passed to the cross section identifier 233, the model calling unit 252 calls the learning model to be applied from the model storage unit 251. The cross section identifier 233 uses the called learning model to perform feature extraction and identification (discrimination) of the selected cross sections, and outputs a distribution of scores as a result of the identification (S302). The distribution of scores plots, for each of the cross sections as processing targets, a score indicating its degree of similarity to the target cross section, reflecting the distance from the target cross section. The higher the score, the closer the cross section is to the target cross section in terms of spatial distance. The scores in the distribution take numerical values from 0 to 1, where the score of a cross section agreeing with the target cross section is 1.
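The relation between the score distribution and spatial distance can be sketched as follows. The mapping `1/(1 + d)` and the candidate distances are illustrative assumptions; in the device, the scores are produced by the learning model itself, not by a known distance.

```python
# Sketch: a score distribution over candidate cross sections, where the score
# represents proximity to the target cross section. The mapping 1/(1 + d) is
# an illustrative assumption standing in for the model's output.

def proximity_score(distance_mm):
    # 1.0 when the candidate coincides with the target, decreasing with distance.
    return 1.0 / (1.0 + distance_mm)

# Candidate cross sections selected at specific intervals along the body axis,
# given here by their (hypothetical) distances from the target cross section.
distances = [12.0, 8.0, 4.0, 0.0, 4.0, 8.0]
scores = [proximity_score(d) for d in distances]

# The cross section with the highest score is closest to the target.
best_index = max(range(len(scores)), key=scores.__getitem__)
```

The candidate at distance 0 receives the score 1, matching the convention in the text that a cross section agreeing with the target cross section scores 1.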
The identification-result determiner 235 receives the distribution of scores resulting from the cross section identifier 233, and determines as the target cross section the cross section that has the best score as a final result, i.e., in the aforementioned example, the cross section having the score equal to 1 or the closest to 1 (S303).
After the target cross section is extracted by the cross section extractor 230, the display controller 270 displays the extracted cross section on the monitor 310 (S304). When the operation part 210 is provided with an automatic measurement function, the structures on the cross section are measured and the result of the measurement is displayed on the monitor 310 via the display controller 270 (S305). When there is a plurality of discrimination tasks, or when reprocessing becomes necessary due to a user's adjustment, the processing returns to step S301 (S306), and S301 to S304 (S305) are repeated.
According to the present embodiment, using a model (discriminator) trained in advance to identify the cross section closest to the target cross section allows automatic determination of the target cross section within a short time. Further according to the present embodiment, the learning model is obtained by integrating a partial layer of a model highly trained in advance with a partial layer of an untrained model having a relatively simple structure, followed by retraining. Therefore, this learning model can be easily implemented in the imaging device, and the processing time using the learning model can be reduced significantly. Consequently, the time from imaging until displaying the target cross section, or until measurement using the target cross section, can be reduced, and this enhances real-time performance.
In the first embodiment, there has been described an example where the processing target is 3D volume data. Similarly, the present invention is also applicable to time-series data. That is, 2D time-series data can be regarded as 3D data in which one spatial dimension is replaced by the temporal dimension, and comprises cross-sectional images at various time phases. When an image at a specified time phase is assumed as the target cross section, the imaged 2D time-series image data is inputted into the image processor 200 in specified time units, and then the aforementioned processing is performed, thereby automatically identifying and displaying the cross section in the target time phase.
If the 2D time-series image data does not include the target cross section, the processing by the image processor 200 is performed in parallel with continuous imaging, and this allows a search for the target cross section. In the case of 2D time-series image data, it is sufficient for the cross section selector 231 to select only imaged cross sections (planes in one direction), and this enables high-speed processing. It is also possible to select all of the imaged cross sections taken at predetermined intervals.
There has been described so far one embodiment of the present invention that is applicable irrespective of modality. Another embodiment of the present invention will be described in the following, where the present invention is applied to an ultrasound imaging device.
Initially, with reference to
The probe 410 comprises a plurality of ultrasound elements arranged along a predetermined direction. Each of the ultrasound elements is, for instance, a ceramic element. The probe 410 is placed in such a manner that it comes into contact with the surface of the examination target 101.
The transmit beamformer 420 allows transmission of ultrasonic waves from at least a part of the plurality of ultrasound elements via the D/A converter 430. A delay time is given to each of the ultrasonic waves transmitted from the ultrasound elements constituting the probe 410, in such a manner that the ultrasonic waves converge at a predetermined depth, so as to generate transmission beams that converge at the predetermined depth.
The D/A converter 430 converts electrical signals of transmission pulses from the transmit beamformer 420, into acoustic signals. The A/D converter 440 converts the acoustic signals received by the probe 410, being reflected in the process of propagation within the examination target 101, into electrical signals again, to generate receive signals.
The beamformer memory 450 stores, via the A/D converter 440 at every transmission, beamforming delay data as to each focused point of the receive signals outputted from the ultrasound elements. The receive beamformer 460 receives, via the A/D converter 440 at every transmission, the receive signals outputted from the ultrasound elements, and generates beamforming signals from the beamforming delay data for each transmission stored in the beamformer memory 450 and the receive signals thus received.
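The transmit-focusing delays described above can be sketched geometrically. This is a minimal illustration assuming a linear array and a uniform speed of sound; the element count, pitch, focal depth, and c = 1540 m/s are illustrative values, not parameters of the device in the embodiment.

```python
import numpy as np

# Sketch: per-element transmit delays so that the ultrasonic waves from all
# elements converge (arrive simultaneously) at a focal point at a given depth.

def transmit_delays(n_elements, pitch_m, focal_depth_m, c=1540.0):
    # Lateral element positions, centered on the array axis.
    x = (np.arange(n_elements) - (n_elements - 1) / 2.0) * pitch_m
    # Time of flight from each element to the focal point on the axis.
    tof = np.sqrt(focal_depth_m**2 + x**2) / c
    # Outer elements have the longest path, so they fire first; each element
    # is delayed so that all arrivals at the focus coincide.
    return tof.max() - tof

delays = transmit_delays(n_elements=64, pitch_m=0.3e-3, focal_depth_m=30e-3)
# The center elements are delayed the most; the edge elements fire immediately.
```

The same time-of-flight geometry, applied in reverse, underlies the receive beamforming delay data stored in the beamformer memory 450.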
The image processor 470 generates an ultrasound image by using the beamforming signals generated by the receive beamformer 460, and automatically extracts an image optimum for measurement, from the 3D volume data being imaged or from a group of 2D cross sectional images accumulated within cine memory. For this purpose, the image processor 470 is provided with a data reconstructing unit 471 configured to generate the ultrasound image by using the beamforming signals generated by the receive beamformer 460, data memory 472 configured to store image data generated by the data reconstructing unit, a model introducer 473 configured to introduce a downsized machine learning model installed on the device in advance, a cross section extractor 474 configured to use the machine learning model to automatically extract an image optimum for measurement from the 3D volume data or from a group of 2D cross sectional images acquired from the data memory 472, an automatic measurement unit 475 configured to perform automatic measurement of a specified region on the cross section thus extracted, and a cross section adjuster 476 configured to receive a user operation input. Though not illustrated, in order to support Doppler imaging, there may be provided a Doppler processor for processing Doppler signals.
Functions of the data reconstructing unit 471 are the same as those of conventional ultrasound imaging devices, and the data reconstructing unit generates an ultrasound image such as a B-mode image or an M-mode image.
The model introducer 473 and the cross section extractor 474 implement functions respectively corresponding to the model introducer 250 and the cross section extractor 230 of the first embodiment, and they have the same configurations as shown in the functional block diagram in
The automatic measurement unit 475 may be configured by software incorporating a publicly known automatic measurement algorithm, and measures the size and other properties of a predetermined region from the one or more extracted cross sections. Target measured values are then calculated from information such as the size, according to the given algorithm.
The cross section adjuster 476 accepts, via the operation input unit 490, a user's modification and adjustment of the cross section extracted by the cross section extractor 474 and displayed on the monitor 480, and provides the automatic measurement unit 475 with a command to change the position of the cross section and to reprocess the automatic measurement in response to such a change.
The monitor 480 displays the ultrasound image extracted by the image processor 470, together with a measured value and measurement position of the image. The operation input unit 490 comprises an input device for accepting, by user input, positional adjustment of the extracted cross section, switching of the cross section, and adjustment of the measurement position. The image processor 470 then performs a part of the processing once again, and updates the displayed result on the monitor 480.
Next, there will be described a learning model stored in the model storage unit 251 of the model introducer 473.
This learning model is a high-precision downsized model installed on the device in advance. As shown in
A specific structure of the downsized machine learning model will be described, taking as an example a CNN (Convolutional Neural Network), which is one type of deep learning (DL).
As shown in
The downsized model 550 is established by integrating the feature extraction layer 515, a part of the layer configuration of the trained model 510, with the discrimination layer 531 of the untrained model 530, to form a new layer configuration, and is then retrained using the learning database 500. It is to be noted that the layer configurations of the models 510, 530, and 550 as shown in
Next, with reference to
In the process of training the learning model, the score distribution 705 output from the learning model is checked so as to obtain a distribution in which the discrimination score of a cross section becomes higher as the cross section becomes spatially closer to the position of the measurement cross section. In order to achieve this distribution, machine learning is repeated while adjusting the weighting factors of the layers constituting the model, together with adjusting the learning data. In adjusting the learning data, anatomical information of a living body is used to adjust the spatial distance between the non-measurement cross sections and the measurement cross section, and the positions where the cross sections are acquired. Through such iterated adjustment, a high-precision learning model suitable for searching for the measurement cross section can be generated on the basis of the distribution of discrimination scores. In the case where there is a plurality of measurement cross sections as processing targets, a learning model is created for each of the plurality of measurement cross sections.
When the learning data is not volume data, but temporally sequential 2D cross sections, the horizontal axis of the score distribution 705 in
The aforementioned downsized model 550, obtained by integrating the trained model 510 with the untrained model 530, is also trained in the same manner as described above. At the time of retraining, the learning rates for the trained model 510 and the untrained model 530 are adjusted so that the learning is performed with emphasis on the discrimination layer 531. In other words, the weighting factors of the feature extraction layer 515 moved from the trained model 510 are maintained, and the learning rate of the discrimination layer 531 moved from the untrained model 530 is raised. This allows acquisition of the downsized model 550 achieving both high precision and high-speed processing.
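The layer-wise learning rates described above can be sketched with a toy gradient-descent loop. The tiny linear "layers", random data, and the specific learning rates are purely illustrative stand-ins (the real model is a CNN trained on ultrasound images); the point is only that a learning rate of zero maintains the feature-extraction weights while the discrimination weights adapt.

```python
import numpy as np

# Sketch: retraining the integrated model with layer-wise learning rates.
# The feature-extraction weights taken from the trained model are frozen
# (learning rate 0), while the discrimination layer taken from the untrained
# model is trained with a raised learning rate.

rng = np.random.default_rng(0)
w_feature = rng.normal(size=(4, 4))     # stand-in for the feature extraction layer 515
w_discrim = rng.normal(size=(4, 1))     # stand-in for the discrimination layer 531
lr = {"feature": 0.0, "discrim": 0.01}  # emphasize the discrimination layer

x = rng.normal(size=(8, 4))             # stand-in input data
y = rng.normal(size=(8, 1))             # stand-in target scores

w_feature_before = w_feature.copy()
loss_initial = float(np.mean((x @ w_feature @ w_discrim - y) ** 2))

for _ in range(200):
    h = x @ w_feature                   # feature extraction (frozen)
    pred = h @ w_discrim                # discrimination
    grad_out = 2.0 * (pred - y) / len(x)
    grad_discrim = h.T @ grad_out
    grad_feature = x.T @ (grad_out @ w_discrim.T)
    w_discrim -= lr["discrim"] * grad_discrim
    w_feature -= lr["feature"] * grad_feature  # zero update: weights maintained

loss_final = float(np.mean((x @ w_feature @ w_discrim - y) ** 2))
# The feature layer is unchanged while the discrimination layer has adapted.
```

In practice the feature layers need not be fully frozen; the text requires only that their learning rate be low relative to the discrimination layer's.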
In light of the aforementioned configuration of the ultrasound imaging device 40, there will be described a process for extracting a cross section optimum for measurement, performed by each unit of the cross section extractor 474 of the present embodiment. As one example, there will be described a case where the biparietal diameter (BPD), abdominal circumference (AC), and femur length (FL) of an unborn baby (fetus) are measured to estimate its weight. As shown in
With reference to
When the processing of cross section extraction starts, the cross section extractor 474 (
The process in step S902 is performed according to the “coarse to fine approach” that sequentially narrows down an area targeted for extracting a cross section (search area) starting from a large area. Therefore, the cross section selector (
Next, the cross section identifier (
The cross section extractor 474 analyzes the score distribution as a result of discrimination of each cross section according to the learning model (step S905) and narrows the initial search area 1001 down to a smaller search area. As shown in
As described above, in step S905, it is determined whether the search area has been narrowed sufficiently on the basis of the analysis result of the score distribution, and whether a cross section suitable for the measurement has been found. Then, it is further determined whether the search is to be finished (step S906). If the search is not finished, a new search area is determined so as to approach the region that seems to include the measurement cross section, on the basis of the analysis of the result (step S902).
The processing from step S902 to step S906 is repeated two or more times, and along with narrowing the search area, an optimum measurement cross section is extracted, enabling a complete search at high speed. When the search area has become sufficiently small, the direction (angle) of the cross section may be changed not only in the deflection angle direction but also in the elevation angle direction. As described above, narrowing the search area is repeated two or more times in a loop, thereby enabling extraction of a measurement cross section having a high score with a smaller number of identification processes.
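The loop of steps S902 to S906 can be sketched as a one-dimensional "coarse to fine" search. The stand-in score function, the interval, and the iteration counts are hypothetical; in the device, each candidate cross section would be scored by the downsized learning model rather than by a known distance to the target.

```python
# Sketch: "coarse to fine" search for the measurement cross section along one
# axis. The search area is narrowed around the best-scoring candidate at each
# iteration, so candidate spacing becomes finer as the area shrinks.

def model_score(position_mm, target_mm=37.0):
    # Illustrative stand-in for the learning model: higher score nearer the
    # (unknown) measurement cross section.
    return 1.0 / (1.0 + abs(position_mm - target_mm))

def coarse_to_fine(lo, hi, n_candidates=5, iterations=4):
    for _ in range(iterations):
        step = (hi - lo) / (n_candidates - 1)
        candidates = [lo + i * step for i in range(n_candidates)]
        best = max(candidates, key=model_score)
        # Narrow the search area around the best-scoring cross section.
        lo, hi = best - step, best + step
    return best

found = coarse_to_fine(0.0, 100.0)
# After a few iterations the search converges near the target position,
# having scored far fewer candidates than an exhaustive fine-grained scan.
```

With 5 candidates and 4 iterations, only 20 identifications are performed, whereas a single fine scan of the full 100 mm range at the final resolution would require far more, which is the speed advantage described above.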
When it is determined in step S906 that the search is finished, automatic measurement or manual measurement, as appropriate, is performed on the extracted measurement cross section (step S907). Finally, a plurality of extraction results are presented, such as the extracted cross section, information on the cross section in the space, a measured value and measurement position, and other higher-ranked candidates (step S908). The monitor 480 displays the extraction results thus presented, and the processing is finished.
The automatic extraction of the cross section is a subsidiary diagnostic function, and it is necessary for a user to make a final diagnosis. In the present embodiment, the cross section adjuster 476 accepts a signal from the operation input unit 490, and this allows adjustment of the cross section, switching of the cross section, and re-evaluation of the measurement according to user preference with a simple operation.
In the block for displaying cross section candidates 1220, a spatial positional relationship 1206 of each cross sectional image in the 3D volume data may also be displayed, together with a UI (candidate selection field 1207) for selecting a candidate. When the user requests to change the extracted measurement cross section, the candidate selection field 1207 is expanded, and non-extracted candidate cross sections 1208 and 1209 are displayed. The candidate cross sections may include, for example, a cross section positioned close to the extracted cross section, or a cross section with a high score; in the figure, two candidates are displayed, but the number of candidates may be three or more. Buttons 1208A and 1209A prompting selection of any of the candidate cross sections may also be provided.
The slider for positional adjustment 1230 is a UI for adjusting the position, enabling selection of a cross sectional image from any position on the volume data, for instance. When the user manipulates the slider for positional adjustment or the candidate buttons 1208A, 1209A, and others, the operation input unit 490 transmits a signal to the cross section adjuster 476 in response to the user's manipulation. The cross section adjuster 476 performs a series of processing such as updating and switching of the cross section, updating of the measurement position, and updating of the measured value, and then displays a result of the processing on the monitor 480.
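The signal flow from the operation input unit to the cross section adjuster can be summarized in a short sketch. Everything here is an illustrative assumption rather than the embodiment's implementation: `slice_volume` and `measure` are hypothetical placeholders for the re-slicing and re-measurement steps, and the volume is a plain nested list.

```python
# Hypothetical sketch of the update pipeline triggered by the slider or
# candidate buttons: the adjuster re-slices the volume, re-runs the
# measurement, and returns everything the monitor needs to redraw.

def slice_volume(volume, position):
    # Take a 2-D slice at the given index along the first axis.
    return volume[position]

def measure(image):
    # Placeholder measurement: count of above-threshold pixels.
    return sum(1 for row in image for v in row if v > 0)

class CrossSectionAdjuster:
    def __init__(self, volume):
        self.volume = volume

    def on_user_input(self, position):
        """Handle a signal from the operation input unit."""
        image = slice_volume(self.volume, position)   # update the cross section
        value = measure(image)                        # update the measured value
        return {"image": image, "position": position, "value": value}
```

The point of the design is that one handler performs the whole chain (re-slice, re-measure, redraw) so that a single slider event keeps the displayed image and the measured value consistent.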
When there is a plurality of cross sections targeted for measurement, the procedures shown in
The automatic measurement will be described specifically, taking fetal weight measurement as an example. As illustrated in
As illustrated in
As shown in
As shown in
Estimated weight = a × (BPD)³ + b × (AC)² × (FL)
(where a and b are factors obtained from empirical values, for example, a = 1.07, b = 0.30)
The automatic measurement unit 475 displays the calculated estimated weight on the monitor 480.
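The estimated-weight formula above translates directly into code. The function name and the example inputs below are illustrative only; the formula and the example factors a = 1.07, b = 0.30 come from the text.

```python
def estimated_fetal_weight(bpd_cm, ac_cm, fl_cm, a=1.07, b=0.30):
    """Estimated fetal weight from the formula in the text:

        weight = a * BPD^3 + b * AC^2 * FL

    with BPD (biparietal diameter), AC (abdominal circumference), and
    FL (femur length) as measured values, and a, b empirical factors.
    """
    return a * bpd_cm ** 3 + b * ac_cm ** 2 * fl_cm

# Illustrative values: BPD 9.0, AC 30.0, FL 7.0
# -> 1.07 * 729 + 0.30 * 900 * 7.0 = 2670.03
```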
Embodiments of the ultrasound imaging device have been described, taking as an example the extraction of cross sections necessary for measuring fetal weight, including the AC measurement cross section, the BPD measurement cross section, and the FL measurement cross section. A feature of the present embodiments is identification and extraction on the basis of the downsized learning model, and they are further applicable to extraction of the 4CV cross section of the heart (four-chamber view) for checking fetal cardiac function, the 3VV cross section (three-vessel view), the left ventricular outflow view, the right ventricular outflow view, and the aortic arch view, and also to automatic extraction of a measurement cross section of an amniotic fluid pocket for measuring the amount of amniotic fluid surrounding the fetus. In addition, the embodiments above may be applicable to automatic extraction of a standard cross section necessary for measurement and observation of the heart and circulatory organs, not only in a fetus but also in adults.
According to the present embodiments, although cross section extraction has conventionally been highly operator-dependent, employing a highly sophisticated learning model enables automatic and high-speed cross section extraction. Using the downsized model, obtained by integrating the learning model having a highly trained layer configuration with the learning model having a relatively simple layer configuration, facilitates implementation of the learning model in the ultrasound imaging device and enables high-speed processing.
According to the present embodiments, the coarse-to-fine approach is employed in extracting the cross section, and this enables a high-speed and thorough search for the cross section.
In the aforementioned embodiments, there has been described the case where volume data imaged in a single examination of one patient is processed. The present embodiment is also applicable to a group of 2D images taken in a previous examination or in examinations over the past several sessions. There will now be described the case where the input data is temporally sequential 2D images.
Thereafter, the cross section identifier (233) identifies the target group of cross sections according to the learning model loaded from the model introducer 473 in advance. The distribution of identification results on the temporal axis is analyzed, the search is finished when a cross section suitable for the measurement is found, and the measurement cross section is determined. If imaging is performed continuously in parallel with this image processing, the cross section called from the data memory may be updated according to the imaging manipulation by the user at that point of time.
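The temporal-axis analysis described above can be sketched as follows, assuming the identifier has produced one score per frame of the 2D time series. The function name and the threshold value are hypothetical; the early-finish behavior corresponds to ending the search as soon as a cross section suitable for measurement is found.

```python
def select_measurement_frame(scores, threshold=0.9):
    """Pick the measurement cross section from a time series of
    identification scores (one score per 2D frame).

    The search finishes at the first frame whose score reaches the
    threshold; if no frame does, fall back to the global maximum.
    The threshold value is illustrative only.
    """
    for t, s in enumerate(scores):
        if s >= threshold:
            return t                     # search finished early
    return max(range(len(scores)), key=lambda t: scores[t])
```

Finishing early matters when imaging continues in parallel with the image processing, since the frame buffer keeps being updated while the search runs.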
In
Finally, one cross section is determined from the candidate cross sections extracted from the plurality of volume data.
In the second embodiment and its modification, the present invention is applied to the ultrasound imaging device, but the present invention may also be applicable to any medical imaging device capable of acquiring volume data or time-series data. In the aforementioned embodiments, there has been described the case where the image processor is a constitutional element of the medical imaging device. However, if imaging and image processing are not performed in parallel, the image processing of the present invention may be performed in an image processing device or an image processor that is spatially or temporally separate from the medical imaging device (the imager 100 in
In addition, the embodiments and modifications of the present invention have been described in detail for ease of understanding, and they are not necessarily limited to those including all the components described above. A part or all of the configurations, functions, processors, and processing means described in the above embodiments may be implemented by hardware, for example, by designing an integrated circuit. Those configurations, functions, and others may also be implemented by software, by a processor interpreting and executing programs that implement each of the functions. Information such as programs, tables, and files for implementing each of the functions may be placed in storage such as a memory, a hard disk, or an SSD (Solid State Drive), or in a storage medium such as an IC card, an SD card, or a DVD.
Number | Date | Country | Kind |
---|---|---|---|
2017-146782 | Jul 2017 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2018/021926 | 6/7/2018 | WO | 00 |