This application claims priority to Japanese Patent Application No. 2021-076887, filed on Apr. 28, 2021, the disclosure of which is incorporated herein by reference in its entirety.
The present invention relates to a method of generating a learned model for deducing body weight, a processing device that executes a process for determining body weight of an imaging subject lying on a table, and a storage medium storing a command for causing a processor to execute the process for determining body weight.
An x-ray computed tomography (CT) device is known as a medical device that non-invasively captures images of the inside of a patient. X-ray CT devices can capture images of a site to be imaged in a short period of time, and therefore have become widespread in hospitals and other medical facilities.
On the other hand, CT devices use X-rays to examine patients, and as CT devices become more widespread, there is increasing concern about patient exposure during examinations. It is therefore important to control the patient exposure dose so that exposure to X-rays is reduced as much as possible, and technologies for controlling the dose have been developed. For example, Patent Document 1 discloses a dose control system.
In recent years, dose control has become stricter based on guidelines from the Ministry of Health, Labour and Welfare, which state that the dose should be controlled with reference to the diagnostic reference level (DRL). Furthermore, different patients have different physiques, and therefore it is important to manage not only the exposure dose received by the patient during a CT scan but also the patient body weight information in order to control the dose for each patient. Medical institutions therefore obtain body weight information for each patient and record the information in the RIS (Radiology Information System).
In medical institutions, for example, the body weight of a patient is measured by a weight scale before a CT scan, in order to obtain patient body weight information. Once the body weight of the patient is measured, the measured body weight is recorded in the RIS. However, it is not always possible to measure the body weight of the patient on a weight scale for every CT scan. Therefore, the body weight information recorded in the RIS may be out of date, and it is not desirable to control the dose with the outdated body weight information. Furthermore, there is also a problem where if the patient is using a wheelchair or stretcher, body weight measurement itself is not easy.
Therefore, there is demand for a technology that can easily acquire body weight information of a patient.
A first aspect of the present invention is a learned model generating method of generating a learned model that outputs a body weight of an imaging subject when an input image of the imaging subject lying on a table of a medical device is input, where a neural network generates the learned model by executing learning using: a plurality of learning images generated based on a plurality of camera images of a human lying on a table of a medical device, and a plurality of correct answer data corresponding to the plurality of learning images, where each of the plurality of correct answer data represents a body weight of a human included in a corresponding learning image.
A second aspect of the present invention is a processing device that executes a process of determining a body weight of an imaging subject based on a camera image of the imaging subject lying on a table of a medical device.
A third aspect of the present invention is a storage medium, including one or more non-volatile, computer-readable storage media storing one or more commands that can be executed by one or more processors, where the one or more commands cause the one or more processors to execute a process of determining a body weight of an imaging subject based on a camera image of the imaging subject lying on a table of a medical device.
A fourth aspect of the present invention is a medical device that executes a process of determining a body weight of an imaging subject based on a camera image of the imaging subject lying on a table of a medical device.
A fifth aspect of the present invention is a learned model that outputs a body weight of an imaging subject when an input image of the imaging subject lying on a table of a medical device is input, where the learned model is generated by a neural network executing learning using: a plurality of learning images generated based on a plurality of camera images of a human lying on a table of a medical device, and a plurality of correct answer data corresponding to the plurality of learning images, where each of the plurality of correct answer data represents a body weight of a human included in a corresponding learning image.
A sixth aspect of the present invention is a learned model generating device that generates a learned model that outputs a body weight of an imaging subject when an input image of the imaging subject lying on a table of a medical device is input, where a neural network generates the learned model by executing learning using: a plurality of learning images generated based on a plurality of camera images of a human lying on a table of a medical device, and a plurality of correct answer data corresponding to the plurality of learning images, where each of the plurality of correct answer data represents a body weight of a human included in a corresponding learning image.
There is a certain correlation between human physique and body weight. Therefore, a learning image can be generated based on a camera image of a human, and the learning image can be labeled with the body weight of the human as correct answer data. Then, a neural network can execute learning using the learning image and correct answer data to generate a learned model that can deduce body weight. Furthermore, medical devices include medical devices that perform scanning with a patient lying on a table, such as CT devices, MRI devices, and the like. Therefore, if a camera for acquiring a camera image of the patient lying on the table is prepared, a camera image including the patient can be acquired. Thus, based on the acquired camera image, an input image to input to the learned model can be generated, and the input image can be input to the learned model to deduce the body weight of the patient.
Therefore, the body weight of the patient can be deduced without having to measure the body weight of the patient for each examination, and thus the body weight of the patient at the time of the examination can be managed.
Furthermore, if the BMI and height are known, the body weight can be calculated. Therefore, body weight information can also be obtained by deducing height instead of body weight, and calculating the body weight based on the deduced height and BMI.
Embodiments for carrying out the invention will be described below, but the present invention is not limited to the following embodiments.
Each modality is a medical system with a medical device and an operation console. The medical device is a device that collects data from a patient, and the operation console is connected to the medical device and is used to operate it. Examples of medical devices that can be used include simple X-ray devices, X-ray CT devices, PET-CT devices, MRI devices, MRI-PET devices, mammography devices, and various other devices.
Furthermore, the system 10 also has PACS (Picture Archiving and Communication Systems) 11. The PACS 11 receives an image and other data obtained by each modality via a communication network 12 and stores the received data. Furthermore, the PACS 11 also transfers the stored data via the communication network 12 as necessary.
Furthermore, the system 10 has a plurality of workstations W1 to Wb. The workstations W1 to Wb include, for example, workstations used in hospital information systems (HIS), radiology information systems (RIS), clinical information systems (CIS), cardiovascular information systems (CVIS), laboratory information systems (LIS), electronic medical record (EMR) systems, and/or other image and information management systems and the like, and workstations used for image inspection work by an image interpreter.
The network system 10 is configured as described above. Next, an example of a configuration of the X-ray CT device, which is an example of a modality, will be described.
The gantry 2 and table 4 are installed in a scan room 100. The gantry 2 has a display panel 20. An operator can input an operation signal to operate the gantry 2 and table 4 from the display panel 20. The camera 6 is installed on a ceiling 101 of the scan room 100. The operation console 8 is installed in an operation room 200.
A field of view of the camera 6 is set to include the table 4 and a perimeter thereof. Therefore, when the patient 40, who is an imaging subject, lies on the table 4, the camera 6 can acquire a camera image including the patient 40.
Next, the gantry 2, table 4, and operation console 8 will be described.
Furthermore, the gantry 2 has an X-ray tube 22, an aperture 23, a collimator 24, an X-ray detector 25, a data acquisition system 26, a rotating part 27, a high-voltage power supply 28, an aperture driving device 29, a rotating part driving device 30, a GT (Gantry Table) control part 31, and the like.
The X-ray tube 22, aperture 23, collimator 24, X-ray detector 25, and data acquisition system 26 are mounted on the rotating part 27.
The X-ray tube 22 irradiates the patient 40 with X-rays. The X-ray detector 25 detects the X-rays emitted from the X-ray tube 22. The X-ray detector 25 is provided on the opposite side of the bore 21 from the X-ray tube 22.
The aperture 23 is disposed between the X-ray tube 22 and the bore 21. The aperture 23 shapes the X-rays emitted from an X-ray focal point of the X-ray tube 22 toward the X-ray detector 25 into a fan beam or a cone beam.
The X-ray detector 25 detects the X-rays transmitted through the patient 40. The collimator 24 is disposed on the X-ray incident side to the X-ray detector 25 and removes scattered X-rays.
The high voltage power supply 28 supplies high voltage and current to the X-ray tube 22. The aperture driving device 29 drives the aperture 23 to deform an opening thereof. The rotating part driving device 30 rotates and drives the rotating part 27.
The table 4 has a cradle 41, a cradle support 42, and a driving device 43. The cradle 41 supports the patient 40, who is an imaging subject. The cradle support 42 movably supports the cradle 41 in the y direction and z direction. The driving device 43 drives the cradle 41 and cradle support 42. Note that herein, a longitudinal direction of the cradle 41 is a z direction, a height direction of the table 4 is a y direction, and a horizontal direction orthogonal to the z direction and y direction is an x direction.
A GT control part 31 controls each device and each part in the gantry 2, the driving device 43 of the table 4, and the like.
The operation console 8 has an input part 81, a display part 82, a storage part 83, a processing part 84, a console control part 85, and the like.
The input part 81 includes a keyboard, a pointing device, and the like for accepting instructions and information input from an operator and performing various operations. The display part 82 displays a setting screen for setting scan conditions, camera images, CT images, and the like, and is, for example, an LCD (Liquid Crystal Display), an organic electro-luminescence (EL) display, or the like.
The storage part 83 stores a program for executing various processes by a processor. Furthermore, the storage part 83 also stores various data, various files, and the like. The storage part 83 has a hard disk drive (HDD), solid state drive (SSD), dynamic random access memory (DRAM), read only memory (ROM), and the like. Furthermore, the storage part 83 may also include a portable storage medium 90 such as a CD (Compact Disk), DVD (Digital Versatile Disk), or the like.
The processing part 84 performs an image reconfiguring process and various other operations based on data of the patient 40 acquired by the gantry 2. The processing part 84 has one or more processors, and the one or more processors execute various processes described in the program stored in the storage part 83.
The generating part 841 generates an input image to be input to the learned model based on a camera image. The deducing part 842 inputs the input image to the learned model to deduce the body weight of the patient. The confirming part 843 confirms to the operator whether or not to update the deduced body weight. The reconfiguring part 844 reconfigures a CT image based on projection data obtained from a scan.
Note that details of the generating part 841, deducing part 842, confirming part 843, and reconfiguring part 844 will be described in each step of the examination flow described below.
A program for executing the aforementioned functions is stored in the storage part 83. The processing part 84 implements the aforementioned functions by executing the program. One or more commands that can be executed by one or more processors are stored in the storage part 83. The one or more commands cause one or more processors to perform the following operations (a1) to (a4): (a1) Generating an input image to be input to the learned model based on a camera image (generating part 841), (a2) Inputting the input image to the learned model to deduce the body weight of the patient (deducing part 842), (a3) Confirming to the operator whether or not to update the body weight (confirming part 843), (a4) Reconfiguring a CT image based on projection data (reconfiguring part 844).
The processing part 84 of the console 8 can read the program stored in the storage part 83 and execute the aforementioned operations (a1) to (a4).
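Purely as an illustration of how operations (a1) to (a4) fit together, the following Python sketch strings them into one flow. Every name here (generate_input_image, ris.update_weight, and so on) is a hypothetical stand-in for the corresponding functional part, not the device's actual software.

```python
def generate_input_image(camera_image):
    """(a1) Stand-in for the generating part 841 (cropping, normalization, etc.)."""
    return camera_image

def deduce_weight(learned_model, input_image):
    """(a2) Stand-in for the deducing part 842."""
    return float(learned_model(input_image))

def run_weight_flow(camera_image, learned_model, ask_operator, ris):
    input_image = generate_input_image(camera_image)        # (a1)
    weight_kg = deduce_weight(learned_model, input_image)   # (a2)
    if ask_operator(f"Update body weight to {weight_kg:.1f} kg?"):  # (a3)
        ris.update_weight(weight_kg)
    # (a4), reconfiguring a CT image, uses projection data acquired later
    # in the scan and is independent of this pre-scan flow.
    return weight_kg
```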
The console control part 85 controls the display part 82 and the processing part 84 based on an input from the input part 81.
The X-ray CT device 1 is configured as described above.
In recent years, there has been a demand for strict control of patient exposure dose when performing examinations that use X-rays, such as CT scans and the like. In medical institutions, for example, the body weight of a patient is measured by a weight scale before a CT scan, in order to obtain patient body weight information. Once the body weight of the patient is measured, the measured body weight is recorded in the RIS. However, it is not always possible to measure the body weight of the patient on a weight scale for every CT scan. Therefore, the body weight information recorded in the RIS may be out of date, and it is not desirable to control the dose with the outdated body weight information. Furthermore, there is also a problem where if the patient is using a wheelchair or stretcher, body weight measurement itself is not easy. Therefore, in the present embodiment, in order to address this problem, DL (deep learning) is used to generate a learned model that can deduce the body weight of the patient.
A learning phase for generating a learned model is described below. In step ST1, a plurality of learning images C1 to Cn to be used in the learning phase are prepared. Each learning image Ci (1≤i≤n) can be prepared by acquiring a camera image of a human lying in a supine position on a table by imaging with a camera from above the table, and executing prescribed image processing on the camera image. The learning images C1 to Cn include an image of a human in a supine position in a head-first condition and an image of a human in a supine position in a feet-first condition.
Note that examples of the prescribed image processing to be performed on the camera image include image cropping, standardization processing, normalization processing, and the like. Furthermore, the learning images C1 to Cn include an image of a human in a supine position in a head-first condition and an image of a human in a supine position in a feet-first condition, as described above. However, the craniocaudal direction of a feet-first human is opposite to the craniocaudal direction of a head-first human. Therefore, in embodiment 1, the prescribed image processing includes a process of rotating an image by 180° in order to match the craniocaudal directions of the humans in the learning images.
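A minimal sketch of this preprocessing is given below, assuming a NumPy grayscale image. The crop coordinates, the order of the operations, and the function names are assumptions, since the text only names the operations.

```python
import numpy as np

def make_learning_image(camera_img: np.ndarray, feet_first: bool,
                        crop=(100, 480, 200, 440)) -> np.ndarray:
    """Crop, align the craniocaudal direction, standardize, and normalize.

    The crop box and the exact order of operations are assumptions; the
    text only names cropping, standardization, and normalization.
    """
    r0, r1, c0, c1 = crop
    img = camera_img[r0:r1, c0:c1].astype(np.float32)   # image cropping
    if feet_first:
        img = np.rot90(img, 2)   # 180-degree rotation to match head-first images
    img = (img - img.mean()) / (img.std() + 1e-8)       # standardization
    img = (img - img.min()) / (np.ptp(img) + 1e-8)      # normalization to [0, 1]
    return img
```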
Furthermore, a plurality of correct answer data G1 to Gn are also prepared. Each correct answer data Gi (1≤i≤n) is data representing the body weight of the human in a corresponding learning image Ci of the plurality of learning images C1 to Cn. Each correct answer data Gi is labeled with a corresponding learning image Ci of the plurality of learning images C1 to Cn. After preparing the learning image and correct answer data, the flow proceeds to step ST2.
In step ST2, the computer (learned model generating device) is used to cause a neural network (NN) 91 to execute learning using the learning images C1 to Cn and the correct answer data G1 to Gn.
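As a concrete but purely illustrative sketch of this learning step, the following code trains a small convolutional regressor on image/body-weight pairs. The framework (PyTorch), the architecture, the MSE loss, and the hyperparameters are all assumptions, as the text specifies none of them.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Hypothetical CNN regressor; the text does not specify an architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # single scalar output: body weight in kg

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

def train_weight_model(images: torch.Tensor, weights_kg: torch.Tensor,
                       epochs: int = 100, lr: float = 1e-3) -> nn.Module:
    """images: (n, 1, H, W) learning images C1..Cn;
    weights_kg: (n,) correct answer data G1..Gn."""
    model = WeightNet()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()   # regression loss; the text does not name one
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), weights_kg)
        loss.backward()
        opt.step()
    return model
```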
The learned model 91a generated thereby is stored in a storage part (for example, a storage part of a CT device or storage part of an external device connected to the CT device).
The learned model 91a obtained from the aforementioned learning phase is used to deduce the body weight of the patient 40 during the examination of the patient 40. An examination flow of patient 40 will be described below.
The camera 6 acquires a camera image of the inside of the scan room and outputs the camera image to the console 8. The console 8 performs prescribed data processing on the camera image received from the camera 6, if necessary, and then outputs the camera image to the display panel 20 of the gantry 2. The display panel 20 can display the camera image in the scan room imaged by the camera 6. After laying the patient 40 on the table 4, the flow proceeds to step ST12.
In step ST12, the body weight of the patient 40 is deduced using the learned model 91a. A method of deducing the body weight of the patient 40 will be specifically described below.
First, as a preprocessing step for deducing, an input image to be input to the learned model 91a is generated. The generating part 841 generates an input image 61 by executing prescribed image processing on the camera image obtained by the camera 6.
Note that when the patient 40 lies on the table 4, the patient 40 adjusts their posture on the table 4 and settles into the supine position, which is the posture for imaging. Therefore, when generating the input image 61, it is necessary to determine whether or not the posture of the patient 40 in the camera image used to generate the input image 61 is the supine position. Whether or not the posture of the patient 40 is the supine position can be determined using a prescribed image processing technique.
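The text leaves this determination technique unspecified. One conceivable heuristic, sketched below, assumes 2D pose keypoints from an off-the-shelf pose estimator and treats the patient as supine when both shoulders and both hips are visible and roughly level; this is an illustration only, not the method of the text.

```python
def looks_supine(keypoints: dict, tol_px: float = 30.0,
                 min_conf: float = 0.5) -> bool:
    """keypoints: joint name -> (x, y, confidence) from a 2D pose estimator,
    with the table's long axis running vertically in the image."""
    pairs = [("left_shoulder", "right_shoulder"), ("left_hip", "right_hip")]
    for a, b in pairs:
        ka, kb = keypoints.get(a), keypoints.get(b)
        if ka is None or kb is None or ka[2] < min_conf or kb[2] < min_conf:
            return False   # one side occluded -> likely a lateral posture
        if abs(ka[1] - kb[1]) > tol_px:
            return False   # paired joints should lie at about the same height
    return True
```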
After generating the input image 61, the deducing part 842 deduces the body weight of the patient 40 using the learned model 91a.
The deducing part 842 inputs the input image 61 to the learned model 91a. Note that in the learning phase, the learning images C1 to Cn are set up such that the craniocaudal directions of the humans match the head-first condition. Therefore, if the orientation of the patient 40 is head-first, the input image 61 is input to the learned model 91a as-is. On the other hand, if the orientation of the patient 40 is feet-first, an input image 611 generated by rotating the input image 61 by 180° is input to the learned model 91a, such that the craniocaudal direction of the patient 40 matches the craniocaudal direction of the humans in the learning images.
Note that when determining whether to rotate the input image by 180°, it is necessary to identify whether the patient 40 is oriented head-first or feet-first. This identification can be performed, for example, based on information in the RIS. The RIS includes the orientation of the patient 40 at the time of the examination, and therefore, the generating part 841 can identify the orientation of the patient from the RIS. Therefore, the generating part 841 can determine whether or not to rotate the input image by 180° based on the orientation of the patient 40.
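A minimal sketch of this decision is shown below, assuming the orientation is read from the RIS as a DICOM-style patient-position string such as "HFS" or "FFS"; the encoding is an assumption, since the text only states that the orientation can be identified from the RIS.

```python
import numpy as np

def prepare_input(input_image: np.ndarray, ris_orientation: str) -> np.ndarray:
    """Rotate the input image 180 degrees when the RIS records feet-first.

    The string values ("HFS", "FFS", ...) follow common DICOM patient-position
    codes and are an assumption, not specified by the text.
    """
    if ris_orientation.upper().startswith("FF"):   # feet first
        return np.rot90(input_image, 2)            # match head-first training data
    return input_image                             # head first: use as-is
```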
When the input image 61 is input to the learned model 91a, the learned model 91a deduces and outputs the body weight of the patient 40 in the input image 61. After the body weight is deduced, the flow proceeds to step ST13.
In step ST13, the confirming part 843 confirms to the operator whether or not to update the body weight deduced in step ST12.
The confirming part 843 displays patient information 70 on the display part 82, together with a window 71 for confirming whether or not to update the body weight to the deduced value.
In step ST14, the operator decides whether or not to update the body weight. The operator clicks the No button on the window 71 to not update the body weight, and clicks the Yes button on the window 71 to update the body weight. If the No button is clicked, the confirming part 843 determines that the body weight of the patient 40 will not be updated, and the past body weight is saved as-is. On the other hand, if the Yes button is clicked, the confirming part 843 determines that the body weight of the patient 40 is to be updated. If the body weight of the patient 40 is updated, the RIS manages the updated body weight as the body weight of the patient 40. Once the body weight update (or cancellation of the update) is complete, the flow proceeds to step ST15.
In step ST15, the patient 40 is moved into the bore 21 and a scout scan is performed. When the scout scan is performed, the reconfiguring part 844 reconfigures a scout image based on projection data obtained from the scout scan. The operator sets a scan range based on the scout image. The flow then proceeds to step ST16, and a diagnostic scan is performed to acquire various CT images used for diagnosis of the patient 40. When the diagnostic scan is completed, the flow proceeds to step ST17.
In step ST17, the operator performs an examination end operation. When the examination end operation is performed, various data to be transmitted to the PACS 11 are stored as DICOM files.
The DICOM files FS1 to FSa store scout images acquired in a scout scan, and DICOM files FD1 to FDb store CT images acquired in a diagnostic scan.
The DICOM files FS1 to FSa store pixel data of the scout images and supplementary information. Note that the DICOM files FS1 to FSa store pixel data of scout images of different slices.
Furthermore, the DICOM files FS1 to FSa store patient information described in the examination list, imaging condition information indicating imaging conditions of the scout scan, and the like as data elements of supplementary information. The patient information includes updated body weight and the like. Furthermore, the DICOM files FS1 to FSa also store data elements for supplementary information, such as the input image 61 (refer to
On the other hand, DICOM files FD1 to FDb store pixel data of the CT images obtained from the diagnostic scan and supplementary information. Note that the DICOM files FD1 to FDb store pixel data of CT images of different slices.
Furthermore, the DICOM files FD1 to FDb store imaging condition information indicating imaging conditions in diagnostic scans, dose information, patient information described in the examination list, and the like as supplementary information. The patient information includes updated body weight and the like. Furthermore, similar to the DICOM files FS1 to FSa, the DICOM files FD1 to FDb also store the input images 61 and protocol data as supplementary information.
The X-ray CT device 1 transmits these DICOM files to the PACS 11 via the communication network 12.
Furthermore, the operator informs the patient 40 that the examination is complete and removes the patient 40 from the table 4. Thereby, the examination of the patient 40 is completed.
In the present embodiment, the body weight of the patient 40 is deduced by generating the input image 61 based on a camera image of the patient 40 lying on the table 4 and inputting the input image 61 to the learned model 91a. Therefore, body weight information of the patient 40 at the time of examination can be obtained without using a measuring instrument to measure the body weight of the patient 40, such as a weight scale or the like, and thus it is possible to manage the dose information of the patient 40 in correspondence with the body weight of the patient 40 at the time of examination. Furthermore, the body weight of the patient 40 is deduced based on camera images acquired while the patient 40 is lying on the table 4, and therefore, there is no need for hospital staff such as technicians, nurses, and the like to measure the body weight of the patient 40 on a weight scale, which also reduces the workload of the staff.
Embodiment 1 describes an example of the patient 40 undergoing an examination in a supine posture. However, the present invention can also be applied when the patient 40 undergoes examination in a different position from the supine position. For example, if the patient 40 is expected to undergo the examination in a right lateral decubitus posture, the neural network can be trained with learning images for the right lateral decubitus posture to prepare a learned model for the right lateral decubitus position, and the learned model can be used to estimate the body weight of the patient 40 in the right lateral decubitus posture.
In embodiment 1, the operator is asked to confirm whether or not to update the body weight (step ST13). However, the confirmation step may be omitted and the deduced body weight may be automatically updated.
Note that in embodiment 1, the system 10 includes the PACS 11, but another management system for patient data and images may be used instead of the PACS 11.
In embodiment 1, body weight was deduced, but in embodiment 2, height is deduced and body weight is calculated from the deduced height and BMI.
The generating part 940 generates an input image to be input to the learned model based on a camera image. The deducing part 941 inputs the input image to the learned model to deduce the height of the patient. The calculating part 942 calculates the body weight of the patient based on the BMI and the deduced height. The confirming part 943 confirms to the operator whether or not to update the calculated body weight. The reconfiguring part 944 reconfigures a CT image based on projection data obtained from a scan.
Furthermore, one or more commands that can be executed by one or more processors are stored in the storage part 83. The one or more commands cause one or more processors to perform the following operations (b1) to (b5): (b1) Generating an input image to be input to the learned model based on a camera image (generating part 940), (b2) Inputting the input image to the learned model to deduce the height of the patient (deducing part 941), (b3) Calculating the body weight of the patient based on the BMI and the deduced height (calculating part 942), (b4) Confirming to the operator whether or not to update the body weight (confirming part 943), (b5) Reconfiguring a CT image based on projection data (reconfiguring part 944).
The processing part 84 of the console 8 can read the program stored in the storage part 83 and execute the aforementioned operations (b1) to (b5).
First, a learning phase according to embodiment 2 will be described. Note that the learning phase in embodiment 2 proceeds in the same manner as in embodiment 1.
In step ST1, a plurality of learning images to be used in the learning phase are prepared.
Furthermore, a plurality of correct answer data GI1 to GIn are also prepared. Each correct answer data GIi (1≤i≤n) is data representing the height of the human in a corresponding learning image CIi of the plurality of learning images CI1 to CIn. Each correct answer data GIi is labeled with a corresponding learning image CIi of the plurality of learning images CI1 to CIn. After preparing the learning image and correct answer data, the flow proceeds to step ST2.
In step ST2, a learned model is generated. Specifically, a neural network 92 is caused to execute learning using the learning images CI1 to CIn and the correct answer data GI1 to GIn, thereby generating a learned model 92a that deduces the height of a human included in an input image.
The learned model 92a generated thereby is stored in a storage part (for example, a storage part of a CT device or storage part of an external device connected to the CT device).
The learned model 92a obtained from the aforementioned learning phase is used to deduce the height of the patient 40 during the examination of the patient 40. An examination flow of patient 40 will be described below.
After laying the patient 40 on the table 4, the flow proceeds to step ST30 and step ST22.
In step ST30, scanning conditions are set and a scout scan is performed. When the scout scan is performed, the reconfiguring part 944 reconfigures a scout image based on projection data obtained from the scout scan.
In step ST22, the body weight of the patient 40 is determined. A method of determining the body weight of the patient 40 will be described below. Note that step ST22 has steps ST221, ST222, and ST223, and therefore, each step ST221, ST222, and ST223 is described below in order.
In step ST221, the generating part 940 generates an input image to be input to the learned model 92a by executing prescribed image processing on the camera image obtained by the camera 6.
Next, the deducing part 941 inputs the input image to the learned model 92a to deduce the height of the patient 40. After the height is deduced, the flow proceeds to step ST222.
In step ST222, the calculating part 942 calculates the BMI of the patient 40.
Next, in step ST223, the calculating part 942 calculates the body weight of the patient 40 based on the BMI calculated in step ST222 and the height deduced in step ST221. The following relational expression (1) holds between the BMI, height, and body weight.
BMI = body weight ÷ (height)² (1)
As described above, the BMI and height are known, and therefore, the body weight can be calculated from the expression (1) above. After the body weight is calculated, the flow proceeds to step ST23.
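As a worked illustration with hypothetical values, rearranging expression (1) gives body weight = BMI × (height)²:

```python
def weight_from_bmi(bmi: float, height_m: float) -> float:
    # Rearranging expression (1): body weight = BMI x height^2
    return bmi * height_m ** 2

# Illustrative values only: a deduced height of 1.70 m and a BMI of 22.0
# give 22.0 * 1.70**2 = 63.6 kg (rounded).
print(round(weight_from_bmi(22.0, 1.70), 1))  # 63.6
```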
In step ST23, the confirming part 943 confirms to the operator whether or not to update the body weight calculated in step ST22. In embodiment 2, similar to embodiment 1, the window 71 is displayed on the display part 82 to confirm whether or not to update the body weight.
In step ST24, the operator decides whether or not to update the body weight. The operator clicks the No button on the window 71 to not update the body weight, and clicks the Yes button on the window 71 to update the body weight. If the No button is clicked, the confirming part 943 determines that the body weight of the patient 40 will not be updated, and the past body weight is saved as-is. On the other hand, if the Yes button is clicked, the confirming part 943 determines that the body weight of the patient 40 is to be updated. If the body weight of the patient 40 is updated, the RIS manages the updated body weight as the body weight of the patient 40.
Furthermore, while the body weight is being updated, steps ST31 and ST32 are also performed. Steps ST31 and ST32 are the same as steps ST16 and ST17 of embodiment 1, and therefore, a description is omitted. Thereby, the examination flow of embodiment 2 is completed.
In embodiment 2, height is deduced instead of body weight, and body weight is calculated based on the deduced height. Thus, the height may be deduced and the body weight may be calculated from the BMI formula.
Embodiments 1 and 2 assume that the posture of the patient 40 is a supine position. However, depending on the examination to which the patient 40 is subjected, the patient 40 may have to be placed in a posture different from the supine position (for example, the right lateral decubitus position). Therefore, embodiment 3 describes a method that can deduce the body weight of the patient 40 with sufficient accuracy even when the posture of the patient 40 varies depending on the examination.
Note that the processing part 84 in embodiment 3 has the same functional blocks as in embodiment 1.
A learning phase according to embodiment 3 will be described below. Note that the learning phase in embodiment 3 also proceeds in the same manner as in embodiment 1. In step ST1, learning images corresponding to the following four postures are prepared.
(1) Posture: supine position. n1 number of learning images CA1 to CAn1 are prepared as learning images corresponding to the supine position. Each learning image CAi (1≤i≤n1) can be prepared by acquiring a camera image of a human lying in a supine position on a table by imaging with a camera from above the table, and executing a prescribed image processing with regard to the camera image. The learning images CA1 to CAn1 include an image of a human in a supine position in a head-first condition and an image of the human in a supine posture in a feet-first condition.
Examples of the prescribed image processing to be performed on the camera image include image cropping, standardization processing, normalization processing, and the like. Furthermore, as described above, the learning images CA1 to CAn1 include an image of a human in a supine position in a head-first condition and an image of the human in a supine posture in a feet-first condition. Therefore, the prescribed image processing includes a process of rotating a learning image by 180° in order to match the craniocaudal direction of a human. For example, the learning image CA1 is head first, while the learning image CAn1 is feet first. Therefore, the learning image CAn1 is rotated 180° such that the human craniocaudal direction in the learning image CAn1 matches the human craniocaudal direction in the learning image CA1. Thereby, the learning images CA1 to CAn1 are set up such that the human craniocaudal directions match. Furthermore, correct answer data GA1 to GAn1 are also prepared. Each correct answer data GAi (1≤i≤n1) is data representing the body weight of the human in a corresponding learning image CAi of the plurality of learning images CA1 to CAn1. Each correct answer data GAi is labeled with a corresponding learning image of the plurality of learning images CA1 to CAn1.
(2) Posture: prone position. n2 number of learning images CB1 to CBn2 are prepared as learning images corresponding to a prone position. Each learning image CBi (1≤i≤n2) can be prepared by acquiring a camera image of a human lying in a prone position on a table by imaging with a camera from above the table, and executing a prescribed image processing with regard to the camera image. The learning images CB1 to CBn2 include an image of a human in a prone position in a head-first condition and an image of the human in a prone position in a feet-first condition.
Examples of the prescribed image processing to be performed on the camera image include image cropping, standardization processing, normalization processing, and the like. Furthermore, as described above, the learning images CB1 to CBn2 include an image of a human in a prone position in a head-first condition and an image of the human in a prone position in a feet-first condition. Therefore, the prescribed image processing includes a process of rotating a learning image by 180° in order to match the craniocaudal direction of a human. For example, the learning image CB1 is head-first, but the learning image CBn2 is feet-first. Therefore, the learning image CBn2 is rotated by 180° such that the craniocaudal direction of the human in the learning image CBn2 matches the craniocaudal direction of the human in the learning image CB1.
Furthermore, correct answer data GB1 to GBn2 are also prepared. Each correct answer data GBi (1≤i≤n2) is data representing the body weight of the human in a corresponding learning image CBi of the plurality of learning images CB1 to CBn2. Each correct answer data GBi is labeled with a corresponding learning image of the plurality of learning images CB1 to CBn2.
(3) Posture: left lateral decubitus position. n3 number of learning images CC1 to CCn3 are prepared as learning images corresponding to a left lateral decubitus position. Each learning image CCi (1≤i≤n3) can be prepared by acquiring a camera image of a human lying in a left lateral decubitus posture on a table by imaging with a camera from above the table, and executing a prescribed image processing with regard to the camera image. The learning images CC1 to CCn3 include an image of a human in a left lateral decubitus posture in a head-first condition and an image of the human in a left lateral decubitus posture in a feet-first condition.
Examples of the prescribed image processing to be performed on the camera image include image cropping, standardization processing, normalization processing, and the like. Furthermore, as described above, the learning images CC1 to CCn3 include an image of a human in a left lateral decubitus posture in a head-first condition and an image of the human in a left lateral decubitus posture in a feet-first condition. Therefore, the prescribed image processing includes a process of rotating a learning image by 180° in order to match the craniocaudal direction of a human. For example, the learning image CC1 is head-first, but the learning image CCn3 is feet-first. Therefore, the learning image CCn3 is rotated by 180° such that the craniocaudal direction of the human in the learning image CCn3 matches the craniocaudal direction of the human in the learning image CC1.
Furthermore, correct answer data GC1 to GCn3 are also prepared. Each correct answer data GCi (1≤i≤n3) is data representing the body weight of the human in a corresponding learning image CCi of the plurality of learning images CC1 to CCn3. Each correct answer data GCi is labeled with a corresponding learning image of the plurality of learning images CC1 to CCn3.
(4) Posture: right lateral decubitus position. n4 number of learning images CD1 to CDn4 are prepared as learning images corresponding to a right lateral decubitus position. Each learning image CDi (1≤i≤n4) can be prepared by acquiring a camera image of a human lying in a right lateral decubitus posture on a table by imaging with a camera from above the table, and executing a prescribed image processing with regard to the camera image. The learning images CD1 to CDn4 include an image of a human in a right lateral decubitus posture in a head-first condition and an image of the human in a right lateral decubitus posture in a feet-first condition.
Examples of the prescribed image processing to be performed on the camera image include image cropping, standardization processing, normalization processing, and the like. Furthermore, as described above, the learning images CD1 to CDn4 include an image of a human in a right lateral decubitus posture in a head-first condition and an image of the human in a right lateral decubitus posture in a feet-first condition. Therefore, the prescribed image processing includes a process of rotating an image by 180° in order to match the craniocaudal direction of a human. For example, the learning image CD1 is head-first, but the learning image CDn4 is feet-first. Therefore, the learning image CDn4 is rotated by 180° such that the craniocaudal direction of the human in the learning image CDn4 matches the craniocaudal direction of the human in the learning image CD1.
Furthermore, correct answer data GD1 to GDn4 are also prepared. Each correct answer data GDi (1≤i≤n4) is data representing the body weight of the human in a corresponding learning image CDi of the plurality of learning images CD1 to CDn4. Each correct answer data GDi is labeled with a corresponding learning image of the plurality of learning images CD1 to CDn4.
Once the aforementioned learning image and correct answer data are prepared, the flow proceeds to step ST2.
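Purely as an illustration, the following sketch merges such per-posture learning images and body-weight labels into one training set, rotating each feet-first image by 180° beforehand. The data layout and names are assumptions, since the text does not define a data format.

```python
import numpy as np

def build_dataset(per_posture: dict):
    """per_posture: posture name -> list of (image, feet_first, weight_kg).

    Merges the posture-specific learning images (CA*, CB*, CC*, CD*) into
    one training set; all images are assumed to share the same shape.
    """
    images, labels = [], []
    for samples in per_posture.values():
        for img, feet_first, weight_kg in samples:
            if feet_first:
                img = np.rot90(img, 2)   # align the craniocaudal direction
            images.append(img)
            labels.append(weight_kg)
    return np.stack(images), np.asarray(labels, dtype=np.float32)
```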
In step ST2, a neural network 93 is caused to execute learning using the learning images CA1 to CAn1, CB1 to CBn2, CC1 to CCn3, and CD1 to CDn4 and the corresponding correct answer data, thereby generating a learned model 93a. The learned model 93a generated thereby is stored in a storage part (for example, a storage part of a CT device or storage part of an external device connected to the CT device).
The learned model 93a obtained from the aforementioned learning phase is used to deduce the body weight of the patient 40 during the examination of the patient 40. An examination flow of the patient 40 will be described below using an example where the posture of the patient is a right lateral decubitus position. Note that the examination flow of the patient 40 in embodiment 3 proceeds through the same steps as in embodiment 1.
In step ST11, an operator guides the patient 40 into a scan room and has the patient 40 lie on the table 4. A camera image of the patient 40 is displayed on the display panel 20 of the gantry 2. After laying the patient 40 on the table 4, the flow proceeds to step ST12.
In step ST12, the body weight of the patient 40 is deduced using the learned model 93a. A method of deducing the body weight of the patient 40 will be specifically described below.
First, an input image to be input to the learned model 93a is generated. The generating part 841 generates an input image used for body weight deducing by executing a prescribed image processing on the camera image obtained by the camera 6. Examples of the prescribed image processing include image cropping, standardization processing, normalization processing, and the like.
After generating the input image 64, the deducing part 842 deduces the body weight of the patient 40. The deducing part 842 inputs the input image to the learned model 93a. Note that in the learning phase, the learning images are set up such that the craniocaudal directions of the humans match the head-first condition, and therefore, if the orientation of the patient 40 is feet-first, the input image 64 is rotated by 180° before being input to the learned model 93a, similar to embodiment 1.
In step ST13, the confirming part 843 confirms to the operator whether or not to update the body weight deduced in step ST12. Steps ST13 and ST14 are performed in the same manner as in embodiment 1.
In step ST15, the patient 40 is moved into the bore 21 and a scout scan is performed. When the scout scan is performed, the reconfiguring part 844 reconfigures a scout image based on projection data obtained from the scout scan. The operator sets the scan range based on the scout image. Furthermore, the flow proceeds to step ST16, and a diagnostic scan is performed to acquire various CT images used for diagnosis of the patient 40. When the diagnostic scan is completed, the flow proceeds to step ST17 to perform the examination end operation. Thus, the examination of the patient 40 is completed.
In embodiment 3, postures (1) to (4) are considered as patient postures, and learning images and correct answer data corresponding to each posture are prepared to generate the learned model 93a. Therefore, the body weight of the patient 40 can be deduced even when the posture of the patient 40 is different for each examination.
In embodiment 3, the learned model 93a is generated using the learning images and correct answer data corresponding to the four postures. However, the learned model may be generated using the learning images and correct answer data corresponding to some of the four postures described above (for example, supine position and left lateral decubitus position).
Note that in embodiment 3, body weight is used as the correct answer data to generate a learned model, but instead of body weight, height may be used as the correct answer data to generate a learned model deducing height. Using the learned model, the height of the patient 40 can be deduced even when the posture of the patient 40 is different for each examination, and therefore, the body weight of the patient 40 can be calculated from expression (1) above.
Embodiment 3 indicates an example where the neural network 93 generates a learned model by executing learning using the learning images and correct answer data of postures (1) to (4). In embodiment 4, an example of generating a learned model for each posture is described.
In embodiment 4, the processing part 84 has the following functional blocks.
The selecting part 8411 selects, from a plurality of learned models, a learned model to be used for deducing the body weight of the patient. The deducing part 8421 deduces the body weight of the patient by inputting the input image generated by the generating part 841 to the learned model selected by the selecting part 8411.
Furthermore, one or more commands that can be executed by one or more processors are stored in the storage part 83. The one or more commands cause one or more processors to perform the following operations (c1) to (c5): (c1) Generating an input image to be input to the learned model based on a camera image (generating part 841), (c2) Selecting, from a plurality of learned models, a learned model to be used for deducing the body weight of the patient (selecting part 8411), (c3) Inputting the input image to the selected learned model to deduce the body weight of the patient (deducing part 8421), (c4) Confirming to the operator whether or not to update the body weight (confirming part 843), (c5) Reconfiguring a CT image based on projection data (reconfiguring part 844).
The processing part 84 of the console 8 can read the program stored in the storage part 83 and execute the aforementioned operations (c1) to (c5).
A learning phase according to embodiment 4 will be described below. Note that the learning phase in embodiment 4 also proceeds in the same manner as in embodiment 3.
In step ST1, learning images and correct answer data used in the learning phase are prepared. In embodiment 4, learning images and correct answer data are prepared for each of the postures (1) to (4) described in embodiment 3. In step ST2, a neural network is caused to execute learning for each posture, thereby generating a learned model 941a corresponding to the supine position, a learned model 942a corresponding to the prone position, a learned model 943a corresponding to the left lateral decubitus position, and a learned model 944a corresponding to the right lateral decubitus position.
The learned models 941a to 944a generated thereby are stored in a storage part (for example, a storage part of a CT device or storage part of an external device connected to the CT device).
The learned models 941a to 944a obtained from the aforementioned learning phase are used to deduce the body weight of the patient 40 during the examination of the patient 40. An examination flow of patient 40 will be described below.
In step ST52, the selecting part 8411 selects, from the learned models 941a to 944a, a learned model to be used for deducing the body weight of the patient 40.
Herein, it is assumed that the patient 40 is in the right lateral decubitus position. Therefore, the selecting part 8411 selects the learned model 944a.
Note that in order to select the learned model 944a from the learned models 941a to 944a, it is necessary to identify that the posture of the patient is a right lateral decubitus position. This identification can be performed, for example, based on information in the RIS. The RIS includes the posture of the patient 40 at the time of the examination, and therefore, the selecting part 8411 can identify the orientation of the patient and the posture of the patient from the RIS. Therefore, the selecting part 8411 can select the learned model 944a from the learned models 941a to 944a. After selecting the learned model 944a, the flow proceeds to step ST53.
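A minimal sketch of this per-posture selection is given below. The posture strings used as dictionary keys and the stub models are assumptions; the text only states that the posture is obtained from the RIS and that one learned model exists per posture.

```python
def _stub_model(name: str):
    """Placeholder for a learned model loaded from the storage part."""
    return lambda image: print(f"deducing body weight with learned model {name}")

# One learned model per posture, mirroring the text (941a to 944a).
MODELS = {
    "supine": _stub_model("941a"),
    "prone": _stub_model("942a"),
    "left lateral decubitus": _stub_model("943a"),
    "right lateral decubitus": _stub_model("944a"),
}

def select_model(ris_posture: str):
    """Select the learned model matching the posture recorded in the RIS."""
    return MODELS[ris_posture]   # e.g. "right lateral decubitus" -> 944a
```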
In step ST53, the body weight of the patient 40 is deduced using the learned model. A method of deducing the body weight of the patient 40 will be specifically described below.
First, an input image to be input to the learned model 944a is generated. The generating part 841 generates an input image used for body weight deducing by executing a prescribed image processing on the camera image obtained by the camera 6. In embodiment 4, the posture of the patient 40 is a right lateral decubitus position, similar to embodiment 3. Therefore, the generating part 841 generates the input image 64.
After generating the input image 64, the deducing part 8421 deduces the body weight of the patient 40 using the learned model 944a.
The deducing part 8421 rotates the input image 64 by 180° to obtain an input image 641, inputs the input image 641 to the learned model 944a selected in step ST52, and deduces the body weight of the patient 40. Once the body weight of the patient 40 has been deduced, the flow proceeds to step ST54. Steps ST54 to ST58 are the same as steps ST13 to ST17 in embodiment 1, and therefore, a description is omitted.
Thus, a learned model may be prepared for each posture of the patient, and the learned model corresponding to the orientation of the patient and posture of the patient during examination may be selected.
Note that in embodiment 4, the body weight is used as the correct answer data to generate a learned model. However, instead of body weight, height may be used as the correct answer data, and a learned model may be generated to deduce the height for each posture. In this case, by selecting the learned model corresponding to the posture of the patient 40, the height of the patient 40 can be deduced even when the posture of the patient 40 is different for each examination, and therefore, the body weight of the patient 40 can be calculated from expression (1) above.
Note that in embodiments 1 to 4, a learned model is generated by a neural network performing learning using a learning image of an entire human body. However, a learned model may be generated by performing learning using a learning image that includes only a portion of the human body, or by performing learning using a learning image that includes only a portion of the human body and a learning image that includes the entire human body.
In embodiments 1 to 4, methods for managing the body weight of the patient 40 imaged by an X-ray CT device were described, but the present invention can also be applied when managing the body weight of a patient imaged by a device other than an X-ray CT device (for example, an MRI device).
In embodiments 1 to 4, deducing is executed by a CT device. However, deducing may be executed on an external computer that the CT device can access through a network.
Note that in embodiments 1 to 4, a learned model was created by DL (deep learning), and this learned model was used to deduce the body weight or height of the patient. However, machine learning other than DL may be used to deduce the body weight or height. Furthermore, a camera image may be analyzed using a statistical method to obtain the body weight or height of the patient.