This application is based upon and claims the benefit of priority from Japanese Patent Application No. 2023-015439, filed on Feb. 3, 2023, the entire contents of which are incorporated herein by reference.
Disclosed embodiments relate to an ultrasonic diagnostic apparatus.
During dialysis, an ultrasonic examination is performed on blood vessels of an object, and a schema report is created for the purpose of VA (Vascular Access) management and support for puncture. The report includes detailed information such as a schema showing a mapping of the blood-vessel distribution, the depth of each blood vessel, the blood-vessel diameter, information on the inside of the blood vessel wall, and information on edema and inflammation around the blood vessel. This information contributes to accurate puncture.
The schema report needs to contain a large amount of information about blood vessels and is used repeatedly for observation over time, so accurate mapping is required in creating the schema report. Thus, there is a problem that creating a schema report takes a long time.
Hereinbelow, embodiments of an ultrasonic diagnostic apparatus will be described in detail by referring to the accompanying drawings.
In one embodiment, an ultrasonic diagnostic apparatus includes: a display; a probe; and processing circuitry. The processing circuitry acquires an appearance image of an object that is scheduled to undergo dialysis, generates a 3D image from an echo signal acquired by the probe, the 3D image containing a plurality of frame images depicting inside of the object and positional information of each of the plurality of frame images, extracts a blood-vessel image depicting a blood vessel inside the object from the 3D image, generates a composite image of the appearance image and the blood-vessel image by aligning the appearance image and the blood-vessel image, and causes the display to display the composite image.
The first embodiment relates to a configuration of an ultrasonic diagnostic apparatus 1 and processing to be executed by the ultrasonic diagnostic apparatus 1.
The ultrasonic diagnostic system 100 includes the ultrasonic diagnostic apparatus 1, a magnetic sensor unit 2, a camera 3, and a probe (i.e., ultrasonic probe) 4. The ultrasonic diagnostic apparatus 1 receives positional information of the camera 3 and the probe 4 from the magnetic sensor unit 2, receives an optical image of an object from the camera 3, receives an echo signal from inside the object via the probe 4, generates image data related to the object from the positional information, the optical image, and the echo signal, and displays the image data as an image on the display 10.
The magnetic sensor unit 2 can acquire the relative position of its magnetic sensors 22 with respect to the transmitter 21 by using magnetism. The camera 3 images the probe 4 configured to scan the object, and sends the generated image data to the ultrasonic diagnostic apparatus 1. In response to an instruction received from the ultrasonic diagnostic apparatus 1, the probe 4 generates and transmits an ultrasonic wave to the object, receives the ultrasonic wave (echo) reflected from the inside of the object's body, and sends an echo signal based on the echo to the ultrasonic diagnostic apparatus 1.
As shown in
The controller 23 receives information on the magnitude and direction of the detected magnetic field from the respective magnetic sensors 22 installed on the camera 3 and the probe 4, and calculates the relative positions of the respective magnetic sensors 22 with respect to the transmitter 21 (for example, three-dimensional coordinates in which the position of the transmitter 21 is the origin) based on the magnitude and direction of the magnetic field. Further, the controller 23 transmits the relative positions of the respective magnetic sensors 22 to the ultrasonic diagnostic apparatus 1 as positional information of each of the camera 3 and the probe 4. For example, regarding the magnetic sensors 22 within a range of about 1 meter from the transmitter 21, the distance and direction from the transmitter 21 can be measured.
The display 10 is composed of a general display output device such as a liquid crystal display or an OLED (Organic Light Emitting Diode) display, for example. The input interface 11 receives user operations. The input interface 11 is, for example, an operation console that includes operation buttons. The input interface 11 includes an input circuit. The ultrasonic transmission circuit 12 outputs an instruction to transmit an ultrasonic wave to the connected probe 4. The probe 4 transmits an ultrasonic wave toward the object in response to this instruction, and receives the echo reflected from within the object. The echo-signal reception circuit 13 acquires the echo signal from the probe 4.
The positional information acquisition circuit 14 acquires positional information of each of the camera 3 and the probe 4 from the magnetic sensor unit 2 connected to the ultrasonic diagnostic apparatus 1. The camera communication interface 15 transmits an imaging instruction to the camera 3, which is connected to the ultrasonic diagnostic apparatus 1 via a connection tool such as a USB, and receives the generated image data from the camera 3. The camera communication interface 15 includes a communication circuit.
The memory 16 stores data and reads out the stored data in response to an instruction from the processing circuitry 17. The memory 16 is a storage device such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive).
The processing circuitry 17 controls the entirety of the ultrasonic diagnostic apparatus 1. The processing circuitry 17 is a processor that generates and displays each image by reading out and executing programs stored in the memory 16.
The processing circuitry 17 implements each of the functions 18, 19, and 110 to 117 by executing the programs.
The optical image acquisition function 18 includes a function to acquire an optical image of the arm (external appearance) of the object to be subjected to dialysis. The optical image is an example of an appearance image.
The 3D-image construction function 19 includes a function to construct or acquire a 3D image, which contains a plurality of frame images depicting the inside of the object and positional information of each of the frame images, by using the positional information of the probe 4 and the ultrasonic image (echo signal) acquired from the probe 4. Note that each of the frame images is one frame of an ultrasonic image.
The blood-vessel-image extraction function 110 includes a function to extract a blood-vessel image indicative of branching and distribution of blood vessels inside the object from the 3D image.
The blood-vessel-parameter calculation function 111 uses the 3D image to calculate blood vessel parameters including blood-vessel diameter, vascular depth from the body surface, and distance to another blood vessel shown in the plurality of frame images. Each blood vessel parameter is an example of blood vessel data. The blood-vessel-parameter calculation function 111 also includes a function of using an existing technique for determining whether a blood vessel in the 3D image is an artery or a vein. For example, if the blood vessel has pulsations, the blood-vessel-parameter calculation function 111 determines this blood vessel to be an artery. If there is no pulsation, this blood vessel is determined to be a vein.
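The pulsation-based artery/vein decision above can be sketched as follows. This is a hypothetical illustration only: the disclosure states that pulsation indicates an artery, but the diameter time series and the `pulsation_threshold_mm` value are assumptions of this example, not values from the disclosure.

```python
# Hypothetical sketch of the pulsation-based artery/vein determination.
# The threshold value is an illustrative assumption.

def classify_vessel(diameters_mm, pulsation_threshold_mm=0.3):
    """Classify a vessel as 'artery' or 'vein' from its diameter over time.

    diameters_mm: per-frame measurements of the same vessel's diameter.
    A large peak-to-peak variation is taken as pulsation (artery);
    little or no variation is taken as a vein.
    """
    if not diameters_mm:
        raise ValueError("no diameter measurements supplied")
    variation = max(diameters_mm) - min(diameters_mm)
    return "artery" if variation > pulsation_threshold_mm else "vein"
```

For example, a vessel whose measured diameter swings between 3.9 mm and 4.6 mm across frames would be classified as an artery under this sketch, while one that stays near 5.0 mm would be classified as a vein.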
The puncture-position specifying function 112 includes a function to: (i) specify a recommended puncture position from blood vessels shown in the plurality of frame images on the basis of the blood vessel parameters; and (ii) put a mark indicative of the recommended puncture position in the blood-vessel image depicting inside of the object.
The recommended puncture position is the position recommended for needle puncture in dialysis.
The blood-vessel-image color-coding function 113 includes a function to color a blood-vessel image using a chromatic color or an achromatic color on the basis of whether the blood vessel is an artery or a vein, the blood-vessel diameter, and the vascular depth from the body surface.
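A minimal color-coding sketch for the function just described might look like the following. The disclosure names the inputs (artery/vein, diameter, depth) but not the palette; the RGB values and the 10 mm depth threshold here are illustrative assumptions.

```python
# Illustrative color-coding sketch; the colors and thresholds are
# assumptions, not values from the disclosure.

def vessel_color(is_artery, diameter_mm, depth_mm):
    """Return an (R, G, B) color for one vessel segment."""
    if is_artery:
        return (255, 0, 0)          # arteries in a chromatic red
    if depth_mm > 10.0:
        return (128, 128, 128)      # deep veins in an achromatic gray
    # Shallow veins: brighter blue for larger diameters.
    blue = min(255, 100 + int(diameter_mm * 20))
    return (0, 0, blue)
```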
The blood vessel rendering function 114 includes a function to generate an image of a blood vessel portion from the 3D image.
The image composition function 115 includes a function to compose the optical image of the arm of the object and the image of the blood vessel portion rendered from the 3D image by aligning both images with each other.
The report display function 116 includes a function to display the composite image on a schema report screen of the display 10. The report display function 116 further includes a function to display the recommended puncture position specified by the puncture-position specifying function 112 on the schema report screen of the display 10.
The MPR rendering function 117 includes a function to generate cross-sectional images in three orthogonal directions of an arbitrary position in the 3D image.
In the step S1, as a result of a scan that is performed by a user on the arm of the object with the use of the probe 4 equipped with one magnetic sensor 22, the ultrasonic diagnostic apparatus 1 acquires a 3D image containing a plurality of frame images depicting the inside of the arm and positional information of each frame image.
In the step S2, the ultrasonic diagnostic apparatus 1 extracts a blood-vessel image showing a blood vessel from the 3D image, specifies the recommended puncture position, and displays the recommended puncture position on the schema report screen on the display 10. The ultrasonic diagnostic apparatus 1 may cause the display 10 to display blood vessel parameters including blood-vessel diameter, vascular depth from the body surface, and distance to another blood vessel. Further, the ultrasonic diagnostic apparatus 1 may cause the display 10 to display a blood-vessel image that is color-coded depending on whether the blood vessel is an artery or a vein, the blood-vessel diameter, and the vascular depth from the body surface.
In the step S3, the ultrasonic diagnostic apparatus 1 causes the display 10 to display cross-sectional images in three orthogonal directions of the position designated by the user with the cursor on the optical image of the arm of the object.
As shown in
In response to pressing of the schema creation button, the processing circuitry 17 of the ultrasonic diagnostic apparatus 1 starts processing. The details of this processing are described below on the basis of
In the step S11, the processing circuitry 17 receives a signal indicating pressing of the schema creation button from the input interface 11. In response to reception of this signal, the processing circuitry 17 detects that the schema creation button is pressed, and performs the processing of the steps S12 and S13.
In the step S12, the optical image acquisition function 18 of the processing circuitry 17 causes the camera communication interface 15 to transmit the imaging instruction to the camera 3. The camera 3 receives the imaging instruction from the camera communication interface 15, images the arm of the object before start of the scan, and transmits the optical image of the arm to the camera communication interface 15. The optical image acquisition function 18 of the processing circuitry 17 acquires the optical image received by the camera communication interface 15, and stores the optical image in the memory 16 as image data indicating the position of the probe 4 before start of the scan.
In the step S13, the processing circuitry 17 starts acquisition of positional information and ultrasonic images. First, the processing circuitry 17 acquires the positional information of each of the camera 3 and the probe 4 from the magnetic sensor unit 2 via the positional information acquisition circuit 14. During the same time period, the processing circuitry 17 outputs an ultrasonic transmission instruction to the probe 4 via the ultrasonic transmission circuit 12 and acquires an echo signal from the probe 4 via the echo-signal reception circuit 13. The processing circuitry 17 then associates the positional information of the probe 4 with the frame image based on the acquired echo signal and stores this frame image in the memory 16. The processing circuitry 17 performs the above-described processing for the plurality of frame images to be time-sequentially generated.
When the scan is completed, the user presses the acquisition completion button on the input interface 11 of the ultrasonic diagnostic apparatus 1.
In the step S14, the processing circuitry 17 receives a signal indicating pressing of the acquisition completion button from the input interface 11. As a result, the processing circuitry 17 detects that the acquisition completion button is pressed, and then executes the processing of the steps S15 and S16.
In the step S15, the processing circuitry 17 completes acquisition of the positional information and the ultrasonic images.
In the step S16, the optical image acquisition function 18 of the processing circuitry 17 causes the camera communication interface 15 to transmit the imaging instruction to the camera 3. The camera 3 receives the imaging instruction from the camera communication interface 15, images the arm of the object after the scan, and transmits the optical image of the arm to the camera communication interface 15. The optical image acquisition function 18 of the processing circuitry 17 acquires the optical image received by the camera communication interface 15, and stores the optical image in the memory 16 as the image data indicating the position of the probe 4 after the scan.
The reason to image the object before and after acquiring the positional information and the ultrasonic images using the camera 3 is to detect the probe 4 depicted in the optical image and use the position of the probe 4 for aligning the 3D image constructed from the ultrasonic images with the optical image.
In the step S17, the processing circuitry 17 reads out the acquired positional information and the ultrasonic images of the probe 4 from the memory 16. The 3D-image construction function 19 of the processing circuitry 17 constructs a 3D image in a voxel format. The positional information acquired from the magnetic sensor unit 2 includes the position of each magnetic sensor 22 (X, Y, Z) and the tilt angle (i.e., inclination) of each magnetic sensor 22 (θx, θy, θz). The position of each magnetic sensor 22 is, for example, the relative position of the magnetic sensor 22 with respect to the transmitter 21. The tilt angle of each magnetic sensor 22 is, for example, the rotation angle of the magnetic sensor 22 with respect to the reference axis of the transmitter 21. The 3D-image construction function 19 constructs the 3D image that includes the ultrasonic images of all the frames from the positional information associated with the ultrasonic images of the respective frames.
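The voxel-format construction in the step S17 can be sketched as placing each tracked 2D frame into a shared voxel grid. This is an illustrative sketch only: the disclosure states that each frame carries a position (X, Y, Z) and tilt angles (θx, θy, θz), but the frame geometry, the voxel pitch, and the Rz·Ry·Rx rotation convention below are assumptions of this example.

```python
# Illustrative sketch of scattering tracked 2D frames into a 3D voxel
# volume. The rotation order and voxel pitch are assumptions.
import numpy as np

def rotation_matrix(tx, ty, tz):
    """Rotation about x, then y, then z (angles in radians)."""
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx

def insert_frame(volume, frame, position, tilt, pitch_mm=0.5):
    """Scatter one frame's pixels into the voxel volume.

    volume:   3D numpy array of voxels.
    frame:    2D numpy array of echo intensities.
    position: (X, Y, Z) of the frame origin in mm.
    tilt:     (theta_x, theta_y, theta_z) in radians.
    """
    r = rotation_matrix(*tilt)
    h, w = frame.shape
    # Pixel grid in the frame plane (x across, y down, z = 0).
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel() * pitch_mm, ys.ravel() * pitch_mm,
                    np.zeros(h * w)])
    # Transform frame-plane points into world (voxel) coordinates.
    world = (r @ pts) + np.asarray(position, dtype=float).reshape(3, 1)
    idx = np.round(world / pitch_mm).astype(int)
    # Keep only the points that fall inside the volume.
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape).reshape(3, 1)),
                axis=0)
    volume[idx[0, ok], idx[1, ok], idx[2, ok]] = frame.ravel()[ok]
    return volume
```

Repeating `insert_frame` over all time-sequential frames, each with its associated positional information, would yield a voxel volume containing the ultrasonic images of all the frames.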
In the step S21, the blood-vessel-image extraction function 110 of the processing circuitry 17 extracts the blood vessel portion (blood-vessel image) from the 3D image. The blood-vessel-image extraction function 110 acquires color data indicating the state of the blood flow as the blood vessel portion, for example. Further, the blood-vessel-parameter calculation function 111 determines whether each blood vessel included in the extracted blood vessel portion is an artery or a vein. Since the puncture is performed on a vein, the location of the vein must be specified.
In the step S22, the blood-vessel-parameter calculation function 111 of the processing circuitry 17 calculates the depth of the vein, the diameter of the vein, and the distance between the vein and another blood vessel in each frame image included in the voxel-format 3D image. Further, the puncture-position specifying function 112 specifies the recommended puncture position on the basis of these values (blood-vessel parameters), and puts a mark indicating the recommended puncture position on the blood vessel portion in the 3D image. The details are described below.
As shown in
When there is only one blood vessel in the frame image and no other blood vessels, the blood-vessel-parameter calculation function 111 sets the distance between blood vessels to 0. When a plurality of blood vessels are depicted in the frame image, the blood-vessel-parameter calculation function 111 treats the shortest such distance as the distance between blood vessels.
The puncture-position specifying function 112 specifies the top three determination values in descending order among the calculated values.
As shown in
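The scoring and top-three selection described in the step S22 can be sketched as follows. The disclosure specifies the inputs (vein diameter, vascular depth, distance to another blood vessel) and the selection of the top three determination values in descending order, but not the scoring formula; the weights below are illustrative assumptions.

```python
# Hypothetical scoring sketch for the "determination value"; the
# weights are assumptions, not values from the disclosure.

def determination_value(diameter_mm, depth_mm, distance_mm,
                        w_diam=1.0, w_depth=0.5, w_dist=0.3):
    """Larger diameter and inter-vessel distance raise the score;
    greater depth from the body surface lowers it."""
    return w_diam * diameter_mm - w_depth * depth_mm + w_dist * distance_mm

def top_three_positions(candidates):
    """candidates: list of (frame_index, diameter, depth, distance).

    Returns the three frame indices with the highest determination
    values, in descending order of score.
    """
    scored = sorted(candidates,
                    key=lambda c: determination_value(c[1], c[2], c[3]),
                    reverse=True)
    return [c[0] for c in scored[:3]]
```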
In the step S23, the blood-vessel-image color-coding function 113 color-codes the blood-vessel portion depicted in the 3D image according to: blood-vessel diameter, vascular depth from the body surface, and whether the blood vessel is an artery or a vein.
In the step S24, the blood vessel rendering function 114 of the processing circuitry 17 renders the blood vessel portions from the 3D image. First, the blood vessel rendering function 114 calculates the difference between the viewing direction of the optical image generated by the camera 3 and the orientation of the 3D image. The blood vessel rendering function 114 then renders the blood vessel portions in the 3D image by rotating the 3D image by this difference in such a manner that the line of sight looks down from directly above the center of the 3D image (i.e., a 3D image as viewed from directly above the center is obtained). Hereinafter, the rendered 3D image of the blood vessel portion(s) is referred to as "the rendered image". At this time, the blood vessel rendering function 114 colors one or more marks indicating the recommended puncture position(s) depicted in the rendered image. The coloring of the mark(s) is performed on the basis of predetermined colors assigned according to the recommended order of the recommended puncture positions, for example.
Under the above-described conditions, the angle by which the 3D image is rotated so as to align with the camera 3 is calculated for each axis by Expression 2 below.
In other words, the blood vessel portion(s) in the 3D image may be rendered after rotating the viewpoint by (−θx, −θy, −θz) from directly above the 3D image.
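The viewpoint rotation by (−θx, −θy, −θz) described above can be sketched as composing three axis rotations with negated angles. This is a minimal illustration under assumptions: the disclosure does not give the composition order, so the Rz·Ry·Rx convention below is an assumption of this example.

```python
# Minimal sketch of rotating the straight-down line of sight by
# (-theta_x, -theta_y, -theta_z); the composition order is an assumption.
import numpy as np

def axis_rotation(axis, t):
    """Rotation matrix about a single axis ('x', 'y', or 'z')."""
    c, s = np.cos(t), np.sin(t)
    m = {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
         "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}
    return np.array(m[axis])

def view_rotation(theta_x, theta_y, theta_z):
    """Rotation applied to the view from directly above the 3D image so
    that the rendered blood-vessel image matches the camera's direction."""
    return (axis_rotation("z", -theta_z)
            @ axis_rotation("y", -theta_y)
            @ axis_rotation("x", -theta_x))
```

With zero tilt angles the view is left unchanged, and a pure rotation about one axis is exactly undone by the corresponding camera tilt.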
In the step S25, the image composition function 115 of the processing circuitry 17 composes the optical image generated by the camera 3 and the rendered image.
Next, the image composition function 115 performs scaling and alignment of the rendered image such that both ends of the rendered image overlap the positions of the probe 4 depicted in the optical images before start of the scan and after completion of the scan. After performing the scaling and alignment, the report display function 116 displays the rendered image such that the rendered image is superimposed on the optical image, as shown in
The ultrasonic diagnostic apparatus 1 obtains the latest positional information of the probe 4 from the magnetic sensor unit 2. Further, the ultrasonic diagnostic apparatus 1 associates the specific portion in the optical image with the actual positional information, and stores the association in the memory 16. As a result, the positional information of the specific portion of the optical image can be specified.
After acquiring the ultrasonic images, the image composition function 115 composes the optical image and the rendered image by using the positional information of the specific portion in the optical image and the positional information of each frame image in the 3D image (i.e., positional information of each point in
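The scaling-and-alignment step above can be sketched as fitting a per-axis scale and offset so that the two ends of the rendered image land on the probe positions detected in the optical images before and after the scan. The use of 2D image points and an axis-aligned scale-plus-shift mapping is an assumption of this example, not a transform specified by the disclosure.

```python
# Hedged sketch of mapping the rendered image onto the optical image
# using two reference points (probe position before and after the scan).
# Assumes the two points differ on every axis (no division by zero).

def fit_scale_and_offset(src_a, src_b, dst_a, dst_b):
    """Per-axis scale and offset mapping src_a -> dst_a and src_b -> dst_b."""
    scale = tuple((d2 - d1) / (s2 - s1)
                  for s1, s2, d1, d2 in zip(src_a, src_b, dst_a, dst_b))
    offset = tuple(d - k * s for s, d, k in zip(src_a, dst_a, scale))
    return scale, offset

def apply_transform(point, scale, offset):
    """Map one rendered-image point into optical-image coordinates."""
    return tuple(k * p + o for p, k, o in zip(point, scale, offset))
```

For instance, if the rendered image's ends are at (0, 0) and (10, 20) and the probe was detected at (100, 50) and (200, 250) in the optical image, every intermediate blood-vessel point can be mapped with the fitted scale and offset before superimposition.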
In the step S26, the report display function 116 displays the composite image and a list of the recommended puncture positions on the schema report screen of the display 10.
The composite image is displayed on the right side of
The lower left side of
Next, details of the processing of displaying the cross-sectional images in the step S3 of
Accordingly, the user designates a position in the optical image by using the input interface 11. The MPR rendering function 117 acquires the position in the optical image designated by the user from the input interface 11, and converts the acquired position into the position in the 3D image corresponding to the acquired position. Further, the MPR rendering function 117 generates the cross-sectional images of the 3D image in three orthogonal directions of the converted position. The report display function 116 displays the generated cross-sectional images on the schema report screen of the display 10. Note that image processing such as movement of the cross-sectional position in the 3D image and rotation of the cross-section can be achieved by using an appropriate image processing method similar to known 3D operations.
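The MPR step above can be sketched as extracting the three axis-aligned planes of the voxel volume through the converted position. numpy slicing here stands in for whatever sectioning the apparatus actually performs; treating the three orthogonal directions as the volume's own axes is an assumption of this example.

```python
# Minimal sketch of generating cross-sectional images in three
# orthogonal directions at a designated voxel position.
import numpy as np

def orthogonal_sections(volume, x, y, z):
    """Return the three axis-aligned planes of `volume` through (x, y, z)."""
    return (volume[x, :, :],   # plane normal to the x axis
            volume[:, y, :],   # plane normal to the y axis
            volume[:, :, z])   # plane normal to the z axis
```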
The second embodiment relates to a configuration in which the position and tilt angle of the camera 3 are fixed. When the position and tilt angle of the camera 3 are fixed, for example, when the relative position of the camera 3 with respect to the transmitter 21 and the rotation angle of the camera 3 with respect to the reference axis of the transmitter 21 are known in advance, there is no need to install the magnetic sensor 22 on the camera 3.
Referring to
The third embodiment relates to a modification of a configuration of displaying the blood-vessel image. Although a description has been given of the configuration in which the blood-vessel image and the optical image of the arm of the object are composed and the composite image is displayed on the display 10 in the first embodiment, the configuration of superimposing the blood-vessel image on the image of the arm and displaying the superimposed image is not limited to the first embodiment and may be achieved by another embodiment.
For example, when a user wears goggles (or a head-mounted display) and looks at the arm of the object, the ultrasonic diagnostic system may display a blood-vessel image, which depicts the inside of the arm of the object and includes a mark indicating the recommended puncture position, on the goggle lenses. In this case, the field of vision of the user wearing the goggles includes the arm of the object and the blood-vessel image depicting the inside of the arm. This enables the user to check the recommended puncture position while looking at the actual arm of the object, and thus, it becomes easier for the user to perform the puncture procedure.
According to at least one embodiment as described above, information on the blood vessels of the object can be easily referred to.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the invention. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the invention. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.