The present disclosure generally relates to intravascular ultrasound (IVUS) imaging systems. Particularly, but not exclusively, the present disclosure relates to correlating frames in a first series of IVUS images with frames in a second series of IVUS images.
Ultrasound devices insertable into patients have proven diagnostic capabilities for a variety of diseases and disorders. For example, intravascular ultrasound (IVUS) imaging systems have been used as an imaging modality for diagnosing blocked blood vessels and providing information to aid medical practitioners in selecting and placing stents and other devices to restore or increase blood flow.
IVUS imaging systems include a control module (with a pulse generator, image acquisition and processing components, and a monitor), a catheter, and a transducer disposed in the catheter. The transducer-containing catheter is positioned in a lumen or cavity within, or in proximity to, a region to be imaged, such as a blood vessel wall or patient tissue in proximity to a blood vessel wall. The pulse generator in the control module generates electrical pulses that are delivered to the transducer and transformed to acoustic pulses that are transmitted through patient tissue. The patient tissue (or other structure) reflects the acoustic pulses, and reflected pulses are absorbed by the transducer and transformed to electric pulses. The transformed electric pulses are delivered to the image acquisition and processing components and converted into images displayable on the monitor.
Often, physicians will capture a series of IVUS images at different stages of treatment. However, conventional tools and systems do not allow the physician to compare these different series of IVUS images beyond providing a select set of measurements taken from the images. Thus, there is a need to correlate IVUS images of the same vessel taken at different times and provide a graphical interface to display these images in relation to each other.
Machine learning (ML) is the study of computer algorithms that improve through experience. Typically, ML algorithms build a model based on sample data, referred to as training data. The model can be used to infer (e.g., make predictions or decisions) without explicitly being programmed to do so. As will be appreciated, the quality of the inference a model makes is dependent upon the training data. Thus, there is a need to provide a larger and more complete corpus of knowledge with which these ML models are trained.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to necessarily identify key features or essential features of the claimed subject matter, nor is it intended as an aid in determining the scope of the claimed subject matter.
In general, the present disclosure is provided to process raw IVUS images, automatically detect lumen and vessel borders, and identify regions of interest, or more particularly, starting and ending points between which frames of interest in a series of IVUS images are included.
In some embodiments, the disclosure can be implemented as a method for a computing device. The method can comprise receiving, by a processor, a first series of intravascular ultrasound (IVUS) images of a vessel of a patient, the first series of IVUS images comprising a first plurality of frames; receiving, by the processor, a second series of intravascular ultrasound (IVUS) images of the vessel of the patient, the second series of IVUS images comprising a second plurality of frames; determining, by the processor, an offset for the first plurality of frames based at least in part on the second plurality of frames; applying, by the processor, the offset to the first plurality of frames to generate an offset series of IVUS images; and generating, by the processor, a graphical user interface (GUI), the GUI comprising indications of the offset series of IVUS images and the second series of IVUS images.
In further embodiments of the method, determining the offset for the first plurality of frames comprises: identifying a frame of the first plurality of frames comprising a vessel fiducial; identifying a frame of the second plurality of frames comprising the vessel fiducial; and determining the offset for the first plurality of frames that when applied aligns the frame of the first plurality of frames comprising the vessel fiducial with the frame of the second plurality of frames comprising the vessel fiducial.
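By way of a non-limiting illustration, the fiducial-based offset determination described above can be sketched as follows. The frame indices and the per-frame pullback spacing are illustrative assumptions; a real system would derive the spacing from pullback speed and frame rate.

```python
def fiducial_offset(fiducial_idx_run1, fiducial_idx_run2, frame_spacing_mm=0.1):
    """Longitudinal offset (in frames and millimeters) that, when applied
    to run 1, aligns its fiducial frame with the fiducial frame of run 2.

    frame_spacing_mm is an illustrative assumption (uniform pullback)."""
    offset_frames = fiducial_idx_run2 - fiducial_idx_run1
    return offset_frames, offset_frames * frame_spacing_mm

# Shifting run 1 by the returned offset aligns its frame 42 with frame 57 of run 2.
offset_frames, offset_mm = fiducial_offset(42, 57)
```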
In further embodiments of the method, the offset comprises a first offset and a second offset and wherein determining the offset for the first plurality of frames comprises: identifying a first frame of the first plurality of frames comprising a first vessel fiducial; identifying a first frame of the second plurality of frames comprising the first vessel fiducial; determining the first offset for the first plurality of frames that when applied to a first segment of the first plurality of frames aligns the first frame of the first plurality of frames with the first frame of the second plurality of frames; identifying a second frame of the first plurality of frames comprising a second vessel fiducial; identifying a second frame of the second plurality of frames comprising the second vessel fiducial; and determining the second offset for the first plurality of frames that when applied to a second segment of the first plurality of frames different than the first segment, aligns the second frame of the first plurality of frames with the second frame of the second plurality of frames, wherein the second offset is different from the first offset.
In further embodiments of the method, the first offset comprises an offset distance and the second offset comprises an offset angle or wherein the first offset comprises an offset distance or an offset angle and the second offset comprises an offset distance and an offset angle.
In further embodiments of the method, identifying the frame of the first plurality of frames comprising the vessel fiducial and wherein identifying the frame of the second plurality of frames comprising the vessel fiducial comprises: executing a machine learning (ML) model to infer the frame of the first plurality of frames comprising the vessel fiducial; and executing the ML model to infer the frame of the second plurality of frames comprising the vessel fiducial.
In further embodiments of the method, the vessel fiducial is one of a lumen geometry, a vessel geometry, a side branch location, a calcium morphology, a plaque distribution, or a guide catheter position.
In further embodiments of the method, determining the offset for the first plurality of frames comprises: calculating a correlation score for each frame of the first plurality of frames based on a frame-by-frame correlation with the second plurality of frames; identifying a frame of the first plurality of frames having the highest correlation score and a frame of the second plurality of frames associated with the highest correlation score; and determining the offset for the first plurality of frames that when applied aligns the frame of the first plurality of frames with the highest correlation score with the frame of the second plurality of frames associated with the highest correlation score.
In further embodiments of the method, the offset is an offset distance and wherein the method further comprises: calculating a correlation score for each frame of the first plurality of frames based on an angular offset frame-by-frame correlation with the second plurality of frames; identifying a frame of the first plurality of frames having the highest correlation score and a frame or a rotated version of the frame of the second plurality of frames associated with the highest correlation score; and determining an offset angle for the first plurality of frames based on the frame or the rotated version of the frame of the second plurality of frames associated with the highest correlation score that when applied aligns the frame of the first plurality of frames with the highest correlation score with the frame of the second plurality of frames associated with the highest correlation score, wherein the offset series of IVUS images is generated by applying the offset distance and the offset angle to the first plurality of frames.
In further embodiments of the method, determining the offset for the first plurality of frames comprises: calculating a correlation score for each frame of the first plurality of frames based on an angular offset frame-by-frame correlation with the second plurality of frames; identifying a frame of the first plurality of frames having the highest correlation score and a frame or a rotated version of the frame of the second plurality of frames associated with the highest correlation score; and determining the offset for the first plurality of frames based on the frame or the rotated version of the frame of the second plurality of frames associated with the highest correlation score that when applied aligns the frame of the first plurality of frames with the highest correlation score with the frame of the second plurality of frames associated with the highest correlation score.
In further embodiments of the method, the offset for the first plurality of frames is a distance offset, an angle offset, or a distance and an angle offset.
In further embodiments, the method can comprise receiving the second series of IVUS images from an intravascular imaging device; and receiving the first series of IVUS images from a memory storage device.
In further embodiments of the method, the first series of IVUS images are captured during a pre-percutaneous coronary intervention (PCI) procedure.
In further embodiments of the method, the second series of IVUS images are captured during a peri-PCI or post-PCI procedure.
In further embodiments of the method, the GUI comprises a longitudinal view of the first series of IVUS images and the second series of IVUS images and wherein the longitudinal views are set against a common scale.
With some embodiments, the disclosure can be implemented as an apparatus for an intravascular imaging system. The apparatus can comprise a processor; and a memory device coupled to the processor, the memory device comprising instructions executable by the processor, which instructions when executed by the processor cause the intravascular imaging system to implement any of the methods outlined herein.
With some embodiments, the disclosure can be implemented as at least one machine readable storage device. The at least one machine readable storage device can comprise a plurality of instructions that in response to being executed by a processor of an intravascular ultrasound (IVUS) imaging system cause the processor to implement any of the methods outlined herein.
With some embodiments, the disclosure can be implemented as an apparatus for an intravascular imaging system. The apparatus can comprise a display; a processor coupled to the display; and a memory device coupled to the processor, the memory device comprising instructions executable by the processor, which instructions when executed by the processor cause the intravascular imaging system to receive a first series of intravascular ultrasound (IVUS) images of a vessel of a patient, the first series of IVUS images comprising a first plurality of frames; receive a second series of intravascular ultrasound (IVUS) images of the vessel of the patient, the second series of IVUS images comprising a second plurality of frames; determine an offset for the first plurality of frames based at least in part on the second plurality of frames; apply the offset to the first plurality of frames to generate an offset series of IVUS images; generate a graphical user interface (GUI), the GUI comprising indications of the offset series of IVUS images and the second series of IVUS images; and display the GUI on the display.
In further embodiments of the apparatus, the instructions further cause the intravascular imaging system to identify a frame of the first plurality of frames comprising a vessel fiducial; identify a frame of the second plurality of frames comprising the vessel fiducial; and determine the offset for the first plurality of frames that when applied aligns the frame of the first plurality of frames comprising the vessel fiducial with the frame of the second plurality of frames comprising the vessel fiducial.
In further embodiments of the apparatus, the offset comprises a first offset and a second offset and wherein the instructions further cause the intravascular imaging system to: identify a first frame of the first plurality of frames comprising a first vessel fiducial; identify a first frame of the second plurality of frames comprising the first vessel fiducial; determine the first offset for the first plurality of frames that when applied to a first segment of the first plurality of frames aligns the first frame of the first plurality of frames with the first frame of the second plurality of frames; identify a second frame of the first plurality of frames comprising a second vessel fiducial; identify a second frame of the second plurality of frames comprising the second vessel fiducial; and determine the second offset for the first plurality of frames that when applied to a second segment of the first plurality of frames different than the first segment, aligns the second frame of the first plurality of frames with the second frame of the second plurality of frames, wherein the second offset is different from the first offset.
In further embodiments of the apparatus, the first offset comprises an offset distance and the second offset comprises an offset angle or wherein the first offset comprises an offset distance or an offset angle and the second offset comprises an offset distance and an offset angle.
In further embodiments of the apparatus, the instructions further cause the intravascular imaging system to: execute a machine learning (ML) model to infer the frame of the first plurality of frames comprising the vessel fiducial; and execute the ML model to infer the frame of the second plurality of frames comprising the vessel fiducial.
In further embodiments of the apparatus, the vessel fiducial is one of a lumen geometry, a vessel geometry, a side branch location, a calcium morphology, a plaque distribution, or a guide catheter position.
With some embodiments, the disclosure can be implemented as at least one machine readable storage device. The at least one machine readable storage device can comprise a plurality of instructions that in response to being executed by a processor of an intravascular ultrasound (IVUS) imaging system cause the processor to receive a first series of intravascular ultrasound (IVUS) images of a vessel of a patient, the first series of IVUS images comprising a first plurality of frames; receive a second series of intravascular ultrasound (IVUS) images of the vessel of the patient, the second series of IVUS images comprising a second plurality of frames; determine an offset for the first plurality of frames based at least in part on the second plurality of frames; apply the offset to the first plurality of frames to generate an offset series of IVUS images; generate a graphical user interface (GUI), the GUI comprising indications of the offset series of IVUS images and the second series of IVUS images; and send the GUI to a display coupled to the IVUS imaging system.
In further embodiments of the at least one machine readable storage device, execution of the instructions further causes the IVUS imaging system to calculate a correlation score for each frame of the first plurality of frames based on a frame-by-frame correlation with the second plurality of frames; identify a frame of the first plurality of frames having the highest correlation score and a frame of the second plurality of frames associated with the highest correlation score; and determine the offset for the first plurality of frames that when applied aligns the frame of the first plurality of frames with the highest correlation score with the frame of the second plurality of frames associated with the highest correlation score.
In further embodiments of the at least one machine readable storage device, the offset is an offset distance and execution of the instructions further causes the IVUS imaging system to calculate a correlation score for each frame of the first plurality of frames based on an angular offset frame-by-frame correlation with the second plurality of frames; identify a frame of the first plurality of frames having the highest correlation score and a frame or a rotated version of the frame of the second plurality of frames associated with the highest correlation score; and determine an offset angle for the first plurality of frames based on the frame or the rotated version of the frame of the second plurality of frames associated with the highest correlation score that when applied aligns the frame of the first plurality of frames with the highest correlation score with the frame of the second plurality of frames associated with the highest correlation score, wherein the offset series of IVUS images is generated by applying the offset distance and the offset angle to the first plurality of frames.
In further embodiments of the at least one machine readable storage device, execution of the instructions further causes the IVUS imaging system to calculate a correlation score for each frame of the first plurality of frames based on an angular offset frame-by-frame correlation with the second plurality of frames; identify a frame of the first plurality of frames having the highest correlation score and a frame or a rotated version of the frame of the second plurality of frames associated with the highest correlation score; and determine the offset for the first plurality of frames based on the frame or the rotated version of the frame of the second plurality of frames associated with the highest correlation score that when applied aligns the frame of the first plurality of frames with the highest correlation score with the frame of the second plurality of frames associated with the highest correlation score.
In further embodiments of the at least one machine readable storage device, the offset for the first plurality of frames is a distance offset, an angle offset, or a distance and an angle offset.
To easily identify the discussion of any element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.
The foregoing has broadly outlined the features and technical advantages of the present disclosure such that the following detailed description of the disclosure may be better understood. It is to be appreciated by those skilled in the art that the embodiments disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. The novel features of the disclosure, both as to its organization and operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description and is not intended as a definition of the limits of the present disclosure.
As noted, the present disclosure relates to IVUS images and lumens (e.g., vessels) of patients and to processing an IVUS recording, or said differently, processing a series of IVUS images. As such, an example IVUS imaging system, patient vessel, and series of IVUS images are described.
Suitable IVUS imaging systems include, but are not limited to, one or more transducers disposed on a distal end of a catheter configured and arranged for percutaneous insertion into a patient.
With some embodiments, mechanical energy from the drive unit 110 can be used to drive an imaging core (also not shown) disposed in the catheter 102. In at least some embodiments, electric signals transmitted from the one or more transducers may be input to the processor 106 for processing. In at least some embodiments, the processed electric signals from the one or more transducers can be used to form a series of images, described in more detail below. For example, a scan converter can be used to map scan line samples (e.g., radial scan line samples, or the like) to a two-dimensional Cartesian grid, which can be used as the basis for a series of IVUS images that can be displayed for a user.
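The scan conversion described above can be sketched as follows. This is a minimal nearest-neighbour illustration, assuming the A-line samples arrive as a (number of angles x number of samples) array; production scan converters typically interpolate between scan lines and samples.

```python
import numpy as np

def scan_convert(scan_lines, out_size=256):
    """Map radial scan line samples onto a two-dimensional Cartesian grid.

    scan_lines: array of shape (n_angles, n_samples), one row per A-line.
    Returns an (out_size, out_size) image; pixels outside the imaging
    circle are set to zero. Nearest-neighbour lookup only (a sketch)."""
    n_angles, n_samples = scan_lines.shape
    half = out_size // 2
    y, x = np.mgrid[-half:half, -half:half]
    r = np.sqrt(x**2 + y**2)                      # radius of each pixel
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)   # angle in [0, 2*pi)
    angle_idx = np.minimum((theta / (2 * np.pi) * n_angles).astype(int),
                           n_angles - 1)
    sample_idx = np.minimum((r / half * n_samples).astype(int), n_samples - 1)
    image = scan_lines[angle_idx, sample_idx]
    image[r > half] = 0                           # outside the imaging circle
    return image
```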
In at least some embodiments, the processor 106 may also be used to control the functioning of one or more of the other components of the control system 104. For example, the processor 106 may be used to control at least one of the frequency or duration of the electrical pulses transmitted from the pulse generator 108, or the rotation rate of the imaging core by the drive unit 110. Additionally, where IVUS imaging system 100 is configured for automatic pullback, the drive unit 110 can control the velocity and/or length of the pullback.
The present disclosure provides that IVUS runs from different time frames can be aligned on a frame-by-frame basis and that a graphical user interface correlating the IVUS runs can be provided, allowing the physician to view the correlated IVUS runs and gain a more direct understanding of the treatment's effect on the vessel, for example, by observing differences in lesion properties in a side-by-side comparison.
With some embodiments, IVUS images correlation and visualization system 400 could be implemented as part of control system 104. Alternatively, control system 104 could be implemented as part of IVUS images correlation and visualization system 400. As depicted, IVUS images correlation and visualization system 400 includes a computing device 402. Optionally, IVUS images correlation and visualization system 400 includes IVUS imaging system 100 and display 404.
It is noted that although the disclosure frequently uses IVUS as an exemplary intravascular imaging modality, the disclosure could be applied to longitudinally and/or angularly align frames from different runs captured using any of a variety of other intravascular imaging modalities, such as optical coherence tomography (OCT).
Computing device 402 can be any of a variety of computing devices. In some embodiments, computing device 402 can be incorporated into and/or implemented by a console of display 404. With some embodiments, computing device 402 can be a workstation or server communicatively coupled to IVUS imaging system 100 and/or display 404. With still other embodiments, computing device 402 can be provided by a cloud-based computing device, such as a computing-as-a-service system accessible over a network (e.g., the Internet, an intranet, a wide area network, or the like). Computing device 402 can include processor 406, memory 408, input and/or output (I/O) devices 410, network interface 412, and IVUS imaging system acquisition circuitry 414.
The processor 406 may include circuitry or processor logic, such as, for example, any of a variety of commercial processors. In some examples, processor 406 may include multiple processors, a multi-threaded processor, a multi-core processor (whether the multiple cores coexist on the same or separate dies), and/or a multi-processor architecture of some other variety by which multiple physically separate processors are in some way linked. Additionally, in some examples, the processor 406 may include graphics processing portions and may include dedicated memory, multiple-threaded processing and/or some other parallel processing capability. In some examples, the processor 406 may be an application specific integrated circuit (ASIC) or a field programmable gate array (FPGA).
The memory 408 may include logic, a portion of which includes arrays of integrated circuits, forming non-volatile memory to persistently store data, or a combination of non-volatile memory and volatile memory. It is to be appreciated that the memory 408 may be based on any of a variety of technologies. In particular, the arrays of integrated circuits included in memory 408 may be arranged to form one or more types of memory, such as, for example, dynamic random access memory (DRAM), NAND memory, NOR memory, or the like.
I/O devices 410 can be any of a variety of devices to receive input and/or provide output. For example, I/O devices 410 can include, a keyboard, a mouse, a joystick, a foot pedal, a display, a touch enabled display, a haptic feedback device, an LED, or the like.
Network interface 412 can include logic and/or features to support a communication interface. For example, network interface 412 may include one or more interfaces that operate according to various communication protocols or standards to communicate over direct or network communication links. Direct communications may occur via use of communication protocols or standards described in one or more industry standards (including progenies and variants). For example, network interface 412 may facilitate communication over a bus, such as, for example, peripheral component interconnect express (PCIe), non-volatile memory express (NVMe), universal serial bus (USB), system management bus (SMBus), serial attached SCSI (SAS) (e.g., serial attached small computer system interface) interfaces, serial AT attachment (SATA) interfaces, or the like. Additionally, network interface 412 can include logic and/or features to enable communication over a variety of wired or wireless network standards (e.g., 802.11 communication standards). For example, network interface 412 may be arranged to support wired communication protocols or standards, such as, Ethernet, or the like. As another example, network interface 412 may be arranged to support wireless communication protocols or standards, such as, for example, Wi-Fi, Bluetooth, ZigBee, LTE, 5G, or the like.
The IVUS imaging system acquisition circuitry 414 may include custom manufactured or specially programmed circuitry configured to receive signals from, or exchange signals with, IVUS imaging system 100, including indications of an IVUS run, a series of IVUS images, or a frame or frames of IVUS images.
Memory 408 can include instructions 416. During operation, processor 406 can execute instructions 416 to cause computing device 402 to receive (e.g., from IVUS imaging system 100, or the like) a series of IVUS images from multiple IVUS runs of a vessel and store the recording as IVUS images 418a, IVUS images 418b, etc. in memory 408. For example, processor 406 can execute instructions 416 to receive information elements from IVUS imaging system 100 comprising indications of IVUS images captured by catheter 102 while being pulled back from distal end 204 to proximal end 206, which images comprise indications of the anatomy and/or structure of vessel 202 including vessel walls and plaque. Further, it is to be appreciated that processor 406 can execute instructions 416 to receive IVUS images from multiple runs through a vessel (e.g., pre-PCI, post-PCI, at different times, or the like). It is to be appreciated that IVUS images 418a and 418b can be stored in a variety of image formats or even non-image formats or data structures that comprise indications of vessel 202. Further, IVUS images 418a and 418b include several “frames” or individual images that, when represented co-linearly, can be used to form an image of the vessel 202, such as, for example, as represented by IVUS images 300a.
The present disclosure provides to correlate IVUS images 418a and IVUS images 418b on a frame-by-frame basis and present a correlated view of the images in a graphical user interface. With some examples, processor 406 can execute instructions 416 to identify IVUS run frame mapping 420 from IVUS images 418a and IVUS images 418b using a machine learning (ML) model to infer the mapping (e.g., see
Turning now to
In the example where ML is used to generate IVUS run frame mapping 420, processor 406 can execute instructions 416 to execute or “run” ML model 422 with IVUS images 418a and IVUS images 418b as inputs to generate IVUS run frame mapping 420. ML model 422 can infer IVUS run frame mapping 420 from IVUS images 418a and IVUS images 418b. Memory 408 can store a copy of ML model 422 and processor 406 can execute ML model 422 to generate IVUS run frame mapping 420. In general, ML model 422 can be any of a variety of ML models. Examples of ML models and even training an ML model as contemplated herein are provided below.
With some embodiments, the disclosure can be provided to align IVUS runs based on a correlation, for each frame of one IVUS run, with all frames of another IVUS run. Processor 406 could execute instructions 416 to determine an IVUS run frame-by-frame correlation 426. For example, processor 406 could execute instructions 416 to iterate through each frame of IVUS images 418a and calculate (e.g., using fiducials, using ML, using background subtraction, using cross-correlation, or the like) the correlation with all frames of IVUS images 418b. Subsequently, processor 406 can execute instructions 416 to identify, for each frame in IVUS images 418a, the most closely correlated frame in IVUS images 418b. For example,
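One way to realize the frame-by-frame correlation described above is a normalized cross-correlation (Pearson) score, sketched below. This is one illustrative choice among the options listed (fiducials, ML, background subtraction, and so on), not the sole implementation.

```python
import numpy as np

def best_match(frame, other_run):
    """Return (index, score) of the frame in other_run most closely
    correlated with frame, scored by normalized cross-correlation.

    A sketch: each frame is standardized (zero mean, unit variance) and
    scored by the mean of the elementwise product, i.e., the Pearson
    correlation coefficient."""
    a = frame.ravel().astype(float)
    a = (a - a.mean()) / (a.std() + 1e-12)
    scores = []
    for candidate in other_run:
        b = candidate.ravel().astype(float)
        b = (b - b.mean()) / (b.std() + 1e-12)
        scores.append(float(np.dot(a, b)) / a.size)
    idx = int(np.argmax(scores))
    return idx, scores[idx]
```

Iterating best_match over every frame of one run against all frames of the other yields the per-frame pairing used to build the mapping.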
With some embodiments, the frame-by-frame correlation can be determined for each frame at different angles of rotation. Processor 406 can execute instructions 416 to identify, for each frame in a set of IVUS images (e.g., IVUS images 418a, or the like), a correlation with a frame from another set of IVUS images (e.g., IVUS images 418b, or the like) at several angles of rotation.
With some examples, processor 406 can execute instructions 416 to generate rotated image frames (e.g., rotated image frames 602a, 602b, etc.) at every possible angle of rotation. In such an example, 359 rotated image frames would be generated. In other examples, processor 406 can execute instructions 416 to generate rotated image frames at a subset of all possible angles of rotation (e.g., every 2 degrees, every 5 degrees, every 10 degrees, every 15 degrees, every 20 degrees, every 30 degrees, every 45 degrees, or the like).
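The rotated-frame search above can be sketched as follows. For simplicity this sketch uses 90-degree steps so that np.rot90 suffices; the finer angular subsets mentioned above would require an interpolating rotation (e.g., scipy.ndimage.rotate), and the raw dot-product score is an illustrative stand-in for whichever correlation measure is used.

```python
import numpy as np

def rotated_frames(frame, step_deg=90):
    """Generate rotated copies of frame at a subset of angles.
    Only multiples of 90 degrees are supported in this sketch (np.rot90)."""
    return {angle: np.rot90(frame, k=angle // 90)
            for angle in range(0, 360, step_deg)}

def best_rotation(frame, reference, step_deg=90):
    """Offset angle (degrees) whose rotation of frame best matches
    reference, scored by an elementwise-product (dot) correlation."""
    angle, _rotated = max(rotated_frames(frame, step_deg).items(),
                          key=lambda kv: float((kv[1] * reference).sum()))
    return angle
```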
In general, IVUS run frame mapping 420 can include an indication of an offset (e.g., in time, in distance, in rotation, or the like) in which to adjust one (or each) of the IVUS images 418a and IVUS images 418b to align them. As used herein, the term “align” refers to aligning the frames of the images longitudinally and/or angularly.
With some embodiments, processor 406 can execute instructions 416 to receive a bookmark (or bookmarks) identifying a frame of one of IVUS images 418a and/or IVUS images 418b. The IVUS run frame mapping 420 can be adjusted to align the bookmark or bookmarks. With some embodiments, this mapping is not linear. For example, a frame from IVUS images 418a can be adjusted linearly (e.g., by a first distance) and/or rotated (e.g., by a first angle) based on its correlation to a frame from IVUS images 418b while the adjacent frame in IVUS images 418a can be adjusted linearly (e.g., by a second distance) and/or rotated (e.g., by a second angle) based on its correlation to the same or a different frame from IVUS images 418b.
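The non-linear, per-frame adjustment described above can be sketched as follows. The (position, angle, offset) structures are illustrative assumptions; the point is that each frame carries its own distance and angle offset, so adjacent frames may shift by different amounts.

```python
def apply_offsets(positions_mm, angles_deg, offsets):
    """Apply a per-frame (distance_mm, angle_deg) offset to each frame.

    positions_mm: longitudinal position of each frame along the pullback.
    angles_deg:   angular orientation of each frame.
    offsets:      one (distance_mm, angle_deg) pair per frame; the mapping
                  need not be uniform across frames.
    Returns a list of adjusted (position_mm, angle_deg) pairs."""
    return [(p + d, (a + r) % 360)
            for (p, a), (d, r) in zip(zip(positions_mm, angles_deg), offsets)]
```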
As described above with respect to
Processor 406 can execute instructions 416 to generate IVUS run frame mapping 420 from vessel fiducials 702, for example, by pairing frames from IVUS images 418a and IVUS images 418b where the same anatomical fiducial was identified. Given IVUS run frame mapping 420, processor 406 can execute instructions 416 to correlate each frame of IVUS images 418a to a respective frame of IVUS images 418b. Further, processor 406 can execute instructions 416 to generate GUI 424 depicting indications of frames of the IVUS images 418a correlated and/or with respect to respective frames of IVUS images 418b based on IVUS run frame mapping 420.
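The pairing of fiducial frames into a frame mapping can be sketched as follows, here by interpolating frame indices between fiducial anchor pairs. The anchor-pair structure and the choice of linear interpolation between anchors are illustrative assumptions.

```python
import numpy as np

def build_frame_mapping(n_frames, fiducial_pairs):
    """Map each frame index of run 1 to a frame index of run 2.

    fiducial_pairs: list of (idx_run1, idx_run2) anchors where the same
    anatomical fiducial was identified in both runs. Indices between
    anchors are filled by linear interpolation; indices outside the
    anchors are clamped to the nearest anchor."""
    idx1, idx2 = zip(*sorted(fiducial_pairs))
    return np.interp(np.arange(n_frames), idx1, idx2).round().astype(int)
```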
It is to be appreciated that in some embodiments, processor 406 can execute instructions 416 and identify a fiducial in a single frame of each IVUS run (e.g., IVUS images 418a and 418b). For example, vessel fiducials 702 could include a side branch identified in a frame of IVUS images 418a and the same side branch identified in a frame of IVUS images 418b. In other embodiments, processor 406 can execute instructions 416 and identify fiducials in multiple frames. In such an example, the fiducials need not be the same. For example, as stated above, vessel fiducials 702 could include a side branch location in a frame of IVUS images 418a and the same side branch location in a frame of IVUS images 418b as well as a guide catheter location in another frame of IVUS images 418a and the guide catheter location in another frame of IVUS images 418b. Examples are not limited in this context.
For example,
A variety of techniques and workflows to identify longitudinal and/or angular offsets for frames in a set of IVUS images (e.g., IVUS images 418a, or the like) to align the frames with frames in another set of IVUS images (e.g., IVUS images 418b, or the like) are provided. It is noted that although
With some embodiments, processor 406 can execute instructions 416 to longitudinally align frames from IVUS images 418a with frames from IVUS images 418b on a segment-by-segment basis. For example, with some embodiments, processor 406 can execute instructions 416 to identify segments based on vessel fiducials 702.
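Segment-by-segment longitudinal alignment could, for example, assign each segment delimited by matched fiducial frames its own constant offset. The sketch below is illustrative only; the function names and list-based fiducial representation are assumptions:

```python
def segment_offsets(fiducials_a, fiducials_b):
    """Given matched fiducial frame indices in runs A and B, return one
    (start_frame_a, end_frame_a, offset) tuple per segment."""
    pairs = list(zip(fiducials_a, fiducials_b))
    segs = []
    for (a0, b0), (a1, _) in zip(pairs, pairs[1:]):
        segs.append((a0, a1, b0 - a0))  # constant offset within the segment
    return segs

def map_frame(frame_a, segments):
    """Apply the offset of the segment containing frame_a; frames past the
    last fiducial reuse the final segment's offset."""
    for start, end, offset in segments:
        if start <= frame_a < end:
            return frame_a + offset
    return frame_a + segments[-1][2]
```

Each segment thereby carries its own longitudinal offset, rather than a single offset being applied to the whole run.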
With some embodiments, processor 406 can execute instructions 416 to rotationally align frames from IVUS images 418a with frames from IVUS images 418b based on vessel fiducials 702. For example, in some embodiments, the IVUS run frame mapping 420 can include an offset angle (e.g., with which to rotate the frame).
It is to be appreciated that various techniques and workflows to identify an alignment offset can be combined. As used herein, “alignment offset” is intended to mean either an offset distance (e.g., to longitudinally align frames) or an offset angle (e.g., to angularly align frames), or both. For example, IVUS run frame mapping 420 can include either or both an offset distance and an offset angle. With some examples, various offset derivation methodologies outlined herein can be combined on a segment-by-segment basis. For example, an alignment offset for frames in a first segment (e.g., segment 904a of
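An alignment offset carrying either or both components, and composition of offsets derived by different methodologies, can be sketched as follows. The class and field names are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass

@dataclass
class AlignmentOffset:
    """Either or both of a longitudinal offset distance and an offset angle."""
    distance_mm: float = 0.0  # longitudinal shift; 0 means no shift
    angle_deg: float = 0.0    # rotation; 0 means no rotation

    def combine(self, other):
        """Compose two offsets (e.g., a segment-derived offset with a
        fiducial-derived angular offset), wrapping the angle to [0, 360)."""
        return AlignmentOffset(self.distance_mm + other.distance_mm,
                               (self.angle_deg + other.angle_deg) % 360)
```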
As discussed above, a GUI can be generated to present graphical indications of the different IVUS runs in relation to each other, such as for example, where the frames are aligned as described herein.
GUI 1100 can include graphical indications of IVUS images 418a and IVUS images 418b. As shown in this example, the graphical indications of IVUS images 418a and IVUS images 418b include both an on-axis view (e.g., on-axis view 1102a and on-axis view 1102b) and a longitudinal view (e.g., longitudinal view 1104a and longitudinal view 1104b). As depicted, GUI 1100 can arrange the on-axis view 1102a and the on-axis view 1102b, as well as longitudinal view 1104a and longitudinal view 1104b, in a horizontal (e.g., side-by-side) visualization. With other embodiments, processor 406 can execute instructions 416 to generate GUI 1100 to visualize the on-axis view 1102a and the on-axis view 1102b in a vertical arrangement.
Further, GUI 1100 can include a dual-view slide bar 1106 and a dual-view slider 1108. The dual-view slider 1108 can be manipulated (e.g., via a touch screen, via a mouse, via a joystick, or the like) to slide (or move) through the frames of the IVUS images. As dual-view slider 1108 is moved, processor 406 can execute instructions 416 to regenerate GUI 1100 to move frame indicators 1110a and 1110b disposed over longitudinal views 1104a and 1104b along with the position of the dual-view slider 1108. Further still, the on-axis views 1102a and 1102b can change to correspond to the frames from each respective IVUS run matching the location of the frame indicators 1110a and 1110b.
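The slider-to-frame lookup described above can be sketched as follows, assuming (illustratively) a normalized slider position in [0, 1] and a dictionary-style frame mapping; both assumptions and the function name are hypothetical:

```python
def frames_for_slider(position, num_frames_a, frame_mapping):
    """Convert a normalized slider position into a frame index of run A,
    then look up the correlated frame of run B via the frame mapping."""
    frame_a = min(int(position * num_frames_a), num_frames_a - 1)
    frame_b = frame_mapping.get(frame_a, frame_a)  # fall back to same index
    return frame_a, frame_b
```

As the slider moves, both frame indicators and both on-axis views can thus be updated from a single slider position.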
Accordingly, as provided herein, one or both IVUS runs can be adjusted (e.g., based on an offset distance and/or an offset angle) to align the IVUS runs with each other. As such, a user (e.g., physician) can view different IVUS runs (e.g., a pre-PCI run and a post-PCI run, or the like) where the locations, and corresponding fiducials, of the vessel are aligned in the visualization, such as for example, as depicted in GUI 1100.
With some embodiments, more than two (2) IVUS runs can be presented in a GUI. For example,
Logic flow 1200 can begin at block 1202. At block 1202 “receive a first series of IVUS images of a vessel of a patient” a first series of IVUS images captured via an IVUS catheter percutaneously inserted in a vessel of a patient can be received. For example, information elements comprising indications of IVUS images 418a can be received from IVUS imaging system 100 where catheter 102 is (or was) percutaneously inserted into vessel 202. The IVUS images 418a can comprise frames of images representative of images captured while the catheter 102 is pulled back from distal end 204 to proximal end 206. Processor 406 can execute instructions 416 to receive information elements comprising indications of IVUS images 418a from IVUS imaging system 100, or directly from catheter 102 as may be the case.
Continuing to block 1204 “receive a second series of IVUS images of the vessel of the patient” a second series of IVUS images captured via an IVUS catheter percutaneously inserted in the vessel of the patient can be received. For example, information elements comprising indications of IVUS images 418b can be received from IVUS imaging system 100 where catheter 102 is (or was) percutaneously inserted into vessel 202. Like IVUS images 418a, IVUS images 418b can comprise frames of images representative of images captured while the catheter 102 is pulled back from distal end 204 to proximal end 206. However, as described above and contemplated herein, distal end 204 and proximal end 206 for IVUS images 418a can be at different locations than distal end 204 and proximal end 206 for IVUS images 418b. Processor 406 can execute instructions 416 to receive information elements comprising indications of IVUS images 418b from IVUS imaging system 100, or directly from catheter 102 as may be the case.
Continuing to block 1206 “identify a mapping between frames in the first series of IVUS images to the second series of IVUS images” a mapping between frames in the first series of IVUS images to frames in the second series of IVUS images can be identified. For example, processor 406 can execute instructions 416 to generate IVUS run frame mapping 420 based on ML model 422. In other embodiments, processor 406 can execute ML models 704 to identify vessel fiducials 702 and then identify IVUS run frame mapping 420 from vessel fiducials 702. In another example, processor 406 can execute instructions 416 to generate IVUS run frame mapping 420 based on a correlation (e.g., frame-by-frame correlation, angular offset frame-by-frame correlation, or the like) as outlined above. In yet another example, processor 406 can execute instructions 416 to generate IVUS run frame mapping 420 on a per segment basis as outlined above.
In any of the above embodiments, IVUS run frame mapping 420 can comprise an indication of an offset (e.g., in time, in distance, in angle, or the like) for one or both series of IVUS images, which when applied would align the IVUS images longitudinally (e.g., as depicted in
With some examples, processor 406 can execute instructions 416 to map frames based on a longitudinal offset as outlined herein. In such an example, processor 406 can execute instructions 416 to map frames based on a partial overlap and time warping. It is to be appreciated that one set of IVUS images (e.g., IVUS images 418a, or the like) can be captured at a first pullback speed while another set of IVUS images (e.g., IVUS images 418b, or the like) can be captured at a second pullback speed, which is different from the first pullback speed. With yet another example, one set of IVUS images (e.g., IVUS images 418a, or the like) can be captured along a first pullback path through a vessel while another set of IVUS images (e.g., IVUS images 418b, or the like) can be captured along a slightly different pullback path, or motion artifacts can be manifest in the captured IVUS images.
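A time-warping alignment accommodating different pullback speeds can be sketched, for purposes of illustration only, with a dynamic-time-warping style pass over a per-frame scalar feature (e.g., a lumen measurement per frame). Feature extraction is out of scope here; the scalar sequences, function name, and cost choice are assumptions:

```python
def dtw_frame_pairs(feat_a, feat_b):
    """Pair frame indices of two runs by dynamic time warping over
    per-frame scalar features, allowing nonlinear (stretch/compress)
    alignment when pullback speeds differ."""
    n, m = len(feat_a), len(feat_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(feat_a[i - 1] - feat_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # advance run A only
                                 cost[i][j - 1],      # advance run B only
                                 cost[i - 1][j - 1])  # pair the frames
    # Backtrack the minimal-cost warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min((cost[i - 1][j - 1], i - 1, j - 1),
                   (cost[i - 1][j], i - 1, j),
                   (cost[i][j - 1], i, j - 1))
        i, j = step[1], step[2]
    return path[::-1]
```

The resulting path pairs each frame of one run with one or more frames of the other, which is one way frames could be mapped under partial overlap and time warping.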
Accordingly, although many of the examples discuss aligning (or co-registering) IVUS images of different runs based on offset distances and/or angles, some embodiments provide that the runs can also be aligned (or co-registered) based on motion overlaps and/or time warping.
For example,
Continuing to block 1208 “generate a graphical user interface comprising an indication of the first series of IVUS images and the second series of IVUS images where at least one of the first series of IVUS images or the second series of IVUS images is offset (e.g., in time, in distance, in angle, or the like) based on the mapping to longitudinally and/or angularly align the first series of IVUS images with the second series of IVUS images” a GUI can be generated where the GUI comprises graphical indications of the first series of IVUS images and the second series of IVUS images and where any number of frames from the first and/or second series of IVUS images is offset (e.g., in time, in distance, in angle, or the like) to longitudinally and/or angularly align the first and second series of IVUS images. For example, processor 406 can execute instructions 416 to generate GUI 424 as discussed above. As a specific example, processor 406 can execute instructions 416 to generate GUI 1100 as GUI 424 and cause GUI 1100 to be displayed on display 404.
As noted, with some embodiments, processor 406 of computing device 402 can execute instructions 416 to generate IVUS run frame mapping 420 using an ML model or to generate vessel fiducials 702 from an ML model and then generate IVUS run frame mapping 420 from vessel fiducials 702. In such examples, the ML model can be stored in memory 408 of computing device 402. It will be appreciated that, prior to being deployed, the ML model is to be trained.
The ML system 1402 may make use of experimental data 1408 gathered during several prior procedures. Experimental data 1408 can include IVUS images from several IVUS runs for several patients. The experimental data 1408 may be collocated with the ML system 1402 (e.g., stored in a storage 1410 of the ML system 1402), may be remote from the ML system 1402 and accessed via a network interface 1504, or may be a combination of local and remote data.
Experimental data 1408 can be used to form training data 1412. As noted above, the ML system 1402 may include a storage 1410, which may include a hard drive, solid state storage, and/or random access memory. The storage 1410 may hold training data 1412. In general, training data 1412 can include information elements or data structures comprising indications of multiple series of IVUS images and corresponding desired output (e.g., either a mapping or vessel fiducials). It is to be appreciated that where the desired output is an IVUS frame mapping then the input can be two (or more as may be the case) series of IVUS images. As a specific example referring to
The training data 1412 may be applied to train the ML model 1424. Depending on the application, different types of models may be used to form the basis of ML model 1424. For instance, in the present example, an artificial neural network (ANN) may be particularly well-suited to learning associations between IVUS images (e.g., IVUS images 418a, IVUS images 418b, etc.) and fiducials or frame mapping (e.g., IVUS run frame mapping 420, vessel fiducials 702, etc.). Convolutional neural networks may also be well-suited to this task. In another example, ML model 1424 can be based on a spatial transformer (e.g., a spatial transformation network, or the like). As another example, ML model 1424 can be multiple networks, such as, for example, Siamese networks, or the like.
Any suitable training algorithm 1420 may be used to train the ML model 1424. For example, the examples depicted herein may be suited to a supervised training algorithm or reinforcement learning training algorithm. For a supervised training algorithm, the ML system 1402 may apply the IVUS images 1414 as inputs 1430, from which ML model 1424 generates an output that is compared against the expected output (e.g., mapping or fiducials). In a reinforcement learning scenario, training algorithm 1420 may attempt to optimize some or all (or a weighted combination) of the mappings from inputs 1430 to output 1426 to produce an ML model 1424 having the least error. With some embodiments, training data 1412 can be split into “training” and “testing” data wherein some subset of the training data 1412 can be used to adjust the ML model 1424 (e.g., internal weights of the model, or the like) while another, non-overlapping subset of the training data 1412 can be used to measure an accuracy of the ML model 1424 to infer (or generalize) output 1426 from “unseen” input 1430.
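The train/test split described above can be sketched as follows; this is an illustrative sketch only, with a deterministic seed so the split is reproducible, and the function name is an assumption:

```python
import random

def split_training_data(samples, test_fraction=0.2, seed=0):
    """Shuffle deterministically, then split into non-overlapping
    (training, testing) subsets: the first adjusts model weights, the
    second measures generalization to unseen inputs."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    n_test = max(1, int(len(shuffled) * test_fraction))
    return shuffled[n_test:], shuffled[:n_test]
```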
The ML model 1424 may be applied using a processor circuit 1406, which may include suitable hardware processing resources that operate on the logic and structures in the storage 1410. The training algorithm 1420 and/or the development of the trained ML model 1424 may be at least partially dependent on hyperparameters 1422. In exemplary embodiments, the model hyperparameters 1422 may be automatically selected based on logic 1428, which may include any known hyperparameter optimization techniques as appropriate to the ML model 1424 selected and the training algorithm 1420 to be used. In optional embodiments, the ML model 1424 may be re-trained over time, to accommodate new knowledge and/or updated experimental data 1408.
Once the ML model 1424 is trained, it may be applied (e.g., by the processor 406, or the like) to new input data (e.g., IVUS images 418a, IVUS images 418b, etc.). This input to the ML model (e.g., ML model 422, ML models 704, or the like) may be formatted according to the predefined model inputs 1430, mirroring the way that the training data 1412 was provided to the ML model 1424. The ML model 1424 may generate output 1426, which may be, for example, a generalization such as IVUS run frame mapping 420 or vessel fiducials 702 as discussed above.
The above description pertains to a particular kind of ML system 1402, which applies supervised learning techniques given available training data with input/output pairs. However, the present invention is not limited to use with a specific ML paradigm, and other types of ML techniques may be used. For example, in some embodiments the ML system 1402 may apply, for example, evolutionary algorithms or other types of ML algorithms and models to generate an IVUS run frame mapping 420 (or vessel fiducials 702 as may be the case) from IVUS images 418a and/or IVUS images 418b.
With some embodiments, ML model 1424 can be a traditional ML model, such as, for example, a neural network, a convolutional neural network, an evolutionary artificial neural network, or the like. However, in some embodiments, ML model 1424 may not be an ML model in the traditional sense. For example, ML model 1424 might be a dynamic programming algorithm where parameters of the dynamic programming algorithm are tuned using the training data 1412.
In some embodiments, techniques of the disclosure can be applied to angularly align an IVUS run with a view of the vessel from an external imaging modality. For example,
As described above with respect to
As such, with some examples, IVUS images correlation and visualization system 1500 can be coupled to an external imaging system 1506 (e.g., an angiography machine, a computed tomography (CT) machine, a magnetic resonance imaging (MRI) machine, or the like) that is configured to capture external images of the vessel from which IVUS images 418a and/or 418b are captured. Alternatively, IVUS images correlation and visualization system 1500 can be coupled to a memory device storing external images or frames of external images.
Processor 406 can execute instructions 416 to receive an external image 1502 (or images) from external imaging system 1506 (or a memory storage device). Processor 406 can execute instructions 416 to identify fiducials in the external image 1502 and in IVUS images 418a (or IVUS images 418b). For example, processor 406 can execute instructions 416 to identify vessel fiducials 702 corresponding to fiducials in IVUS images 418a and external image 1502.
As outlined above, a variety of techniques exist to identify fiducials in both internal and external imaging modalities. For example, side branch identification and matching are often used to co-register internal images to an external image. The present disclosure provides that processor 406 can execute instructions 416 to identify the fiducial and its location and identify the angle of the fiducial and store an indication of the fiducial location and angle in vessel fiducials 702. With some embodiments, processor 406 can identify the angle of the fiducial using image processing techniques and/or ML inference. For example, ML models 704 could be trained as outlined above to identify fiducials and their corresponding angle from external image 1502. Once the angle of the fiducial in the external image 1502 is identified, processor 406 can execute instructions 416 to identify an offset angle (e.g., IVUS run frame mapping 420, or the like) with which to rotate frames of the IVUS images (e.g., IVUS images 418a and/or 418b) to align the viewing angle with that of the external image 1502. Further, processor 406 can execute instructions 416 to identify an offset for other frames in the IVUS images given the offset angle of frames corresponding to the fiducials (e.g., as outlined above with respect to
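Deriving the offset angle from the two identified fiducial angles can be sketched as a signed angular difference wrapped into (-180, 180] degrees; the function and parameter names below are illustrative assumptions:

```python
def offset_angle(fiducial_angle_external, fiducial_angle_ivus):
    """Smallest rotation (in degrees) taking the fiducial angle as seen in
    the IVUS frame to the fiducial angle as seen in the external image."""
    diff = (fiducial_angle_external - fiducial_angle_ivus) % 360
    return diff - 360 if diff > 180 else diff
```

Rotating the IVUS frame by the returned angle brings its fiducial into the viewing angle of the external image.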
For example,
For example,
In some examples, an image frame can be rotated based on a fiducial landmark. For example, a fiducial landmark 1610 is depicted in
Accordingly, as outlined above, processor 406 can execute instructions 416 to angularly align frames within an IVUS run with a viewing perspective of an external image (e.g., external image 1502, or the like) such that the angle in which fiducials are viewed aligns between both imaging modalities.
Further, as discussed above, a GUI can be generated to present graphical indications of an aligned IVUS run. For example, a GUI can be generated to present a visual representation of frames from an IVUS run aligned with a vessel as viewed in an external image.
GUI 1700 can include graphical indications of external image 1502 and external image aligned IVUS images 1608. Accordingly, as a physician (or user) inspects frames of the IVUS images 418a, the external image aligned IVUS images 1608 will be presented such that the lumen and fiducials as viewed in the IVUS image frames will match the angle of the vessel and fiducials (e.g., fiducials 1602a and 1602b) as viewed in the external image frame.
The instructions 1908 transform the general, non-programmed machine 1900 into a particular machine 1900 programmed to carry out the described and illustrated functions in a specific manner. In alternative embodiments, the machine 1900 operates as a standalone device or may be coupled (e.g., networked) to other machines. In a networked deployment, the machine 1900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine 1900 may comprise, but not be limited to, a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a set-top box (STB), a PDA, an entertainment media system, a cellular telephone, a smart phone, a mobile device, a wearable device (e.g., a smart watch), a smart home device (e.g., a smart appliance), other smart devices, a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 1908, sequentially or otherwise, that specify actions to be taken by the machine 1900. Further, while only a single machine 1900 is illustrated, the term “machine” shall also be taken to include a collection of machines 1900 that individually or jointly execute the instructions 1908 to perform any one or more of the methodologies discussed herein.
The machine 1900 may include processors 1902, memory 1904, and I/O components 1942, which may be configured to communicate with each other such as via a bus 1944. In an example embodiment, the processors 1902 (e.g., a Central Processing Unit (CPU), a Reduced Instruction Set Computing (RISC) processor, a Complex Instruction Set Computing (CISC) processor, a Graphics Processing Unit (GPU), a Digital Signal Processor (DSP), an ASIC, a Radio-Frequency Integrated Circuit (RFIC), another processor, or any suitable combination thereof) may include, for example, a processor 1906 and a processor 1910 that may execute the instructions 1908. The term “processor” is intended to include multi-core processors that may comprise two or more independent processors (sometimes referred to as “cores”) that may execute instructions contemporaneously. Although
The memory 1904 may include a main memory 1912, a static memory 1914, and a storage unit 1916, each accessible to the processors 1902 such as via the bus 1944. The main memory 1912, the static memory 1914, and storage unit 1916 store the instructions 1908 embodying any one or more of the methodologies or functions described herein. The instructions 1908 may also reside, completely or partially, within the main memory 1912, within the static memory 1914, within machine-readable medium 1918 within the storage unit 1916, within at least one of the processors 1902 (e.g., within the processor's cache memory), or any suitable combination thereof, during execution thereof by the machine 1900.
The I/O components 1942 may include a wide variety of components to receive input, provide output, produce output, transmit information, exchange information, capture measurements, and so on. The specific I/O components 1942 that are included in a particular machine will depend on the type of machine. For example, portable machines such as mobile phones will likely include a touch input device or other such input mechanisms, while a headless server machine will likely not include such a touch input device. It will be appreciated that the I/O components 1942 may include many other components that are not shown in
In further example embodiments, the I/O components 1942 may include biometric components 1932, motion components 1934, environmental components 1936, or position components 1938, among a wide array of other components. For example, the biometric components 1932 may include components to detect expressions (e.g., hand expressions, facial expressions, vocal expressions, body gestures, or eye tracking), measure biosignals (e.g., blood pressure, heart rate, body temperature, perspiration, or brain waves), identify a person (e.g., voice identification, retinal identification, facial identification, fingerprint identification, or electroencephalogram-based identification), and the like. The motion components 1934 may include acceleration sensor components (e.g., accelerometer), gravitation sensor components, rotation sensor components (e.g., gyroscope), and so forth. The environmental components 1936 may include, for example, illumination sensor components (e.g., photometer), temperature sensor components (e.g., one or more thermometers that detect ambient temperature), humidity sensor components, pressure sensor components (e.g., barometer), acoustic sensor components (e.g., one or more microphones that detect background noise), proximity sensor components (e.g., infrared sensors that detect nearby objects), gas sensors (e.g., gas detection sensors to detect concentrations of hazardous gases for safety or to measure pollutants in the atmosphere), or other components that may provide indications, measurements, or signals corresponding to a surrounding physical environment. The position components 1938 may include location sensor components (e.g., a GPS receiver component), altitude sensor components (e.g., altimeters or barometers that detect air pressure from which altitude may be derived), orientation sensor components (e.g., magnetometers), and the like.
Communication may be implemented using a wide variety of technologies. The I/O components 1942 may include communication components 1940 operable to couple the machine 1900 to a network 1920 or devices 1922 via a coupling 1924 and a coupling 1926, respectively. For example, the communication components 1940 may include a network interface component or another suitable device to interface with the network 1920. In further examples, the communication components 1940 may include wired communication components, wireless communication components, cellular communication components, Near Field Communication (NFC) components, Bluetooth® components (e.g., Bluetooth® Low Energy), Wi-Fi® components, and other communication components to provide communication via other modalities. The devices 1922 may be another machine or any of a wide variety of peripheral devices (e.g., a peripheral device coupled via a USB).
Moreover, the communication components 1940 may detect identifiers or include components operable to detect identifiers. For example, the communication components 1940 may include Radio Frequency Identification (RFID) tag reader components, NFC smart tag detection components, optical reader components (e.g., an optical sensor to detect one-dimensional bar codes such as Universal Product Code (UPC) bar code, multi-dimensional bar codes such as Quick Response (QR) code, Aztec code, Data Matrix, Dataglyph, MaxiCode, PDF417, Ultra Code, UCC RSS-2D bar code, and other optical codes), or acoustic detection components (e.g., microphones to identify tagged audio signals). In addition, a variety of information may be derived via the communication components 1940, such as location via Internet Protocol (IP) geolocation, location via Wi-Fi® signal triangulation, location via detecting an NFC beacon signal that may indicate a particular location, and so forth.
The various memories (i.e., memory 1904, main memory 1912, static memory 1914, and/or memory of the processors 1902) and/or storage unit 1916 may store one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. These instructions (e.g., the instructions 1908), when executed by processors 1902, cause various operations to implement the disclosed embodiments.
As used herein, the terms “machine-storage medium,” “device-storage medium,” “computer-storage medium” mean the same thing and may be used interchangeably in this disclosure. The terms refer to a single or multiple storage devices and/or media (e.g., a centralized or distributed database, and/or associated caches and servers) that store executable instructions and/or data. The terms shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media, including memory internal or external to processors. Specific examples of machine-storage media, computer-storage media and/or device-storage media include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), FPGA, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The terms “machine-storage media,” “computer-storage media,” and “device-storage media” specifically exclude carrier waves, modulated data signals, and other such media, at least some of which are covered under the term “signal medium” discussed below.
In various example embodiments, one or more portions of the network 1920 may be an ad hoc network, an intranet, an extranet, a VPN, a LAN, a WLAN, a WAN, a WWAN, a MAN, the Internet, a portion of the Internet, a portion of the PSTN, a plain old telephone service (POTS) network, a cellular telephone network, a wireless network, a Wi-Fi® network, another type of network, or a combination of two or more such networks. For example, the network 1920 or a portion of the network 1920 may include a wireless or cellular network, and the coupling 1924 may be a Code Division Multiple Access (CDMA) connection, a Global System for Mobile communications (GSM) connection, or another type of cellular or wireless coupling. In this example, the coupling 1924 may implement any of a variety of types of data transfer technology, such as Single Carrier Radio Transmission Technology (1×RTT), Evolution-Data Optimized (EVDO) technology, General Packet Radio Service (GPRS) technology, Enhanced Data rates for GSM Evolution (EDGE) technology, third Generation Partnership Project (3GPP) including 3G, fourth generation wireless (4G) networks, Universal Mobile Telecommunications System (UMTS), High Speed Packet Access (HSPA), Worldwide Interoperability for Microwave Access (WiMAX), Long Term Evolution (LTE) standard, others defined by various standard-setting organizations, other long range protocols, or other data transfer technology.
The instructions 1908 may be transmitted or received over the network 1920 using a transmission medium via a network interface device (e.g., a network interface component included in the communication components 1940) and utilizing any one of several well-known transfer protocols (e.g., hypertext transfer protocol (HTTP)). Similarly, the instructions 1908 may be transmitted or received using a transmission medium via the coupling 1926 (e.g., a peer-to-peer coupling) to the devices 1922. The terms “transmission medium” and “signal medium” mean the same thing and may be used interchangeably in this disclosure. The terms “transmission medium” and “signal medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying the instructions 1908 for execution by the machine 1900, and includes digital or analog communications signals or other intangible media to facilitate communication of such software. Hence, the terms “transmission medium” and “signal medium” shall be taken to include any form of modulated data signal, carrier wave, and so forth. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
Terms used herein should be accorded their ordinary meaning in the relevant arts, or the meaning indicated by their use in context, but if an express definition is provided, that meaning controls.
Herein, references to “one embodiment” or “an embodiment” do not necessarily refer to the same embodiment, although they may. Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively, unless expressly limited to one or multiple ones. Additionally, the words “herein,” “above,” “below” and words of similar import, when used in this application, refer to this application as a whole and not to any portions of this application. When the claims use the word “or” in reference to a list of two or more items, that word covers all the following interpretations of the word: any of the items in the list, all the items in the list and any combination of the items in the list, unless expressly limited to one or the other. Any terms not expressly defined herein have their conventional meaning as commonly understood by those having skill in the relevant art(s).
This application claims the benefit of U.S. Provisional Patent Application Ser. No. 63/648,483 filed on May 16, 2024 and U.S. Provisional Patent Application Ser. No. 63/502,859 filed on May 17, 2023, the disclosures of which are incorporated herein by reference.