IMAGE PROCESSING METHOD AND APPARATUS

Abstract
An image processing apparatus comprises processing circuitry configured to: acquire first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; process the first image data and second image data to obtain a plurality of transformed first data sets and a plurality of transformed second data sets; transform the transformed first data sets and transformed second data sets to obtain respective first distance transforms and second distance transforms; select a combination of at least one of the first distance transforms and at least one of the second distance transforms; generate at least one morphed distance transform based on the combination; and process the at least one morphed distance transform to obtain upsampled image data that is representative of the subject at a third time.
Description
FIELD

Embodiments described herein relate generally to a method and apparatus for image processing, for example for temporal upsampling of frames.


BACKGROUND

It is known to perform so-called four-dimensional (4D) medical imaging, for example 4D ultrasound imaging. In 4D imaging systems, a series of three-dimensional (3D) images obtained at different times may be dynamically rendered to produce a moving 3D image, for example a 3D movie.


Each three-dimensional (3D) image may be obtained by using software to combine data that has been taken at different positions or angles to obtain volumetric data, and to render an image from the volumetric data using methods such as simple surface shading or direct volume rendering.


In some circumstances, the number of frames captured in 4D imaging is not enough for an animated playback of the frames to seem smooth to a viewer. The frames may be captured at a rate below that which the human eye perceives as being representative of smooth motion. For example, the frames may be captured at below 10 frames per second, or between 10 and 15 frames per second.


A viewer may find it difficult to view a moving image that does not appear to the viewer to be moving smoothly. For example, a viewer may be distracted by discontinuous motion. A viewer may find it difficult to identify anatomical features or abnormalities in a sequence of images that do not appear to the viewer to be moving smoothly.


There are several use cases in which 4D acquisitions capture multiple 3D data volumes, where each 3D volume represents a point in time or a phase of anatomy.


A first example of a use case is 4D ultrasound. In 4D ultrasound, successive frames of volumetric data are acquired by an ultrasound scanner and rendered for viewing. In an example, ultrasound imaging of a mitral valve is performed at 11 frames per second (fps). 11 fps may be considered to be just under the frame rate that is required for the human eye to perceive the motion between frames as being smooth. If the 4D imaging is displayed at 11 fps, a viewer may perceive the motion of the mitral valve to be jerky or discontinuous. The discontinuous motion may detract from the user's ability to analyze the images displayed. The discontinuous motion may make it difficult to interpret the images.


A second example of a use case is multi-phase CT imaging of a heart. In multi-phase CT imaging, image data is captured that is representative of heart motion over one or more cycles of heart motion, typically multiple cycles of heart motion. The image data is gated into frames that are representative of individual phases of the heart motion. For example, image data may be obtained that is representative of 9 different cardiac phases. The data for the different heart phases may be used to render a sequence of images that is representative of heart motion at different times in a cycle of heart motion.


We consider an example in which images of 9 cardiac phases are played one after the other. A viewer may perceive the apparent motion of the heart as not being smooth. The user's ability to analyze the images may be affected by the lack of smoothness in the motion.


Multi-phase CT or other multi-phase imaging may also be performed in respect of other types of motion.


It is known to interpolate motion between successive image frames by using inter-frame or inter-phase registration. To interpolate motion using registration, image data for a first frame is registered to image data for a second frame, for example an adjacent frame. Any suitable registration method may be used, for example any suitable method of non-rigid registration. The registration maps points on the first frame to corresponding points on the second frame.


Once a registration has been obtained, the registration may be used to obtain a third frame that is representative of a time between the first frame and the second frame. The third frame is obtained by interpolating image values based on the registration.


The data that is interpolated between frames may be 2D or 3D. For example, a 3D motion field produced by a registration may be interpolated.


It has been found that interpolating between frames using registration may result in smooth playback. However, successfully interpolating using registration requires the registration to be accurate. If there is difficulty in obtaining a registration, it may not be possible to generate an interpolated frame. In some circumstances, registration may fail entirely. Requiring that every single registration is performed accurately may be a very demanding requirement.


Interpolating using registration may require a lot of processing and resources. The level of processing and resources that are used in registration may make it difficult to perform a registration-based interpolation in real time. Registration-based interpolation may not be appropriate for some real time applications, for example real time ultrasound imaging. Registration may not deal well with changes in topology.


Different modalities of imaging may require the use of different registration methods. Good registration methods may not be available for some modalities.


For observation of cardiac motion, there may be a need to view the myocardium, coronary vessels and valves together. However, in some phases, the coronary vessels or a valve may disappear from view because of the motion of the heart or because of the structure's thickness. In some phases, the coronary vessels may disappear from view due to insufficient concentration of contrast. A valve may disappear from view due to its shape when open.


In some circumstances, registration-based methods may not work well when certain image features (for example, valves) may disappear or reappear from view between successive images.


Another method of obtaining intermediate frames that may be considered is a volume blending technique which blends adjacent frames. For example, one may consider blending a first frame and a second frame to obtain a third frame between the first frame and second frame. Each pixel of the third frame may be allocated a value that is intermediate between its value in the first frame and its value in the second frame.
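By way of illustration only, such pixel-wise blending can be sketched in a few lines. This is a minimal sketch, assuming the frames are NumPy arrays of equal shape; the function name is ours, not taken from any described embodiment:

    import numpy as np

    def blend_frames(frame_a: np.ndarray, frame_b: np.ndarray, w: float) -> np.ndarray:
        """Per-pixel blend: w = 0 returns frame_a, w = 1 returns frame_b,
        and intermediate w gives a value between the two frames' values."""
        return (1.0 - w) * frame_a + w * frame_b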


However, it has been found that simple volume blending techniques do not generally give an appearance of motion. Instead, they may provide a sensation of fading between the frames.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are now described, by way of non-limiting example, and are illustrated in the following figures, in which:



FIG. 1 is a schematic diagram of an apparatus according to an embodiment;



FIG. 2 is a flow chart illustrating in overview a method of an embodiment;



FIG. 3 is a flow chart showing some of the images generated and used in the embodiment of FIG. 2;



FIG. 4 illustrates an example of a signed distance field;



FIG. 5 illustrates an example of a plurality of iso-level images and a combined image;



FIG. 6 is a flow chart illustrating in overview a method of providing an animation in accordance with an embodiment;



FIG. 7 is a flow chart illustrating in overview a further method of providing an animation in accordance with an embodiment;



FIG. 8 is a schematic illustration of a use of segmentation in an embodiment; and



FIG. 9 is a schematic illustration of a further use of segmentation in an embodiment.





DETAILED DESCRIPTION

Certain embodiments provide an image processing apparatus comprising processing circuitry configured to: acquire first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; process the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; process the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; transform each of the transformed first data sets to obtain a respective first distance transform; transform each of the transformed second data sets to obtain a respective second distance transform; select a combination of at least one of the first distance transforms and at least one of the second distance transforms based on the parameter; generate at least one morphed distance transform based on the combination; and process the at least one morphed distance transform to obtain upsampled image data that is representative of the subject at a third time, wherein the third time is between the first time and the second time.


Certain embodiments provide an image processing method comprising: acquiring first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; processing the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; processing the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; transforming each of the transformed first data sets to obtain a respective first distance transform; transforming each of the transformed second data sets to obtain a respective second distance transform; selecting a combination of at least one of the first distance transforms and at least one of the second distance transforms based on the parameter; generating at least one morphed distance transform based on the combination; and processing the at least one morphed distance transform to obtain upsampled image data that is representative of the subject at a third time, wherein the third time is between the first time and the second time.


Certain embodiments provide an image processing apparatus comprising processing circuitry configured to: acquire first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; process the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; process the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; identify a defect of the first image data and/or the second image data, wherein the identifying of the defect is based on the first transformed data sets and the second transformed data sets; and generate video data based on the identifying of the defect.


Certain embodiments provide an image processing method comprising: acquiring first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; processing the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; processing the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; identifying a defect of the first image data and/or the second image data, wherein the identifying of the defect is based on the first transformed data sets and the second transformed data sets; and generating video data based on the identifying of the defect.


A medical image processing apparatus 10 according to an embodiment is illustrated schematically in FIG. 1.


The apparatus 10 comprises a computing apparatus 12, in this case a personal computer (PC) or workstation, which is connected to a computed tomography (CT) scanner 14, one or more display screens 16 and an input device or devices 18, such as a computer keyboard, mouse or trackball.


The CT scanner 14 is configured to obtain volumetric CT imaging data that is representative of an anatomical region of a patient or other subject. The CT scanner 14 is configured to obtain volumetric CT imaging data over time, while movement is occurring in the anatomical region. For example, the anatomical region may be the heart and imaging data may be obtained during motion of the heart.


In alternative embodiments, the CT scanner 14 may be replaced or supplemented by a scanner configured to obtain volumetric imaging or two-dimensional imaging data in any appropriate imaging modality, for example a CT scanner, cone-beam CT scanner, MRI (magnetic resonance imaging) scanner or ultrasound scanner.


The imaging data may be obtained using a contrast agent. For example, the imaging data may comprise contrast CT imaging data, or ultrasound data obtained using bubble contrast.


Imaging data sets obtained by the CT scanner 14 may be stored in memory 20 and subsequently provided to computing apparatus 12, or may be provided to computing apparatus 12 directly. In an alternative embodiment, imaging data sets are supplied from a remote data store (not shown) which may form part of a Picture Archiving and Communication System (PACS). The memory 20 or remote data store may comprise any suitable form of memory storage.


Computing apparatus 12 provides a processing resource for automatically or semi-automatically processing imaging data sets, and comprises a central processing unit (CPU) 22.


The computing apparatus 12 includes rendering circuitry 24 configured to render image frames from the volumetric CT imaging data, decomposition circuitry 26 configured to obtain signed distance fields for the image frames, and interpolation circuitry 28 configured to use the signed distance fields to interpolate between the image frames.


In the present embodiment, the circuitries 24, 26, 28 are each implemented in computing apparatus 12 by means of a computer program having computer-readable instructions that are executable to perform the method of the embodiment. However, in other embodiments, the various circuitries may be implemented as one or more ASICs (application specific integrated circuits) or FPGAs (field programmable gate arrays).


The computing apparatus 12 also includes a hard drive and other components of a PC including RAM, ROM, a data bus, an operating system including various device drivers, and hardware devices including a graphics card. Such components are not shown in FIG. 1 for clarity.



FIG. 2 is a flow chart illustrating in overview a method of an embodiment. FIG. 3 is a further flow chart showing some of the images generated and used in the embodiment of FIG. 2.


At stage 30 of FIG. 2, the rendering circuitry 24 receives volumetric imaging data from the memory 20. The volumetric imaging data is representative of the motion of an anatomical region over time. In further embodiments, the data received by the rendering circuitry 24 may be two-dimensional.


The rendering circuitry 24 renders a first image frame 32 that is representative of the anatomical region at a first time T0, and a second image frame 34 that is representative of the anatomical region at a second time T1. Times T0 and T1 may also be referred to as temporal locations. Movement of the anatomical region has occurred between the first time T0 and the second time T1. The first image frame 32 and second image frame 34 may be adjacent frames of a sequence of frames.


First image frame 32 and second image frame 34 are illustrated in FIG. 3. In the example of FIG. 3, the rendered images 32, 34 are representative of slices of the anatomy, for example axial slices. In other embodiments, the rendered images may be any suitable image type. For example, the rendered images may be images that provide a three-dimensional view of the anatomy. Any suitable rendering method may be used. For example, the rendering method may comprise at least one of Multiplanar Reformat (MPR), Full Volume Intensity Projection volume rendering, Shaded Volume Rendering, Global Illumination Volume Rendering, and Curved Plane Reformats (CPR).


Although we describe the process performed by the rendering circuitry 24 in terms of the creation of images, in most embodiments the images 32, 34 are not displayed to the user at this stage. The images 32, 34 are represented by two-dimensional data sets comprising respective pixel values for a plurality of pixel positions. Although we refer below to the processing of images, in practice the processing is performed on the two-dimensional data sets that are representative of the images. Similarly, the iso-levels and signed distance fields described below are illustrated in FIG. 3 only for the purposes of explanation. In most embodiments, the iso-levels and signed distance fields are not displayed to a user. Two-dimensional data sets representative of the iso-levels and signed distance fields are processed by the various circuitries.


It may be seen in FIG. 3 that the rendered images 32, 34 each comprise pixels having a range of different greyscale levels. Different greyscale levels are representative of different CT values, for example CT values in Hounsfield units.


At stage 40, the decomposition circuitry 26 receives a set of N pre-defined iso-level values. In the present embodiment, N=10. In other embodiments, any suitable value for N may be used. The iso-level values are representative of intensity and may comprise, for example, values in Hounsfield units.


In the present embodiment, the set of iso-levels to be used is manually selected by a user. In other embodiments, the set of iso-levels to be used may be determined automatically. The iso-levels may be selected by selecting the most visible parts of the volume. A determination of which parts of the volume are most visible may be obtained using a visibility histogram based on the view presented to the user.


In some embodiments, at least one of the iso-levels may be obtained using a histogram analysis in which a midpoint between significant peaks of the histogram is determined.
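As a sketch of one such histogram analysis in Python: the helper name, bin count and prominence threshold below are illustrative assumptions, not values taken from the embodiment.

    import numpy as np
    from scipy.signal import find_peaks

    def iso_level_from_histogram(volume: np.ndarray, bins: int = 256) -> float:
        """Return the midpoint between the two most prominent histogram peaks."""
        counts, edges = np.histogram(volume, bins=bins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        peaks, props = find_peaks(counts, prominence=0.05 * counts.max())
        if len(peaks) < 2:                    # fall back to the single dominant peak
            return float(centers[int(np.argmax(counts))])
        top_two = peaks[np.argsort(props["prominences"])[-2:]]
        return float(centers[top_two].mean())  # midpoint between the two peaks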


In further embodiments, a trained model (for example, a deep neural network, which may comprise a convolutional neural network) may be used to obtain the iso-levels to be used. For example, the trained model may identify the iso-levels that cause the fewest differences to the final image at key frames. The trained model may have an input volume which comprises at least one key frame. From the at least one key frame, the trained model may regress a fixed vector of thresholds. The loss function may be based on a degree of similarity between a rendered view of the generated volume (based on the proposed decomposition and reconstruction) and a rendered view of a reference source key frame.


The decomposition circuitry 26 decomposes the first rendered image 32 into a first plurality of N iso-level images. The iso-level images are numbered from 1 to N. Each of the N iso-level images is representative of a respective iso-level surface in the first image 32.


In the present embodiment, each of the iso-level images is obtained by thresholding the first rendered image using a respective one of the set of N pre-defined iso-level values. In other embodiments, any suitable method may be used to obtain the iso-level images. In some embodiments, segmentation and/or clipping may be used to constrain each iso-level.
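A minimal sketch of this thresholding step, assuming the rendered image is a NumPy array and treating each iso-level image as a binary mask (the function name is an illustrative assumption):

    import numpy as np

    def decompose_into_iso_levels(image: np.ndarray, iso_values) -> list:
        """One binary iso-level image (volume mask) per pre-defined iso-level
        value: a pixel is inside the mask if its value reaches the level."""
        return [image >= iso for iso in iso_values]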


Two examples of iso-level images 42, 44 obtained from the first rendered image 32 are shown in FIG. 3. The iso-level images may also be referred to as volume masks.


The decomposition circuitry 26 decomposes the second rendered image 34 into a second plurality of iso-level images. In the present embodiment the same set of N iso-level values are used for the second plurality of iso-level images as were used for the first plurality of iso-level images. Two examples of iso-level images 46, 48 obtained from the second rendered image 34 are shown in FIG. 3.


At stage 50, the decomposition circuitry 26 converts each of the iso-level images into a respective signed distance field. For each iso-level image, the signed distance field is representative of the shape of the iso-level. In other embodiments, any distance transform may be used that is representative of the shape of the iso-level.


A signed distance field (in three dimensions) is a continuous function f: ℝ³ → ℝ, having values which are representative of a distance to a boundary of an object. The signed distance field may be written as f(x, y, z). The value of f(x, y, z) at each point within the interior of the object is the negated minimum distance from the point to the boundary of the object, so f(x, y, z) < 0 represents the interior of the object. The value of f(x, y, z) at each point exterior to the object is the minimum distance from the point to the boundary, so f(x, y, z) > 0 represents the exterior of the object.
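Under this convention, a signed distance field can be computed from a binary mask with two Euclidean distance transforms. A minimal sketch using SciPy, assuming a boolean mask (the helper name is ours):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance_field(mask: np.ndarray) -> np.ndarray:
        """Negative inside the object (f < 0), positive outside (f > 0),
        with magnitude equal to the distance to the object boundary."""
        inside = distance_transform_edt(mask)    # distance to background, for interior points
        outside = distance_transform_edt(~mask)  # distance to object, for exterior points
        return outside - inside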



FIG. 4 is a schematic illustration of a signed distance field in two dimensions. The left side of FIG. 4 shows two circles. The right side of FIG. 4 visualizes a distance field for the two circles. The distance field is visualized using a transfer function that maps f = 0 to white and maps other values of f to different shades of grey. FIG. 4 demonstrates that unconnected shapes (the circles) may be represented as a single distance function.


The output of stage 50 comprises a first plurality of N signed distance fields corresponding to the first plurality of iso-levels obtained from the first rendered image 32. Two of the first plurality of signed distance fields are shown in FIG. 3 as signed distance fields 52, 54.


The output of stage 50 further comprises a second plurality of N signed distance fields corresponding to the second plurality of iso-levels obtained from the second rendered image 34. Two of the second plurality of signed distance fields are shown in FIG. 3 as signed distance fields 56, 58.


The first plurality of signed distance fields and second plurality of signed distance fields are stored temporarily in data store 20 or in any appropriate memory.


In stages 60 to 80 of FIG. 2, the interpolation circuitry 28 uses the signed distance fields to obtain an interpolated volume for a time t between T0 and T1. Time t may also be referred to as a temporal location between T0 and T1.


At stage 60, the interpolation circuitry 28 morphs corresponding signed distance fields to obtain morphed distance fields. For each corresponding iso-level of the frames at T0 and T1, a new signed distance field is obtained for time t by interpolating the signed distance fields for T0 and T1. A shape blending operation is performed, using the temporal location t to calculate the weight of each frame.


Signed distance field morphing is a technique which may be used to smoothly morph volumetrically between two shapes without explicitly modelling the motion of points between the shapes. In the present embodiment, volumetric morphing is performed on two-dimensional images. It is described as being volumetric because it is applied to two dimensions of position and one dimension of signed distance.


Signed distance morphing may be used to morph between shapes that do not have a simple analytic mapping. For example, morphing may be performed between a rounded cube and a sphere even though no simple analytic mapping between the rounded cube and the sphere is available. It has been found that signed distance morphing may deal well with almost all geometry, including disjoint objects.


The morphing of the signed distance fields may be performed using any suitable interpolation method, for example linear, quadratic or cubic interpolation or any similar smooth interpolation method.
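For the linear case, the morph reduces to a per-voxel weighted average of the two signed distance fields. A sketch follows; quadratic, cubic or other smooth weightings may be substituted, as noted above, and the function name is illustrative:

    import numpy as np

    def morph_sdf(sdf_t0: np.ndarray, sdf_t1: np.ndarray,
                  t: float, t0: float, t1: float) -> np.ndarray:
        """Blend two signed distance fields for an intermediate time t,
        weighting each frame by the temporal location of t between t0 and t1."""
        w = (t - t0) / (t1 - t0)          # w = 0 at t0, w = 1 at t1
        return (1.0 - w) * sdf_t0 + w * sdf_t1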


In the present embodiment, the interpolation circuitry 28 combines pairs of signed distance fields that correspond to the same iso-level. For example, signed distance field 52 has been obtained from iso-level image 42. Iso-level image 42 is a first iso-level of first image 32 at time T0. Signed distance field 56 has been obtained from iso-level image 46. Iso-level image 46 is a first iso-level of second image 34 at time T1, for the same iso-level value as iso-level image 42.


Signed distance fields 52, 56 are combined by morphing the signed distance fields 52, 56 based on a time t that is between T0 and T1. The combination of signed distance fields 52, 56 is a morphed signed distance field 62.


Similarly, signed distance fields 54 and 58 have been obtained from iso-level images 44 and 48 respectively, for a second iso-level value. Signed distance fields 54, 58 are combined by morphing the signed distance fields 54, 58 based on the time t. The combination of signed distance fields 54, 58 is a morphed signed distance field 64.


Although only two morphed signed distance fields 62, 64 are illustrated in FIG. 3, in practice each corresponding pair of signed distance fields is morphed together, wherein each corresponding pair of signed distance fields comprises signed distance fields that have been obtained from iso-level images having the same iso-level value.


In further embodiments, at least some of the pairs of signed distance fields that are morphed together may have been obtained from iso-level images having different iso-level values. Morphing together images having different iso-level values may be used to account for inconsistencies in the intensity of captured tissue. For example, the same type of tissue may appear to have different intensity values in different images. One reason for an inconsistency in intensity values may be that contrast is fading out. In some circumstances, morphing together images that are representative of the same type of tissue may comprise combining images having different intensity values. In some embodiments, the iso-levels to be combined may be estimated by a trained model. In some embodiments, the iso-levels to be combined may be selected by a user.


At stage 70, the interpolation circuitry 28 converts each morphed signed distance field back into an iso-level image (which may also be referred to as a volume mask) having an associated iso-level value.


At stage 80, the interpolation circuitry 28 accumulates all of the iso-level images of stage 70 into a single volume 82. In the present embodiment, the iso-level images are accumulated by selecting, for each voxel, the maximum intensity across the morphed iso-levels. After the distance fields are mapped back into value space, each voxel has a set of values, one for each iso-level (interpolated shape). The interpolation circuitry 28 iterates through the iso-levels to obtain the maximum value. For each iso-level, the interpolation circuitry 28 compares a current voxel value, taken from the data set for the iso-level currently being considered, to a previous voxel value, which is the maximum value as determined from the previously considered iso-levels. The interpolation circuitry 28 selects the maximum of the current value and the previous value.
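A sketch of this accumulation, assuming each morphed signed distance field has been paired with its iso-level value; the background value of -1024 HU, approximating air, is an assumption rather than a value from the embodiment:

    import numpy as np

    def reconstruct_volume(morphed_sdfs, iso_values, background: float = -1024.0):
        """Map each morphed signed distance field back into value space
        (f < 0 marks the interior, which takes the iso-level value) and
        accumulate by keeping the per-voxel maximum across iso-levels."""
        result = np.full(morphed_sdfs[0].shape, background)
        for sdf, iso in zip(morphed_sdfs, iso_values):
            result = np.where(sdf < 0.0, np.maximum(result, iso), result)
        return result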


In other embodiments, any suitable method may be used to combine the iso-level images. For example, the over operator may be used.



FIG. 5 shows an example of a plurality of iso-level images that are combined into a combined image. The iso-level images may be morphed iso-level images as obtained at stage 70. The iso-level images are obtained for Hounsfield values of −180, −11, 77, 166, 255, 344, 433, 522, 611 and 700 respectively. Each of the iso-level images is referenced by its Hounsfield level in FIG. 5.



FIG. 5 also shows a combined image 90 in which all of the individual iso-level images of FIG. 5 are combined into a single image (which in the example of FIG. 5 is a slice).


The method of FIG. 2 may provide an easy to use, efficient temporal up-sampling technique that gives a good perception of motion between frames. The method of FIG. 2 may be particularly useful for sequences that are captured in the range of frame rates between 10 fps and 20 fps.


The resources required to perform the morphing process of FIG. 2 may be much less than those required to provide, for example, a registration-based method of interpolation. It has been found that it is difficult for the morphing algorithm to fail. The process of FIG. 2 may provide a robust method of obtaining intermediate frames by interpolation. In general, morphing is a much simpler algorithm than registration. The method of FIG. 2 may be computed efficiently, for example on the GPU.


In some circumstances, the method of FIG. 2 may allow a lower frame rate to be used than would otherwise be the case, which may reduce a resource requirement.


In some circumstances, the method of FIG. 2 may be used to trade off temporal resolution and spatial resolution. For example, it may be the case that the scanner may be configured to capture frames either at high spatial resolution and low frame rate, or at low spatial resolution and high frame rate. The method of FIG. 2 may provide an appearance of increased frame rate due to interpolation, while using images that are acquired at high spatial resolution and low frame rate.


The method of FIG. 2 may accommodate features that appear in view and disappear from view during motion, for example heart valves. The method of FIG. 2 may be applicable to a range of different modalities.


In the embodiment described above with reference to FIG. 2, a single intermediate frame at time t is interpolated between a first frame at time T0 and a second frame at time T1. In other embodiments, multiple intermediate frames may be interpolated at different times between the first frame and second frame. Intermediate frames for different times may be obtained by different weightings of the combination between the signed distance fields of the first frame and the signed distance fields of the second frame.


In the embodiment described above with reference to FIG. 2, the frames are decomposed into iso-level images. In other embodiments, any suitable decomposition may be used. For example, a set of images may be obtained using segmentation and/or clipping. Iso-level images may be used as a starting point for obtaining a segmentation. A segmentation mask may be used alongside the iso-level. The segmentation mask may exclude parts of the volume from being part of the decomposed shapes.


In some embodiments, the decomposition circuitry 26 divides the volume into multiple shapes within each iso-level. Within one target iso-level, the decomposition circuitry 26 may divide the volume into multiple shapes. Each of the multiple shapes may be individually morphed and combined as the interpolation circuitry 28 reconstructs the upsampled volume.


For example, in the case of heart imaging, the vessels overlap in iso-range with the blood pool in the heart. In some embodiments, the vessels are segmented and separated into separate shapes from the main heart, even if the vessels share the same or similar iso-level as the main blood pool.


In embodiments described above, the iso-levels are obtained from a single volume per frame. In other embodiments, iso-levels may be obtained from multiple volumes. For example, the volumes may be obtained using different modalities. The volumes may be obtained from different scans in the same modality, for example with and without contrast. Iso-levels from multiple volumes may be reconstructed into a motion interpolated fusion volume.


In some embodiments, a signed distance field may be included to incorporate a shape that is not part of the anatomy that has been imaged. This shape may be referred to as an external shape. The external shape may be representative of an object or device that is to be introduced to the anatomy. For example, the external shape may be an implant model. The external shape may be any appropriate type of shape, for example a mesh object or mathematical shape.


The signed distance field may be static or animated. The signed distance field may be estimated by a model or selected by a user.


The external shape may be fused with the volume data and become part of the up-sampled volume. Including the external shape in the up-sampling may make it easier to include the external shape in the volume rendering.


We turn to the question of obtaining an animated view from a sequence of frames. We consider the case in which an initial sequence of frames is obtained at a frame rate that may be insufficient to provide an animation that appears smooth to the user. The method of FIG. 2 is used to interpolate between adjacent pairs of frames.



FIG. 6 is a flow chart illustrating in overview a method of providing an animation in accordance with an embodiment.


At stage 100 of FIG. 6, the rendering circuitry 24 receives data that is representative of a sequence of frames. The sequence of frames has been captured at a frame rate that may be insufficient to provide an animation that appears smooth to the user if no interpolation between frames is performed.


The rendering circuitry 24 renders a respective image from each of the sequence of frames. For each image, the decomposition circuitry 26 extracts signed distance fields for each of a plurality of iso-level values. The decomposition circuitry 26 stores the signed distance fields in the data store 20 or in any appropriate memory.


At stage 102, the interpolation circuitry 28 moves to a new time point (for example, a first time point) which is between two adjacent frames of the sequence of frames. At stage 104, the interpolation circuitry 28 morphs each signed distance field for each iso-level of the two adjacent frames to obtain a morphed signed distance field. At stage 106, the interpolation circuitry 28 reconstructs an intermediate frame (which may also be referred to as a volume) from the morphed signed distance fields as described above with reference to stages 70 and 80 of FIG. 2.


The flow chart then returns to stage 102 and a new time point is selected. In some embodiments, the interpolation circuitry 28 may perform a morphing process for a single time point between each pair of adjacent frames, to obtain a single intermediate frame between each pair of adjacent frames in the sequence. In some embodiments, the interpolation circuitry 28 may perform morphing processes for multiple time points between each pair of adjacent frames, to obtain multiple intermediate frames between each pair of adjacent frames in the sequence.
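Combining the sketches above, the FIG. 6 loop might look as follows. This reuses the hypothetical morph_sdf and reconstruct_volume helpers from the earlier sketches; frame_sdfs is assumed to be a list with one entry per original frame, each entry holding that frame's per-iso-level signed distance fields:

    def build_animation(frame_sdfs, iso_values, steps_between: int = 3):
        """Interleave each original frame with interpolated frames generated
        at evenly spaced time points between it and the next frame."""
        animation = []
        for sdfs_a, sdfs_b in zip(frame_sdfs, frame_sdfs[1:]):
            animation.append(reconstruct_volume(sdfs_a, iso_values))
            for k in range(1, steps_between + 1):
                t = k / (steps_between + 1)   # temporal location between the frames
                morphed = [morph_sdf(a, b, t, 0.0, 1.0)
                           for a, b in zip(sdfs_a, sdfs_b)]
                animation.append(reconstruct_volume(morphed, iso_values))
        animation.append(reconstruct_volume(frame_sdfs[-1], iso_values))
        return animation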


The interpolation circuitry 28 outputs a set of video data comprising the image data for the original frames and for the interpolated frames. The rendering circuitry 24 displays the resulting animation. The animation comprises the original sequence of frames and the intermediate frames that have been generated during the process of FIG. 6. The animation may appear to a user to be smoother than an animation produced from the original sequence of frames.


In the method of FIG. 6, the decomposing of the original volumes into signed distance fields is performed ahead of time and the result cached. Performing the decomposition ahead of time may reduce the processing resources required during animation.



FIG. 7 is a flow chart illustrating in overview a method of providing an animation in accordance with a further embodiment. In the embodiment of FIG. 6, the entire sequence of frames to be animated is already available at the beginning of the animation process. In the method of FIG. 7, the animation is performed on frames that are being received in real time. The algorithm of FIG. 2 is applied in a live mode by performing an animation at one frame behind real-time.


At stage 110, the decomposition circuitry 26 waits for a pair of frames to be received. At stage 112, the decomposition circuitry 26 extracts signed distance fields for each of a plurality of iso-level values for each of the pair of frames. The decomposition circuitry 26 stores the signed distance fields in data store 20 or in any appropriate memory.


In the method of FIG. 7, the decomposition of each pair of images into signed distance fields is performed just in time and the result subset required for a single interpolation at time t is cached.


At stage 114, the interpolation circuitry 28 reconstructs a volume using a real-time parameter based on an estimated time until the next frame appears.


For example, consider a case in which a previous frame was acquired at t=10 s and a new frame came in at t=13 s. If a target frame rate is 30 fps (for example, having hardware that could reconstruct an image in 1/30th of a second), the interpolation circuitry 28 generates 90 frames and displays them at 30 fps, no matter when the next frame is expected. The time estimation may be such as to provide delayed real time playback.
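The arithmetic in this example, as a sketch (the function name and signature are illustrative assumptions):

    def frames_to_generate(prev_arrival_s: float, new_arrival_s: float,
                           target_fps: float = 30.0) -> int:
        """Frames needed to fill the gap between two acquisitions at the
        target display rate: (13 s - 10 s) * 30 fps = 90 frames."""
        return round((new_arrival_s - prev_arrival_s) * target_fps)

    assert frames_to_generate(10.0, 13.0) == 90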


The method of FIG. 7 provides an interpolated frame between the last set of volume frames using the method of FIG. 2 and at a speed based on an expectation of when the next frame will appear.


The method of FIG. 7 may provide an appearance of smooth real time motion which is offset behind the true real time results by an acquisition time step. The method of FIG. 7 may provide a smooth real time display at the price of a slight increase in latency. The increase in latency is the result of rendering being performed at one frame behind real time rather than in real time.


The method of FIG. 7 may be particularly useful in modalities such as ultrasound. In some circumstances, the method of FIG. 7 may involve very heavy processing. The processing may be performed using the GPU.



FIG. 8 is a schematic illustration of a use of segmentation in an embodiment. Segmentation may be used to separate the interpolation behavior of individual anatomy. In some circumstances, anatomy may appear and disappear in different phases of motion. For example, in heart motion, it may be expected that a valve will appear and disappear over the motion cycle.


In other circumstances, disappearance of anatomy may be unwanted. For example, the vessels may not be perfectly visible in one cardiac phase. One reason for poor vessel visibility may be lack of contrast. Another may be the presence of imaging artifacts. For example, the coronary artery may typically be more influenced by motion artifact than the myocardium. This may lead to the coronary artery being absent from the volume data in some phases.


It may be preferable if a phase having poor vessel visibility is not used in interpolation. If the anatomy is not represented in a particular phase, then interpolation using that phase may make the anatomy disappear.


A segmentation in which an anatomy is not correctly represented may be described as a poor segmentation, or as a defect. Any suitable method may be used to determine that a defect is present. In some embodiments, constraints are used to determine whether a defect is present. For example, to determine whether a vessel segmentation is poor, a constraint on vessel volume may be used to determine whether the vessels have an expected volume.
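A sketch of such a volume constraint follows; the expected range, voxel-volume parameter and helper name are placeholder assumptions, not clinically derived values:

    import numpy as np

    def segmentation_defective(mask: np.ndarray, voxel_volume_mm3: float,
                               expected_mm3=(1_000.0, 20_000.0)) -> bool:
        """Flag a segmentation as a defect when its total volume falls
        outside the expected range for the segmented anatomy."""
        volume = float(mask.sum()) * voxel_volume_mm3
        return not (expected_mm3[0] <= volume <= expected_mm3[1])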



FIG. 8 shows three examples of phases: Phase 1, Phase 2 and Phase 3.


In an embodiment, the decomposition circuitry 26 obtains a high iso-level image 120, medium iso-level image 122 and low iso-level image 124 for each phase. The decomposition circuitry 26 uses the high, medium and low iso-levels as starting points for a first heart segmentation 130 (high iso-level), a second heart segmentation 132 (medium iso-level) and a third heart segmentation 134 (low iso-level). The images and segmentations are denoted by 120A, 122A, 124A, 130A, 132A, 134A for Phase 1; 120B, 122B, 124B, 130B, 132B, 134B for Phase 2; and 120C, 122C, 124C, 130C, 132C, 134C for Phase 3.


The decomposition circuitry 26 assesses whether there is a defect in any of the segmentations. In the example shown, it is found that the heart segmentation is good at all iso-levels in Phase 1 and Phase 3 (130A, 132A, 134A, 130C, 132C, 134C). The heart segmentation is also good for the low iso-level 134B of Phase 2. However, the heart segmentation at the high and medium iso-levels 130B, 132B of Phase 2 is poor. It may be considered that a defect has been identified in Phase 2.


In some embodiments, the interpolation circuitry 28 omits the high iso-level and medium iso-level of Phase 2 from an interpolation process, for example a morphing process as described above with reference to FIG. 2. In some circumstances, an interpolation may be performed directly between Phase 1 and Phase 3 without using Phase 2.


The high iso-level and medium iso-level of Phase 2 may be omitted from the displayed images. For example, the interpolation circuitry 28 may use an interpolation between Phase 1 and Phase 3 in place of the high iso-level and medium iso-level of Phase 2. Alternatively, the interpolation circuitry 28 may use high iso-level and medium iso-level of Phase 1 (or of Phase 3) in place of the high iso-level and medium iso-level of Phase 2.


The interpolation circuitry 28 outputs a set of video data comprising a version of the frames in which any identified defects are omitted (for example, substituted with data from another frame or with interpolated data). The rendering circuitry 24 displays an animation that is a rendering of the output video data.


Omitting parts of images with poor visibility may result in a better overall animation. A better interpolation may be obtained if images with poor visibility of certain anatomy are not used in the interpolation process. The interpolation circuitry 28 may exclude frames in which a segmented anatomy is not well represented. For example, the heart may be interpolated across 9 phases while the vessels are interpolated across only 6 phases. All of the phases may be combined into one destination volume.
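A sketch of this per-anatomy phase selection; all names and values are illustrative only:

    def usable_phases(all_phases, defective_phases_by_anatomy):
        """Per-anatomy list of phases to interpolate across, dropping the
        phases in which that anatomy's segmentation was found defective."""
        return {anatomy: [p for p in all_phases if p not in bad]
                for anatomy, bad in defective_phases_by_anatomy.items()}

    # Illustrative: 9 cardiac phases; vessel segmentation defective in 3 of them.
    phases = list(range(1, 10))
    selected = usable_phases(phases, {"heart": set(), "vessels": {4, 6, 8}})
    # selected["heart"] keeps 9 phases; selected["vessels"] keeps 6 phases.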



FIG. 9 is a schematic illustration of a further use of segmentation in an embodiment. In the embodiment of FIG. 9, no interpolation is performed.


The decomposition circuitry 26 obtains heart segmentations for high, medium and low iso-levels for each frame, as shown in FIG. 8. Any suitable segmentation method may be used. In the embodiment of FIG. 9, all of the heart segmentations are found to be good and no defects are identified.


The decomposition circuitry 26 also obtains vessel segmentations 150, 152 for a high iso-level 140 and a medium iso-level 142 respectively, using any appropriate segmentation method.


In the example shown in FIG. 9, the vessel segmentations for Phase 2 are poor. The decomposition circuitry 26 detects that the vessels are not well represented by their segmentations in Phase 2. The decomposition circuitry 26 identifies a defect in Phase 2.


With respect to the vessels, Phase 2 is skipped in the animation of the phases in the video data. The phase in which the vessels are not well represented is therefore skipped for the vessels, while the frame remains active for the rest of the heart. The animation of the vessels goes straight from Phase 1 to Phase 3, while the animation of the rest of the heart includes all of Phases 1, 2 and 3. The decomposition circuitry 26 outputs a set of video data comprising a sequence of frames in which the representation of the vessels in Phase 2 is omitted and is replaced by the better representation of the vessels from Phase 1. The rendering circuitry 24 displays an animation that is a rendering of the output video data.


In general, the output video data may omit frames or parts of frames that have been found to include a defect. The defect may comprise a poor segmentation as described above, for example a poor vessel segmentation. In other embodiments, any suitable defect may be identified and the frame or part of frame containing the defect may be omitted. Entire frames may be omitted if no suitable shapes are detected in them.


By using the method of FIG. 9, decomposition into different iso-levels may be used to improve animation even if interpolation is not used. When visualizing a body part which is easily influenced by motion (for example, the coronary artery), dividing the image data into multiple iso-levels and/or segmentations may allow the use of only reliable data and the exclusion of poorer data.


Appropriate anatomy (for example, the coronary artery) may be visualized in all time phases. The user may find it more comfortable to view an animation in which the appearance of certain anatomy is consistent. The user may find it easier to view an animation in which frames having a poor representation of anatomy are at least partially omitted.


In the embodiments above, the steps of rendering, extracting iso-levels, extracting signed distance fields, morphing signed distance fields, converting the morphed signed distance fields into iso-levels and combining the iso-levels are all described as separate steps. In other embodiments, the functions of two or more of these steps may be combined into a single step. In further embodiments, one or more of the steps described above may be split into multiple steps.


In embodiments above, various data sets (for example, data sets that are representative of images, iso-level images and/or signed distance fields) are stored in data store 20. In other embodiments, some or all the data sets may be stored in any suitable data store. Some or all of the data sets may be cached locally. Some or all of the data sets may be used directly without being stored.


Methods are described above with relation to medical imaging, in particular medical imaging of the heart. In other embodiments, methods described above may be used in relation to medical imaging of any anatomical region of any human or animal subject. References to medical may include veterinary. In further embodiments, methods described above may be applied to any suitable type of imaging, which may not be medical. For example, methods described above may be used to visualize results of fluid simulation in the automobile sector or aerospace sector. Methods described above may be used in volume rendering for oil and gas. Methods described above may be used for volume rendering of sonar.


Certain embodiments provide a medical imaging method in which, given a set (minimum two) of volumes each representing a temporal location, a set of pre-defined iso-levels, and a temporal location t in between the volume frames, an interpolated volume is created by: (1) decomposing each volume into a set of iso-levels and generating a signed distance field representing the shape of each iso-level; (2) for each corresponding iso-level of the frames in the interpolation neighborhood, creating a new signed distance field by interpolating the distance fields, representing a shape blending operation using the temporal location t to calculate the weight of each frame; (3) accumulating the resulting set of morphed distance fields back into a destination volume by selecting the maximum intensity of each morphed iso-level, for each voxel.


Step 1 may be done ahead of time and the result cached. Alternatively, step 1 may be done just in time, with the result subset required for the interpolation at time t cached. The interpolation method used for the signed distance field blend operation may be linear, quadratic, cubic or a similar smooth interpolation method. The iso-levels may differ between the frames to account for inconsistencies in the intensity of the captured tissue. Segmentation and/or clipping may be used to constrain each iso-level. Iso-levels from multiple volumes may be included and reconstructed into a motion interpolated fusion volume. A signed distance field, static or animated, may be included to incorporate external shapes. Iso-levels may be selected manually. Iso-levels may be selected by selecting the most visible parts of the volume, calculated using a visibility histogram based on the view presented to the user. A deep neural network may be used to identify the iso-levels causing the fewest differences to the final image at key frames.


The system may be applied in near-real time by: waiting for volumes to come in from an acquisition process and, when they do, decomposing them and storing the set of signed distance fields associated with each iso-level; and providing a set of interpolated frames in between the last set of volume frames using the method, at a speed based on the expectation of when the next frame will appear. This may give the appearance of smooth real time motion, offset behind the true real time results by an acquisition time step.


Certain embodiments provide an image processing apparatus comprising processing circuitry configured to: acquire first image data corresponding to a first timing and second image data corresponding to a second timing which is different from the first timing; transform the first image data and the second image data into a plurality of first transformed data and a plurality of second transformed data based on a parameter of image data; transform the first transformed data and the second transformed data into a first signed distance field and a second signed distance field; select a combination of the first signed distance field and the second signed distance field based on the parameter; generate morphed data based on the combination of the first signed distance field and the second signed distance field; and generate up-sampled image data between the first image data and the second image data by processing the plurality of the morphed data generated in accordance with the parameter values.


Certain embodiments provide an image processing apparatus comprising processing circuitry configured to: acquire first image data corresponding to a first timing and second image data corresponding to a second timing which is different from the first timing; transform the first image data and the second image data into a plurality of first transformed data and a plurality of second transformed data based on a parameter of image data; specify a defect of image data based on the first transformed data and the second transformed data; and generate video data based on the specifying.


Whilst particular circuitries have been described herein, in alternative embodiments functionality of one or more of these circuitries can be provided by a single processing resource or other component, or functionality provided by a single circuitry can be provided by two or more processing resources or other components in combination. Reference to a single circuitry encompasses multiple components providing the functionality of that circuitry, whether or not such components are remote from one another, and reference to multiple circuitries encompasses a single component providing the functionality of those circuitries.


Whilst certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the invention. Indeed the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the invention. The accompanying claims and their equivalents are intended to cover such forms and modifications as would fall within the scope of the invention.

Claims
  • 1. An image processing apparatus comprising processing circuitry configured to: acquire first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; process the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; process the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; transform each of the transformed first data sets to obtain a respective first distance transform; transform each of the transformed second data sets to obtain a respective second distance transform; select a combination of at least one of the first distance transforms and at least one of the second distance transforms based on the parameter; generate at least one morphed distance transform based on the combination; and process the at least one morphed distance transform to obtain upsampled image data that is representative of the subject at a third time, wherein the third time is between the first time and the second time.
  • 2. An apparatus according to claim 1, wherein each first distance transform comprises a respective first signed distance field, each second distance transform comprises a respective second signed distance field, and the at least one morphed distance transform comprises at least one morphed distance field.
  • 3. An apparatus according to claim 1, wherein the parameter of the image data comprises intensity.
  • 4. An apparatus according to claim 3, wherein processing the first image data set to obtain the plurality of transformed first data sets comprises thresholding the first image data using different values of intensity, and processing the second image data set to obtain the plurality of transformed second data sets comprises thresholding the second image data using different values of intensity.
  • 5. An apparatus according to claim 1, wherein each first distance transform is representative of a shape of a respective iso-level of the first data set, and each second distance transform is representative of a shape of a respective iso-level of the second data set.
  • 6. An apparatus according to claim 1, wherein the combination of at least one of the first distance transforms and at least one of the second distance transforms comprises pairs of first distance transforms and second distance transforms, each pair having a common value for the parameter, and wherein generating at least one morphed distance transform based on the combination comprises interpolating between each of the pairs.
  • 7. An apparatus according to claim 1, wherein the combination of at least one of the first distance transforms and at least one of the second distance transforms comprises pairs of first distance transforms and second distance transforms, wherein at least some of the pairs have a different value for the parameter in respect of the first distance transform than in respect of the second distance transform, and wherein generating at least one morphed distance transform based on the combination comprises interpolating between each of the pairs.
  • 8. An apparatus according to claim 1, wherein the combination of the at least one of the first distance transforms and the at least one of the second distance transforms is weighted in dependence on the difference in time between the third time and the first time, and the difference in time between the third time and the second time.
  • 9. An apparatus according to claim 1, wherein processing the morphed distance transforms to obtain upsampled image data comprises selecting a maximum intensity of each morphed distance transform for each voxel.
  • 10. An apparatus according to claim 1, wherein the processing of the first image data and second image data based on a parameter of the image data is performed in advance and cached.
  • 11. An apparatus according to claim 1, wherein the first image data and second image data each comprise data from a respective plurality of image acquisitions, and wherein the upsampled image data comprises fusion image data.
  • 12. An apparatus according to claim 1, further comprising incorporating into the first distance transforms and second distance transforms a representation of an object that is not part of the subject.
  • 13. An apparatus according to claim 1, wherein the processing of the first image data and second image data based on the parameter comprises selecting a plurality of values for the parameter.
  • 14. An apparatus according to claim 13, wherein the selecting of the plurality of values for the parameter is performed by a trained model.
  • 15. An image processing method comprising: acquiring first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; processing the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; processing the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; transforming each of the transformed first data sets to obtain a respective first distance transform; transforming each of the transformed second data sets to obtain a respective second distance transform; selecting a combination of at least one of the first distance transforms and at least one of the second distance transforms based on the parameter; generating at least one morphed distance transform based on the combination; and processing the at least one morphed distance transform to obtain upsampled image data that is representative of the subject at a third time, wherein the third time is between the first time and the second time.
  • 16. An image processing apparatus comprising processing circuitry configured to: acquire first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; process the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; process the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; identify a defect of the first image data and/or the second image data, wherein the identifying of the defect is based on the first transformed data sets and the second transformed data sets; and generate video data based on the identifying of the defect.
  • 17. An apparatus according to claim 16, wherein the video data comprises a plurality of frames each obtained from respective image data, and the generating of the video data comprises omitting from the video data at least part of a video frame based on the image data in which the defect is identified.
  • 18. An apparatus according to claim 16, wherein the identified defect is a defect in a segmentation of at least one object represented in the first image data and/or second image data.
  • 19. An apparatus according to claim 16, wherein the generating of the video data comprises obtaining at least one upsampled frame of the video data using an upsampling procedure, and wherein at least part of the image data in which the defect is identified is omitted from the upsampling procedure.
  • 20. An image processing method comprising: acquiring first image data that is representative of a subject at a first time and second image data that is representative of the subject at a second, different time; processing the first image data based on a parameter of the image data to obtain a plurality of transformed first data sets, each of the transformed first data sets corresponding to a respective value for the parameter; processing the second image data based on the parameter of the image data to obtain a plurality of transformed second data sets, each of the transformed second data sets corresponding to a respective value for the parameter; identifying a defect of the first image data and/or the second image data, wherein the identifying of the defect is based on the first transformed data sets and the second transformed data sets; and generating video data based on the identifying of the defect.