The present invention relates to anatomical landmark detection in medical image data, and more particularly, to spatially consistent multi-scale deep learning based detection of anatomical landmarks in medical image data.
Fast and robust anatomical object detection is a fundamental task in medical image analysis that supports the entire clinical imaging workflow, from diagnosis and patient stratification to therapy planning, intervention, and follow-up. Automatic detection of an anatomical object is a prerequisite for many medical image analysis tasks, such as segmentation, motion tracking, and disease diagnosis and quantification.
Machine learning based techniques have been developed for anatomical landmark detection in medical images. For example, machine learning techniques for quickly identifying anatomy in medical images include Marginal Space Learning (MSL), Marginal Space Deep Learning (MSDL), Marginal Space Deep Regression (MSDR), and Approximated Marginal Space Deep Learning (AMSD). While machine learning techniques are often applied to address the problem of detecting anatomical structures in medical images, the traditional object search scheme used in such techniques is typically driven by suboptimal and exhaustive strategies. Furthermore, these techniques do not effectively address cases of incomplete data, i.e., scans taken with a partial field-of-view. Addressing these limitations of conventional anatomical landmark detection techniques is important to enable artificial intelligence to directly support and increase the efficiency of the clinical workflow from admission through diagnosis, clinical care, and patient follow-up.
The present disclosure relates to methods and systems for automated computer-based spatially consistent multi-scale detection of anatomical landmarks in medical images. Embodiments of the present invention provide robust and fast multi-scale detection of anatomical landmarks in medical images and are capable of reliable landmark detection in incomplete medical images (i.e., medical images with partial fields-of-view). Embodiments of the present invention enforce spatial coherence of multi-scale detection of a set of anatomical landmarks in a medical image.
In one embodiment of the present invention, a discrete scale-space representation of a medical image of a patient is generated, wherein the discrete scale-space representation of the medical image includes a plurality of scale-levels. A plurality of anatomical landmarks are detected at a coarsest scale-level of the discrete scale-space representation of the medical image using a respective trained search model trained to predict a trajectory from a starting location to a predicted landmark location at the coarsest scale-level for each of the plurality of anatomical landmarks. Spatial coherence of the detected anatomical landmarks is enforced by fitting a learned shape model of the plurality of anatomical landmarks to the detected anatomical landmarks at the coarsest scale-level to robustly determine a set of the anatomical landmarks within a field-of-view of the medical image. The detected landmark location for each of the landmarks in the set of anatomical landmarks is refined at each remaining scale-level of the discrete scale-space representation of the medical image using, for each landmark in the set of anatomical landmarks, a respective trained search model trained to predict a trajectory to a predicted landmark location at each remaining scale-level, wherein the trained search model for each remaining scale-level for each landmark is constrained based on a range surrounding the predicted landmark location for that landmark at a previous scale-level.
These and other advantages of the invention will be apparent to those of ordinary skill in the art by reference to the following detailed description and the accompanying drawings.
The present disclosure relates to methods and systems for automated computer-based spatially consistent multi-scale detection of anatomical landmarks in medical images. Embodiments of the present invention are described herein to give a visual understanding of the anatomical landmark detection method. A digital image is often composed of digital representations of one or more objects (or shapes). The digital representation of an object is often described herein in terms of identifying and manipulating the objects. Such manipulations are virtual manipulations accomplished in the memory or other circuitry/hardware of a computer system. Accordingly, it is to be understood that embodiments of the present invention may be performed within a computer system using data stored within the computer system.
Robust and fast computer-based automated detection of anatomical structures in medical images is an important task for next-generation automated medical support tools. While machine learning techniques are often applied to address this problem, the traditional object search scheme is typically driven by suboptimal and exhaustive strategies. One limitation of traditional machine learning anatomical landmark detection techniques is that they do not effectively address cases of incomplete data, i.e., medical image scans taken with a partial field-of-view. Deep scanning-based methods represent one main category of machine learning based anatomical landmark detection solutions. In deep scanning-based methods, such as Marginal Space Deep Learning (MSDL), the problem of anatomical landmark detection in medical images is typically reformulated as a patch-wise classification between positive and negative hypotheses, sampled as volumetric boxes of image intensities. Alternatively, end-to-end deep learning systems based on fully convolutional architectures approach the problem of anatomical landmark detection in medical images by learning a direct mapping f(I)=M between the original image I and a coded map M highlighting the locations of anatomical landmarks. However, for large collections of thousands of large-range 3D CT scans at high spatial resolution (e.g., 2 mm or less), the training of such deep learning systems becomes infeasible due to excessive memory requirements and high computational complexity. Furthermore, for incomplete data, all of these deep learning based systems share a common limitation in that they rely on suboptimal or inaccurate heuristics, such as probability thresholding, to recognize whether an anatomical landmark is visible in the field-of-view of the 3D scan.
Embodiments of the present invention provide improvements to the technology of computer-based automated anatomical landmark detection in medical images, as compared to traditional machine learning based techniques for anatomical landmark detection. Embodiments of the present invention provide faster and more accurate detection of anatomical landmarks, as compared to existing deep learning based techniques for anatomical landmark detection. Embodiments of the present invention provide increased robustness for landmark detection in cases of incomplete data. As used herein, “incomplete data” refers to a medical image scan with a partial field-of-view that is missing one or more of the landmarks to be detected. Embodiments of the present invention also utilize a multi-scale landmark detection method that reduces memory requirements and computational complexity as compared to existing deep learning based techniques. Embodiments of the present invention address the above described limitations of existing deep learning based anatomical landmark detection techniques by using a scale-space model and robust statistical shape modeling for multi-scale spatially-coherent landmark detection.
In general, the continuous scale-space of a 3D image signal I: ℝ³ → ℝ is defined as: L(x, t) = Σ_ξ T(ξ, t) I(x − ξ), where t ∈ ℝ₊ denotes the continuous scale-level, x ∈ ℝ³, L(x, 0) = I(x), and T defines a one-parameter kernel-family. The main properties of such a scale-space representation are the non-enhancement of local extrema and implicitly the causality of structure across scales. These properties are important for the robustness of a search process, starting from coarse to fine scale. According to an advantageous embodiment of the present invention, a discrete approximation of the continuous scale-space L is used while best preserving these properties. This discrete scale-space is defined as: Ld(t) = Ψρ(σ(t−1)) ∗ Ld(t−1), where Ld(0) = I, t ∈ ℕ₀ denotes the discrete scale-level, σ represents a scale-dependent smoothing function, and Ψρ denotes a signal operator that reduces the spatial resolution with factor ρ using down-sampling.
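For illustration only, the following sketch shows one way such a discrete scale-space (image pyramid) could be built, assuming Gaussian smoothing as the scale-dependent smoothing function, down-sampling by a factor ρ=2, and the SciPy routines named below; these choices and the function names are assumptions for the sketch, not a description of the patented implementation.

```python
# Illustrative sketch: build the discrete scale-space L_d(0..M-1) by repeated
# smoothing and down-sampling. Gaussian smoothing, rho=2, and SciPy are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def discrete_scale_space(image, num_levels=4, rho=2, sigma=1.0):
    """Return [L_d(0), ..., L_d(num_levels-1)], where L_d(0) is the original image."""
    levels = [image.astype(np.float32)]
    for t in range(1, num_levels):
        smoothed = gaussian_filter(levels[-1], sigma=sigma)   # scale-dependent smoothing
        levels.append(zoom(smoothed, 1.0 / rho, order=1))     # reduce resolution by factor rho
    return levels

# Example: a 3D volume at 2 mm isotropic resolution yields levels at roughly 2/4/8/16 mm.
volume = np.random.rand(64, 64, 64).astype(np.float32)
pyramid = discrete_scale_space(volume, num_levels=4, rho=2)
print([lvl.shape for lvl in pyramid])   # (64,64,64), (32,32,32), (16,16,16), (8,8,8)
```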
Assuming without loss of generality a discrete scale-space of M scale-levels and ρ=2, embodiments of the present invention search for anatomical landmarks in a medical image using a navigation model across the M scales, starting from the coarsest scale-level (t=M−1) and ending with the finest scale-level (t=0). According to an advantageous embodiment, for a given anatomical landmark, each scale-level is searched by iteratively approximating an optimal action value function Q* for a current state s using a learned model θ and applying an action a based on the approximated optimal action value function. For this, the optimal action value function Q* is redefined by conditioning the state-representation s and model parameters θ on the scale-space Ld and the current scale t ∈ [0, . . . , M−1]: Q*(s, a|Ld, t) ≈ Q(s, a; θt|Ld, t). This results in M independent navigation sub-models θ=[θ0, θ1, . . . , θM−1], one for each scale-level. In an advantageous embodiment, the respective navigation sub-model for each scale-level is a deep neural network (DNN) trained at that scale-level using deep reinforcement learning (DRL), i.e., the navigation sub-models are trained by optimizing the Bellman criterion on each scale-level t<M. Additional details regarding training a model for landmark detection using DRL are described in U.S. Publication No. 2017/0103532, entitled “Intelligent Medical Image Landmark Detection,” and U.S. Publication No. 2017/0116497, entitled “Intelligent Multi-scale Medical Image Landmark Detection,” the disclosures of which are incorporated herein in their entirety by reference. In order to search for the landmark in a given scale-level t, a state-representation s representing a current location of the landmark search at that scale-level is input to the trained DNN θt for that scale-level, the trained DNN calculates action values (Q-values) for a defined set of actions (e.g., left, right, up, down, front, back), and the action with the highest Q-value is selected and applied to move the current location. These operations are repeated until the landmark search at that scale-level converges (or for a predetermined maximum number of iterations).
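As a minimal sketch of this per-scale navigation loop, the following code drives an agent with a trained Q-network stand-in (here any callable q_fn mapping a state patch to six Q-values); the ROI size, the simple convergence test, and all function names are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

# Unit-voxel moves: left, right, up, down, front, back.
ACTIONS = np.array([[-1, 0, 0], [1, 0, 0], [0, -1, 0],
                    [0, 1, 0], [0, 0, -1], [0, 0, 1]])

def extract_state(volume, pos, half=12):
    """Fixed-size region of interest centered at the agent position (zero-padded at borders)."""
    padded = np.pad(volume, half)
    z, y, x = np.asarray(pos) + half
    return padded[z - half:z + half + 1, y - half:y + half + 1, x - half:x + half + 1]

def navigate_scale(q_fn, volume, start, max_steps=200):
    """Greedy navigation at one scale-level: repeatedly apply the highest-valued action."""
    pos = np.array(start, dtype=int)
    trajectory = [tuple(pos)]
    for _ in range(max_steps):
        q_values = q_fn(extract_state(volume, pos))        # Q(s, a; theta_t) for all actions
        pos = np.clip(pos + ACTIONS[int(np.argmax(q_values))],
                      0, np.array(volume.shape) - 1)       # apply the selected move
        trajectory.append(tuple(pos))
        if trajectory[-1] in trajectory[:-1]:              # revisited position: oscillatory cycle
            break
    return trajectory

# Hypothetical usage with a random stand-in Q-network:
# traj = navigate_scale(lambda s: np.random.rand(6), some_volume, start=(32, 32, 32))
```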
The multi-scale detection workflow for each anatomical landmark is performed as follows: the search starts in the image center at the coarsest scale level M−1. Upon convergence of the search at the coarsest scale-level M−1, the scale-level is changed to M−2 and the search continues from the convergence point determined at M−1. The same process is repeated at the following scales until convergence on the finest scale t=0. The present inventors have empirically observed that optimal trajectories converge on minimal oscillatory cycles. As such, in an advantageous implementation, the convergence point can be defined as the center of gravity of this cycle. The search-model for the coarsest scale-level Q(., .; θM−1|Ld, M−1) is trained for global convergence (i.e., convergence over the entire reduced resolution image at that scale), while the models for each of the subsequent scales t<M−1 are trained in a constrained range around the ground-truth. This range may be robustly estimated from the accuracy upper-bound on the previous scale t+1. Note that the spatial coverage of a fixed-size state s ∈ S increases exponentially with the scale. This multi-scale navigation model allows the system to effectively exploit the image information and increase the robustness of the search.
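The coarse-to-fine workflow described above could be chained as in the following sketch, which reuses the navigate_scale routine sketched earlier and assumes ρ=2 and an eight-step trajectory tail for estimating the oscillatory cycle; these values and names are illustrative assumptions.

```python
import numpy as np

def cycle_center(trajectory, cycle_len=8):
    """Convergence point: center of gravity of the final (oscillatory) part of the trajectory."""
    tail = np.array(trajectory[-cycle_len:])
    return np.round(tail.mean(axis=0)).astype(int)

def detect_landmark(models, pyramid, rho=2):
    """models[t]: trained search model for scale-level t; pyramid[t]: image at scale-level t,
    with pyramid[0] the original resolution. Returns the landmark position at scale t=0."""
    M = len(pyramid)
    pos = np.array(pyramid[M - 1].shape) // 2           # start at the image center, coarsest scale
    for t in range(M - 1, -1, -1):
        trajectory = navigate_scale(models[t], pyramid[t], pos)
        pos = cycle_center(trajectory)                  # convergence point on scale t
        if t > 0:
            pos = pos * rho                             # carry the position to the next finer scale
    return pos
```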
According to an advantageous embodiment of the present invention, the global search model θM−1 (i.e., the search model for the coarsest scale-level) is explicitly trained for missing landmarks in order to further improve the accuracy for such cases. In particular, the global search model θM−1 is trained to reward trajectories that leave the image space through the correct image/volume border when the landmark being searched for is missing in the field of view in the training data. For example, assuming that computed tomography (CT) scans are cut only horizontally, the global search model θM−1 is trained to reward trajectories that leave the image space through the top border or the bottom border depending on whether the missing landmark in the training data is above or below the field of view. In order to perform this training, an annotation is required for each missing landmark in the training data indicating whether the missing landmark is above the field of view or below the field of view.
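A minimal sketch of such a reward term for missing landmarks is shown below; the assumption that axis 0 is the z-axis with index 0 at the top border, and the function name itself, are illustrative.

```python
def missing_landmark_reward(pos, next_pos, missing_side):
    """Reward the agent for moving toward (and eventually exiting through) the border
    beyond which the missing landmark lies. Assumes horizontally cut scans with axis 0
    being z and index 0 at the top border; missing_side is "above" or "below"."""
    dz = float(next_pos[0] - pos[0])          # positive when moving toward the bottom border
    return -dz if missing_side == "above" else dz
```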
Referring to
At step 104, a discrete scale-space representation is generated for each training image. The discrete scale-space representation for a training image I is defined as: Ld(t) = Ψρ(σ(t−1)) ∗ Ld(t−1), where Ld(0) = I, t ∈ ℕ₀ denotes the discrete scale-level, σ represents a scale-dependent smoothing function, and Ψρ denotes a signal operator that reduces the spatial resolution with factor ρ using down-sampling. Accordingly, generating the discrete scale-space representation for a training image I results in an image pyramid of M images Ld(0), Ld(1), . . . , Ld(M−1), where Ld(0) is the original training image I and Ld(1), . . . , Ld(M−1) are reduced resolution images at different spatial resolutions (scale-space levels). In an exemplary implementation ρ=2, but the present invention is not limited thereto. For example, a scale-space of 4 scale-levels (M=4) can be used with isotropic resolutions of 2 mm, 4 mm, 8 mm, and 16 mm defined for the respective scale-levels.
At step 106, for each landmark, a respective search model is trained for each of the scale-levels (t=0, 1, . . . , M−1) in the discrete scale-space. That is, for each of the N anatomical landmarks in the set of anatomical landmarks, M search models are trained, each trained to search for the anatomical landmark in a respective one of the M scale-levels (resolutions). In an advantageous embodiment of the present invention, each of the M search models for a given anatomical landmark is a DNN trained based on the training data at the respective scale-level using DRL. A method for training a DNN-based search model θ for a particular anatomical landmark using DRL is described herein. It is to be understood that, other than where specific differences between training the search models for the different scale-levels are noted, the training method can be similarly applied to train the search model at each of the scale-levels. Additional details regarding training a model for landmark detection using DRL are described in U.S. Publication No. 2017/0103532, entitled “Intelligent Medical Image Landmark Detection,” and U.S. Publication No. 2017/0116497, entitled “Intelligent Multi-scale Medical Image Landmark Detection,” the disclosures of which are incorporated herein in their entirety by reference.
In an advantageous implementation, the trained DNN can be a deep convolutional neural network (CNN). Inspired by the feed-forward type of information processing observable in the early visual cortex, the deep CNN represents a powerful representation learning mechanism with an automated feature design, closely emulating the principles of animal and human receptive fields. The architecture of the deep CNN is comprised of hierarchical layers of translation-invariant convolutional filters based on local spatial correlations observable in images. Denoting the l-th convolutional filter kernel in the layer k by w^(k,l), the representation map generated by this filter can be expressed as: o_{i,j} = σ((w^(k,l) ∗ x)_{i,j} + b^(k,l)), where x denotes the representation map from the previous layer (used as input), (i,j) define the evaluation location of the filter, and b^(k,l) represents the neuron bias. The function σ represents the activation function used to synthesize the input information. In an exemplary implementation, rectified linear unit (ReLU) activations can be used given their excellent performance. In a supervised training setup, i.e., given a set of independent observations as input patches X with corresponding value assignments y, the network response function can be defined as R(·; w, b) and Maximum Likelihood Estimation can be used to estimate the optimal network parameters: ŵ, b̂ = argmin_{w,b} ‖R(X; w, b) − y‖₂². This optimization problem can be solved using a stochastic gradient descent (SGD) approach combined with the backpropagation algorithm to compute the network gradients.
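The representation map above can be illustrated with a small numerical example; the use of NumPy/SciPy cross-correlation (the operation CNN libraries implement as "convolution") and the specific kernel size are assumptions for the sketch.

```python
import numpy as np
from scipy.signal import correlate2d

def conv_feature_map(x, w, b):
    """o_{i,j} = ReLU((w * x)_{i,j} + b): one filter applied to the previous layer's map
    (2D case for brevity)."""
    return np.maximum(correlate2d(x, w, mode="valid") + b, 0.0)

x = np.random.rand(32, 32)          # representation map from the previous layer
w = np.random.randn(5, 5) * 0.1     # translation-invariant filter kernel w^(k,l)
o = conv_feature_map(x, w, b=0.1)   # neuron bias b^(k,l)
print(o.shape)                      # (28, 28)
```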
Reinforcement learning (RL) is a technique aimed at effectively describing learning as an end-to-end cognitive process. A typical RL setting involves an artificial agent that can interact with an uncertain environment, thereby aiming to reach predefined goals. The agent can observe the state of the environment and choose to act on it, similar to a trial-and-error search, maximizing the future reward signal received as a supervised response from the environment. This reward-based decision process is modeled in RL theory as a Markov Decision Process (MDP), M := (S, A, T, R, γ), where S represents a finite set of states over time, A represents a finite set of actions allowing the agent to interact with the environment, T: S×A×S → [0,1] is a stochastic transition function, where T_{s,a}^{s′} describes the probability of arriving in state s′ after performing action a in state s, R: S×A×S → ℝ is a scalar reward function, where R_{s,a}^{s′} denotes the expected reward after a state transition, and γ is the discount factor controlling future versus immediate rewards.
Formally, the future discounted reward of an artificial agent at time t̂ can be written as R_{t̂} = Σ_{t=t̂}^{T} γ^{t−t̂} r_t, with T marking the end of a learning episode and r_t defining the immediate reward the agent receives at time t. Especially in model-free reinforcement learning, the target is to find the optimal so-called action-value function, denoting the maximum expected future discounted reward when starting in state s and performing action a: Q*(s, a) = max_π E[R_t | s_t=s, a_t=a, π], where π is an action policy, in other words a probability distribution over actions in each given state. Once the optimal action-value function is estimated, the optimal action policy, determining the behavior of the artificial agent, can be directly computed in each state: ∀s ∈ S: π*(s) = argmax_{a∈A} Q*(s, a). One important relation satisfied by the optimal action-value function Q* is the Bellman optimality equation, which is defined as:
Q*(s, a) = Σ_{s′} T_{s,a}^{s′} (R_{s,a}^{s′} + γ max_{a′} Q*(s′, a′)) = E_{s′} [r + γ max_{a′} Q*(s′, a′)]   (1)
where s′ defines a possible state visited after s, a′ the corresponding action, and r = R_{s,a}^{s′} represents a compact notation for the current, immediate reward. Viewed as an operator τ, the Bellman equation defines a contraction mapping. Strong theoretical results show that by applying Q_{i+1} = τ(Q_i), ∀(s, a), the function Q_i converges to Q* as i → ∞. This model-based policy iteration approach is, however, not always feasible in practice. An alternative is the use of model-free temporal difference methods, such as Q-learning, which exploit correlations of consecutive states. The use of parametric functions to approximate the Q-function provides a step further toward higher computational efficiency. Considering the expected non-linear structure of the Q-function, neural networks represent a potentially powerful solution for policy approximation. According to an advantageous embodiment of the present invention, deep neural networks are leveraged to approximate the Q-function (at each scale-level) in order to provide automated machine-driven intelligence for landmark detection in medical images.
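As a worked example of the contraction Q_{i+1} = τ(Q_i), the following sketch runs the Bellman backup on a tiny two-state, two-action MDP with made-up transition probabilities and rewards until the Q-values stop changing.

```python
import numpy as np

# T[s, a, s'] and R[s, a, s'] for a toy 2-state / 2-action MDP (illustrative values only).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.0, 1.0], [0.5, 0.5]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 0.5], [1.0, 0.0]]])
gamma = 0.9

Q = np.zeros((2, 2))
for i in range(1000):
    # Bellman optimality backup: Q(s,a) <- sum_{s'} T (R + gamma * max_{a'} Q(s',a'))
    Q_next = (T * (R + gamma * Q.max(axis=1)[None, None, :])).sum(axis=2)
    if np.abs(Q_next - Q).max() < 1e-9:     # the contraction has converged
        Q = Q_next
        break
    Q = Q_next

print(Q)                    # approximation of Q*
print(Q.argmax(axis=1))     # greedy policy pi*(s) = argmax_a Q*(s, a)
```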
According to an advantageous embodiment, the landmark detection problem is formulated as a deep-learning driven behavior policy encoding automatic, intelligent paths in parametric space toward the correct solution. In particular, for the landmark detection problem, the optimal search policy represents a trajectory in image space (at the respective scale-level) converging to the landmark location p ∈ ℝ^d (d is the image dimensionality). The reward-based decision process for determining the trajectory to the landmark location is modeled with an MDP M. While the system dynamics T are implicitly modeled through the deep-learning-based policy approximation, the state space S, the action space A, and the reward/feedback scheme are explicitly designed for the landmark detection problem. The states describe the surrounding environment. According to an advantageous implementation, the state for the landmark detection search model is defined as a region-of-interest in the image (at the given scale-level) with its center representing the current position of the agent (i.e., the current estimate for the landmark location). The actions denote the moves of the artificial agent in the parametric space. According to an advantageous implementation, a discrete action scheme can be selected allowing the agent to move a predetermined distance (i.e., one pixel/voxel) in all directions: up, down, left, right, front, back, corresponding to a shift of the image patch. This allows the agent to explore the entire image space (for the global search model at scale-level M−1) or the entire search space of the constrained search regions (for the search models at the remaining scales). The rewards encode the supervised feedback received by the agent. As opposed to typical reward choices for RL problems, embodiments of the present invention more closely follow a standard human learning environment, where rewards are scaled according to the quality of a specific move. In an advantageous implementation, the reward is selected to be δd, the supervised relative distance change to the ground truth landmark location after executing an action.
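The following is a minimal sketch of an environment wiring this state/action/reward design together for training; the class name, ROI size, and termination threshold are assumptions, and the reward is the δd distance change described above.

```python
import numpy as np

ACTIONS = np.array([[-1, 0, 0], [1, 0, 0], [0, -1, 0],
                    [0, 1, 0], [0, 0, -1], [0, 0, 1]])

class LandmarkEnv:
    """Illustrative landmark-search MDP: state = ROI around the agent position,
    action = unit-voxel move, reward = delta-d distance change to the ground truth."""

    def __init__(self, volume, ground_truth, half=12):
        self.vol, self.gt, self.half = volume, np.asarray(ground_truth), half
        self.pos = np.array(volume.shape) // 2

    def reset(self):
        self.pos = np.array(self.vol.shape) // 2
        return self.state()

    def state(self):
        h = self.half
        padded = np.pad(self.vol, h)
        z, y, x = self.pos + h
        return padded[z - h:z + h + 1, y - h:y + h + 1, x - h:x + h + 1]

    def step(self, action):
        d_before = np.linalg.norm(self.pos - self.gt)
        self.pos = np.clip(self.pos + ACTIONS[action], 0, np.array(self.vol.shape) - 1)
        d_after = np.linalg.norm(self.pos - self.gt)
        reward = d_before - d_after            # delta-d: positive when moving closer
        done = d_after < 1.0                   # the agent has reached the landmark
        return self.state(), reward, done
```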
Deep reinforcement learning is used to train the intelligent artificial agent. Given the model definition, the goal of the agent is to select actions by interacting with the environment in order to maximize the cumulative future reward. The optimal behavior is defined by the optimal policy π* and implicitly the optimal action-value function Q*. In an advantageous implementation, a model-free, temporal difference approach using a deep convolutional neural network (CNN) can be used to approximate the optimal action-value function Q*. Defining the parameters of a deep CNN as θ, this architecture can be used as a generic, non-linear function approximator Q(s, a; θ) ≈ Q*(s, a), referred to herein as a deep Q network (DQN). A deep Q network can be trained in this context using an iterative approach to minimize a mean squared error based on the Bellman optimality criterion (see Equation 1). At any learning iteration i, the optimal expected target values can be approximated using a set of reference parameters θ_i^ref := θ_j from a previous iteration j < i: y = r + γ max_{a′} Q(s′, a′; θ_i^ref). As such, a sequence of well-defined optimization problems driving the evolution of the network parameters is obtained. The error function at each step i is defined as:
θ̂_i = argmin_{θ_i} E_{s,a,r,s′} [(y − Q(s, a; θ_i))²]   (2)
Using a different network to compute the reference values for training can bring robustness to the algorithm. In such a setup, changes to the current parameters θ_i, and implicitly to the current approximator Q(·; θ_i), cannot directly impact the reference output y, introducing an update-delay and thereby reducing the probability of divergence and oscillation in suboptimal regions of the optimization space. To ensure the robustness of the parameter updates and train more efficiently, experience replay can be used. In experience replay, the agent stores its most recent experiences in a limited experience memory and, at each training step, samples random mini-batches of transitions from this memory to perform the parameter updates, which further decorrelates consecutive updates.
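A compact sketch of this training scheme (reference/target network plus experience replay) is shown below; q_net, target_net, and optimize are stand-ins for a small 3D CNN and one gradient step on the squared Bellman error, the environment is assumed to expose reset()/step() as in the LandmarkEnv sketched earlier, and all hyper-parameter values are illustrative.

```python
import random
from collections import deque
import numpy as np

def train_dqn(env, q_net, target_net, optimize, num_episodes=1000, max_episode_len=200,
              batch_size=32, gamma=0.9, eps=0.1, sync_every=100, memory_size=10000):
    """Sketch of DQN training with experience replay and a delayed reference network."""
    memory = deque(maxlen=memory_size)              # limited experience memory
    steps = 0
    for _ in range(num_episodes):
        s = env.reset()
        for _ in range(max_episode_len):
            # epsilon-greedy behavior policy
            a = random.randrange(6) if random.random() < eps else int(np.argmax(q_net(s)))
            s2, r, done = env.step(a)
            memory.append((s, a, r, s2, done))
            s = s2
            if len(memory) >= batch_size:
                batch = random.sample(memory, batch_size)        # decorrelated mini-batch
                targets = [r_ + (0.0 if d_ else gamma * float(np.max(target_net(s2_))))
                           for (_, _, r_, s2_, d_) in batch]     # y = r + gamma * max_a' Q_ref
                optimize(q_net, batch, targets)                  # minimize (y - Q(s, a; theta))^2
            steps += 1
            if steps % sync_every == 0:
                target_net = q_net      # theta_ref := theta (in practice, copy the weights)
            if done:
                break
    return q_net
```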
For each anatomical landmark to be detected, the above described training algorithm is used to train a respective search model θt for each of the scale-levels t=0, 1, . . . , M−1. The search-model for the coarsest scale-level Q(., .; θM−1|Ld, M−1) is trained for global convergence (i.e., convergence over the entire reduced resolution image at that scale), while the models for each of the subsequent scales t<M−1 are trained in a constrained range around the ground-truth. This range may be robustly estimated from the accuracy upper-bound on the previous scale t+1. According to an advantageous embodiment of the present invention, the global search model θM−1 (i.e., the search model for the coarsest scale-level) is explicitly trained for missing landmarks in order to further improve the accuracy for such cases. In particular, the global search model θM−1 is trained to reward trajectories that leave the image space through the correct image/volume border when the landmark being searched for is missing in the field of view in the training data. Once the search models for each landmark are trained, they can be stored, for example on a memory or storage of a computer system or on a remote cloud-based storage device, and used to perform automated computer-based landmark detection in a newly received medical image.
At step 304, a discrete scale-space representation is generated for the medical image. The discrete scale-space representation for the received medical image I is defined as: Ld(t) = Ψρ(σ(t−1)) ∗ Ld(t−1), where Ld(0) = I, t ∈ ℕ₀ denotes the discrete scale-level, σ represents a scale-dependent smoothing function, and Ψρ denotes a signal operator that reduces the spatial resolution with factor ρ using down-sampling. Accordingly, generating the discrete scale-space representation for the medical image I results in an image pyramid of M images Ld(0), Ld(1), . . . , Ld(M−1), where Ld(0) is the original resolution medical image I and Ld(1), . . . , Ld(M−1) are reduced resolution images at different spatial resolutions (scale-space levels) generated by down-sampling the medical image. In an exemplary implementation ρ=2, but the present invention is not limited thereto. For example, a scale-space of 4 scale-levels (M=4) can be used with isotropic resolutions of 2 mm, 4 mm, 8 mm, and 16 mm defined for the respective scale-levels.
At step 306, a set of anatomical landmarks are detected at the coarsest scale-level of the scale-space representation of the medical image using a respective trained search model for each landmark. For each anatomical landmark, a plurality of search models, each corresponding to a respective scale-level (i.e., spatial resolution) of the discrete scale-space representation, are trained in an offline training stage using the method of
Returning to
Given an unseen configuration of detected landmark points at scale M−1 as P̃ = [p̃0, p̃1, . . . , p̃N−1], the set of detected landmark points P̃ can be approximated with a translated and anisotropically-scaled version of the mean model using linear least squares. However, in the case of incomplete data, the cardinality |P̃| ≤ N. In addition, outliers can corrupt the data. According to an advantageous implementation, an M-estimator sample consensus can be used to enable the robust fitting of the shape model to the set of landmarks detected at the coarsest scale-level. Based on random 3-point samples from the set of all triples (i.e., the set of all possible combinations of three of the landmark points), the mean-model fit ω̂ = [t, s] can be obtained, where t and s are the translation and scaling parameters to fit the mean shape model to the detected landmarks. The target is to optimize the cost function based on the re-descending M-estimator and implicitly maximize the cardinality of the consensus set. In an advantageous implementation, the following cost function can be used:
The target is to minimize this cost function (based on the re-descending M-estimator), which results in maximizing the cardinality of the consensus set S. Z_i is a normalization factor for the distance-based sample score which defines an ellipsoid around the mean landmark location. If a detected landmark is within the ellipsoid, it is considered an inlier and part of the consensus set (with cost < 1); if outside, it is an outlier (with fixed cost 1). Standard random sampling is used to select the minimum set of 3 detected landmark points required to fit the model with linear least squares. Given a fitted model, the cost function is evaluated with the aim to maximize the size of the consensus set. This results in a robust set of landmarks P̂ that are present in the field-of-view of the medical image, with missing landmarks eliminated from the initial set of landmarks to be detected, and spatially coherent locations of the set of landmarks P̂ in the coarsest scale-level that are used to constrain the search for the set of landmarks P̂ at the next scale-level. Enforcing spatial coherency by fitting the learned shape model not only corrects for diverging trajectories by re-initializing the search, but also significantly reduces the false-negative rate by correcting for border cases, in which landmarks very close to the border of the image (e.g., <2 cm) are falsely labeled as missing by the search model at the coarsest scale M−1.
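A minimal sketch of this robust fitting step is shown below, assuming the mean shape model and detections are given as 3D coordinates, the per-landmark normalization factors Z_i define the ellipsoids, and the fit is an anisotropic scale plus translation solved per axis with linear least squares; all names and the number of random samples are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def fit_translation_scale(model_pts, det_pts):
    """Per-axis linear least squares for det = s * model + t (anisotropic scale + translation)."""
    s, t = np.empty(3), np.empty(3)
    for k in range(3):
        A = np.stack([model_pts[:, k], np.ones(len(model_pts))], axis=1)
        s[k], t[k] = np.linalg.lstsq(A, det_pts[:, k], rcond=None)[0]
    return s, t

def msac_fit(mean_model, detections, normalizers, n_samples=200, seed=0):
    """M-estimator sample consensus sketch: random 3-point samples fit the mean model;
    detections with normalized distance < 1 are inliers, others contribute a fixed cost of 1."""
    rng = np.random.default_rng(seed)
    valid = [i for i, p in enumerate(detections) if p is not None]   # landmarks found at scale M-1
    if len(valid) < 3:
        return None, valid
    triples = list(combinations(valid, 3))
    best_cost, best_inliers, best_fit = np.inf, [], None
    for idx in rng.choice(len(triples), size=min(n_samples, len(triples)), replace=False):
        sample = list(triples[idx])
        s, t = fit_translation_scale(mean_model[sample],
                                     np.array([detections[i] for i in sample]))
        cost, inliers = 0.0, []
        for i in valid:
            d = np.linalg.norm(detections[i] - (s * mean_model[i] + t)) / normalizers[i]
            cost += min(d * d, 1.0)                 # re-descending M-estimator cost
            if d < 1.0:
                inliers.append(i)
        if cost < best_cost:
            best_cost, best_inliers, best_fit = cost, inliers, (s, t)
    return best_fit, best_inliers                   # fitted (s, t) and the robust landmark set
```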
Referring to
Returning to
At step 312, it is determined if the landmark detection at the final scale-level (t=0) of the scale-space representation of the medical image has been completed. If the landmark detection at the final scale-level (t=0) has not yet been completed, the method returns to step 310, moves to the next scale-level, and detects anatomical landmarks at the next scale-level using search models for the anatomical landmarks trained for the next scale-level. Accordingly, the method sequentially performs landmark detection at each scale-level going from coarse to fine resolutions until the landmark detection at the final scale-level (t=0) corresponding to the original resolution medical image is performed. When the landmark detection at the final scale-level (t=0) has been completed, the method proceeds to steps 314 and 316. Referring to
Returning to
At step 316, a scan range of the medical image can be automatically determined based on the detected anatomical landmarks. The robust fitting of the shape-model also enables the estimation of the body-region captured in the medical image scan. The learned shape model of the spatial distribution of the set of landmarks can be used to model a continuous range along a normalized z-axis (i.e., along the length of the human body), to ensure consistency among different patients. For a set of defined landmarks P in a normalized shape-space, the point p_min^z determines the 0% point, while the point p_max^z determines the 100% point. For a given set of landmarks to be detected P̃, the fitted robust subset of landmarks (defined in step 308) is represented by P̂ ⊆ P̃. Using the definition of the range based on the shape-space of the landmark points, the span of the point-set P̃ can be determined between 0%-100% in the normalized shape-space. This also allows the linear extrapolation of the body-range outside the z-span of the point set P̃ in the medical image. That is, the body range between the detected locations of p_min^z and p_max^z in the medical image is interpolated between 0%-100%. The body range above p_max^z and below p_min^z in the medical image is linearly extrapolated above 100% and below 0%, respectively. When p_min^z or p_max^z is missing from the field-of-view of the medical image, the interpolation is performed to the bottom or top border of the medical image based on the locations of the landmarks in P̂.
In an exemplary implementation, for the set of landmarks including bronchial bifurcation, left subclavian artery bifurcation, left common carotid artery bifurcation, brachiocephalic artery bifurcation, left kidney, right kidney, left hip bone corner, and right hip bone corner, the continuous body range model can be defined based on the set of landmarks in the training data with the left hip bone (LHB) corner at 0% and the left common carotid artery (LCCA) bifurcation at 100%. The levels of the remaining landmarks are determined in the normalized shape-space using linear interpolation. When applied to a newly received medical image based on a set of detected landmarks, each detected landmark is assigned the corresponding body range level as in the learned body range model, interpolation is performed between the landmarks, and extrapolation is performed above LCCA bifurcation and/or below the LHB corner. In the example of
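For illustration, the following sketch maps a z-coordinate to the normalized body-range scale by interpolating between detected landmarks and linearly extrapolating beyond the extreme ones; the coordinate values in the example are made up.

```python
import numpy as np

def body_range_level(z, landmark_z, landmark_levels):
    """Return the body-range level (in %) at z-coordinate z, given detected landmark
    z-coordinates and their assigned levels (e.g., LHB corner = 0, LCCA bifurcation = 100)."""
    order = np.argsort(landmark_z)
    zs = np.asarray(landmark_z, float)[order]
    levels = np.asarray(landmark_levels, float)[order]
    if z < zs[0]:                                   # extrapolate below the lowest landmark
        slope = (levels[1] - levels[0]) / (zs[1] - zs[0])
        return levels[0] + slope * (z - zs[0])
    if z > zs[-1]:                                  # extrapolate above the highest landmark
        slope = (levels[-1] - levels[-2]) / (zs[-1] - zs[-2])
        return levels[-1] + slope * (z - zs[-1])
    return float(np.interp(z, zs, levels))          # interpolate between landmarks

# Hypothetical example: LHB corner at z=50 (0%), kidneys near z=160 (~30%), LCCA at z=400 (100%).
zs, levels = [50.0, 160.0, 400.0], [0.0, 30.0, 100.0]
print(body_range_level(20.0, zs, levels))    # below the LHB corner -> negative percentage
print(body_range_level(450.0, zs, levels))   # above the LCCA bifurcation -> above 100%
```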
Returning to
In an exemplary implementation of the methods of
Given the trained multi-scale models for each landmark Θ0, Θ1, . . . , ΘN−1 (N=8 in the exemplary implementation), the search starts on the lowest (coarsest) scale in the center of the scan. Let P̃ be the output of the navigation sub-models (search models) on the coarsest scale. Robust shape-model fitting was performed on P̃ to eliminate outliers and correct for misaligned landmarks, resulting in a robust set of landmarks P̂. This reduced the false positive (FP) and false negative (FN) rates from around 2.5% to under 0.5%. Applying the training range r to constrain the navigation of the subsequent scale-levels [M−2, . . . , 0], it was empirically observed that the shape-constraint was preserved and both the FP and FN rates were reduced to zero.
The present inventors compared the method described herein to a previous landmark detection technique of Marginal Space Deep Learning (MSDL). MSDL uses a cascade of sparse deep neural networks to scan the complete image space. Missing structures are detected in MSDL using a fixed cross-validated threshold on the hypothesis-probability. The operating point was selected to maintain a FP-rate of less than 1.5%.
For the method of
The above-described methods for training an intelligent multi-scale navigation model for anatomical landmark detection and automated computer-based multi-scale anatomical landmark detection in medical images can be implemented on a computer using well-known computer processors, memory units, storage devices, computer software, and other components. A high-level block diagram of such a computer is illustrated in
One skilled in the art will recognize that an implementation of an actual computer could contain other components as well, and that
The foregoing Detailed Description is to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined from the Detailed Description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Those skilled in the art could implement various other feature combinations without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application No. 62/466,036, filed Mar. 2, 2017, the disclosure of which is herein incorporated by reference.