The subject matter disclosed herein relates generally to nuclear medicine (NM) imaging systems, and more particularly to methods and systems for estimating the detection, uptake ratio, size, and depth of a lesion imaged with a dual-head NM system.
Mammography imaging is commonly used for the detection of breast cancer. Specifically, mammography imaging is used to detect lesions within the breast. Typically, the lesion is detected using three-dimensional imaging techniques. As such, a location and depth of the lesion can be determined from the image. The depth of the lesion aids, for example, in guiding a biopsy needle during extraction of a lesion sample for pathology.
However, some women cannot be effectively tested because of dense breasts and/or implants. Accordingly, these women may be tested using nuclear single photon imaging. Such imaging provides only two-dimensional images of the lesion, with no depth information. When the depth of the lesion is unknown, guiding a biopsy needle is difficult and the chance of missing the lesion with the needle is high. As a result, a large number of samples may have to be taken, causing pain and discomfort to the patient.
In one embodiment, a method of determining a location of a lesion within a breast imaged between a first detector and a second detector is provided. The method includes acquiring a first image data set of the lesion with the first detector and acquiring a second image data set of the lesion with the second detector. The method also includes using the first image data set and the second image data set to calculate location information for the lesion and determining a location of the lesion within the breast based on the calculated location information.
In another embodiment, a system for determining a location of a lesion within a breast is provided. The system includes a first detector and a second detector. The breast is positioned between the first detector and the second detector. The first detector acquires a first image data set of the lesion and the second detector acquires a second image data set of the lesion. The system also includes a location determining program that uses the first image data set and the second image data set to calculate location information for the lesion and determine a location of the lesion within the breast based on the calculated location information.
In yet another embodiment, a method of determining a location of a lesion within a breast is provided. The method includes acquiring image data of the lesion and calculating a plurality of location information for the lesion using the acquired image data. The method also includes weighting the plurality of location information to determine the location of the lesion within the breast.
The foregoing summary, as well as the following detailed description of certain embodiments will be better understood when read in conjunction with the appended drawings. To the extent that the figures illustrate diagrams of the functional blocks of various embodiments, the functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general purpose signal processor or random access memory, hard disk, or the like) or multiple pieces of hardware. Similarly, the programs may be stand alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. It should be understood that the various embodiments are not limited to the arrangements and instrumentality shown in the drawings.
As used herein, an element or step recited in the singular and preceded by the word “a” or “an” should be understood as not excluding plural of said elements or steps, unless such exclusion is explicitly stated. Furthermore, references to “one embodiment” are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Moreover, unless explicitly stated to the contrary, embodiments “comprising” or “having” an element or a plurality of elements having a particular property may include additional such elements not having that property.
Also as used herein, the phrase “reconstructing an image” is not intended to exclude embodiments in which data representing an image is generated, but a viewable image is not. Therefore, as used herein the term “image” broadly refers to both viewable images and data representing a viewable image. However, many embodiments generate, or are configured to generate, at least one viewable image.
During operation, the breast 52 is positioned between the detectors 54 and 56 and at least one detector is translated to lightly compress and/or maintain the position of the breast 52 between the detectors 54 and 56. It should be noted that the compression of the breast 52, shown in
Each of the detectors 54 and 56 has a radiation detection face (not shown) that is directed towards a structure of interest, for example lesion 60, within the breast 52. Radiation detection faces are covered by a collimator 58, as described above. Although illustrated as a parallel hole collimator 58, different types of collimators as known in the art may be used, such as pinhole, fan-beam, cone-beam, and diverging type collimators. An actual field of view (FOV) of each of the detectors 54 and 56 may be directly proportional to the size and shape of the respective imaging detector, or may be changed using a collimator.
A motion controller unit 120 may control the movement and positioning of the gantry 102 and/or the detectors 54 and 56 with respect to each other to position the breast 52 within the FOVs of the imaging detectors 54 and 56 prior to acquiring an image of the breast 52. The controller unit 120 may have a detector controller 122 and a gantry motor controller 124 that may be automatically commanded by a processing unit 130, manually controlled by an operator, or a combination thereof. Optionally, the gantry motor controller 124 and the detector controller 122 may move the detectors 54 and 56 with respect to the breast 52 individually, in segments, or simultaneously in a fixed relationship to one another. Optionally, one or more collimators may be moved relative to the detectors 54 and 56. Preferably, the distance between the detectors 54 and 56 is registered by the controller 120 and is used by the processing unit 130 during data processing. In some embodiments, motion is achieved manually and the controller 120 is replaced with scales or, preferably, encoders for measuring at least the distance between the detectors and, optionally, their orientation and/or the compression force exerted by at least one detector on the breast.
The detectors 54 and 56 and gantry 102 remain stationary after being initially positioned, and imaging data is acquired, as discussed below. The imaging data may be combined and reconstructed into a composite image comprising two-dimensional (2D) images and depth information.
A Data Acquisition System (DAS) 126 receives analog and/or digital electrical signal data produced by the detectors 54 and 56 and decodes the data for subsequent processing in processing unit 130. A data storage device 132 may be provided to store data from the DAS 126 or reconstructed image data. An input device 134 also may be provided to receive user inputs and a display 136 may be provided to display reconstructed images.
The NM imaging system 100 also includes a calibration processor 138 that uses acquired image data to calibrate either of the detectors 54 and 56. For example, at least one of energy calibration and sensitivity calibration may be performed, as well as bad pixel marking or interpolation on a pixel by pixel basis for the imaging detectors 54 and 56. Although
In the exemplary embodiment, system 100 also includes a location module 140 configured to perform the methods described herein, for example, to determine the depth of a lesion in a breast. Although
At 204, a background is removed from each data set using the methods described below. In an analysis stage 203, the data set of the first detector 54 is then compared to the data set of the second detector 56 using at least one of the methods 206, 208, 210, and 212. Specifically, the method of 206 compares the total number of counts within the lesion boundaries of each data set acquired by detectors 54 and 56 and performs an attenuation-based calculation to estimate the depth of the lesion. The method of 208 compares the profiles of the image of the lesion on each detector based on each data set acquired by detectors 54 and 56; depth estimation is performed by comparing the width of the lesion profile with the known variation of collimator resolution with distance. The method of 210 uses linear iteration on each data set acquired by detectors 54 and 56 to estimate the shape and depth of the lesion. The method of 212 uses collimator kernel reconstruction of the data sets acquired by detectors 54 and 56 to estimate the shape and depth of the lesion. Each of these methods is described in more detail below.
After the data sets are compared, a position of the lesion 60 within the breast 52 is determined from at least one of the methods 206, 208, 210, and 212. At 218, each of these positions is weighted to determine a weighted position of the lesion 60 in the breast. The weighting method of 218 is described in more detail below. Depth estimations are computed at 203 and analyzed to produce the best estimation of lesion depth, for example, by weighted averaging. In an optional post-analysis step 219, the best estimation of depth is used for at least one of: correcting a shape of the lesion by correcting for the collimator spread due to depth; calculating a lesion volume based on the corrected shape of the lesion; calculating a corrected lesion activity based on the known attenuation and the estimated depth; calculating a lesion activity concentration based on the corrected lesion activity and the lesion volume; and calculating a lesion uptake-to-background isotope concentration ratio based on the lesion activity concentration and the background activity calculated at 204. At 222, the data is presented to a user.
Referring to the data acquisition step at 202 of
A lesion boundary 601a and 601b may be drawn (as shown at step 228 of
In method 234 of
Referring back to the method 200 of
Referring to the count comparison method performed at 206 of
Es = (a− − a+)/(a− + a+)
wherein a+ is the number of counts in detector 54, approximately given by:
a+ = A·e^(−μ(Z+z))
and a− is the number of counts in detector 56, approximately given by:
a− = A·e^(−μ(Z−z))
where A is the number of counts that would have been counted in absence of attenuation; μ is the known attenuation coefficient of breast tissue for the photon energy used; Z is ½ the breast thickness (known from the distance between collimators); and z is the average distance of the lesion from the center plane between the two collimators. Accordingly, z is the location of the center of the lesion.
Under these assumptions, the location of the lesion (Z−z from the first collimator and Z+z from the second collimator) is given by finding z according to the equation:
z=Es/μ
For a thick breast, the location z may be estimated by solving for z in the following equation:
Es = μz·[1 + (μz/6)²]/[1 + (μz/2)²]
At 261 of
A = (a+ + a−)/(e^(−μ(Z+z)) + e^(−μ(Z−z)))
given the known Z and the calculated z.
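By way of illustration only, the count-comparison calculation described above may be sketched as follows (shown in Python with NumPy and SciPy; the attenuation coefficient value, the function name, and the widened root-bracketing interval are illustrative assumptions rather than part of the described embodiment):

import numpy as np
from scipy.optimize import brentq

def depth_from_counts(a_plus, a_minus, Z, mu=0.15, thick_breast=True):
    """Estimate the lesion offset z from the mid-plane and the unattenuated
    counts A, from the background-subtracted counts a+ (detector 54) and
    a- (detector 56).  mu = 0.15 1/cm is an illustrative value for the
    breast-tissue attenuation coefficient; Z is half the breast thickness."""
    Es = (a_minus - a_plus) / (a_minus + a_plus)
    if not thick_breast:
        # Thin-breast approximation: z = Es / mu
        z = Es / mu
    else:
        # Thick-breast relation Es = mu*z*[1+(mu*z/6)^2]/[1+(mu*z/2)^2],
        # solved numerically; the bracket is widened slightly to tolerate noise.
        f = lambda z: mu * z * (1 + (mu * z / 6) ** 2) / (1 + (mu * z / 2) ** 2) - Es
        z = brentq(f, -1.5 * Z, 1.5 * Z)
    # Scale back to the count estimate A in the absence of attenuation.
    A = (a_plus + a_minus) / (np.exp(-mu * (Z + z)) + np.exp(-mu * (Z - z)))
    return z, A

For example, with Z = 3 cm and background-subtracted counts of 900 and 1100 within the lesion boundary, the routine returns z of roughly 0.7 cm from the mid-plane, toward the detector that recorded the larger count.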
Optionally, at 261, the images of the two detectors are combined into one two-dimensional image. With A and B as the two images, and C as the combined image, the combination operation may optionally be chosen from:
Cnt = Cntbot + Cnttop, wherein
Cnttop = B(1 + ax + by) + 2·Ctop·[rtop² − (x − x0)² − (y − y0)²]^(1/2)
Cntbot = B(1 + ax + by) + 2·Cbot·[rbot² − (x − x0)² − (y − y0)²]^(1/2)
wherein (x0, y0) is the center of the lesion, rtop and rbot are the lesion radii seen by the top and bottom detectors, B(1 + ax + by) is a planar background term, and Ctop and Cbot are the lesion activities seen by the top and bottom detectors.
In another embodiment, a chi-square statistic is computed using:
χ² = Σ(x,y)(image − function)²/NDF, wherein NDF is the number of degrees of freedom, determined by the number of pixels that have data, and function = Cnt.
Accordingly, the output values of the fit parameters are determined as a result of the fitting process that minimizes χ² at step 266. The fitting variables are:
Ctop, Cbot are lesion activities as seen in the top and bottom detectors 54 and 56.
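By way of a non-limiting sketch, the combined-image model and the χ² minimization of step 266 might be set up as follows; the full parameter list (background level B, gradient terms a and b, lesion center (x0, y0), radii rtop and rbot, activities Ctop and Cbot) and the clipping of the sphere term to zero outside the lesion disk are assumptions made for the sketch rather than details recited above:

import numpy as np
from scipy.optimize import least_squares

def combined_model(params, x, y):
    """Cnt = Cnttop + Cntbot with a planar background and projected-sphere
    lesion terms; x and y are pixel-coordinate arrays the same shape as the image."""
    B, a, b, x0, y0, C_top, C_bot, r_top, r_bot = params
    background = B * (1 + a * x + b * y)
    rho2 = (x - x0) ** 2 + (y - y0) ** 2
    top = 2 * C_top * np.sqrt(np.clip(r_top ** 2 - rho2, 0.0, None))
    bot = 2 * C_bot * np.sqrt(np.clip(r_bot ** 2 - rho2, 0.0, None))
    # Cnttop and Cntbot share the background term, so it appears twice in the sum.
    return 2 * background + top + bot

def fit_combined_image(image, x, y, p0):
    """Minimize chi^2 = sum((image - Cnt)^2) / NDF over pixels that have data."""
    mask = np.isfinite(image)
    residuals = lambda p: image[mask] - combined_model(p, x[mask], y[mask])
    result = least_squares(residuals, p0)
    ndf = mask.sum()  # pixels that have data, per the text above
    chi2 = np.sum(result.fun ** 2) / ndf
    return result.x, chi2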
The collimator resolution function is given by:
r = rtrue + (1 + scd/L)·d, wherein
d is the size of the collimator bore, L is the height of the collimator, s is the distance between the detectors, scd is the source-to-collimator distance, r is the detected lesion size, and rtrue is the true lesion size. The equation is applied to both the top detector 54 and the bottom detector 56 using the equations:
rtop = rtrue + d + h·d/L
rbot = rtrue + d + (s − h)·d/L
To find a true lesion diameter Ø these equations are added to eliminate h, giving the following equation:
Ø = rtop + rbot − d(2 + s/L)
The lesion height is also determined by subtracting the equations to eliminate rtrue, using:
h = [(rtop − rbot)·L/d + s]/2
The final lesion height is determined using a weighted average of the individual estimates:
z0 = (ha·wa + hb·wb)/(wa + wb), wherein wa = 1/Error(ha) and wb = 1/Error(hb).
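A minimal sketch of the diameter, height, and weighted-average calculations is given below, assuming, as read above, that d is the collimator bore size, L the collimator height, and s the detector separation; the error inputs to the weighted average are left to the caller:

def lesion_size_and_height(r_top, r_bot, d, L, s):
    """True lesion diameter and height from the detected widths on the
    top (54) and bottom (56) detectors."""
    diameter = r_top + r_bot - d * (2 + s / L)   # adding the equations eliminates h
    height = ((r_top - r_bot) * L / d + s) / 2   # subtracting eliminates rtrue
    return diameter, height

def combined_height(h_a, h_b, err_a, err_b):
    """z0 = (ha*wa + hb*wb)/(wa + wb), with weights w = 1/Error(h)."""
    w_a, w_b = 1.0 / err_a, 1.0 / err_b
    return (h_a * w_a + h_b * w_b) / (w_a + w_b)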
A tumor concentration is then determined at step 268 while preferably correcting for attenuation:
(Ctop + Cbot) ≈ 2C·e^(−μZ)·(1 + (μz)²/2 + …)
After solving for C, a tumor uptake ratio is determined as C/B. Accordingly, the above equations provide the lesion size, depth, corrected activity, and uptake ratio as outputs.
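As an illustration only, the attenuation correction and uptake-ratio step may be computed as follows, assuming the exponent uses the half-thickness Z and keeping only the quadratic term of the expansion:

import numpy as np

def uptake_ratio(C_top, C_bot, B, mu, Z, z):
    """Solve (Ctop + Cbot) ~ 2*C*exp(-mu*Z)*(1 + (mu*z)**2/2) for the lesion
    activity C, then return C and the tumor uptake ratio C/B."""
    C = (C_top + C_bot) / (2.0 * np.exp(-mu * Z) * (1 + (mu * z) ** 2 / 2))
    return C, C / B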
The graph 272 depicts the number of counts per second detected by the same pixel as the point source is moved along the axis 271 at a height h2 above the collimator's face, wherein h2 is larger than h1. With no attenuation, the integrals under the two graphs are similar or equal.
The actual lesion profiles are three-dimensional and depict the responsivity of a pixel to a point source at a location {x, y, h} relative to the pixel's center. This lesion profile may be modified by finite penetration through the collimator's septa. Generally, the function may be calculated from the geometrical parameters of the collimator. This function may be used in the fit performed at step 264 and in methods 210 and 212.
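One way to approximate such a response function from the collimator geometry, offered here only as a sketch, is a Gaussian kernel whose width grows with the source-to-collimator distance in the same manner as the resolution relation above; septal penetration is ignored and the FWHM-to-sigma conversion is the usual Gaussian one:

import numpy as np

def collimator_kernel(scd, d, L, pixel_pitch, half_width=10):
    """Approximate 2-D pixel response to a point source at source-to-collimator
    distance scd, modeled as a normalized Gaussian with FWHM = d*(1 + scd/L)."""
    sigma = d * (1 + scd / L) / 2.355
    ax = np.arange(-half_width, half_width + 1) * pixel_pitch
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()  # normalize to unit sum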
Referring to the iteration method performed at 210 of
Maximum-likelihood expectation maximization;
or Maximum a-posteriori estimation;
or Regularized least squares optimization.
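For illustration, a generic maximum-likelihood expectation-maximization update is sketched below; the system matrix built from depth-dependent collimator kernels, the flat initialization, and the iteration count are assumptions, and the particular update equations used by the various embodiments are not reproduced here:

import numpy as np

def mlem(system_matrix, measured, n_iter=20):
    """Standard MLEM iteration x <- x * A^T(y / Ax) / A^T(1).

    system_matrix -- (n_pixels, n_voxels) forward model, e.g., stacked
                     depth-dependent collimator kernels for both detectors
    measured      -- stacked counts from detectors 54 and 56 (length n_pixels)"""
    A = system_matrix
    x = np.ones(A.shape[1])        # flat initial activity estimate
    sensitivity = A.sum(axis=0)    # A^T applied to a vector of ones
    for _ in range(n_iter):
        forward = A @ x
        ratio = np.where(forward > 0, measured / forward, 0.0)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x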
Referring to
Accordingly, referring to 216 of
Weighted Position=Sum[Es(i)*W(i)]/Sum[W(i)]
wherein Es(i) is the estimated position of the lesion from one or more of the methods 206, 208, 210, and 212, and W(i) is the weight given to each estimated position. It should be noted that not all estimated positions are necessarily weighted. For example, outlying estimates can be eliminated if they deviate from the others by more than a predetermined variance. Additionally, each method may be weighted differently based on the methods used or on the data itself. W(i) may be fixed for each or some of the methods, or may vary depending on the estimated variability of the value Es(i), based on the specific data. For example, for a large lesion, the depth may be more accurately determined by method 206 than by the other methods, and thus a larger W(i) may be associated with method 206 when a large lesion is detected.
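A compact sketch of this weighting is given below, with a simple median-distance cut standing in as one possible form of the outlier rejection described above; the cut threshold and the per-method weights are supplied by the caller:

import numpy as np

def weighted_position(estimates, weights, max_deviation=None):
    """Weighted Position = Sum[Es(i)*W(i)] / Sum[W(i)] over the retained methods."""
    est = np.asarray(estimates, dtype=float)
    w = np.asarray(weights, dtype=float)
    if max_deviation is not None:
        # Drop outlying estimates that deviate too far from the median.
        keep = np.abs(est - np.median(est)) <= max_deviation
        est, w = est[keep], w[keep]
    return float(np.sum(est * w) / np.sum(w))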
Accordingly, the various embodiments provide one or more methods for determining the existence, uptake, size and depth of a lesion in a breast. The methods can then be weighted to increase the accuracy of the determined location of the lesion. At least one technical effect of some embodiments includes enabling a depth of a lesion to be determined in women having, for example, dense breasts and/or implants and that are imaged using nuclear single photon imaging. By determining the depth of the lesion a biopsy needle may be more easily guided, thereby reducing the chance of missing the lesion with the needle. As a result, the number of samples taken may be reduced, thereby reducing pain and discomfort to the patient.
The various embodiments and/or components, for example, the modules, or components and controllers therein, also may be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include Random Access Memory (RAM) and Read Only Memory (ROM). The computer or processor further may include a storage device, which may be a hard disk drive or a removable storage drive such as a floppy disk drive, optical disk drive, and the like. The storage device may also be other similar means for loading computer programs or other instructions into the computer or processor.
As used herein, the term “computer” or “module” may include any processor-based or microprocessor-based system including systems using microcontrollers, reduced instruction set computers (RISC), ASICs, logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of the term “computer”.
The computer or processor executes a set of instructions that are stored in one or more storage elements, in order to process input data. The storage elements may also store data or other information as desired or needed. The storage element may be in the form of an information source or a physical memory element within a processing machine.
The set of instructions may include various commands that instruct the computer or processor as a processing machine to perform specific operations such as the methods and processes of the various embodiments of the invention. The set of instructions may be in the form of a software program. The software may be in various forms such as system software or application software. Further, the software may be in the form of a collection of separate programs or modules, a program module within a larger program or a portion of a program module. The software also may include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to operator commands, or in response to results of previous processing, or in response to a request made by another processing machine.
As used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program.
As used herein, the term “computer readable medium” includes a tangible and non-transitory medium.
It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the various embodiments of the invention without departing from their scope. While the dimensions and types of materials described herein are intended to define the parameters of the various embodiments of the invention, the embodiments are by no means limiting and are exemplary embodiments. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on 35 U.S.C. §112, sixth paragraph, unless and until such claim limitations expressly use the phrase “means for” followed by a statement of function void of further structure.
This written description uses examples to disclose the various embodiments of the invention, including the best mode, and also to enable any person skilled in the art to practice the various embodiments of the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the various embodiments of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if the examples have structural elements that do not differ from the literal language of the claims, or if the examples include equivalent structural elements with insubstantial differences from the literal languages of the claims.