This application relates to a method and system for use with data processing and imaging systems, according to one embodiment, and more specifically, to enabling automated Cancer Prediction Imaging.
The prostate gland is a chestnut-shaped reproductive organ located underneath the bladder in men. The gland adds secretions to the sperm during semen ejaculation. It envelops the urethra, the duct that serves as a path for both semen and urine. The gland looks like a walnut, rounded at the top and tapering at the bottom, which is called the apex of the gland. The gland is about 4 cm in the longitudinal direction.
Prostate cancer is often a disease that grows slowly, remains confined to the prostate gland, and may not cause serious harm. These kinds of cancers may need minimal or no treatment. Other types of prostate cancer grow aggressively, can spread quickly, and need immediate attention.
Prostate cancer can be classified by both stage and grade. There are mainly two types of prostate cancer staging: (a) the clinical stage and (b) the pathological stage. In the clinical stage, the urologist already has the information from the digital rectal exam (DRE) but does not yet have the PSA level or the Gleason score of the cancer. In the pathological stage, the lymph node or the prostate is removed from the body and a doctor can make a more accurate inference about the cancer, which helps in making an accurate prognosis.
Prostate cancer is one of the most common cancers in men in the USA. It is also one of the leading causes of death in men across all races. In 2007, 223,307 men in the US alone were diagnosed with prostate cancer, and 29,093 men in the United States died from it. For prostate cancer screening, digital rectal examination (DRE) and prostate-specific antigen (PSA) testing have been commonly adopted. For a practical guide to prostate cancer, see the following publication: M. N. Simmons, R. K. Berglund, and J. S. Jones, “A practical guide to prostate cancer diagnosis and management,” Cleve. Clin. J. Med. 78, 321-331 (2011).
Today, the prostate-specific antigen (PSA) test is one of the standard screening tools for the detection of prostate cancer. A high PSA level or a rising PSA density is usually one of the first signs of prostate cancer. PSA is an enzyme that the body uses to re-liquefy semen that has congealed after ejaculation. (More information about the PSA test can be found in this publication: R. M. Hoffman, F. D. Gilliland, M. Adams-Cameron, W. C. Hunt, and C. R. Key, “Prostate-specific antigen testing accuracy in community practice,” BMC Fam. Pract. 3, 19 (2002).) Some of the PSA enters the blood stream, and doctors who test for prostate cancer use PSA as a marker for tumors. In the case of a swollen prostate, the PSA level may be higher simply because the gland is bigger. A high PSA therefore does not necessarily indicate prostate cancer; it can also be caused by BPH (benign prostatic hyperplasia) or prostatitis.
Both DRE and PSA have the weakness that they lack specificity, and hence patients have to undergo unnecessary biopsies. Several other biomarkers are used today since PSA alone is not a reliable marker for detecting prostate cancer. The publication by Sardana et al. reviews emerging biomarkers for the diagnosis of prostate cancer (G. Sardana, B. Dowell, and E. P. Diamandis, “Emerging biomarkers for the diagnosis and prognosis of prostate cancer,” Clin. Chem. 54, 1951-1960 (2008)).
Several imaging-based methods are used today to distinguish benign from malignant cancers. One example is an elastography-based system, as described in the following publication (K. Konig, U. Scheipers, A. Pesavento, A. Lorenz, H. Ermert, and T. Senge, “Initial experiences with real-time elastography guided biopsies of the prostate,” J. Urol. 174, 115-117 (2005)).
MRI-based systems have also been adopted for cancer detection. An example of an MRI-based cancer detection system can be seen in the following publication (S. D. Heenan, “Magnetic resonance imaging in prostate cancer,” Prostate Cancer Prostatic Dis. 7, 282-288 (2004)). Other imaging modalities are CT-based or intravenous contrast enhancement-based; details can be found in the following publications (E. P. Ives, M. A. Burke, P. R. Edmonds, L. G. Gomella, and E. J. Halpern, “Quantitative CT perfusion of prostate cancer: correlation with whole mount pathology,” Clin. Prostate Cancer 4, 109-112 (2005)) and (G. Brown, D. A. Macvicar, V. Ayton, and J. E. Husband, “The role of intravenous contrast enhancement in magnetic resonance imaging of prostatic carcinoma,” Clin. Radiol. 50, 601-606 (1995)). The works described in these papers use imaging-based systems but no tissue characterization system for distinguishing benign from malignant prostate tissue. Thus, neither the PSA level nor an imaging-based system alone is a foolproof system for the diagnosis of prostate cancer.
This invention uses an imaging-based, non-invasive method for distinguishing benign from malignant tissue in the prostate. Further, since no single modality provides a complete solution to prostate cancer detection, this application uses a fusion of modalities to characterize the tissue and then classify it as benign or malignant. This paradigm takes advantage of a novel system design called “UroImage™,” which can fundamentally be applied with imaging scanners such as 2D ultrasound or 3D ultrasound, with other imaging modalities like MR or CT, or with their inter-fusion methods. Further, this invention allows the “UroImage™” system to be used in mobile settings, where the three-tier system (presentation layer, business layer and database management layer) can run entirely on a tablet (such as a Samsung tablet), or a one-tier (presentation layer) system can run on the tablet with the remaining two tiers (business layer and database management layer) in the cloud.
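The split-tier deployment described above can be illustrated with a short sketch. The following is a minimal, hypothetical example, assuming the presentation tier on the tablet talks to cloud-hosted business and database tiers over HTTP; the endpoint URL, response fields and file name are illustrative assumptions, not part of the disclosed system.

```python
# Minimal sketch of the split-tier deployment: the presentation tier on the
# tablet uploads an image to the cloud-hosted business tier and displays the
# returned classification. The URL and response fields are hypothetical.
import requests

CLOUD_BUSINESS_TIER_URL = "https://example-cloud-host/uroimage/classify"  # hypothetical

def classify_on_cloud(image_path: str) -> dict:
    """Send a prostate image to the cloud business tier and return the
    benign/malignant prediction it produces."""
    with open(image_path, "rb") as f:
        response = requests.post(CLOUD_BUSINESS_TIER_URL,
                                 files={"image": f},
                                 timeout=60)
    response.raise_for_status()
    return response.json()  # e.g. {"prediction": "malignant", "score": 0.87}

if __name__ == "__main__":
    print(classify_on_cloud("prostate_slice.png"))  # hypothetical file name
```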
The various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which:
Processor 252 uses the concept of the wavelet transform, where the wavelet ψa,b(t) is given by

ψa,b(t) = (1/√|a|) ψ((t - b)/a)
where “a” is the scale factor (related to dilation or compression of the wavelet) and “b” is the translation factor (related to shifting of the wavelet). The grayscale image 250 is decomposed into different scales by successive low-pass and high-pass filters. The high-pass filter coefficients at the level-2 decomposition (h2) are used to capture sudden changes in the cancerous image.
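As one illustration of this decomposition step, the following is a minimal sketch using the PyWavelets library; the choice of a Daubechies (“db4”) mother wavelet and the use of sub-band energies as summary features are assumptions made for illustration, since the text above does not name a specific wavelet or feature.

```python
# Minimal sketch of a two-level 2D wavelet decomposition of a grayscale image.
# Assumptions: PyWavelets is used and a 'db4' mother wavelet is chosen for
# illustration; the level-2 detail sub-bands play the role of h2 above.
import numpy as np
import pywt

def level2_highpass(gray_image: np.ndarray):
    """Decompose the image to two levels and return the level-2 detail
    (high-pass) coefficients plus simple energy features."""
    # wavedec2 returns [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
    coeffs = pywt.wavedec2(gray_image, wavelet="db4", level=2)
    cH2, cV2, cD2 = coeffs[1]  # level-2 horizontal, vertical, diagonal details
    features = {
        "energy_H2": float(np.sum(cH2 ** 2)),
        "energy_V2": float(np.sum(cV2 ** 2)),
        "energy_D2": float(np.sum(cD2 ** 2)),
    }
    return (cH2, cV2, cD2), features

if __name__ == "__main__":
    img = np.random.rand(256, 256)   # stand-in for grayscale image 250
    _, feats = level2_highpass(img)
    print(feats)
```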
Processor 285 is used for selecting features by finding the p-value using Student's t-test. Those skilled in the art can use other statistical methods, such as those discussed in the article (J. F. Box, “Guinness, Gosset, Fisher, and small samples,” Statist. Sci. 2, 45-52 (1987)).
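A minimal sketch of this kind of p-value-based feature screening is shown below, assuming the benign and malignant training features are stored as NumPy arrays; the 0.05 significance threshold is an illustrative assumption, not a value taken from the text.

```python
# Minimal sketch of feature selection by Student's t-test p-values.
# Assumption: a 0.05 threshold is used purely for illustration.
import numpy as np
from scipy import stats

def select_features_by_ttest(benign: np.ndarray,
                             malignant: np.ndarray,
                             alpha: float = 0.05):
    """benign, malignant: arrays of shape (n_samples, n_features).
    Returns indices of features whose class means differ significantly."""
    selected = []
    for j in range(benign.shape[1]):
        _, p_value = stats.ttest_ind(benign[:, j], malignant[:, j])
        if p_value < alpha:
            selected.append(j)
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = rng.normal(0.0, 1.0, size=(40, 5))
    malignant = rng.normal(0.0, 1.0, size=(40, 5))
    malignant[:, 2] += 1.5           # make feature 2 clearly discriminative
    print(select_features_by_ttest(benign, malignant))  # expected: [2]
```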
The prediction output 600 is fed to the performance evaluation block 426. The performance evaluation of UroImage™ consists of computing measures such as sensitivity, specificity, PPV and accuracy. The abbreviations TN, FN, TP and FP stand for True Negative, False Negative, True Positive and False Positive, defined as follows: TN is the number of non-cancerous cases correctly identified as non-cancerous, FN is the number of cancerous cases incorrectly identified as non-cancerous, TP is the number of cancerous cases correctly identified as cancerous, and FP is the number of non-cancerous cases incorrectly identified as cancerous. Using these, sensitivity is the probability that the classifier will produce a positive result when used on the cancerous population, TP/(TP+FN). Specificity is the probability that the classifier will produce a negative result when used on the non-cancerous population, TN/(TN+FP). Accuracy is the ratio of the number of correctly classified samples to the total number of samples, (TP+TN)/(TP+FN+TN+FP). PPV is the proportion of patients with positive results who are correctly diagnosed, TP/(TP+FP).
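These definitions map directly to code. The following is a minimal sketch of the computation from true and predicted labels; the example label lists are purely illustrative.

```python
# Minimal sketch of the performance measures defined above, computed from
# true and predicted labels (1 = cancerous, 0 = non-cancerous).
def performance_measures(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,   # TP/(TP+FN)
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,   # TN/(TN+FP)
        "accuracy": (tp + tn) / len(y_true) if y_true else 0.0,
        "ppv": tp / (tp + fp) if (tp + fp) else 0.0,           # TP/(TP+FP)
    }

if __name__ == "__main__":
    truth     = [1, 1, 1, 0, 0, 0, 1, 0]   # illustrative ground truth
    predicted = [1, 0, 1, 0, 1, 0, 1, 0]   # illustrative classifier output
    print(performance_measures(truth, predicted))
```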
The example computer system 2700 includes a processor 2702 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 2704 and a static memory 2706, which communicate with each other via a bus 2708. The computer system 2700 may further include a video display unit 2710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 2700 also includes an input device 2712 (e.g., a keyboard), a cursor control device 2714 (e.g., a mouse), a disk drive unit 2716, a signal generation device 2718 (e.g., a speaker) and a network interface device 2720.
The disk drive unit 2716 includes a machine-readable medium 2722 on which is stored one or more sets of instructions (e.g., software 2724) embodying any one or more of the methodologies or functions described herein. The instructions 2724 may also reside, completely or at least partially, within the main memory 2704, the static memory 2706, and/or within the processor 2702 during execution thereof by the computer system 2700. The main memory 2704 and the processor 2702 also may constitute machine-readable media. The instructions 2724 may further be transmitted or received over a network 2726 via the network interface device 2720. While the machine-readable medium 2722 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a non-transitory single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” can also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media.
The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
This is a continuation-in-part patent application of co-pending patent application, Ser. No. 12/799,177; filed Apr. 20, 2010 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 12/802,431; filed Jun. 7, 2010 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 12/896,875; filed Oct. 2, 2010 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 12/960,491; filed Dec. 4, 2010 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 13/053,971; filed Mar. 22, 2011 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 13/077,631; filed Mar. 31, 2011 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 13/107,935; filed May 15, 2011 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 13/219,695; filed Aug. 28, 2011 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 13/253,952; filed Oct. 5, 2011 by the same applicant. This is also a continuation-in-part patent application of co-pending patent application, Ser. No. 13/407,602; filed Feb. 28, 2012 by the same applicant. This present patent application draws priority from the referenced co-pending patent applications. This present patent application also draws priority from the provisional patent application, Ser. No. 61/525,745; filed Aug. 20, 2011 by the same applicant. The entire disclosures of the referenced co-pending patent applications and the provisional patent application are considered part of the disclosure of the present application and are hereby incorporated by reference herein in their entirety.
Provisional Application Priority Data
Number | Date | Country
61/525,745 | Aug 2011 | US
Parent/Child Continuation Data
Relation | Application Number | Date | Country
Parent | 12/799,177 | Apr 2010 | US
Child | 13/412,118 | | US
Parent | 12/802,431 | Jun 2010 | US
Child | 12/799,177 | | US
Parent | 12/896,875 | Oct 2010 | US
Child | 12/802,431 | | US
Parent | 12/960,491 | Dec 2010 | US
Child | 12/896,875 | | US
Parent | 13/053,971 | Mar 2011 | US
Child | 12/960,491 | | US
Parent | 13/077,631 | Mar 2011 | US
Child | 13/053,971 | | US
Parent | 13/107,935 | May 2011 | US
Child | 13/077,631 | | US
Parent | 13/219,695 | Aug 2011 | US
Child | 13/107,935 | | US
Parent | 13/253,952 | Oct 2011 | US
Child | 13/219,695 | | US
Parent | 13/407,602 | Feb 2012 | US
Child | 13/253,952 | | US