The present invention relates to the technical field of coronary medicine, and in particular to systems for acquiring an image of the aorta based on deep learning.
Cardiovascular diseases are a leading cause of death in the industrialized world. The major forms of cardiovascular disease are caused by chronic accumulation of fatty material in the inner tissue layers of the arteries supplying the heart, brain, kidneys and lower extremities. Progressive coronary artery disease restricts blood flow to the heart. Because current non-invasive tests do not provide sufficiently accurate information, many patients require invasive catheterization procedures to evaluate coronary blood flow. Thus, a need exists for non-invasive methods of quantifying blood flow in human coronary arteries to evaluate the functional significance of possible coronary artery disease. Reliable evaluation of arterial volume is therefore important for treatment planning. Recent studies have demonstrated that hemodynamic characteristics, such as fractional flow reserve (FFR), are important indicators for determining the optimal treatment for patients with arterial disease. Routine evaluation of FFR uses invasive catheterization to directly measure blood flow characteristics, such as pressure and flow rate. However, these invasive measurement techniques carry risks to the patient and can result in significant costs to the health care system.
Computed tomography arteriography is a computed tomography technique used to visualize the arterial blood vessels. For this purpose, a beam of X-rays is passed from a radiation source through the area of interest in the patient's body to obtain a projection image.
In the prior art, images of the aorta are acquired using empirical values, an approach that is subject to considerable human influence, poor consistency, and slow extraction speed.
The present invention provides a system for acquiring an image of the aorta based on deep learning, to solve the problems of the prior-art approach of acquiring images of the aorta from empirical values, namely susceptibility to human factors, poor consistency and slow extraction speed.
To achieve the above, the present application provides a system for acquiring an image of the aorta based on deep learning, comprising: a database device, a deep learning device, a data extraction device and an aorta acquisition device;
the database device is configured for generating a database of slices of an aorta layer and a database of slices of a non-aorta layer;
the deep learning device is connected to the database device, and is configured for performing deep learning on slice data of the aorta layer and slice data of the non-aorta layer, to acquire a deep learning model, and for analyzing feature data by the deep learning model, to obtain aorta data;
the data extraction device is configured for extracting the feature data from the three-dimensional data of the CT sequence images or from the CT sequence images to be processed;
the aorta acquisition device is connected to the data extraction device and the deep learning device, and is configured for acquiring an image of the aorta from the CT sequence images based on the deep learning model and the feature data.
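Purely as an illustration of how the deep learning device and the data extraction device could interact, the sketch below assumes that the feature data of each slice are collected into a fixed-length numeric vector and that the deep learning model is a small fully connected classifier distinguishing aorta-layer from non-aorta-layer slices. The class name, layer sizes and training loop are assumptions made for this sketch and are not taken from the specification.

```python
# Minimal sketch (assumed design, not the claimed implementation): a small
# classifier mapping per-slice feature vectors (circle center, area, radius,
# inter-layer distances, ...) to an aorta-layer / non-aorta-layer label.
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),            # two classes: aorta layer / non-aorta layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def train_step(model, features, labels, optimiser):
    """One supervised step on labelled slice data drawn from the two databases."""
    logits = model(features)
    loss = nn.functional.cross_entropy(logits, labels)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```

In such a setup, the data extraction device would supply the feature vectors, the database device would supply the labels, and the trained model would then be applied to new CT sequences to flag aorta-layer slices.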
Optionally, the above system for acquiring an image of the aorta based on deep learning further comprises: a CT storage device connected to the database device and the data extraction device, configured for acquiring the three-dimensional data of the CT sequence images.
Optionally, in the above system for acquiring an image of the aorta based on deep learning, the database device comprises: an image processing structure, a slice data storage structure for aorta layer and a slice data storage structure for non-aorta layer, wherein the slice data storage structure for aorta layer, the slice data storage structure for non-aorta layer and the CT storage device are all connected to the image processing structure;
the image processing structure is configured for removing the lung, descending aorta, spine and ribs from the CT sequence images to acquire new images;
the slice data storage structure for aorta layer is configured for acquiring slice data of the aorta layer from the new images; and
the slice data storage structure for non-aorta layer is configured for acquiring, from the new images, the remaining slice data after the slices held in the slice data storage structure for aorta layer are removed, i.e., the slice data of the non-aorta layer.
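A minimal sketch, assuming the training volumes come with a per-voxel aorta mask, of how the two slice databases could be populated; the function and variable names are illustrative and not part of the specification.

```python
# Illustrative sketch: partition the slices of a processed CT volume into an
# aorta-layer set and a non-aorta-layer set using an assumed per-voxel mask.
import numpy as np

def split_slices(volume: np.ndarray, aorta_mask: np.ndarray):
    """volume, aorta_mask: arrays of shape (n_slices, H, W); mask > 0 on the aorta."""
    aorta_layers, non_aorta_layers = [], []
    for z in range(volume.shape[0]):
        if aorta_mask[z].any():
            aorta_layers.append(volume[z])      # slice data of the aorta layer
        else:
            non_aorta_layers.append(volume[z])  # remaining slice data
    return aorta_layers, non_aorta_layers
```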
Optionally, in the above system for acquiring an image of the aorta based on deep learning, the image processing structure comprises: a grayscale histogram unit, a grayscale volume acquisition unit, a lung tissue removal unit, an extraction unit for gravity center of heart, an extraction unit for gravity center of spine, an extraction unit for image of descending aorta, and a new image acquisition unit;
the grayscale histogram unit is connected to the CT storage device, and is configured for plotting a grayscale histogram of each group of CT sequence images;
the grayscale volume acquisition unit is connected to the grayscale histogram unit, and is configured for, along the direction from the end point M to the origin point O of the grayscale histogram, acquiring a volume of each grayscale value region from point M to point M−1, from point M to point M−2, and so on, until from point M to point O, and acquiring a volume ratio V of the volume of each grayscale value region to the volume of the total region from point M to point O (an illustrative sketch of this step is given after the description of this structure);
the extraction unit for gravity center of spine is connected to the CT storage device and the extraction unit for gravity center of heart, and is configured for acquiring a gravity center of spine P1 by: when V=a, picking the start point corresponding to that grayscale value region, projecting the start point onto the CT three-dimensional image to acquire a three-dimensional image of a bone region, and taking the physical gravity center of the three-dimensional image of the bone region as P1, wherein a denotes a constant, 0<a<0.2;
the extraction unit for image of descending aorta is connected to the extraction unit for gravity center of heart, the extraction unit for gravity center of spine and the lung tissue removal unit, and is configured for acquiring an image of the descending aorta of each group of CT sequence images based on the gravity center of heart and the gravity center of spine;
the new image acquisition unit is connected to the extraction unit for image of descending aorta, the lung tissue removal unit, the slice data storage structure for aorta layer and the slice data storage structure for non-aorta layer, and is configured for removing the lung, descending aorta, spine and ribs from the CT sequence images, to acquire new images.
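The sketch referenced above is given here. It is a minimal reading, under stated assumptions, of the histogram step: the voxel count is accumulated from the end point M towards the origin point O, the grayscale value at which the volume ratio V first reaches the constant a is used as a bone threshold, and the gravity center of the thresholded bone region approximates the spine gravity center P1. The bin count, the value of a and the use of scipy are illustrative choices, not part of the specification.

```python
# Hedged sketch of the histogram-based bone threshold and spine gravity center.
import numpy as np
from scipy import ndimage

def bone_threshold_and_spine_centre(ct_volume: np.ndarray, a: float = 0.1):
    hist, bin_edges = np.histogram(ct_volume.ravel(), bins=256)
    # Accumulate voxel counts from the brightest bin (point M) towards point O.
    cum_from_top = np.cumsum(hist[::-1])
    ratio = cum_from_top / cum_from_top[-1]           # volume ratio V for M..M-1, M..M-2, ...
    idx = int(np.searchsorted(ratio, a))              # first region whose ratio reaches a
    threshold = bin_edges[len(hist) - 1 - idx]        # start point of that grayscale region
    bone_mask = ct_volume >= threshold                # projected bone region
    spine_centre = ndimage.center_of_mass(bone_mask)  # physical gravity center P1
    return threshold, spine_centre
```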
Optionally, in the above system for acquiring an image of the aorta based on deep learning, the region delineation unit for descending aorta comprises: an average grayscale value acquisition module, a layered slice module and a binarization processing module;
the average grayscale value acquisition module is connected to the lung tissue removal unit and the grayscale histogram unit, and is configured for acquiring one or more pixel points PO within the first image with a grayscale value greater than the grayscale threshold for the descending aorta Qdescending, and calculating an average grayscale value of the one or more pixel points PO;
the layered slice module is connected to the average grayscale value acquisition module and the lung tissue removal unit, and is configured for slicing the first image layer by layer, starting from its bottom layer, to obtain a first group of two-dimensional sliced images;
the binarization processing module is connected to the layered slice module and the grayscale histogram unit, and is configured for binarizing the sliced images based on the average grayscale value, and removing impurity points from the first image to obtain a binarized image, wherein k is a positive integer, Qk denotes the grayscale value corresponding to the k-th pixel point PO, and P(k) denotes the pixel value corresponding to the k-th pixel point PO.
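One plausible reading of this binarization step, sketched below under stated assumptions: the mean grayscale value of the pixel points PO above the descending-aorta threshold Qdescending is used as a per-volume binarization threshold, and small connected components are dropped as impurity points. Qdescending and the minimum component size are assumed parameters.

```python
# Hedged sketch of binarizing the sliced images against the average grayscale
# value of the bright pixel points PO, then removing small impurity points.
import numpy as np
from scipy import ndimage

def binarize_slices(first_image: np.ndarray, q_descending: float,
                    min_size: int = 20) -> np.ndarray:
    """first_image: (n_slices, H, W) volume; q_descending: assumed threshold."""
    bright = first_image[first_image > q_descending]    # pixel points PO
    q_mean = bright.mean()                               # average grayscale value
    binary = (first_image > q_mean).astype(np.uint8)
    for z in range(binary.shape[0]):                     # bottom-up, slice by slice
        labels, n = ndimage.label(binary[z])
        for lab in range(1, n + 1):
            if np.count_nonzero(labels == lab) < min_size:
                binary[z][labels == lab] = 0             # impurity point removal
    return binary
```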
Optionally, in the above system for acquiring an image of the aorta based on deep learning, the region delineation unit for descending aorta further comprises: a rough acquisition module and an accurate acquisition module;
the rough acquisition module is connected to the binarization processing module, and is configured for setting a radius threshold rthreshold of the circle formed from the descending aorta to an edge of the heart, and acquiring an approximate region of the spine and an approximate region of the descending aorta based on the distance between the descending aorta and the heart being less than the distance between the spine and the heart;
the accurate acquisition module is connected to the rough acquisition module, and is configured for removing one or more error pixel points based on the approximate region of the descending aorta, to obtain a circle corresponding to the descending aorta.
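Sketched below, under stated assumptions, is one way the rough and accurate acquisition could be realised: among the candidate components of a binarized slice, the component closer to the heart's gravity center is taken as the descending aorta, and pixels outside a circle of radius rthreshold around its center are removed as error points. The helper names and the circle criterion are assumptions, not the claimed method.

```python
# Illustrative sketch of rough selection (closer to the heart than the spine)
# followed by accurate trimming to a circle of radius r_threshold.
import numpy as np
from scipy import ndimage

def pick_descending_aorta(binary_slice: np.ndarray, heart_centre: tuple,
                          r_threshold: float) -> np.ndarray:
    labels, n = ndimage.label(binary_slice)
    if n == 0:
        return np.zeros_like(binary_slice)
    centres = ndimage.center_of_mass(binary_slice, labels, range(1, n + 1))
    dists = [np.hypot(c[0] - heart_centre[0], c[1] - heart_centre[1]) for c in centres]
    lab = int(np.argmin(dists)) + 1                      # rough region of the descending aorta
    cy, cx = centres[lab - 1]
    yy, xx = np.indices(binary_slice.shape)
    circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= r_threshold ** 2
    return ((labels == lab) & circle).astype(np.uint8)   # error pixels removed
```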
Optionally, in the above system for acquiring an image of the aorta based on deep learning, the data extraction device comprises: a connected domain structure and a feature data acquisition structure;
the connected domain structure is connected to the new image acquisition unit and is configured for acquiring a plurality of binarized images of the CT sequence images to be processed from the new image acquisition unit;
the feature data acquisition structure is connected to the connected domain structure, and is configured for acquiring, successively starting from the top layer, a connected domain of each binarized image, as well as a proposed circle center Ck, an area Sk, a proposed circle radius Rk, a distance Ck−C(k-1) between the circle centers of two adjacent layers, a distance Ck−C1 from the circle center Ck of each layer of slice to the circle center C1 of the top layer, an area Mk of all pixels that are greater than 0 in a layer and equal to 0 in the previous layer, and a filtered area Hk corresponding to the connected domain, wherein k denotes the k-th layer of slice, k≥1; these quantities constitute the feature data.
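A minimal sketch of how such per-slice feature data could be assembled is given below, assuming each binarized slice contains a single candidate region; the equivalent-circle radius and the dictionary keys are illustrative assumptions.

```python
# Hedged sketch: per-slice feature data (proposed center C_k, area S_k,
# radius R_k, inter-layer distances, newly appearing area M_k).
import numpy as np
from scipy import ndimage

def slice_features(binary_slices: np.ndarray) -> list:
    """binary_slices: (n_layers, H, W), index 0 being the top layer."""
    features, prev_centre, first_centre, prev_mask = [], None, None, None
    for k, sl in enumerate(binary_slices, start=1):
        area = int(np.count_nonzero(sl))                           # S_k
        centre = ndimage.center_of_mass(sl) if area else (np.nan, np.nan)
        radius = float(np.sqrt(area / np.pi))                      # R_k (equivalent circle)
        if first_centre is None and area:
            first_centre = centre                                  # C_1
        d_prev = (np.hypot(centre[0] - prev_centre[0], centre[1] - prev_centre[1])
                  if prev_centre is not None else 0.0)             # |C_k - C_(k-1)|
        d_top = (np.hypot(centre[0] - first_centre[0], centre[1] - first_centre[1])
                 if first_centre is not None else 0.0)             # |C_k - C_1|
        m_k = (int(np.count_nonzero((sl > 0) & (prev_mask == 0)))
               if prev_mask is not None else 0)                    # M_k
        features.append({"k": k, "C": centre, "S": area, "R": radius,
                         "d_prev": d_prev, "d_top": d_top, "M": m_k})
        prev_centre, prev_mask = centre, sl
    return features
```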
Optionally, in the above system for acquiring an image of the aorta based on deep learning, the feature data acquisition structure is provided with a data processing unit, as well as a circle center acquisition unit, an area acquisition unit and a radius acquisition unit, each connected to the data processing unit;
the data processing unit is configured for detecting 3 layers of slice successively starting from the top layer by using the Hough detection algorithm, and obtaining 1 circle center and 1 radius from each layer of slice, forming 3 circles respectively; removing points with larger deviations from the 3 circle centers to obtain a seed point P1 of the descending aorta (an illustrative sketch of this seed-point step follows the description of this structure); acquiring a connected domain A1 of the layer where the seed point P1 is located; acquiring a gravity center of the connected domain A1 as the proposed circle center C1, and acquiring the area S1 of the connected domain A1 and the proposed circle radius R1; acquiring a connected domain A2 of the layer where the seed point P1 is located, by using the C1 as a seed point; expanding the connected domain A1 to obtain an expanded region D1, and removing a portion overlapping with the expanded region D1 from the connected domain A2 to obtain a connected domain A2′; setting a volume threshold Vthreshold for the connected domain, and, if a volume V2 of the connected domain A2′ is less than Vthreshold, removing one or more points that are too far from the circle center C1 of the previous layer, acquiring the filtered area Hk, taking the gravity center of the connected domain A2′ as a proposed circle center C2, and acquiring an area S2 of the connected domain A2 and a proposed circle radius R2; and repeating the method used for the connected domain A2, to acquire successively a connected domain of each binarized image, as well as a proposed circle center Ck, an area Sk, a proposed circle radius Rk, a distance Ck−C(k-1) between the circle centers of two adjacent layers, and a distance Ck−C1 from the circle center Ck of each layer of slice to the circle center C1 of the top layer corresponding to the connected domain;
the circle center acquisition unit is configured for storing the proposed circle centers C1, C2 . . . Ck . . . ;
the area acquisition unit is configured for storing the areas S1, S2 . . . Sk . . . , and the filtered areas H1, H2 . . . Hk . . . ;
the radius acquisition unit is configured for storing the proposed circle radii R1, R2 . . . Rk . . . .
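The seed-point sketch referenced above is given here. It assumes OpenCV's Hough circle transform is applied to the top three slices; the centre deviating most from the others is discarded and the remaining centres are averaged to give the seed point P1. All parameter values are illustrative assumptions rather than values from the specification.

```python
# Hedged sketch of the Hough-based seed-point step on the top three slices.
import numpy as np
import cv2

def seed_point_from_top_slices(slices_8bit: np.ndarray):
    """slices_8bit: (n, H, W) uint8 slices, index 0 being the top layer."""
    centres = []
    for sl in slices_8bit[:3]:
        circles = cv2.HoughCircles(sl, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                   param1=100, param2=30, minRadius=5, maxRadius=40)
        if circles is not None:
            x, y, _r = circles[0, 0]               # strongest circle of this layer
            centres.append((x, y))
    if not centres:
        return None                                 # no circle found in the top slices
    centres = np.asarray(centres, dtype=float)
    if len(centres) == 3:
        # Remove the centre with the larger deviation from the mean of the three.
        dev = np.linalg.norm(centres - centres.mean(axis=0), axis=1)
        centres = np.delete(centres, int(np.argmax(dev)), axis=0)
    return tuple(centres.mean(axis=0))              # seed point P1 as (x, y)
```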
Optionally, in the above system for acquiring an image of the aorta based on deep learning, the aorta acquisition device comprises: a gradient edge structure and an acquisition structure for image of aorta;
the gradient edge structure is connected to the deep learning device and is configured for expanding the aorta data, multiplying the expanded aorta data by the original CT sequence image data, calculating a gradient of each pixel point to obtain gradient data, extracting a gradient edge based on the gradient data, and subtracting the gradient edge from the expanded aorta data;
the acquisition structure for image of aorta is connected to the new image acquisition unit and the gradient edge structure, and is configured for generating a list of seed points based on the proposed circle centers, and extracting a connected domain based on the list of seed points, to obtain an image of the aorta.
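A simplified sketch of the gradient-edge refinement and the seed-based extraction is given below, with assumed dilation and edge-threshold parameters: the coarse aorta mask produced by the deep learning device is dilated, multiplied by the original CT data, strong gradient-magnitude voxels are treated as the gradient edge and subtracted, and the final aorta image is the union of the connected domains that contain the seed points derived from the proposed circle centers.

```python
# Hedged sketch of gradient-edge subtraction followed by seed-driven
# connected-domain extraction of the aorta.
import numpy as np
from scipy import ndimage

def refine_aorta(ct_volume: np.ndarray, aorta_mask: np.ndarray,
                 seeds: list, edge_percentile: float = 90.0) -> np.ndarray:
    """seeds: list of (z, y, x) voxel coordinates built from the proposed circle centers."""
    dilated = ndimage.binary_dilation(aorta_mask, iterations=2)          # expanded aorta data
    masked_ct = ct_volume * dilated                                       # multiply with original CT
    gz, gy, gx = np.gradient(masked_ct.astype(float))
    grad_mag = np.sqrt(gz ** 2 + gy ** 2 + gx ** 2)
    edge = grad_mag > np.percentile(grad_mag[dilated], edge_percentile)   # gradient edge
    trimmed = dilated & ~edge                                             # subtract the edge
    labels, _ = ndimage.label(trimmed)
    keep = {int(labels[tuple(np.round(s).astype(int))]) for s in seeds} - {0}
    return np.isin(labels, list(keep)).astype(np.uint8)                   # image of the aorta
```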
The beneficial effects resulting from the solutions provided by embodiments of the present application include at least the following:
the present application provides a system for acquiring an image of the aorta based on deep learning, wherein a deep learning model is acquired based on the feature data and the databases, and an image of the aorta is acquired by the deep learning model. The system has the advantages of good extraction effect, high robustness and accurate calculation results, and has high promotion value in clinical practice.
The drawings described herein are provided to facilitate a further understanding of the present invention and form a part of the present invention; the schematic embodiments of the invention and their descriptions are used to explain the present invention and do not constitute an undue limitation of the present invention.
In order to make the purpose, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be described clearly and completely below in conjunction with specific embodiments of the present invention and the corresponding drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, and not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
A number of embodiments of the present invention will be disclosed in the following figures, and for the sake of clarity, many of the practical details will be described together in the following description. It should be understood, however, that these practical details should not be used to limit the present invention. That is, in some embodiments of the present invention, these practical details are not necessary. In addition, for the sake of simplicity, some of the commonly known structures and components will be illustrated in the drawings in a simple schematic manner.
In the prior art, images of the aorta are acquired using empirical values, an approach that is subject to considerable human influence, poor consistency, and slow extraction speed.
In order to solve the above problems, as shown in
As shown in
As shown in
As shown in
In the present application, the gravity centers of the heart and the spine are first screened out to locate the positions of the heart and the spine, and the image of the descending aorta is then acquired based on those positions. This reduces the computational burden and offers simple algorithms, easy operation, fast computing speed, a scientific design and accurate image processing.
As shown in
As shown in the drawings, the binarization processing module is connected to the layered slice module and the grayscale histogram unit, and is configured for binarizing the sliced images based on the average grayscale value, and removing impurity points from the first image to obtain a binarized image, wherein k is a positive integer, Qk denotes the grayscale value corresponding to the k-th pixel point PO, and P(k) denotes the pixel value corresponding to the k-th pixel point PO.
As shown in
As shown in
As shown in
As shown in
Those skilled in the art know that aspects of the present invention can be implemented as systems, methods, or computer program products. As such, aspects of the present invention may be implemented in the form of: a fully hardware implementation, a fully software implementation (including firmware, resident software, microcode, etc.), or a combination of hardware and software aspects, collectively referred to herein as a “circuit”, “module” or “system”. In addition, in some embodiments, aspects of the present invention may also be implemented in the form of a computer program product in one or more computer-readable media containing computer-readable program code. Embodiments of the methods and/or systems of the present invention may be implemented in a manner that involves performing or completing selected tasks manually, automatically, or in a combination thereof.
For example, the hardware for performing the selected tasks based on the embodiments of the present invention may be implemented as a chip or circuit. As software, the selected tasks based on the embodiments of the present invention may be implemented as a plurality of software instructions to be executed by a computer using any appropriate operating system. In exemplary embodiments of the present invention, one or more tasks, as in the exemplary embodiments based on the methods and/or systems herein, are performed by a data processor, such as a computing platform for executing a plurality of instructions. Optionally, the data processor includes volatile storage for storing instructions and/or data, and/or non-volatile storage for storing instructions and/or data, such as a magnetic hard disk and/or removable media. Optionally, a network connection is also provided. Optionally, a display and/or user input device, such as a keyboard or mouse, is also provided.
Any combination of one or more computer-readable media may be utilized. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device or component, or any combination thereof. More specific examples of computer-readable storage media (a non-exhaustive list) include the following:
An electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage component, a magnetic storage component, or any suitable combination of the foregoing. In this specification, the computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in combination with an instruction execution system, device or component.
The computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave that carries computer-readable program code. This propagated data signal can take a variety of forms, including but not limited to electromagnetic signals, optical signals or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that sends, propagates, or transmits a program for being used by or in conjunction with an instruction execution system, device or component.
The program code contained on the computer-readable medium may be transmitted using any suitable medium, including (but not limited to) wireless, wired, fiber optic, RF, etc., or any suitable combination of the above.
For example, computer program code for performing operations of aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or the like. The program code may be executed entirely on a user's computer, partially on a user's computer, as a stand-alone software package, partially on a user's computer and partially on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to a user's computer via any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., using an Internet service provider to connect via the Internet).
It should be understood that each block of the flowchart and/or block diagram, and a combination of respective blocks in the flowchart and/or block diagram, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, a specialized computer, or other programmable data processing device, thereby producing a machine such that these computer program instructions, when executed by the processor of the computer or other programmable data processing device, produce a device that implements a function/action specified in one or more of the blocks in the flowchart and/or block diagram.
These computer program instructions may also be stored in a computer-readable medium that causes a computer, other programmable data processing device, or other apparatus to operate in a particular manner such that the instructions stored in the computer-readable medium result in an article of manufacture that includes instructions to implement the function/action specified in one or more blocks in the flowchart and/or block diagram.
Computer program instructions may also be loaded onto a computer (e.g., a coronary artery analysis system) or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer, other programmable data processing apparatus or other apparatus to produce a computer-implemented process, such that the instructions executed on the computer, other programmable device or other apparatus provide a process for implementing the function/action specified in one or more blocks of the flowchart and/or block diagram.
The above specific examples of the present invention further detail the purpose, technical solutions and beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit the present invention, and that any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Number | Date | Country | Kind
--- | --- | --- | ---
202010606963.1 | Jun. 29, 2020 | CN | national
202010606964.6 | Jun. 29, 2020 | CN | national
The present application is a continuation of International Patent Application No. PCT/CN2020/132798 filed on Nov. 30, 2020, which claims the benefit of priority from the Chinese Patent Application No. 202010606964.6 filed on Jun. 29, 2020, entitled “METHODS AND SYSTEMS FOR ACQUIRING DESCENDING AORTA BASED ON CT SEQUENCE IMAGES”, and the Chinese Patent Application No. 202010606963.1 filed on Jun. 29, 2020, entitled “METHODS AND SYSTEMS FOR PICKING UP POINTS ON AORTA CENTERLINE BASED ON CT SEQUENCE IMAGES”, the entire content of each of which is incorporated herein by reference.
Relation | Number | Date | Country
--- | --- | --- | ---
Parent | PCT/CN2020/132798 | Nov. 30, 2020 | US
Child | 18089728 | | US