This application generally relates to the field of image processing, and more specifically, relates to a system and method for processing ECT motion gated image data and performing image reconstruction.
Positron emission tomography (PET) is a specialized radiology procedure that generates three-dimensional images of functional processes in a target organ or tissue of a body. Specifically, in PET studies, a biologically active molecule carrying a radioactive tracer is first introduced to a patient's body. The PET system then detects gamma rays emitted by the tracer and constructs a three-dimensional image of the tracer concentration within the body by analyzing the detected signals. Because the biologically active molecules used in PET studies are natural substrates of metabolism at the target organ or tissue, PET can evaluate the physiology (functionality) and anatomy (structure) of the target organ or tissue, as well as its biochemical properties. Changes in these properties of the target organ or tissue may provide information for the identification of the onset of a disease process before any anatomical changes relating to the disease become detectable by other diagnostic tests, such as computed tomography (CT) or magnetic resonance imaging (MRI).
Furthermore, the high sensitivity of PET, in the picomolar range, may allow the detection of small amounts of radio-labeled markers in vivo. PET may be used in conjunction with other diagnostic tests to realize simultaneous acquisition of both structural and functional information of the body. Examples include a PET/CT hybrid system and a PET/MR hybrid system.
One aspect of the present disclosure relates to a method for processing ECT motion gated image data and performing image reconstruction. The method may include one or more of the following operations. Emission computed tomography (ECT) data relating to a subject may be received. A model for characterizing a physical motion may be selected. A characteristic volume of interest (VOI) of the ECT data may be determined based on the selected model. A physical motion curve may be obtained or determined based on the characteristic VOI. The ECT data may be gated based on the physical motion curve.
In a second aspect of the present disclosure, provided herein is a non-transitory computer readable medium comprising executable instructions. The instructions, when executed by at least one processor, may cause the at least one processor to effectuate a method for processing ECT motion gated image data and performing image reconstruction.
In some embodiments, the selection of a model for a physical motion may include choosing a model for a physiological motion. Further, a time of flight (TOF) probability distribution may be determined in the model for a physiological motion.
In some embodiments, the determination of the model for a physiological motion may include designating a signal quality indicator (SQI) of the physiological motion based on a VOI.
In some embodiments, the determination of a characteristic VOI may include maximizing the SQI of the physiological motion. In some embodiments, the VOI associated with maximized SQI may correspond to the characteristic VOI.
In some embodiments, the selection of a model may include selecting a physiological spectrum for the physiological motion.
In some embodiments, the determination of a physical motion curve may include determining the movement of a center of mass of the characteristic VOI with respect to time.
In some embodiments, the ECT data may be gated based on the phase or the amplitude of the physiological motion.
In some embodiments, the selection of a model for a physical motion may include selecting a model for a rigid motion.
In some embodiments, the determination of a physical motion curve may further include one or more of the following operations. The ECT data may be divided into a plurality of subsets. A subset of the plurality of subsets may be selected as a reference subset. The reference subset may be transformed into a reference representation. For each subset of the plurality of subsets other than the reference subset, the subset may be transformed into a representation, and the similarity between the representation and the reference representation may be assessed.
In some embodiments, the representation or the reference representation may include an image or a histogram.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities and combinations set forth in the detailed examples discussed below.
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting examples, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
In the following detailed description, numerous specific details are set forth by way of examples in order to provide a thorough understanding of the relevant disclosure. However, it should be apparent to those skilled in the art that the present disclosure may be practiced without such details. In other instances, well known methods, procedures, systems, components, and/or circuitry have been described at a relatively high-level, without detail, in order to avoid unnecessarily obscuring aspects of the present disclosure. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but to be accorded the widest scope consistent with the claims.
It will be understood that the terms “system,” “unit,” “module,” and/or “block” used herein are one method to distinguish different components, elements, parts, sections, or assemblies of different levels in ascending order. However, these terms may be replaced by other expressions if they achieve the same purpose.
It will be understood that when a unit, engine, module or block is referred to as being “on,” “connected to” or “coupled to” another unit, engine, module, or block, it may be directly on, connected or coupled to, or communicate with the other unit, engine, module, or block, or an intervening unit, engine, module, or block may be present, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purposes of describing particular examples and embodiments only, and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “include,” and/or “comprise,” when used in this disclosure, specify the presence of integers, devices, behaviors, stated features, steps, elements, operations, and/or components, but do not exclude the presence or addition of one or more other integers, devices, behaviors, features, steps, elements, operations, components, and/or groups thereof.
As illustrated in
The ECT scanner 110 may be configured to detect radiation rays in the imaging system. In some embodiments, the ECT scanner 110 may include an SPECT scanner or a PET scanner. Taking PET as an example, PET is a specialized radiology procedure that generates and examines three-dimensional images of functional processes in a target organ or tissue of a body. Specifically, in PET studies, a biologically active molecule carrying a radioactive tracer is first introduced into the body of an object. The PET system then detects gamma rays emitted by the tracer and constructs an image of the tracer concentration within the body by analyzing the detected signals.
Merely by way of example, the radiation rays may take the form of lines of response (LORs) in a PET system. Detection of the LORs may be performed by the ECT scanner 110 by way of registering coincidence events from the annihilation of positrons. As another example, the radiation rays may be X-ray beams passing through an object (e.g., a patient) in a CT system. The intensity of an X-ray beam passing through the object that lies between the X-ray source and a detector (not shown) may be attenuated, and further evaluated by the ECT scanner 110. In some embodiments, the analysis controller 130 may store or access programs for imaging of various types of nuclear medicine diagnosis. Exemplary types of nuclear medicine diagnosis may include PET, SPECT, MRI, or the like, or a combination thereof. It should also be noted here that the “line of response” or “LOR” used here may be representative of a radiation ray, and is not intended to limit the scope of the present disclosure. The radiation ray used herein may include a particle ray, a photon ray, or the like, or any combination thereof. The particle ray may include neutrons, protons, electrons, μ-mesons, heavy ions, or the like, or any combination thereof. For example, the radiation ray may represent the intensity of an X-ray beam passing through the subject in the case of a CT system. As another example, the radiation ray may represent the probability of a positron being generated in the case of a PET system.
The analysis controller 130 may be configured to process the signals acquired by the ECT scanner 110 or retrieved from another source (for example, an imaging device, a database, storage, or the like, or a combination thereof), to reconstruct images based on the acquired signals, and to control one or more parameters relating to operations performed by the ECT scanner 110. The analysis controller 130 may include a central processing unit (CPU), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field programmable gate array (FPGA), an advanced reduced instruction set computing (RISC) machine (ARM), or the like, or any combination thereof.
For instance, the analysis controller 130 may control whether to acquire a signal, or the time when the next signal acquisition may occur. As another example, the analysis controller 130 may control which section of radiation rays may be processed during an iteration of the reconstruction. As a further example, the analysis controller 130 may control which algorithm may be selected to process the raw data of an image, determine the number of iterations of the iterative projection process, and/or determine the location of the radiation rays. In some embodiments, the analysis controller 130 may receive a real-time or a predetermined command provided by an operator including, e.g., an imaging technician or a doctor, and adjust the ECT scanner 110 accordingly. In some embodiments, the analysis controller 130 may communicate with the other modules for exchanging information relating to the operation of the scanner or other parts of the imaging system 100.
The network 120 may be a single network or a combination of different networks. For example, the network 120 may be a local area network (LAN), a wide area network (WAN), a public network, a private network, a proprietary network, a Public Switched Telephone Network (PSTN), the Internet, a wireless network, a virtual network, or any combination thereof. The network 120 may also include various network access points, for example, wired or wireless access points such as base stations or Internet exchange points (not shown in
It should be noted that the above description of the imaging system 100 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. For example, the assembly and/or function of the imaging system 100 may be varied or changed according to specific implementation scenarios. Merely by way of example, some other components may be added into the imaging system 100, such as a patient positioning module, a gradient amplifier module, and other devices or modules. However, those variations and modifications do not depart from the scope of the present disclosure.
The ECT data storage 201 may be used to store the acquired data or signals, control parameters, or the like. For instance, the ECT data storage 201 may store signals acquired by the ECT scanner 110. As another example, the ECT data storage 201 may store processing parameters used during a process performed by the acquisition circuitry 202, the ECT data processor 204, the image reconstruction processor 207, and/or the visualization processor 208. Merely by way of example, the processing parameters may include an acquisition parameter, a processing parameter (e.g., a gating parameter, a division parameter (e.g., the size of a divided subset), or the like), a reconstruction algorithm, a parameter relating to visualization, or the like, or a combination thereof. In some embodiments, the ECT data storage 201 may include a random access memory (RAM), a read-only memory (ROM), a hard disk, a floppy disk, a cloud storage, a magnetic tape, a compact disk, a removable storage, or the like, or a combination thereof. The removable storage may read from and/or write data to a removable storage unit in a certain manner. The storage device may also include other similar means for loading computer programs or other instructions into the computer or processor.
The acquisition circuitry 202 may be configured to acquire data or signals. In some embodiments, the acquisition circuitry 202 may detect radiation rays. As described above, the acquisition circuitry 202 may be integrated in the ECT scanner 110. In some embodiments, the acquisition circuitry 202 may convert analog signals to digital signals. For example, an analog signal acquired by the ECT scanner 110 may be transmitted to the acquisition circuitry 202, and the analog signal may be converted to a corresponding digital signal. The acquisition circuitry 202 may include an amplifier, a filter, an analog-to-digital converter, or the like, or a combination thereof. In some embodiments, the acquisition circuitry 202 may receive data or signals from a related device (e.g., the ECT data storage 201, the ECT data processor 204, an external database, or the like, or a combination thereof). In some embodiments, the data or signals acquired by the acquisition circuitry 202 may be transmitted to the ECT data storage 201 and may be loaded when needed.
The ECT data processor 204 may process the ECT data. In some embodiments, the acquired data may be transmitted to the ECT data processor 204 to be further processed. In some embodiments, the ECT data processor 204 may determine a volume of interest (VOI) before processing the ECT data. As used herein, the VOI may refer to a selected subset of the data according to a particular purpose. Different VOIs may be determined under different situations. For example, a specific geometric space may be determined based on a specific VOI. As another example, a specific VOI may be selected to measure the volume of a specific tissue or tumor. As a further example, a specific VOI may be selected to reduce background noises. In some embodiments, the VOI may include a three-dimensional volume, e.g., a sphere, a column, a block, or the like, or a combination thereof. In some embodiments, the ECT data processor 204 may analyze the ECT data. For example, physical motion (e.g., physiological motion or rigid motion) related data may be obtained from the ECT data. In some embodiments, the ECT data processor 204 may process the ECT data based on an instruction or a demand from an operator (e.g., a doctor).
In some embodiments, the ECT data processor 204 may include a division processor (not shown in
In some embodiments, the ECT data processor 204 may further include a signal filter (not shown in
The image reconstruction processor 207 may reconstruct an image based on the acquired ECT data or the processed ECT data. In some embodiments, the image reconstruction processor 207 may include a microcontroller, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field programmable gate array (FPGA), an advanced RISC machine (ARM), or any other circuit or processor capable of executing the functions described herein, or the like, or any combination thereof. In some embodiments, the image reconstruction processor 207 may employ different kinds of image reconstruction techniques for the image reconstruction procedure. Exemplary image reconstruction techniques may include Fourier reconstruction, constrained image reconstruction, regularized image reconstruction in parallel MRI, or the like, or a variation thereof, or any combination thereof. In some embodiments, the image reconstruction processor 207 may use different reconstruction algorithms, including an analytic reconstruction algorithm or an iterative reconstruction algorithm, for the image reconstruction procedure. Exemplary analytic reconstruction algorithms may include a filtered back projection (FBP) algorithm, a back projection filtering (BPF) algorithm, a ρ-filtered layergram, or the like, or a combination thereof. Exemplary iterative reconstruction algorithms may include a Maximum Likelihood Expectation Maximization (ML-EM) algorithm, an Ordered Subset Expectation Maximization (OSEM) algorithm, a Row-Action Maximum Likelihood Algorithm (RAMLA), a Dynamic Row-Action Maximum Likelihood Algorithm (DRAMA), or the like, or a combination thereof.
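Merely by way of illustration, the multiplicative update at the core of an ML-EM reconstruction may be sketched in Python as follows. The dense system matrix A, the measured sinogram y, and the array shapes are hypothetical stand-ins for the quantities handled by the image reconstruction processor 207, not its actual interface:

import numpy as np

def mlem(A, y, n_iter=20):
    # A: (num_lors, num_voxels) system matrix; y: (num_lors,) measured counts.
    y = np.asarray(y, dtype=float)
    x = np.ones(A.shape[1])                      # uniform initial image
    sens = A.sum(axis=0)                         # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                             # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # ML-EM multiplicative update
    return x

An OSEM variant may apply the same update over ordered subsets of the LORs to accelerate convergence.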
The visualization processor 208 may display imaging results generated by, for example, the image reconstruction processor 207. In some embodiments, the visualization processor 208 may include, for example, a display device and/or a user interface. Exemplary display devices may include a liquid crystal display (LCD), a light emitting diode (LED)-based display, a flat panel display or curved screen (or television), a cathode ray tube (CRT), or the like, or a combination thereof. In some embodiments, the visualization processor 208 may be connected to a device, e.g., a keyboard, a touch screen, a mouse, a remote controller, or the like, or any combination thereof.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the visualization processor 208 is unnecessary and the imaging results may be displayed via an external device (e.g., a monitor). As a further example, the visualization processor 208 may be integrated in the image reconstruction processor 207 and the imaging results or intermediate results may be displayed in real time.
The I/O module 301 may be used to input or output data or information. In some embodiments, as illustrated in
The model setting module 302 may select a model and set one or more parameters relating to the model. The model may include a physiological model or a rigid motion model. In some embodiments, the ECT data may be analyzed based on the model, and data of different types (e.g., physiological motion related data or rigid motion related data) may be obtained. For instance, physiological motion related data may correspond to a physiological model. In some embodiments, one or more parameters relating to the model may be set, including a VOI, a probability distribution type, a signal-to-noise ratio (SNR) calculation method, a target frequency of weighted signals to be analyzed, or the like, or a combination thereof.
In some embodiments, the model setting module 302 may include a computing unit (not shown in
The gating module 303 may gate the ECT data. As used herein, “gate” may refer to a process in which the ECT data are classified into a plurality of groups (also referred to as “gates”), and one of the groups may be selected for further processing when needed. Merely by way of example, the ECT data may be divided into two gates: one of the gates may correspond to phase π/2, or the “peak,” of a physiological motion, and the other may correspond to phase 3π/2, or the “valley,” of the physiological motion. The gating module 303 may control when or whether the gates may be transmitted to the image reconstruction processor 207 to be further processed.
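Merely by way of illustration, such phase-based gating may be sketched in Python as follows; the event-time array and the vectorized motion_phase callback are hypothetical inputs rather than part of the gating module 303:

import numpy as np

def gate_by_phase(event_times, motion_phase, n_gates=2):
    # motion_phase(t): phase of the physiological motion in [0, 2*pi) at time t.
    phases = np.mod(motion_phase(event_times), 2.0 * np.pi)
    edges = np.linspace(0.0, 2.0 * np.pi, n_gates + 1)
    gate_ids = np.digitize(phases, edges) - 1    # gate index for each event
    return [np.flatnonzero(gate_ids == g) for g in range(n_gates)]

With n_gates=2, events around phase π/2 fall in the first gate (the “peak”), and events around phase 3π/2 fall in the second gate (the “valley”), matching the example above.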
The storage module 305 may be configured to store data or information generated by the I/O module 301, the model setting module 302, the gating module 303, or the control module 304. Exemplary data or information may include ECT data, models, factors of the models, control parameters, computing results, calculation algorithms, or the like, or a combination thereof. The storage module 305 may be unnecessary; any storage disclosed anywhere in the present disclosure may be used to store the data or information mentioned above. For example, the ECT data processor 204 may share a common storage with the system 100.
In some embodiments, the ECT data processor 204 may further include a control module (not shown in
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the storage module 305 may be integrated in any module of the ECT data processor 204. As another example, the ECT data processor 204 may not include the storage module 305, but may access and/or use a storage medium of the system 100, or a storage medium external to the system 100. As a further example, the modules may be partially integrated in one or more independent modules or share one or more sub-modules. As a still further example, the I/O module 301 may be unnecessary in the ECT data processor 204, and an I/O port between any two components illustrated in
In step 403, motion information may be obtained based on the model. The motion information may be obtained by the model setting module 302. As described in step 402, the selected model may be used for describing corresponding motion data. For instance, the distribution of the LORs may be determined based on the model, and the corresponding motion related data may be obtained. The motion information may include a physical motion curve generated based on the obtained motion data. It may be seen from the physical motion curve that the amplitudes may vary with acquisition time (see, e.g.,
In step 404, the original ECT data may be gated based on the motion information. The gating may be performed by the gating module 303. A motion may occur within a specific time interval, in which case the ECT data acquired during that interval may be screened out and designated as motion related data. In some embodiments, the motion related data may include physiological motion related data, rigid motion related data, or the like, or a combination thereof. In some embodiments, an amplitude threshold may be set; the amplitudes that exceed the amplitude threshold may be screened out, and the corresponding data or signals may be designated as motion related data. In some embodiments, the amplitude threshold may be set based on a default setting of the imaging system 100 or an instruction from an operator (e.g., a doctor). In some embodiments, the amplitude threshold may be adjusted under different situations. For instance, the amplitude of a respiratory motion when a subject is in a resting state may be different from the amplitude of a respiratory motion when the subject is in an exercising state.
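Merely by way of illustration, the amplitude-threshold screening may be sketched in Python as follows; the per-event amplitude samples are a hypothetical input:

import numpy as np

def screen_motion_events(event_times, amplitudes, threshold):
    # amplitudes: motion amplitude sampled at each event time (same length).
    motion = amplitudes > threshold              # events acquired during motion
    return event_times[motion], event_times[~motion]   # (motion related, rest)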
In some embodiments, the gated ECT data may be further processed (e.g., used to reconstruct an image). An image may be reconstructed based on the gated ECT data. The image reconstruction may be performed by the image reconstruction processor 207. In some embodiments, the gated ECT data may include physiological motion related data and rigid motion related data, and images of two types may be reconstructed. In some embodiments, the gated ECT data may be corrected and motion related data may be removed or compensated before image reconstruction.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, a storing step or a caching step may be added between any two steps, in which signals or intermediate data may be stored or cached.
The motion classifier 501 may classify the types of motions exhibited in the raw ECT data. In some embodiments, the motion classifier 501 may classify the motion as a physiological motion, a rigid motion, etc. In some embodiments, other types of motions, such as a hybrid motion, may be identified. In some embodiments, the classification may be executed based on information about a subject. For example, if the information about a subject indicates that the subject undergoes hysteria, a seizure, etc., at the time the raw ECT data are acquired, the motion classifier 501 may determine that the motion to be detected in the raw ECT data may be a rigid motion. As another example, if the information about a subject indicates that the volume of interest of the subject lies substantially near a lung or the heart of the subject, the motion classifier 501 may determine that the motion to be detected in the raw ECT data may be a physiological motion. The result of the classification may be sent to the rigid motion processor 503 or the physiological motion detector 502 accordingly. The physiological motion detector 502 may then process the raw data to detect a physiological motion, or the rigid motion processor 503 may process the raw data to detect a rigid motion.
The exemplary detectors that may be used within the present module are not exhaustive and are not limiting. After consulting the present disclosure, one skilled in the art may envisage numerous other changes, substitutions, variations, alterations, and modifications without inventive activity, and it is intended that the present disclosure encompasses all such changes, substitutions, variations, alterations, and modifications as falling within its scope.
In step 601, a set of raw ECT data may be obtained. The raw ECT data may be received from a scanning system (e.g., the ECT scanner 110 of
In step 602, an operation of classifying the type of motion in the raw ECT data may be performed. The classification may be based on the information of a subject. For example, if the information about a subject indicates that the subject undergoes hysteria, a seizure, etc., at the time the raw ECT data are acquired, the motion to be detected in the raw ECT data may be a rigid motion. As another example, if the information about a subject indicates that the volume of interest of the subject lies substantially near a lung or the heart of the subject, the motion to be detected in the raw ECT data may be a physiological motion. If the classification result is of a physiological motion type, the process may proceed to step 603; if the classification result is of a rigid motion type, the process may proceed to step 604.
In step 603, physiological motion information may be obtained from the raw ECT data. For instance, the physiological motion information may include a physiological motion curve. The physiological motion curve may show the movement of the subject with respect to time. In some embodiments, the physiological motion curve may show the movement of a certain organ, for example, the heart, of a subject. For example, the physiological motion curve may show the movement of the center of mass of a VOI of the subject. In some embodiments, the movement of the center of mass of a VOI along a certain direction, for example, the x-axis, may be obtained. The physiological motion information and/or the physiological motion curve may contain physiological information, such as the heart rate, the respiratory rate, or the like, or a combination thereof.
In some embodiments, a volume of interest may be designated first. For example, the volume of interest may be the area near the heart. As another example, the volume of interest may be the area near one or both lungs. In some embodiments, a simplified shape, such as a sphere, a cuboid, a column, a block, etc., may be used to approximate a volume of interest. A physiological motion curve may be obtained based on a volume of interest. An exemplary illustration may be seen in the process illustrated in
In step 604, rigid motion information may be obtained from the raw ECT data. The rigid motion curve may indicate the rigid motion of a subject, for example, the movement of the head of a subject, with respect to time. A rigid motion may include translation, rotation, or the like, or a combination thereof. For example, a translation along the x-axis is an example of a rigid motion. Rigid motion may be described in terms of a motion field matrix. A motion field matrix may take the form as described in the following formula:
T=Rx*Ry*Rz*S Formula 1
where Rx, Ry and Rz may represent the rotation matrices around the x-axis, the y-axis, and the z-axis, respectively, and S may represent the translation matrix. A motion field matrix may be used to quantify the rigid motion of a subject or a portion thereof. For example, the translation along the x-axis, the y-axis, and/or the z-axis may be obtained from the translation matrix.
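Merely by way of illustration, Formula 1 may be assembled in Python as follows, using homogeneous 4×4 matrices; the rotation angles and translation offsets are hypothetical inputs:

import numpy as np

def motion_field_matrix(alpha, beta, gamma, tx, ty, tz):
    # Rotation matrices around the x-, y-, and z-axes (homogeneous form).
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0, 0], [0, ca, -sa, 0], [0, sa, ca, 0], [0, 0, 0, 1]])
    Ry = np.array([[cb, 0, sb, 0], [0, 1, 0, 0], [-sb, 0, cb, 0], [0, 0, 0, 1]])
    Rz = np.array([[cg, -sg, 0, 0], [sg, cg, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])
    S = np.eye(4)
    S[:3, 3] = [tx, ty, tz]                      # translation matrix
    return Rx @ Ry @ Rz @ S                      # T = Rx*Ry*Rz*S (Formula 1)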
The rigid motion information may include a rigid motion curve. The rigid motion curve may be obtained based on a threshold. In some embodiments, a threshold may be used to determine the occurrence of a rigid motion. For example, the raw ECT data may be divided into a plurality of subsets, and the similarity between two different subsets of the raw ECT data may be assessed. If the similarity between two different subsets of ECT data exceeds the threshold, the rigid motion may be deemed too small to be noticed. In some embodiments, a subset of the raw ECT data may be chosen as reference ECT data. Another subset of the raw ECT data may be considered to undergo a rigid motion if the similarity between the reference ECT data and that subset falls below the threshold.
In some embodiments, additional steps may be performed after step 603 or step 604. For example, a correction process may be performed on the raw ECT data based on the rigid motion curve. The spatial coordinates of the crystal pairs, used for locating the position of LORs, may be corrected by performing certain spatial transformation based on the motion field matrix. An exemplary process of correcting the list mode ECT data is illustrated in
The VOI setting unit 701 may set parameters for describing the shape of a VOI. Exemplary shapes of VOIs may include a sphere, a cuboid, a column, a block, or the like, or a combination thereof. In exemplary embodiments in which the shape of the VOI is a sphere, exemplary parameters of the sphere may be (X1, X2, X3) and X4, namely the spatial coordinates of the center of the sphere and the radius of the sphere, respectively. If the shape of a VOI is a block, exemplary parameters of the block may be (X1, X2, X3), X4, X5, and X6, namely the spatial coordinates of the block center, and the length, the width, and the height of the block, respectively.
The TOF probability distribution setting unit 702 may set up probability models used to estimate the probabilistic distribution of list mode data, for example, the probabilistic distribution of the time of flight coordinate λe. See, for example,
The physiological spectrum setting unit 703 may set a physiological spectrum for a target signal. The target signal may be a signal that contains physical motion information. For example, a target signal may be a signal that describes the movement of a center of mass of a VOI of a subject. An exemplary physiological spectrum may include the frequency range of a respiratory signal, and/or a cardiac signal, or the like, or any combination thereof. See, for example,
The SQI setting unit 704 may calculate a signal quality indicator of a target signal with respect to a physiological spectrum. As used herein, the signal quality indicator may refer to a signal-to-noise ratio, which may compare the energy level of a target signal within a physiological spectrum to the energy level of the target signal outside of the physiological spectrum. The energy level of a signal may be measured in terms of the spectral behavior of the signal. For example, a Fourier transform may be performed on the signal to obtain its spectral behavior.
The storage unit 705 may store data. The data to be stored may be from the VOI setting unit 701, the TOF probability distribution setting unit 702, the physiological spectrum setting unit 703, the SQI setting unit 704, or the like, or any combination thereof. Exemplary types of data stored may include the parameters of a VOI, the probability distribution for TOF, the physiological spectrum setting, the SQI calculation method, or the like, or any combination thereof. The storage unit 705 may include a plurality of components. In some embodiments, the storage unit 705 may include a hard disk drive. In some embodiments, the storage unit 705 may include a solid-state drive. In some embodiments, the storage unit 705 may include a removable storage drive. Merely by way of example, a non-exhaustive listing of removable storage drives that may be used in connection with the present disclosure includes a flash memory disk drive, an optical disk drive, or the like, or any combination thereof.
In step 801, a set of raw ECT data may be obtained. The set of raw ECT data may refer to data from SPECT (Single-Photon Emission Computed Tomography), PET (Positron Emission Tomography), etc. The set of raw ECT data may be obtained by scanning a target body, or retrieved from a storage unit or a database via a network. The set of raw ECT data may be stamped with time information and spatial information of an annihilation event. In some embodiments, the set of ECT data may refer to list mode data. In some embodiments, the set of ECT data may refer to sinogram data. The sinogram data may be stamped with time information, and/or spatial information, or any other information that may be familiar to a person of ordinary skill in the art.
In step 802, a physiological model may be selected. The physiological model may provide a method to process the raw ECT data to obtain physiological information. For example, selecting the model may include specifying the shape of a VOI, setting parameters such as spatial coordinates for the VOI, and designating a TOF probability distribution type, an SQI calculation method, a physiological spectrum for a target signal, or the like, or any combination thereof. The model selection may be accomplished in the model setting module 302.
In step 803, a characteristic VOI may be determined based on the selected model. The characteristic VOI may be an optimal VOI according to some criteria based on the selected model. Merely by way of example, the value of SQI may be used as a criterion to optimize the choice of VOI. An exemplary process for obtaining a characteristic VOI may be found in
In step 805, raw ECT data may be gated based on the physiological motion information. In some embodiments, the raw ECT data may be gated based on the phase of the physiological motion curve. For example, the ECT data within the same phase of the physiological motion curve may be gated into the same subset. In some embodiments, the ECT data may be gated based on the amplitude of the physiological motion curve. One or more new images may then be reconstructed based on the gated ECT data in step 806.
In step 901, data such as ECT data may be divided into sections. The data may be divided into a plurality of sections each spanning a certain time interval. Exemplary time spans may be approximately 100 milliseconds, 5 seconds, 10 seconds, etc. In step 902, VOI parameters may be designated to identify the shape and/or the location of a VOI. As used herein, VOI parameters may include, for example, the parameters to identify the shape, volume, and position of a VOI. Exemplary shapes of VOIs may include a sphere, a cuboid, a block, or the like, or any combination thereof. Exemplary parameters for identifying a sphere may be (X1, X2, X3) and X4, namely the coordinates of the sphere center and the radius of the sphere. Exemplary parameters for identifying a block may be (X1, X2, X3), X4, X5, and X6, namely the coordinates of the center of the block, and the length, the width, and the height of the block. A shape with a complicated contour may be described using more parameters than a shape with a simple contour. In some embodiments, a shape that may be described using a set of VOI parameters including fewer than 10 parameters may be designated in step 902. In some embodiments, a shape that may be described using a set of VOI parameters including fewer than 20 parameters may be designated in step 902.
In step 903, a weighted signal may be calculated based on the data and the parametrized VOI. Merely by way of example, for the case of the data being list mode ECT data, the coordinate representation of an annihilation event (or referred to as an “event” for brevity), (se, φ, ze, θ, λe), or any other data that are familiar to a person of ordinary skill in the art, may be used. The specific meaning of the parameters se, φ, ze, θ, and λe may be found in
The annihilation point (x, y, z) of an event may be calculated from the coordinate representation (se, φ, ze, θ, λe) of the event, for example, by displacing the TOF coordinate λe along the direction of the LOR:
x = se·cos(φ) − λe·cos(θ)·sin(φ), y = se·sin(φ) + λe·cos(θ)·cos(φ), z = ze + λe·sin(θ), Formula 2
In some embodiments, the accuracy of the parameter λe may be improved, and an improved TOF coordinate Λ of an event may be treated as a random variable following, for example, a Gaussian distribution as expressed in Formula 3:
Pe(Λ = λ) = (1/(√(2π)·σ))·exp(−(λ − λe)²/(2σ²)), Formula 3
in which λe is the expectation of the Gaussian distribution, and σ may be the standard deviation of the Gaussian distribution, which may be obtained based on the time resolution of the imaging system. Because the TOF coordinate Λ may be treated as a random variable, the annihilation point of the event may also be a random variable depending upon Λ. The probability distribution of the annihilation point (x, y, z) may be expressed as Fe(r⃗ = (x, y, z)) = Pe(Λ = λ).
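Merely by way of illustration, the Gaussian TOF weight of Formula 3 may be evaluated in Python as follows; the sample points along the LOR are hypothetical values:

import numpy as np

def tof_weight(lam, lam_e, sigma):
    # Gaussian TOF probability (Formula 3) at LOR coordinate(s) lam.
    return np.exp(-0.5 * ((lam - lam_e) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

# Example: distribute one event over discrete sample points along its LOR.
lams = np.linspace(-300.0, 300.0, 61)            # mm along the LOR
weights = tof_weight(lams, lam_e=25.0, sigma=45.0)
weights /= weights.sum()                          # normalized discrete distribution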
Accordingly, the probability distribution of an observed event at time t may be expressed as follows:
F(x, y, z, t) = Σtime(e)=t Fe(r⃗ = (x, y, z)), Formula 4
After the probability distribution of an observed event at time t is obtained, a weighted signal indicating a physiological motion of a subject (e.g., the physiological motion of an inner organ of the subject) may be calculated, for example, as follows:
signal(t) = ∫∫ w1(x, y, z)·F(x, y, z, τ) dv dτ / ∫∫ w2(x, y, z)·F(x, y, z, τ) dv dτ, Formula 5
When w1(x, y, z) = z and w2(x, y, z) = 1, a specific weighted signal indicating the movement of the center of mass (COM) along the z direction may be obtained as follows:
COM(t) = ∫∫ z·F(x, y, z, τ) dv dτ / ∫∫ F(x, y, z, τ) dv dτ, Formula 6
In Formulas 5 and 6, dv = dxdydz represents the differential of the volume, and dτ is the differential of the time integral; the spatial integration is performed over the field of view (FOV). [t, t+ΔT] represents the time interval during which the events involved in F(x, y, z, t) occur, and ΔT is the length of the time interval. In some embodiments, ΔT is 50 ms to 1000 ms. The value of the time interval may be determined according to a sampling theorem. In some embodiments, the value of ΔT may be determined such that there are a sufficient number of sample points. COM(t) may represent the distribution of the annihilations occurring along the z direction within an entire scan.
In some embodiments, a weighted signal based on a VOI, indicating the movement of the center of mass of the VOI, may be calculated, for example, as follows:
COMVOI(t) = ∫∫ z·V(x, y, z)·F(x, y, z, τ) dv dτ / ∫∫ V(x, y, z)·F(x, y, z, τ) dv dτ, Formula 7
In Formula 7, V(x, y, z) may be an indicator function of the VOI, i.e., V(x, y, z) may take the value 1 if (x, y, z) is located within the VOI, and the value 0 if (x, y, z) is located outside of the VOI.
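Merely by way of illustration, a discrete version of Formulas 6 and 7 may be sketched in Python as follows; the event array layout and the spherical VOI are hypothetical assumptions:

import numpy as np

def com_z_signal(events, t0, dT, voi_center=None, voi_radius=None):
    # events: array of shape (N, 4) holding (x, y, z, t) per annihilation point.
    x, y, z, t = events.T
    mask = (t >= t0) & (t < t0 + dT)             # events within [t0, t0 + dT]
    if voi_center is not None:                   # indicator function V(x, y, z)
        cx, cy, cz = voi_center
        mask &= (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= voi_radius**2
    if not mask.any():
        return np.nan
    return z[mask].mean()                        # COM along z (w1 = z, w2 = 1)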
In step 904, a Fourier spectrum analysis may be performed on the weighted signals to determine a signal quality indicator (SQI). As used herein, the signal quality indicator may refer to a signal-to-noise ratio, which may compare the energy level of a target signal within a physiological spectrum to the energy level of the target signal outside of the physiological spectrum. The energy level of a signal may be measured in terms of the spectral behavior of the signal.
Merely by way of example, as shown in the following Formula 8, a signal quality indicator (SQI) based on a physiological spectrum may be calculated as follows:
SQI = G1(FT(signal(t))|f∈signal_space) / G2(FT(signal(t))|f∉signal_space), Formula 8
where FT(signal(t)) is the Fourier transform of signal(t), f∈signal_space means that the frequency f falls within the physiological spectrum, and f∉signal_space indicates that the frequency f falls outside of the physiological spectrum. When a respiratory motion needs to be detected, the physiological spectrum for a respiratory signal may be chosen. When a cardiac motion needs to be detected, the physiological spectrum for a cardiac signal may be chosen. G1 and G2 are two functions measuring the energy level of a function g(f), where g(f) may be a function of f, and ‖g(f)‖ may be the absolute value of g(f). For example, G1 and G2 may be as shown in Formula 9:
G1(g(f)) = G2(g(f)) = ‖g(f)‖², Formula 9
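Merely by way of illustration, the SQI of Formulas 8 and 9 may be computed in Python as follows; the respiratory band of 0.1 to 0.5 Hz is a hypothetical physiological spectrum:

import numpy as np

def sqi(signal, dt, band=(0.1, 0.5)):
    # SQI per Formulas 8-9: in-band spectral energy over out-of-band energy.
    spectrum = np.fft.rfft(signal - np.mean(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    energy = np.abs(spectrum) ** 2               # ||g(f)||^2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return energy[in_band].sum() / max(energy[~in_band].sum(), 1e-12)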
In step 905, SQI for a given VOI may be maximized by traversing the set of VOI parameters. Merely by way of example, a way of traversing the set of VOI parameters may be choosing a subset of VOI parameters and performing a traversing algorithm on the chosen subset of VOI parameters to accelerate the execution of the traversing algorithm.
In step 906, a characteristic VOI with the maximized SQI among the SQIs calculated in step 905 may be determined. In some embodiments, a characteristic VOI may be determined by considering the behavior of the SQI and other indicators jointly. For example, a characteristic VOI may be determined by maximizing a certain combination of the SQI and the volume of a VOI.
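Merely by way of illustration, steps 905 and 906 may be sketched in Python as follows, reusing the hypothetical com_z_signal and sqi helpers from the sketches above; the candidate sphere centers and radii are assumed inputs, and only spherical VOIs are traversed:

import numpy as np
from itertools import product

def best_voi(events, dt, centers, radii):
    # Traverse candidate spherical-VOI parameters (step 905) and keep the
    # VOI whose COM-z signal maximizes the SQI (step 906).
    best_params, best_q = None, -np.inf
    t_grid = np.arange(0.0, events[:, 3].max(), dt)
    for center, radius in product(centers, radii):
        sig = np.array([com_z_signal(events, t0, dt, center, radius)
                        for t0 in t_grid])
        sig = sig[~np.isnan(sig)]
        if sig.size < 8:                         # too few samples to analyze
            continue
        q = sqi(sig, dt)
        if q > best_q:
            best_params, best_q = (center, radius), q
    return best_params, best_q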
The rigid motion corrector 1002 may correct the list mode data based on determined motion information. As described above, the occurrence of a rigid motion and/or the amplitude thereof may be determined. The rigid motion corrector 1002 may correct the list mode data in a specific subset in which motion information is determined. For example, the rigid motion corrector 1002 may remove the motion information from the subset based on a motion field matrix (see details in
It should be noted that the above description is provided for the purposes of illustration, not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be reduced to practice in the light of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure.
As illustrated in
In step 1080, the acquired list mode data may be corrected based on the rigid motion information. The correction may be performed on the list mode data directly. In some embodiments, the correction may be performed on the images generated from the list mode data.
As illustrated in
S={si,1≤i≤T/Δt}, Formula 10
where si may represent the ith subset divided from the list mode data, T may represent the acquisition time of the list mode data, and Δt may represent the time interval.
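Merely by way of illustration, the division of Formula 10 may be sketched in Python as follows; the event-time array is a hypothetical input:

import numpy as np

def divide_listmode(event_times, dt):
    # Divide list mode events into subsets s_i by time interval dt (Formula 10).
    n_subsets = int(np.ceil(event_times.max() / dt))
    return [np.flatnonzero((event_times >= i * dt) & (event_times < (i + 1) * dt))
            for i in range(n_subsets)]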
In step 1103, one of the plurality of subsets may be selected as a reference subset. The selection may be performed based on a default setting of the imaging system 100 or an instruction from an operator (e.g., a doctor). In some embodiments, the first subset S1 may be selected as the reference subset, and the reference subset may be expressed as sref.
In step 1104, the data in the plurality of subsets may be transformed into a plurality of images or histograms. In some embodiments, the data in the plurality of subsets may be transformed into a plurality of images according to the formula below:
where crystalx, crystaly, and ringz may represent the identifiers of the two scintillator crystals and the scintillator ring corresponding to an LOR, respectively.
As mentioned above, the reference subset may correspond to a reference image. The reference image may be expressed as imgref, while the other images transformed from the data in other subsets may be expressed as imgi.
In some embodiments, the data in the plurality of subsets may be transformed into a plurality of histograms. The reference subset may correspond to a reference histogram. The reference histogram may be expressed as historef, while the other histograms transformed from the data in other subsets may be expressed as histoi.
In step 1105, a similarity between the reference image or the reference histogram and other images or histograms may be assessed. In some embodiments, the similarity between the reference image and the other images may be obtained by:
I={Ii|Ii=similarity(imgref,imgi),1≤i≤T/Δt}, Formula 12
where similarity(a, b) may represent the similarity function, and Ii may represent the similarity between the reference image (imgref) and the ith image (imgi) corresponding to the ith subset.
In some embodiments, the similarity between the reference histogram and the other histograms may be obtained by Formula 13 below. As used herein, the similarity may include shape similarity, distribution similarity, or the like, or a combination thereof.
I={Ii|Ii=similarity(historef,histoi),1≤i≤T/Δt}, Formula 13
where similarity(a, b) may represent the similarity function, and Ii may represent the similarity between the reference histogram (historef) and the ith histogram (histoi) corresponding to the ith subset.
In step 1106, an occurrence of a rigid motion and/or the amplitude of the rigid motion may be determined based on the similarity and a threshold. The threshold may be set based on a default setting of the imaging system 100 or an instruction from an operator (e.g., a doctor). In some embodiments, the occurrence of the motion may be determined, for example, based on Formula 14 below:
Mi = 0, if Ii ≥ threshold; Mi = Ii, if Ii < threshold, Formula 14
where 0 may mean that no motion has occurred, and Ii may represent that a rigid motion has occurred within the time interval to which the ith subset belongs.
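Merely by way of illustration, the similarity assessment and threshold test of Formulas 12 through 14 may be sketched in Python as follows; normalized cross-correlation is one plausible choice of similarity function, and the histogram arrays are hypothetical inputs:

import numpy as np

def detect_rigid_motion(hist_ref, hists, threshold):
    # Normalized-correlation similarity between the reference histogram and
    # each subset histogram; low similarity flags a rigid motion.
    flags = []
    a = (hist_ref - hist_ref.mean()).ravel()
    for h in hists:
        b = (h - h.mean()).ravel()
        sim = a @ b / max(np.linalg.norm(a) * np.linalg.norm(b), 1e-12)
        flags.append(sim < threshold)            # True: rigid motion occurred
    return np.array(flags)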
In step 1202, list mode data relating to a subject may be acquired. The list mode data may be acquired from the acquisition circuitry 202, the ECT data storage 201, or any storage disclosed anywhere in the present disclosure.
In step 1204, the list mode data may be divided based on the time information and/or the spatial information. For example, the list mode data may be divided according to a time span of 1 second or less. As another example, the list mode data may be divided according to a time span of 5 seconds or less. As a further example, the list mode data may be divided according to a time span of 10 seconds or less.
In step 1206, a relationship between the data space and the reconstruction space may be established. In some embodiments, the data space may be given sinogram coordinates, and the reconstruction space may be given 3D reconstruction coordinates. A relationship between the sinogram coordinates and the 3D reconstruction coordinates may be established (see details in
s=x cos(φ)+y sin(φ), Formula 15
where x may represent the x-axis coordinate, and y may represent the y-axis coordinate.
In step 1208, expectation information based on the divided list mode data may be obtained. Firstly, P(φ, s, z, θ, t) may represent the list mode data located at the sinogram coordinates (φ, s, z, θ) at time t, obtained by histogramming all events that are detected at time t. The list mode data P(φ, s, z, θ, t) may be divided into the time intervals [iΔT, (i+1)ΔT], i=0, 1, . . . At the nth time interval [nΔT, (n+1)ΔT], the rigid motion information may be obtained from operations on the list mode data P(φ, s, z, θ, t), t∈[nΔT, (n+1)ΔT]. Specifically, E(S(φ, n)) may represent the mathematical expectation of s with a projection angle φ in the nth time interval [nΔT, (n+1)ΔT]. E(S(φ, n)) (φ=φ1, φ2 . . . φK) may be calculated by:
where φ = φ1, φ2, . . . , φK, and φ1, φ2, . . . , φK may represent candidate discrete values of φ.
In some embodiments, E(S2(φ, n)) (φ=φ1, φ2 . . . φK) may be calculated by
E(X(n)) and E(Y(n)) may represent mathematical expectations of translations of the subject's center of mass along the x axis and y axis, respectively in the nth time interval. E(X(n)) and E(Y(n)) may be calculated by:
where φ1, φ2, . . . , φK may represent candidate discrete values of φ. In some embodiments, K ≥ 2 may be sufficient to determine E(X(n)) and E(Y(n)) using, for example, least-squares-type methods. The distribution of φ1, φ2, . . . , φK may be uniform, i.e., the difference between φi and φi−1 is invariant or independent of i. The distribution of φ1, φ2, . . . , φK may also be non-uniform, i.e., the difference between φi and φi−1 is not invariant or is dependent on i.
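Merely by way of illustration, since Formula 15 implies E(S(φ, n)) = E(X(n))·cos(φ) + E(Y(n))·sin(φ) by linearity of expectation, E(X(n)) and E(Y(n)) may be recovered by a least-squares fit over the K candidate angles. The sketch below assumes the expectations E(S(φk, n)) have already been computed:

import numpy as np

def com_xy_from_projections(phis, es):
    # Least-squares fit of E(S(phi, n)) = E(X)cos(phi) + E(Y)sin(phi); K >= 2.
    # phis: K candidate angles; es: measured E(S(phi_k, n)) values.
    A = np.column_stack([np.cos(phis), np.sin(phis)])
    (ex, ey), *_ = np.linalg.lstsq(A, es, rcond=None)
    return ex, ey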
In some embodiments, the translation of the subject's mass center along the z axis in the nth time interval may be defined as E(Z(θ, φ, n)). E(Z(θ, φ, n)) may be calculated by:
The covariance matrix Cov(n) may be calculated as
where E(X2(n)), E(Y2(n)), and E(X(n)Y(n)) may be calculated by
In some embodiments, K ≥ 3 may be sufficient to determine E(X2(n)), E(Y2(n)), and E(X(n)Y(n)) using, for example, least-squares-type methods. The distribution of φ1, φ2, . . . , φK may be uniform, i.e., the difference between φi and φi−1 is invariant or independent of i. The distribution of φ1, φ2, . . . , φK may also be non-uniform, i.e., the difference between φi and φi−1 is not invariant or is dependent on i.
E(Z(n)X(n)), and E(Y(n)Z(n)) may be calculated by
where E(Z(θ, φ, n)S(θ, φ, n)) (φ=φ1, φ2 . . . φK) may be calculated by
E(Z2(n)) may be calculated by
E(Z2(n))=E(Z2(0,n)). Formula 24
where E(Z2(0, n)) may be calculated by
In step 1210, the translation information based on the expectation information may be obtained. As used herein, the term “translation information” may refer to a translational motion of a subject along the x axis, the y axis, and/or the z axis. For example, a translational motion of the subject's center of mass along the x axis, the y axis, and/or the z axis may be obtained and may be defined as U(n), V (n), and W(n), respectively.
U(n), V(n), and W(n) may be calculated, for example, as translations relative to the initial time interval:
U(n) = E(X(n)) − E(X(0)), V(n) = E(Y(n)) − E(Y(0)), W(n) = E(Z(n)) − E(Z(0)),
where E(X(n)) and E(Y(n)) may represent the mathematical expectation of translations of the subject's center of mass along the x axis and the y axis, respectively, in the nth time interval.
In step 1212, the rotation information based on the expectation information may be obtained. As used herein, the term “rotation information” may refer to the rotation angles of the subject around the x axis, the y axis, and/or the z axis. For example, rotation angles of the subject centered at the center of mass around the x axis, the y axis, and/or the z axis may be obtained and may be defined as α(n), β(n), and γ(n), respectively. The angles α(n), β(n), and γ(n) may be calculated by:
where r32(n), r31(n), and r21(n) may be calculated by
where R(n) may represent the rotation matrix of the patient, and R(n) may be Rx*Ry*Rz as given in Formula 1; Q(n) may represent the eigenvector matrix of the covariance matrix Cov(n). Q(n) may be calculated by
Cov(n)·Q(n) = Q(n)·D(n), Formula 29
where D(n) may represent a 3×3 diagonal matrix, and Cov(n) may represent the covariance matrix given above.
The translation information (such as the translational motion of the subject's center of mass along the x axis, the y axis, and/or the z axis, denoted as U(n), V(n), and W(n), n=0, 1, 2, . . . , respectively) may be used together with the rotational information (such as the angles α(n), β(n), and γ(n), n=0, 1, 2, . . . ) to detect the rigid motion of the subject. The detection may be based on a threshold. For example, if the magnitude of the translation information (U(n), V(n), W(n)) is larger than or equal to a threshold, then the subject may be deemed to be undergoing rigid motion during the nth time interval. As another example, if the magnitude of the angles (α(n), β(n), γ(n)) is larger than or equal to a threshold, then the subject may be deemed to be undergoing rigid motion during the nth time interval. As a further example, if the magnitude of the combined translational/rotational information (U(n), V(n), W(n), α(n), β(n), γ(n)) is larger than or equal to a threshold, then the subject may be deemed to be undergoing rigid motion during the nth time interval. The magnitude of a quantity (C1, C2, . . . , Ck) may be calculated by √(C1² + C2² + . . . + Ck²).
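Merely by way of illustration, the magnitude-threshold test may be sketched in Python as follows; the per-interval translation and rotation tuples and the two thresholds are hypothetical inputs:

import numpy as np

def rigid_motion_flag(translation, rotation, t_thresh, r_thresh):
    # translation: (U, V, W); rotation: (alpha, beta, gamma) for one interval.
    t_mag = np.linalg.norm(translation)          # sqrt(U^2 + V^2 + W^2)
    r_mag = np.linalg.norm(rotation)
    return t_mag >= t_thresh or r_mag >= r_thresh   # True: rigid motion detected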
Besides the detection of a rigid motion of the subject, the translation information (such as the translational motion of the subject's center of mass along the x axis, the y axis, and/or the z axis, denoted as U(n), V(n), and W(n), respectively) may be used together with the rotational information (such as the angles α(n), β(n), and γ(n)) to gate or subgate the list mode data. The gating may be performed in either an automatic or a manual way. For example, the list mode data in the nth time interval may be subgated so that neither the magnitude of the translation information nor the magnitude of the rotational information of the subject within the subgated time intervals exceeds a predetermined threshold.
It should be noted that the above description is provided for the purposes of illustration, not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be reduced to practice in the light of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, to detect the rigid motion of the subject, a person of ordinary skill in the art may consider using the difference of translation information between two consecutive time intervals to detect the rigid motion. If the difference of translation information (or rotational information, or the like, or a combination thereof) between two consecutive time intervals is larger than or equal to a threshold, then the subject may be deemed to be undergoing rigid motion movement during the transition of the two consecutive time intervals. For a further example, if the difference between the covariance matrices for two consecutive time intervals is larger than or equal to a threshold, then the subject may be deemed to be undergoing rigid motion movement during the transition of the two consecutive time intervals.
Furthermore, the translation information (such as the translational motion of the subject's center of mass along the x axis, the y axis, and/or the z axis, denoted as U(n), V(n), and W(n), respectively) may be used together with the rotational information (such as the angles α(n), β(n), and γ(n)) to correct the artifacts caused by the rigid motion of the subject in the list mode data, the sinograms, or the reconstructed images. The artifact correction may be performed in either an automatic or a manual way. The artifact correction may be performed in an “in situ” way. For example, the reconstructed images may undergo a transformation of translation and/or rotation to counter the translation and/or rotation induced by U(n), V(n), and W(n) and the rotation angles α(n), β(n), and γ(n).
In step 1301, the information, such as the amplitude of the motion, may be loaded. As illustrated in
∇I=I(i)−I(i−1) Formula 30
where I(i) may represent the similarity corresponding to the ith time interval.
The list mode data may be gated based on the gradient values and a threshold, and a plurality of gates (also referred to as “gated list mode data sets”) may be generated. The threshold may be set based on a default setting of the imaging system 100 or an instruction from an operator (e.g., a doctor). In some embodiments, data in the ith subset may be combined with that in the (i−1)th subset if the gradient between the ith time interval and the (i−1)th time interval does not exceed a threshold. Otherwise, data in the ith subset may be classified into a new data set. The plurality of gated list mode data sets may be defined as:
$D = \{D_i, 1 \le i \le n\}$ Formula 31
where $n$ may represent the number of gated list-mode data sets, and $D_i$ may represent the $i$-th gated list-mode data set.
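A minimal sketch of this grouping rule, assuming the similarities I(i) have already been computed for each time interval; the function name gate_by_gradient and the sample values are illustrative only.

def gate_by_gradient(similarities, threshold):
    """Group consecutive time intervals into gated data sets D = {D_i}.

    similarities : sequence of I(i), one value per time interval.
    A new gate starts whenever the gradient of Formula 30 exceeds the
    threshold in magnitude; otherwise interval i joins the current gate.
    """
    gates = [[0]]
    for i in range(1, len(similarities)):
        grad = similarities[i] - similarities[i - 1]  # Formula 30
        if abs(grad) > threshold:
            gates.append([i])    # classify into a new gated data set
        else:
            gates[-1].append(i)  # combine with the (i-1)-th subset
    return gates

print(gate_by_gradient([0.98, 0.97, 0.80, 0.81, 0.97], threshold=0.05))
# -> [[0, 1], [2, 3], [4]]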
In step 1303, an image reconstruction may be performed based on gated list-mode data. The image reconstruction may be performed by the image reconstruction processor 207. The reconstructed images may be defined as:
$f = \{f(D_i), 1 \le i \le n\}$ Formula 32
where $f(D_i)$ may represent the reconstructed image corresponding to the $i$-th gated list-mode data set.
In step 1304, a reconstructed image may be selected as a reference image. The selection may be performed based on a default setting of the imaging system 100 or an instruction from an operator (e.g., a doctor). In some embodiments, the image corresponding to the first gated list-mode data set may be selected as the reference image.
In step 1305, an image registration may be performed to obtain a motion field matrix. The image registration may be performed between the reference image and other reconstructed images. Methods used for the image registration may include a cross-correlation (CC), a normalized CC, a sequential similarity detection algorithm (SSDA), a mutual information (MI) method, a sum of squared intensity differences, a ratio image uniformity, or the like, or a combination thereof. In some embodiments, the motion field matrix may be defined as:
$T = \{T_i, 1 \le i \le n\}$ Formula 33
where $T_i$ may represent the motion field between the reference image and the $i$-th reconstructed image. The motion field matrix may be calculated by Formula 1 as already described. A translation-only illustration of estimating such a field is sketched below.
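The disclosure obtains T_i by Formula 1; purely to illustrate how a motion field can be derived from registration against the reference image, the sketch below estimates a translation-only T_i from the center-of-mass shift of each reconstructed image. The helper translation_motion_field and the 4x4 homogeneous representation are assumptions for this sketch, not the registration method of the disclosure.

import numpy as np
from scipy.ndimage import center_of_mass

def translation_motion_field(reference, images):
    """Translation-only stand-in for the motion field T = {T_i}.

    Each T_i is a 4x4 homogeneous matrix whose translation part is the
    center-of-mass shift between the i-th image and the reference image.
    """
    ref_com = np.asarray(center_of_mass(reference))
    fields = []
    for img in images:
        shift = ref_com - np.asarray(center_of_mass(img))
        T = np.eye(4)
        T[:3, 3] = shift  # pure translation; no rotation is estimated
        fields.append(T)
    return fields

# A point source displaced by two voxels yields T[:3, 3] ~ [-2, 0, 0].
ref = np.zeros((16, 16, 16)); ref[8, 8, 8] = 1.0
moved = np.zeros((16, 16, 16)); moved[10, 8, 8] = 1.0
T_1 = translation_motion_field(ref, [moved])[0]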
In step 1306, the list mode data may be corrected based on the motion field matrix. In some embodiments, the imaging system 100 may correct the spatial information of the list mode data based on the motion field matrix. As used herein, the term "spatial information" may refer to the spatial coordinates of a pair of scintillator crystals in a PET system corresponding to an LOR. Exemplary spatial information may be defined as:
$\vec{x}_a = (x_a, y_a, z_a), \quad \vec{x}_b = (x_b, y_b, z_b)$ Formula 34
where $\vec{x}_a$ may represent the space coordinates of crystal a; $\vec{x}_b$ may represent the space coordinates of crystal b; $x_a$, $y_a$, and $z_a$ may represent the x-axis, y-axis, and z-axis coordinates of crystal a, respectively; and $x_b$, $y_b$, and $z_b$ may represent the x-axis, y-axis, and z-axis coordinates of crystal b, respectively.
The space coordinates of a pair of scintillator crystals may be corrected based on the motion field matrix by the following formula:
$\vec{x}'_a = T_i \vec{x}_a, \quad \vec{x}'_b = T_i \vec{x}_b$ Formula 35
where $\vec{x}'_a$ may represent the corrected space coordinates of crystal a, and $\vec{x}'_b$ may represent the corrected space coordinates of crystal b. A sketch of applying such a correction is given below.
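Assuming, for illustration only, that each T_i is represented as a 4x4 homogeneous matrix as in the sketch above, the correction of Formula 35 reduces to a matrix-vector product in homogeneous coordinates; the same operation serves the MLP correction described later.

import numpy as np

def correct_coordinates(T, x):
    """Apply a 4x4 homogeneous motion-field matrix to a 3-D point.

    Usable for the crystal coordinates of Formula 35 and, identically,
    for the maximum likelihood point of Formula 39.
    """
    xh = np.append(np.asarray(x, float), 1.0)  # homogeneous coordinates
    return (T @ xh)[:3]

T_1 = np.eye(4)
T_1[:3, 3] = [3.5, 0.1, 0.0]  # translation-only example matrix
x_a_prime = correct_coordinates(T_1, [120.0, -45.0, 10.0])
# -> array([123.5, -44.9, 10. ])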
In some embodiments, the imaging system 100 may correct both the spatial information and the time information of the list mode data based on the motion field matrix. In some embodiments, the format of the list mode data may be defined as:
$\{\vec{x}_a, \vec{x}_b, t_a, t_b\}$ Formula 36
where $t_a$ and $t_b$ may represent the time of flight (TOF) corresponding to crystal a and crystal b, respectively.
With a high timing resolution, the space coordinates of an event may be localized based on the list mode data. Taking into consideration the timing resolution of the imaging system 100, a maximum likelihood point (MLP) may be obtained, together with a probability distribution centered at the MLP along the LOR. The MLP may be calculated by the following formula:
$\vec{x}_0 = \frac{1}{2}\,c\,(t_b - t_a)\,\frac{\vec{x}_a - \vec{x}_b}{\left|\vec{x}_a - \vec{x}_b\right|} + \frac{\vec{x}_a + \vec{x}_b}{2}$ Formula 37
where $\vec{x}_0$ may represent the space coordinates of the MLP, and $c$ may represent the speed of light. A numerical sketch of this localization is given below.
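A numerical sketch of Formula 37, with crystal coordinates in millimeters and TOF times in picoseconds; the units and the function name maximum_likelihood_point are choices made for this illustration.

import numpy as np

C_MM_PER_PS = 0.299792458  # speed of light in mm per picosecond

def maximum_likelihood_point(x_a, x_b, t_a, t_b):
    """Localize the annihilation point along the LOR from TOF (Formula 37).

    A positive t_b - t_a means the photon reached crystal b later, so the
    MLP lies on the crystal-a side of the midpoint of the LOR.
    """
    x_a, x_b = np.asarray(x_a, float), np.asarray(x_b, float)
    direction = (x_a - x_b) / np.linalg.norm(x_a - x_b)
    return 0.5 * C_MM_PER_PS * (t_b - t_a) * direction + 0.5 * (x_a + x_b)

x_0 = maximum_likelihood_point([400.0, 0.0, 0.0], [-400.0, 0.0, 0.0],
                               t_a=0.0, t_b=200.0)
# -> roughly 30 mm from the midpoint toward crystal a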
According to Formula 37, the format of the list mode data may be defined as:
$\{\vec{x}_0\}$ Formula 38
The probability distribution centered at the MLP along the LOR may be symmetrical. A new probability distribution may be obtained with the corrected space coordinates of the MLP. The list mode data may be corrected with the following formula:
$\vec{x}'_0 = T_i \vec{x}_0$ Formula 39
where $\vec{x}'_0$ may represent the corrected space coordinates of the MLP; this is the same motion-field correction applied above to the crystal coordinates.
In an optional step 1307, an image reconstruction may be performed based on the corrected list mode data. The image reconstruction may be performed by the image reconstruction processor 207.
It should be noted that the above embodiments are provided for the purposes of illustration and are not intended to limit the scope of the present disclosure. After consulting the present disclosure, one skilled in the art may envisage numerous other changes, substitutions, variations, alterations, and modifications without inventive activity, and it is intended that the present disclosure encompasses all such changes, substitutions, variations, alterations, and modifications, as falling within its scope.
The following examples are provided for illustration purposes and are not intended to limit the scope of the present disclosure.
In step 1402, list mode data may be acquired. The list mode data may be acquired from the acquisition circuitry 202, the ECT data storage 201, or any storage device disclosed elsewhere in the present disclosure. In step 1404, the list mode data may be divided into a plurality of subsets according to a fixed acquisition time interval. The time interval may range from 1 second to 10 seconds. In step 1406, one of the plurality of subsets may be selected as a reference subset. In some embodiments, the subset S1 corresponding to the first time interval may be selected as the reference subset. In some embodiments, multi-frame subsets may be obtained. As used herein, a multi-frame subset may refer to a subset other than the reference subset. In some embodiments, the reference subset and the multi-frame subsets may be obtained simultaneously or successively. A sketch of the division by fixed time intervals is given below.
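A minimal sketch of the division in step 1404, assuming event timestamps in seconds and a fixed 5-second interval within the 1-to-10-second range stated above; the helper name split_by_time is hypothetical.

import numpy as np

def split_by_time(timestamps, interval_s=5.0):
    """Divide list mode events into subsets S_1, S_2, ... of fixed duration.

    timestamps : sorted 1-D array of event times in seconds.
    Returns one index array per acquisition time interval.
    """
    t0 = timestamps[0]
    bins = ((timestamps - t0) // interval_s).astype(int)
    return [np.flatnonzero(bins == b) for b in range(bins[-1] + 1)]

subsets = split_by_time(np.sort(np.random.uniform(0, 60, size=100_000)))
reference_subset = subsets[0]  # S_1 serves as the reference subset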
In step 1408, an image may be reconstructed based on the reference subset. The reconstructed image may be designated as the reference image for further use in step 1420, to be described later.
In step 1410, similarities between the reference subset and the other subsets may be assessed respectively (detailed descriptions regarding the similarity may be found elsewhere in the present disclosure). One way to assess such a similarity is sketched below.
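As one example of a similarity measure consistent with the cross-correlation family mentioned in this disclosure, the sketch below computes a zero-mean normalized cross-correlation between the reference image and an image reconstructed from another subset; the function name is an illustrative assumption.

import numpy as np

def normalized_cross_correlation(ref, img):
    """Similarity I(i) between the reference image and another image.

    Zero-mean normalized cross-correlation in [-1, 1]; values near 1
    indicate the subset matches the motion state of the reference.
    """
    a = ref - ref.mean()
    b = img - img.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

ref_img = np.random.rand(64, 64, 32)
other_img = ref_img + 0.05 * np.random.rand(64, 64, 32)
print(normalized_cross_correlation(ref_img, other_img))  # close to 1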
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware that may all generally be referred to herein as a "block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like; conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, or ABAP; dynamic programming languages such as Ruby and Groovy; or other programming languages. The program code may execute entirely on the operator's computer, partly on the operator's computer, as a stand-alone software package, partly on the operator's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the operator's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), or in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS).
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, inventive embodiments lie in less than all features of a single foregoing disclosed embodiment.