METHOD FOR A BRAIN REGION LOCATION AND SHAPE PREDICTION

Information

  • Patent Application
  • 20230293130
  • Publication Number
    20230293130
  • Date Filed
    April 27, 2023
  • Date Published
    September 21, 2023
Abstract
A volumetric segmentation method is disclosed for brain region analysis, in particular, but not limited to, regions of the basal ganglia such as the subthalamic nucleus (STN). The method serves for visualization and localization within the sub-cortical region of the basal ganglia, for example to predict a region of interest for deep brain stimulation procedures. A statistical shape model captures the variation modes of the STN, or of the corresponding region of interest, and of its predictors on high-quality training sets obtained from high-field (e.g., 7 T) MR imaging. Partial least squares regression (PLSR) is applied to infer the spatial relationship between the region to be predicted, e.g., the STN, and its predictors. Prediction accuracy, used to validate the invention, is evaluated by measuring the shape similarity and the errors in position, size, and orientation between the manually segmented STN and its predicted counterpart.
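At a high level, the shape-prediction step described above amounts to regressing the coordinates of the region to be predicted (e.g., the STN) on the coordinates of its predictor structures learned from a high-field training set. The following is a minimal sketch of that idea, assuming flattened surface-point coordinates as shape descriptors and using scikit-learn's PLSRegression; the file names, array shapes, and number of latent components are illustrative assumptions, not details taken from the patent.

    # Sketch only: trains a PLSR mapping from predictor-structure shapes to the
    # STN shape on a high-field (e.g., 7 T) training set, then predicts the STN
    # for a new patient. All file names and array shapes are hypothetical.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Each row is one training subject; columns are flattened 3-D surface
    # coordinates of the predictor structures and of the manually segmented STN.
    X_train = np.load("predictor_shapes.npy")   # (n_subjects, n_predictor_coords)
    Y_train = np.load("stn_shapes.npy")         # (n_subjects, n_stn_coords)

    # Fit partial least squares regression relating predictors to the STN shape.
    pls = PLSRegression(n_components=5)         # number of latent components is a free choice
    pls.fit(X_train, Y_train)

    # Predict the STN shape for a new patient from the patient's predictor
    # structures segmented on clinical (lower-field) imaging.
    x_patient = np.load("patient_predictors.npy").reshape(1, -1)
    stn_predicted = pls.predict(x_patient).reshape(-1, 3)    # recover (x, y, z) vertices

    # One simple validation measure from the abstract: position error against a
    # manually segmented STN, here taken as the distance between centroids.
    stn_manual = np.load("patient_stn_manual.npy").reshape(-1, 3)
    position_error = np.linalg.norm(stn_predicted.mean(axis=0) - stn_manual.mean(axis=0))
    print(f"Centroid position error: {position_error:.2f} mm")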
Claims
  • 1. A method for generating an image, comprising: accessing a database including a plurality of image files of three-dimensional body portions including structures of interest (SOI) of a plurality of subjects, wherein the database includes one or more first image files of images of a first subject defined by a first contrast, and one or more second image files of images of the first subject defined by a second contrast that is greater than the first contrast; selecting a plurality of image files from the database as a training set, including at least one of the first image files and at least one of the second image files; processing the training set and generating from the training set information representative of the SOI of the image files of the training set; receiving a patient image file of three-dimensional body portions including the patient’s structure of interest; processing the patient image file based upon the information representative of the SOI of the image files of the training set to generate a patient-specific atlas identifying one or more of a shape, location, size or orientation of the patient’s structure of interest in the patient image file.
  • 2. The method of claim 1, wherein the first and second image files are defined by a first coordinate space.
  • 3. The method of claim 2, further comprising: receiving a third image file of the three-dimensional body portion of the first subject including the subject’s SOI, wherein the third image file is defined by a second coordinate space that is different than the first coordinate space, and the second contrast; and generating the second image file from the third image file, comprising: segmenting the subject’s SOI in the third image file; and transforming the segmented SOI into the first coordinate space.
  • 4. The method of claim 1, wherein the patient image file is defined by a contrast that is less than the second contrast.
  • 5. The method of claim 1, wherein generating the training set of information representative of the SOI of the image files comprises generating predictor information.
  • 6. The method of claim 1, further comprising: receiving a post-implantation image file of the three-dimensional body portions including the patient’s structure of interest after insertion of an implant into the patient’s structure of interest; and merging the post-implantation image with the patient-specific atlas to form a post-implantation composite image.
  • 7. The method of claim 1, wherein the patient image file and the post-implantation image file are defined by a contrast that is less than the second contrast.
  • 8. A method for generating training image files for use in connection with the segmentation of structures of interest in a patient image file, comprising: receiving a first image file of a three-dimensional body portion of a training subject including the training subject’s structure of interest (SOI), wherein the first image file is defined by a first coordinate space and a first contrast; receiving a second image file of the three-dimensional body portion of the training subject including the training subject’s SOI, wherein the second image file is defined by a second coordinate space and a second contrast that is greater than the first contrast; generating a third image file of the three-dimensional body portion of the training subject including the training subject’s SOI, wherein the third image file is defined by the first coordinate space and the second contrast, comprising: segmenting the subject’s SOI in the second image file; and transforming the segmented SOI into the first coordinate space.
  • 9. A method for generating an image, comprising selecting a training set of images, including at least one first image file of a first subject including a structure of interest (SOI) defined by a first coordinate space and a first contrast, and at least one second image file of the first subject including the SOI defined by the first coordinate space and a second contrast that is greater than the first contrast.
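Claim 6 merges a post-implantation image with the patient-specific atlas into a composite. One possible reading of that step is a simple overlay of the predicted structure of interest on the post-operative volume, sketched below; the file names, the alpha value, and the assumption that both volumes already share a coordinate space are illustrative only.

    # Sketch only: blends the patient-specific atlas mask into the
    # post-implantation volume so the implant can be inspected relative to the
    # predicted structure of interest.
    import numpy as np

    post_op = np.load("post_implantation_volume.npy").astype(np.float32)   # (z, y, x)
    atlas_mask = np.load("patient_specific_atlas_mask.npy").astype(bool)   # (z, y, x)

    # Normalise post-operative intensities to [0, 1] for display.
    intensity_range = post_op.max() - post_op.min()
    post_op = (post_op - post_op.min()) / (intensity_range + 1e-8)

    # Simple alpha blend: brighten voxels that fall inside the predicted SOI.
    alpha = 0.4
    composite = post_op.copy()
    composite[atlas_mask] = (1 - alpha) * post_op[atlas_mask] + alpha

    np.save("post_implantation_composite.npy", composite)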
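Claim 8 generates a training image file by segmenting the SOI in the high-contrast (second) image and transforming the segmentation into the coordinate space of the low-contrast (first) image. A minimal sketch of such a resampling step follows, assuming a known affine between the two coordinate spaces and using scipy's affine_transform; the matrices, output grid size, and file names are illustrative assumptions, not details from the patent.

    # Sketch only: resamples a binary SOI mask segmented in the high-contrast
    # (second) coordinate space onto the grid of the low-contrast (first)
    # coordinate space. The affine is assumed to come from a prior registration
    # step that is not shown here.
    import numpy as np
    from scipy.ndimage import affine_transform

    soi_mask_high = np.load("soi_mask_7T.npy")        # (z, y, x) binary mask in second space

    # Affine mapping voxel coordinates of the first space into the second space
    # (the direction affine_transform expects when pulling values into the output grid).
    A = np.load("first_to_second_matrix.npy")          # 3x3 matrix
    t = np.load("first_to_second_offset.npy")          # length-3 offset

    soi_mask_first = affine_transform(
        soi_mask_high.astype(np.float32),
        matrix=A,
        offset=t,
        output_shape=(256, 256, 180),   # assumed grid of the first image
        order=0,                        # nearest-neighbour keeps the mask binary
    )

    np.save("soi_mask_in_first_space.npy", soi_mask_first.astype(np.uint8))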
Provisional Applications (2)
Number Date Country
61929053 Jan 2014 US
61841955 Jul 2013 US
Continuations (3)
Parent Number    Date        Country    Child Number
17137022         Dec 2020    US         18140437
15457355         Mar 2017    US         17137022
14317925         Jun 2014    US         15457355