Method, apparatus and system for joining image volume data

Information

  • Patent Number
    8,599,215
  • Date Filed
    Thursday, May 7, 2009
  • Date Issued
    Tuesday, December 3, 2013
Abstract
An apparatus and method for joining two MRI image data sets to form a composite image. The images are joined together at one or more places along the common area by processing the first and second image data using the square of the normalized intensity difference between at least one group of pixels in the first image data and another group of pixels in the second image data.
Description
BACKGROUND

It is sometimes necessary to join together two images to make a larger, composite image. These images typically have some overlapping features in common which must be aligned properly to create the larger image. Typically, in-plane translational offsets must be calculated to align the images. The problem is compounded when dealing with 3D volume data. For example, to diagnose and analyze scoliosis using MRI, separate 3D volume data sets of the upper and lower spine are acquired in order to achieve the necessary field-of-view while preserving necessary image quality and detail. In this case, both in-plane and out-of-plane translational offsets must be computed to align the volume data.


Presently used techniques register two images from differing modalities (e.g., MRI and CT) for the purpose of fusing the data. Both modalities cover the same volume of anatomy and several control points are used to identify common features. Having to use two different modalities with several control points increases the complexity of these techniques.


Of utility then are methods and systems that reduce the complexity of prior art systems and processes in joining volume image data from a single modality.


SUMMARY

In one aspect, the present invention provides one or more processes and a user interface that allows for 3D alignment of volume image data. This aspect of the invention is used to join together images from the same modality for the purpose of generating a composite image with a larger field-of-view but with the resolution and quality of individual scans.


In another aspect, the present invention preferably requires only one control point placed on a single image from each data set. Given this single data point, 3D volume data is combined to form a larger 3D volume. For example, where one set of image data is displayed on screen and shows a common area of anatomy with a second set of image data, a control point within the common area is selected. Once that selection is made, a search is done through the volumetric overlapping areas. The two image data sets are joined based on minimization of an error function generated from the area surrounding the control point in both volumetric data sets. The area surrounding the control point is referred to as the control point's neighborhood. The neighborhood can be in-plane (two dimensional) or it can include neighboring slices (three dimensional). In the case of two dimensions it may be rectangular and in the case of three dimensions it may be a rectangular prism (rectangular parallelepiped). The size of the neighborhood is chosen to encompass distinguishing features in the images. In the case of MRI spinal images the neighborhood is chosen to be slightly larger than the dimensions of lumbar vertebrae and adjacent discs. The similarity of the pixel intensities of the neighborhoods of an upper control point and a lower control point is determined by computing the error function. The two neighborhoods are then offset from each other in two (or three) dimensions and the error function is re-computed. This process is continued for other offsets, the limits of which are predetermined. Those offsets that produce the minimum error function are chosen as the best overlapping match and the two image volumes are joined using them.
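
By way of illustration only, the error computation described above may be sketched for a single candidate offset in simplified two-dimensional form as follows. The function name, buffer layout and parameters (NeighborhoodError, imgA, imgB, width, radius) are hypothetical assumptions for this sketch and are not taken from the patented implementation; the caller is assumed to keep both neighborhoods within the image bounds and the neighborhood averages are assumed nonzero.

// Illustrative 2D sketch of the normalized squared-difference error.
// Names and layout are hypothetical; images are row-major 16-bit buffers.
double NeighborhoodError( const unsigned short *imgA, const unsigned short *imgB,
                          int width, int ax, int ay, int bx, int by, int radius )
{
    double avgA = 0.0, avgB = 0.0, sum = 0.0, d;
    int i, j;
    int n = (2*radius + 1) * (2*radius + 1);  // pixels per neighborhood

    // first pass: average intensity of each neighborhood (normalization)
    for( i = -radius; i <= radius; i++ )
        for( j = -radius; j <= radius; j++ )
        {
            avgA += imgA[(ay+i)*width + (ax+j)];
            avgB += imgB[(by+i)*width + (bx+j)];
        }
    avgA /= n;
    avgB /= n;

    // second pass: sum the squared normalized intensity differences
    for( i = -radius; i <= radius; i++ )
        for( j = -radius; j <= radius; j++ )
        {
            d = imgA[(ay+i)*width + (ax+j)] / avgA -
                imgB[(by+i)*width + (bx+j)] / avgB;
            sum += d*d;
        }
    return sum;  // smaller values indicate better-matching neighborhoods
}

A smaller returned value indicates a closer match, so the joining offsets are those that minimize this quantity over the search range.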


In other embodiments, more than one data point may be used to form a larger 3D volume. More specifically, two 3D volume data sets may be combined into one larger volume, e.g., in the case of MRI diagnosis and analysis of scoliosis, 3D image data of the upper and lower spine are acquired. These two image sets are loaded into the “joining” software program. The center slice of each set is displayed, where the images may be adjusted in brightness and contrast using controls provided through a computer interface, e.g., a mouse, stylus, touch screen, etc. If these images are not desired, other images may be selected by computer control, e.g., scroll bars, either in sync or independently. The images are displayed above and below each other vertically. If their order is incorrect, they may be swapped.


In another aspect of the present invention an apparatus is provided. The apparatus comprises a memory containing executable instructions; and a processor. The processor is preferably programmed using the instructions to receive pixel data associated with a first magnetic resonance image; receive pixel data associated with a second magnetic resonance image; and detect a common area between the first and second images. Further in accordance with this aspect of the present invention, the processor is also programmed to combine the images together at one or more places along the common area by processing the first and second image data using the square of the normalized intensity difference between at least one group of pixels in the first image data and another group of pixels in the second image data and display the combined image.


Further still in accordance with this aspect of the present invention, the processor is further programmed to display data indicating the offset between the first and second magnetic resonance images in forming the combined image. In addition, in detecting the common area the processor is programmed to search volumetric data associated with the first and second magnetic resonance images based on one or more control points selected in the range.


Further in accordance with this aspect of the present invention, the processor is further programmed to display at least two views of the combined image with one view being orthogonal to the other.


Further still in accordance with this aspect of the present invention, the processor combines the images based on in-plane and out-of-plane offsets.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1A illustratively depicts in-plane translations of image data of two images in accordance with an aspect of the present invention.



FIG. 1B illustratively depicts out-of-plane translations of image data of two images in accordance with an aspect of the present invention.



FIG. 2A is a flow chart illustrating a process for joining two image data sets in accordance with an aspect of the present invention.



FIG. 2B is a flow chart illustrating a process for joining two image data sets in accordance with an aspect of the present invention.



FIGS. 3A and 3B depict a display screen of two images joined in accordance with an aspect of the present invention.



FIG. 4 illustratively depicts a network and equipment that may be used in accordance with an aspect of the present invention.



FIG. 5 illustratively depicts a network and equipment that may be used in accordance with an aspect of the present invention.





DESCRIPTION

Turning now to FIGS. 1A and 1B, there are illustrated two images, top image 102 and bottom image 108, that are taken of portions of a patient's anatomy in accordance with an aspect of the present invention. In the preferred embodiment, the images are taken using the same modality, and most preferably using an MRI system such as that described in commonly assigned U.S. Pat. No. 6,828,792, the disclosure of which is incorporated herein by reference. In addition, though the accompanying drawings show the use of the methods and systems in processing images of the spine, the present invention is not limited to processing only spinal images. The present invention may be applied to making composite images from any portion of a patient's anatomy, including, for example, any organ in the patient's torso.


More specifically, FIG. 1A shows in-plane translations of two images that may be joined or stitched together to form a larger composite image having a desired resolution. As this figure shows, a top image 102 and a bottom image 108 include common or overlapping data 116, e.g., pixel data, that may be used to form or stitch together the two images. (In FIG. 1A, overlapping data 116 is shown as the cross-hatched region.) In addition to overlapping data, both images include non-overlapping data 120₁, 120₂ that is used to form the combined image. Some of the non-overlapping data 130 is preferably discarded since it is outside the area of interest.



FIG. 1B shows the out-of-plane translation and how the discarded and retained image data are processed along this plane. For example, assuming FIG. 1A shows the image data along an x-y plane, FIG. 1B shows the same data along a third dimension, e.g., the z-axis. As is known in the art, MRI accumulates volumetric pixel data in slices. These slices, when combined, provide the three-dimensional images that make this modality popular for looking at the human anatomy, since soft tissue and non-soft tissue can be shown in proper anatomical position.
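
Because the volume is stored as a stack of slices, a voxel is conventionally addressed by selecting a slice and then indexing row-major within it. The following fragment is a simplified illustration only; the structure and names (Volume, slices, VoxelAt) are hypothetical rather than taken from the patented implementation.

// Hypothetical slice-stack layout; names are illustrative only.
typedef struct {
    unsigned short **slices;   // slices[k] points to one width*height buffer
    int width, height, depth;  // in-plane dimensions and number of slices
} Volume;

static unsigned short VoxelAt( const Volume *v, int i, int j, int k )
{
    // row i, column j within the k-th slice, stored row-major
    return v->slices[k][i * v->width + j];
}

Joining two such volumes therefore requires in-plane offsets in i and j as well as an out-of-plane offset in k.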



FIG. 2A is a high level flow chart showing a method in accordance with an aspect of the present invention. Preferably two modes of operation are provided: (1) manual mode; and (2) automatic mode. In the manual mode of operation the images may be translated vertically and horizontally relative to each other under computer control until they are aligned properly. For example, tracking balls or other computer input control means may be used to provide the relative movement needed for alignment. FIGS. 3A and 3B show, for example, the use of “Manual Shifts” that may be used to offset the different images relative to each other. By aligning a single image pair, the entire image volume is aligned using the same translational offsets. Out-of-plane alignment may be accomplished similarly by offsetting displayed images or slices. Any data with horizontal coordinates not common to both data sets is not included in the joined data, as is shown in FIGS. 1A and 1B.


As shown in FIG. 2A, the process may be initiated with a response to a request to swap images 210. Once a response to the request to swap images is received the mode is then selected, block 220. If manual mode 222 is selected, the process proceeds as discussed above with a user manually adjusting the image to obtain initial alignment.


More specifically, with regard to manual mode 222, an adjustment is made of the slices associated with the top image, block 230. This may require a vertical image adjustment 232 and a horizontal image adjustment 238. Once these adjustments are made, the images are aligned and joined 244 as is discussed in further detail below.


If the manual mode 222 is not selected, the process proceeds in automatic mode 224. In automatic mode, after the images showing the best overlapping features are selected into the display, an “Auto Join” feature is then selected, block 250, preferably via software. Using a mouse or other control, a user may then place a cursor on one of the features in the upper image, block 254. (Although this description is done with respect to upper and lower images, it should be apparent to one skilled in the art that the images may be arranged in other orientations, e.g., side-by-side, in lieu of an upper/lower orientation.) In this regard, the feature may be a particular portion of the anatomy, such as a particular vertebra. A second cursor is placed over the corresponding feature in the lower image, block 260, and the 3D volume data is then joined automatically 244 using the best in-plane and out-of-plane offsets. The offsets are computed using an algorithm that minimizes the square of the normalized intensity differences between the pixel neighborhoods of the two cursor locations. The pixel neighborhoods may be in-plane (2 dimensional) or a volume (3 dimensional). That is, in 3D:






$$\min \sum_{i}^{N_i} \sum_{j}^{N_j} \sum_{k}^{N_k} \left[ \frac{G(i,j,k)}{A_G(i,j,k)} - \frac{H(i,j,k)}{A_H(i,j,k)} \right]^2$$

where:

N_i is the pixel neighborhood in the i-direction;

N_j is the pixel neighborhood in the j-direction;

N_k is the pixel neighborhood in the k-direction;

G(i,j,k) is the pixel value at i, j, k for the first or upper data;

H(i,j,k) is the pixel value at i, j, k for the second or lower data;

A_G(i,j,k) is the average intensity of the pixel neighborhood centered at i, j, k for the upper data; and

A_H(i,j,k) is the average intensity of the pixel neighborhood centered at i, j, k for the lower data.


The average intensities A_G(i,j,k) and A_H(i,j,k) are used to normalize the pixel data so as to mitigate differences in common feature intensity levels between the two data sets. The combination of i, j and k which minimizes the function yields the offset values used to translate the images before joining the pixel data.
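
For instance, using illustrative numbers only: if a pixel over a common feature reads G = 240 in the upper data with neighborhood average A_G = 120, and the same feature reads H = 120 in the lower data with A_H = 60 (the lower scan being uniformly half as bright), the normalized values agree and the pixel contributes no error:

$$\left[ \frac{240}{120} - \frac{120}{60} \right]^2 = [2 - 2]^2 = 0$$

Without the normalization, the raw difference (240 − 120)² would dominate the error function even though the feature matches.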


As discussed above, the neighborhood is chosen based on distinguishing features in the images. In effect, the neighborhood size sets the distance scale over which features are matched. If too large an area is chosen, the computational complexity increases. Once the control points are chosen, a search is performed to determine similarities of the control point neighborhoods. The pixel intensities are used in the above error function to determine the similarity between the two neighborhoods. The images are then offset from each other and the error function is recalculated. The change in the error function value provides an indication of whether the offset is converging toward an optimal point. In other words, the offsets that minimize the above error function provide the best overlapping match, as illustrated in the sketch below.
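
A condensed sketch of this exhaustive offset search, in simplified 2D form and reusing the illustrative NeighborhoodError function sketched in the Summary above, might read as follows; the names (FindBestOffset, limit, etc.) are hypothetical, and the patent's full 3D routine appears in the Source Code Example below.

// Illustrative exhaustive search over candidate in-plane offsets.
void FindBestOffset( const unsigned short *imgA, const unsigned short *imgB,
                     int width, int ax, int ay, int bx, int by,
                     int radius, int limit, int *bestDx, int *bestDy )
{
    double err, minErr = 1e20;
    int dx, dy;

    *bestDx = 0;
    *bestDy = 0;
    for( dy = -limit; dy <= limit; dy++ )
        for( dx = -limit; dx <= limit; dx++ )
        {
            // error between the fixed upper neighborhood and the
            // shifted lower neighborhood
            err = NeighborhoodError( imgA, imgB, width,
                                     ax, ay, bx + dx, by + dy, radius );
            if( err < minErr )  // keep the offset with the smallest error
            {
                minErr = err;
                *bestDx = dx;
                *bestDy = dy;
            }
        }
}

The offsets so found are then used to translate the second image before the pixel data are joined.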



FIG. 2B is a flow diagram illustrating the above process. As shown, the process begins at block 262 with the selection of control points in the first and second images. With the control points selected, the neighborhood size is set around the control points based on pre-defined limits, block 264. Next, the neighborhood offset limits are set in the x, y and z directions according to pre-defined values, block 266. The second image neighborhood is then offset with respect to the first image control point, block 270.


Next, the normalized error function is computed based on overlapping neighborhood pixel intensities, block 274. As discussed above, the normalized error function is computed using the above equation. The newly computed normalized error function is compared to the minimum error function, block 276. If the new error function is less than the minimum error function, the minimum error function is replaced by the new error function, block 280, and the x, y and z offsets are incremented, block 282. If the new error function is not less than the minimum error function, the process proceeds directly to block 282.


The process then determines whether the volume search is completed, block 284. That is, a determination is made as to whether the pertinent volume of neighborhood pixel data has been searched. If the volume search is incomplete, the process returns to block 270 and repeats. If the volume search is completed, the first and second images are joined or stitched together, block 290. A “C” routine that computes the offsets is provided below as part of the specification.


An example of a user interface and a composite image formed using the present invention is shown in FIGS. 3A and 3B. The left image 302 of FIG. 3A shows upper and lower sagittal spine images from two 3D volume data sets. These two data sets are to be combined into one large composite data set by overlapping them with appropriate x, y and z slice shifts. The vertical line 304 indicates an orthogonal plane cut through the 3D volumes, and the resulting orthogonal coronal images 312 are shown on the right. This feature is used to facilitate the marking of the two control points 320, 322 that are used as starting locations for the offset computations. After the “Join” procedure is initiated using the Join Button 328, the offset computations or shift finding process finds the best combination of x, y and z slice shifts and combines the 3D data volumes into a single composite 3D volume with those shifts. A slice of the new 3D volume 360 is shown on the left side of FIG. 3B and the coronal cut plane resulting from the orthogonal cut 374 is shown on the right 364. The shifts determined by the shift finding process are shown in the shift control panels 380 on the right side of FIG. 3B. As can also be seen from a comparison of FIGS. 3A and 3B, the upper and lower images are now offset from one another as shown by the boxes 384, 388.


The images shown in FIGS. 3A and 3B are of the spine of a patient and demonstrate the use of the methods described herein in detecting, for example, scoliosis. Methods and systems for detecting scoliosis are described in commonly assigned U.S. application Ser. No. 12/152,139, the disclosure of which is hereby incorporated herein by reference.


A block diagram of a computer system on which the features of the inventions discussed above may be executed is shown in FIGS. 4 and 5. In accordance with one aspect of the invention, system 100 includes a computer 150 containing a processor 152, memory 154 and other components typically present in general purpose computers.


Memory 154 stores information accessible by processor 152, including instructions 180 that may be executed by the processor 152 and data 182 that may be retrieved, manipulated or stored by the processor. The memory may be of any type capable of storing information accessible by the processor, such as a hard drive, memory card, ROM, RAM, DVD, CD-ROM, or other write-capable or read-only memories. The memory 154 may contain machine executable instructions or other software programs that embody the methods discussed above and generally shown in FIGS. 2A and 2B. In addition, the code shown below is preferably stored as machine readable instructions in memory 154.


The processor may comprise any number of well known general purpose processors, such as processors from Intel Corporation or AMD. Alternatively, the processor may be a dedicated controller such as an ASIC.


The instructions 180 may comprise any set of instructions to be executed directly (such as machine code) or indirectly (such as scripts or source code) by the processor. Further, the terms “instructions,” “steps” and “programs” may be used interchangeably herein. The instructions may be stored in object code form for direct processing by the processor, or in any other computer language including scripts or collections of independent source code modules that are interpreted on demand or compiled in advance. As shown below, the functions, methods and routines of instructions may comprise source code written in “C”.


Data 182 may be retrieved, stored or modified by processor 152 in accordance with the instructions 180. The data may be stored as a collection of data and will typically comprise the MRI image or pixel data discussed above. For instance, although the invention is not limited by any particular data structure, the data may be stored in computer registers, in a database as a table having a plurality of different fields and records, XML documents, or flat files. The data may also be formatted in any computer readable format such as, but not limited to, binary values, ASCII or EBCDIC (Extended Binary-Coded Decimal Interchange Code). Moreover, the data may comprise any information sufficient to identify the relevant information, such as descriptive text, proprietary codes, pointers, references to data stored in other memories (including other network locations) or information which is used by a function to calculate the relevant data.


Although the processor and memory are functionally illustrated in FIG. 5 within the same block, it will be understood by those of ordinary skill in the art that the processor and memory may actually comprise multiple processors and memories that may or may not be stored within the same physical housing. For example, some of the instructions and data may be stored on removable CD-ROM and others within a read-only computer chip. Some or all of the instructions and data may be stored in a location physically remote from, yet still accessible by, the processor. Similarly, the processor may actually comprise a collection of processors which may or may not operate in parallel.


In another aspect, the methods, programs, software or instructions described above may be executed via a server 110. This is desirable where many users need to access and view the same data, such as in a hospital or in non-collocated facilities. Server 110 communicates with one or more client computers 150, 151, 153. Each client computer may be configured as discussed above. Each client computer may be a general purpose computer, intended for use by a person, having all the internal components normally found in a personal computer, such as a central processing unit (CPU), display 160, CD-ROM, hard drive, mouse, keyboard, speakers, microphone, modem and/or router (telephone, cable or otherwise) and all of the components used for connecting these elements to one another. Moreover, computers in accordance with the systems and methods described herein may comprise any device capable of processing instructions and transmitting data to and from humans and other computers, including network computers lacking local storage capability, PDAs with modems and Internet-capable wireless phones. In addition to a mouse, keyboard and microphone, other means for inputting information from a human into a computer are also acceptable, such as a touch-sensitive screen, voice recognition, etc.


The server 110 and client computers 150, 151, 153 are capable of direct and indirect communication, such as over a network. Although only a few computers are depicted in FIGS. 4 and 5, it should be appreciated that a typical system can include a large number of connected computers, with each different computer being at a different node of the network. The network, and intervening nodes, may comprise various configurations and protocols including the Internet, intranets, virtual private networks, wide area networks, local networks, private networks using communication protocols proprietary to one or more companies, Ethernet, WiFi and HTTP. Such communication may be facilitated by any device capable of transmitting data to and from other computers, such as modems (e.g., dial-up or cable), networks and wireless interfaces. Server 110 may be a web server. Although certain advantages are obtained when information is transmitted or received as noted above, other aspects of the invention are not limited to any particular manner of transmission of information. For example, in some aspects, the information may be sent via a medium such as a disk, tape or CD-ROM.


The information may also be transmitted over a global or private network, or directly between two computer systems, such as via a dial-up modem. In other aspects, the information may be transmitted in a non-electronic format and manually entered into the system.


It should be understood that the operations discussed above and illustrated, for example, in FIGS. 2 and 3, do not have to be performed in the precise order described above. Rather, various steps may be handled simultaneously and in some instances in reverse order.


Source Code Example













void AutoJoin( LPSTUDY lpStudyTop, LPSTUDY lpStudyBot,
               LPIMAGEINFO lpImageRefTop,
               POINT ptTopSeed, POINT ptBotSeed,
               int nTopSliceRef, int nBotSliceRef,
               int *iBot, int *jBot, int *kBot )
{
    // Types (LPSTUDY, LPIMAGEINFO, IMAGE, POINT) and helpers
    // (RoundDouble, _min, lpDisplayImageTop) are defined elsewhere
    // in the application.
    int      i, j, k;
    int      iCC, jCC, kCC;
    int      iMax, jMax, kMax;
    int      iLimitLo, jLimitLo, kLimitLo;
    int      iLimitHi, jLimitHi, kLimitHi;
    int      nLimit;
    int      nHeight, nWidth;
    int      iRange, jRange, kRange;
    double   dTemp;
    double   dSum;
    double   dConvert;
    double   MinCC;
    double   dAvgTop, dAvgBot;
    unsigned short  *usTop, *usBot;
    IMAGE    ImTop, ImBot;

    // make i and j limits 20 mm, k limit 10 mm
    nLimit = RoundDouble( lpImageRefTop->lpImgChunk->pix_per_mm * 20.0 );
    iLimitLo = nLimit;
    iLimitHi = nLimit;
    jLimitLo = nLimit;
    jLimitHi = nLimit;
    kLimitLo = nLimit/2;
    kLimitHi = nLimit/2;
    nHeight = nWidth = lpImageRefTop->lpInfoHeader->biWidth;

    // convert all values to image coordinates
    dConvert = (double)lpImageRefTop->lpInfoHeader->biWidth /
               lpDisplayImageTop->lpInfoHeader->biWidth;
    ptTopSeed.x = (int)( dConvert * ptTopSeed.x );
    ptTopSeed.y = (int)( dConvert * ptTopSeed.y );
    ptBotSeed.x = (int)( dConvert * ptBotSeed.x );
    ptBotSeed.y = (int)( dConvert * ptBotSeed.y );

    // clamp the search limits so the neighborhoods stay inside both volumes
    if( ptTopSeed.y - iLimitLo < 0 || ptBotSeed.y - iLimitLo < 0 )
        iLimitLo = _min( ptTopSeed.y, ptBotSeed.y );
    if( ptTopSeed.x - jLimitLo < 0 || ptBotSeed.x - jLimitLo < 0 )
        jLimitLo = _min( ptTopSeed.x, ptBotSeed.x );
    if( nTopSliceRef - kLimitLo < lpStudyTop->nScoutOffset ||
        nBotSliceRef - kLimitLo < lpStudyBot->nScoutOffset )
        kLimitLo = _min( nTopSliceRef, nBotSliceRef ) - 1;
    if( ptTopSeed.y + iLimitHi >= nWidth || ptBotSeed.y + iLimitHi >= nWidth )
        iLimitHi = _min( nWidth-ptTopSeed.y, nWidth-ptBotSeed.y ) - 1;
    if( ptTopSeed.x + jLimitHi >= nHeight || ptBotSeed.x + jLimitHi >= nHeight )
        jLimitHi = _min( nHeight-ptTopSeed.x, nHeight-ptBotSeed.x ) - 1;
    if( nTopSliceRef + kLimitHi >= lpStudyTop->NumberImages ||
        nBotSliceRef + kLimitHi >= lpStudyBot->NumberImages )
        kLimitHi = _min( lpStudyTop->NumberImages-nTopSliceRef,
                         lpStudyBot->NumberImages-nBotSliceRef ) - 1;

    iRange = iLimitHi+iLimitLo+1;
    jRange = jLimitHi+jLimitLo+1;
    kRange = kLimitHi+kLimitLo+1;
    iMax = jMax = kMax = 0;
    MinCC = 1e20;

    // exhaustive search over all candidate (i,j,k) offsets
    for( kCC = -kLimitLo; kCC <= kLimitHi; kCC++ )
    {
        for( iCC = -iLimitLo; iCC <= iLimitHi; iCC++ )
        {
            for( jCC = -jLimitLo; jCC <= jLimitHi; jCC++ )
            {
                // first pass: average intensity of each neighborhood
                dAvgTop = 0;
                dAvgBot = 0;
                for( k = -kLimitLo; k <= kLimitHi; k++ )
                {
                    ImTop = lpStudyTop->lpImageInfo[nTopSliceRef+k];
                    usTop = (unsigned short*)ImTop.lp12BitImage;
                    ImBot = lpStudyBot->lpImageInfo[nBotSliceRef+kCC+k];
                    usBot = (unsigned short*)ImBot.lp12BitImage;
                    for( i = -iLimitLo; i <= iLimitHi; i++ )
                    {
                        for( j = -jLimitLo; j <= jLimitHi; j++ )
                        {
                            dAvgTop += (double)usTop[(ptTopSeed.y+i)*nWidth + (ptTopSeed.x+j)];
                            dAvgBot += (double)usBot[(ptBotSeed.y+iCC+i)*nWidth + (ptBotSeed.x+jCC+j)];
                        }
                    }
                }
                dAvgTop /= iRange*jRange*kRange;
                dAvgBot /= iRange*jRange*kRange;

                // second pass: sum of squared normalized intensity differences
                dSum = 0;
                for( k = -kLimitLo; k <= kLimitHi; k++ )
                {
                    ImTop = lpStudyTop->lpImageInfo[nTopSliceRef+k];
                    usTop = (unsigned short*)ImTop.lp12BitImage;
                    ImBot = lpStudyBot->lpImageInfo[nBotSliceRef+kCC+k];
                    usBot = (unsigned short*)ImBot.lp12BitImage;
                    for( i = -iLimitLo; i <= iLimitHi; i++ )
                    {
                        for( j = -jLimitLo; j <= jLimitHi; j++ )
                        {
                            dTemp = (double)usTop[(ptTopSeed.y+i)*nWidth + (ptTopSeed.x+j)]/dAvgTop -
                                    usBot[(ptBotSeed.y+iCC+i)*nWidth + (ptBotSeed.x+jCC+j)]/dAvgBot;
                            dTemp = dTemp*dTemp;
                            dSum += dTemp;
                        }
                    }
                }

                // keep the offset with the smallest error
                if( dSum <= MinCC )
                {
                    MinCC = dSum;
                    iMax = iCC;
                    jMax = jCC;
                    kMax = kCC;
                }
            }
        }
    }

    // convert i and j back to screen coordinates
    *iBot = RoundDouble( iMax / dConvert );
    *jBot = RoundDouble( jMax / dConvert );
    *kBot = kMax;
}









Although the invention herein has been described with reference to particular embodiments, it is to be understood that these embodiments are merely illustrative of the principles and applications of the present invention. It is therefore to be understood that numerous modifications may be made to the illustrative embodiments and that other arrangements may be devised without departing from the spirit and scope of the present invention as defined by the appended claims.

Claims
  • 1. An apparatus, comprising: a memory containing executable instructions; and a processor programmed using the instructions to: receive pixel data associated with a first magnetic resonance image having a first control point; receive pixel data associated with a second magnetic resonance image having a second control point, wherein the second image only partially overlaps the first image; for each of a plurality of offsets between the first control point and the second control point, compare a first neighborhood of pixels surrounding the first control point and a second neighborhood of pixels surrounding the second control point using the square of the normalized intensity difference between the first neighborhood of pixels and the second neighborhood of pixels; combine the images together at one or more places using one of the plurality of offsets based on the comparison between the first neighborhood of pixels and the second neighborhood of pixels, thereby forming a combined image; and provide data for displaying the combined image.
  • 2. The apparatus of claim 1, wherein the processor is programmed to compare a first neighborhood of pixels that are in-plane and a second neighborhood of pixels.
  • 3. The apparatus of claim 1, wherein the processor is programmed to compare a first neighborhood of pixels comprising multiple planes of pixels and a second neighborhood of pixels.
  • 4. The apparatus of claim 1, wherein the processor is programmed to compare a first neighborhood of pixels having a rectangular parallelepiped shape and a second neighborhood of pixels.
  • 5. The apparatus of claim 1, wherein the processor is programmed to define each of the first neighborhood of pixels and the second neighborhood of pixels based on a predetermined limit.
  • 6. The apparatus of claim 1, wherein the processor is programmed to define offset values for each of the plurality of offsets based upon predefined values.
  • 7. The apparatus of claim 1, wherein the processor is programmed to define an offset value for each of the plurality of offsets in each of three dimensions.
  • 8. The apparatus of claim 1, wherein the processor is further programmed to provide data for display indicating the offset between the first and second magnetic resonance images used in forming the combined image.
  • 9. The apparatus of claim 1, wherein the processor is further programmed to discard select image data from the first and second image data based on the detected common area.
  • 10. The apparatus of claim 1, wherein in comparing the first neighborhood of pixels and the second neighborhood of pixels, the processor is programmed to search volumetric data associated with the first and second neighborhoods of pixels.
  • 11. The apparatus of claim 1, wherein the processor is further programmed to provide data for displaying at least two views of the combined image with one view being orthogonal to the other.
  • 12. The apparatus of claim 1, wherein the processor combines the images based on in-plane and out-of-plane offsets.
  • 13. The apparatus of claim 1, wherein, for each of the plurality of offsets, the processor is programmed to determine a degree of similarity between the first neighborhood of pixels and the second neighborhood of pixels, and combine the images using the offset for which the corresponding degree of similarity is minimized, wherein a lower degree of similarity indicates a greater similarity between the compared neighborhoods of pixels.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of the filing date of U.S. Provisional Application No. 61/126,752, filed May 7, 2008, the disclosure of which is hereby incorporated herein by reference.

US Referenced Citations (148)
Number Name Date Kind
3810254 Utsumi et al. May 1974 A
4407292 Edrich et al. Oct 1983 A
4411270 Damadian Oct 1983 A
4534076 Barge Aug 1985 A
4534358 Young Aug 1985 A
D283858 Opsvik May 1986 S
4608991 Rollwitz Sep 1986 A
4613820 Edelstein et al. Sep 1986 A
4614378 Picou Sep 1986 A
4641119 Moore Feb 1987 A
4651099 Vinegar et al. Mar 1987 A
4663592 Yamaguchi et al. May 1987 A
4664275 Kasai et al. May 1987 A
4668915 Daubin et al. May 1987 A
4672346 Miyamoto et al. Jun 1987 A
4675609 Danby et al. Jun 1987 A
4707663 Minkoff et al. Nov 1987 A
4766378 Danby et al. Aug 1988 A
4767160 Mengshoel et al. Aug 1988 A
4770182 Damadian et al. Sep 1988 A
4777464 Takabatashi et al. Oct 1988 A
4816765 Boskamp et al. Mar 1989 A
4829252 Kaufman May 1989 A
4875485 Matsutani Oct 1989 A
4908844 Hasegawa et al. Mar 1990 A
4920318 Misic et al. Apr 1990 A
4924198 Laskaris May 1990 A
4943774 Breneman et al. Jul 1990 A
4985678 Gangarosa et al. Jan 1991 A
5008624 Yoshida et al. Apr 1991 A
5030915 Boskamp et al. Jul 1991 A
5050605 Eydelman et al. Sep 1991 A
5061897 Danby et al. Oct 1991 A
5065701 Punt Nov 1991 A
5065761 Pell Nov 1991 A
5081665 Kostich Jan 1992 A
5124651 Danby et al. Jun 1992 A
5134374 Breneman et al. Jul 1992 A
5153517 Oppelt et al. Oct 1992 A
5153546 Laskaris Oct 1992 A
5155758 Vogl Oct 1992 A
5162768 McDougall et al. Nov 1992 A
5171296 Herman Dec 1992 A
5194810 Breneman et al. Mar 1993 A
5197474 Englund et al. Mar 1993 A
5207224 Dickinson et al. May 1993 A
5221165 Goszczynski Jun 1993 A
5229723 Sakurai et al. Jul 1993 A
5250901 Kaufman et al. Oct 1993 A
5251961 Pass Oct 1993 A
5256971 Boskamp Oct 1993 A
5274332 Jaskolski et al. Dec 1993 A
5291890 Cline et al. Mar 1994 A
5304932 Carlson Apr 1994 A
5305365 Coe Apr 1994 A
5305749 Li et al. Apr 1994 A
5315244 Griebeler May 1994 A
5315276 Huson et al. May 1994 A
5317297 Kaufman et al. May 1994 A
5323113 Cory et al. Jun 1994 A
5349956 Bonutti Sep 1994 A
5382904 Pissanetzky Jan 1995 A
5382905 Miyata et al. Jan 1995 A
5386447 Siczek Jan 1995 A
5394087 Molyneaux Feb 1995 A
5412363 Breneman et al. May 1995 A
5471142 Wang et al. Nov 1995 A
5473251 Mori Dec 1995 A
5475885 Ishikawa et al. Dec 1995 A
5477146 Jones Dec 1995 A
5490513 Damadian et al. Feb 1996 A
5515863 Damadian May 1996 A
5519372 Palkovich et al. May 1996 A
5548218 Lu Aug 1996 A
5553777 Lampe Sep 1996 A
5566681 Manwaring et al. Oct 1996 A
5592090 Pissanetzky Jan 1997 A
5606970 Damadian Mar 1997 A
5621323 Larsen Apr 1997 A
5623241 Minkoff Apr 1997 A
5640958 Bonutti Jun 1997 A
5652517 Maki et al. Jul 1997 A
5654603 Sung et al. Aug 1997 A
5666056 Cuppen et al. Sep 1997 A
5671526 Merlano et al. Sep 1997 A
5680861 Rohling Oct 1997 A
5682098 Vij Oct 1997 A
5743264 Bonutti Apr 1998 A
5754085 Danby et al. May 1998 A
5779637 Palkovich et al. Jul 1998 A
5836878 Mock et al. Nov 1998 A
5862579 Blumberg et al. Jan 1999 A
5929639 Doty Jul 1999 A
5951474 Matsunaga et al. Sep 1999 A
D417085 Kanwetz, II Nov 1999 S
5983424 Naslund et al. Nov 1999 A
5988173 Scruggs Nov 1999 A
6008649 Boskamp et al. Dec 1999 A
6014070 Danby et al. Jan 2000 A
6023165 Damadian et al. Feb 2000 A
6075364 Damadian et al. Jun 2000 A
6122541 Cosman et al. Sep 2000 A
6137291 Szumowski et al. Oct 2000 A
6138302 Sashin et al. Oct 2000 A
6144204 Sementchenko Nov 2000 A
6150819 Laskaris et al. Nov 2000 A
6150820 Damadian et al. Nov 2000 A
6201394 Danby et al. Mar 2001 B1
6208144 McGinley et al. Mar 2001 B1
6226856 Kazama et al. May 2001 B1
6246900 Cosman et al. Jun 2001 B1
6249121 Boskamp et al. Jun 2001 B1
6249695 Damadian Jun 2001 B1
6285188 Sakakura et al. Sep 2001 B1
6332034 Makram-Ebeid et al. Dec 2001 B1
6357066 Pierce Mar 2002 B1
6369571 Damadian et al. Apr 2002 B1
6377044 Burl et al. Apr 2002 B1
6385481 Nose et al. May 2002 B2
6411088 Kuth et al. Jun 2002 B1
6414490 Damadian et al. Jul 2002 B1
6424854 Hayashi et al. Jul 2002 B2
6456075 Damadian et al. Sep 2002 B1
6468218 Chen et al. Oct 2002 B1
6504371 Damadian et al. Jan 2003 B1
6591128 Wu et al. Jul 2003 B1
6677753 Danby et al. Jan 2004 B1
6792257 Rabe et al. Sep 2004 B2
6806711 Reykowski Oct 2004 B2
6828792 Danby et al. Dec 2004 B1
6850064 Srinivasan Feb 2005 B1
6882149 Nitz et al. Apr 2005 B2
6882877 Bonutti Apr 2005 B2
6894495 Kan May 2005 B2
7049819 Chan et al. May 2006 B2
7221161 Fujita et al. May 2007 B2
7245127 Feng et al. Jul 2007 B2
7348778 Chu et al. Mar 2008 B2
7474098 King Jan 2009 B2
20030156758 Bromiley et al. Aug 2003 A1
20040204644 Tsougarakis et al. Oct 2004 A1
20050122343 Bailey et al. Jun 2005 A1
20050213849 Kreang-Arekul et al. Sep 2005 A1
20070092121 Periaswamy et al. Apr 2007 A1
20070165921 Agam et al. Jul 2007 A1
20080292194 Schmidt et al. Nov 2008 A1
20080317317 Shekhar et al. Dec 2008 A1
20100111375 Jones May 2010 A1
Provisional Applications (1)
Number Date Country
61126752 May 2008 US