The disclosure relates generally to the field of volume imaging and more particularly to methods and apparatus for combining volume images that have been reconstructed from radiographic projection images of the human head of a patient with image content from motion studies of the patient.
Radiological imaging is acknowledged to be of value for the dental practitioner, helping to identify various problems and to validate other measurements and observations related to the patient's teeth and supporting structures. Among x-ray systems with particular promise for improving dental care is the extra-oral imaging apparatus that is capable of obtaining one or more radiographic images in series and, where multiple images of the patient are acquired at different angles, combining these images to obtain a 3-D reconstruction showing the dentition of the jaw and other facial features for a patient. Various types of imaging apparatus have been proposed for providing volume image content of this type. In these types of systems, a radiation source and an imaging detector, maintained at a known distance (e.g., fixed or varying) from each other, synchronously revolve about the patient over a range of angles, taking a series of images by directing and detecting radiation that is directed through the patient at different angles of revolution. For example, a volume image that shows the shape and dimensions of the head and jaws structure can be obtained using computed tomography (CT), such as cone-beam computed tomography (CBCT), or other volume imaging method, including magnetic resonance imaging (MRI) or magnetic resonance tomography (MRT).
While 3-D radiographic imaging techniques can be used to generate volume images that accurately show internal structure and features, there are some limitations to the type of information that is available. One limitation relates to the static nature of the volume image content that is obtained. CBCT volume reconstruction requires the imaged subject to be stationary, so that the 2-D radiographic image content used in reconstructing the 3-D volume information can be captured from a number of angles for combination. The patient must remain still during imaging, so that points in image space can be accurately characterized in order to generate reliable voxel data. Because of these constraints, CBCT imaging yields only limited information useful for movement analysis of the jaws and related structure. However, the capability to analyze and to visualize motion with the 3-D content can be very useful to help in diagnosing various conditions and in monitoring treatment results.
Motion analysis can be particularly useful for supporting the assessment and treatment of various conditions of the jaw, including diseases related to the mandibular joint and craniomandibular dysfunction, for example. Jaw motion analysis can also be of value in preparation of occlusal mouth-guards, dentures and other prosthetics, as well as in supporting aesthetically functional reconstruction, with or without tooth implants.
Conventional techniques for jaw motion analysis have included the dental pantograph that generates a graphical output representative of jaw movement. These methods are hampered by difficulties of setup and use, and provide only a relatively limited amount of information for characterizing jaw movement.
Solutions that have been proposed for providing jaw motion analysis include the use of various types of measurement sensors for determining the position of a jaw or facial feature in 3-D space. One exemplary system is described, for example, in U.S. Patent Application Publication No. 2003/0204150 by Brunner. To use this type of solution, it is necessary to attach and register various signal sensor and detector elements to the head and jaw of the patient, such as using head bands, bite structures, and other features. With these elements attached to the head and jaw, motion analysis data can be collected by having the patient move mouth and jaw in a fixed sequence of positions and recording the movement data. Then, once the movement sequence is complete, the motion information can be spatially correlated to the reconstructed 3-D volume, so that movement of the jaw and related structures can be modeled and assessed. The sensor and detector elements used for such a solution can include ultrasound emitters and receivers, cameras paired with LED markers or lasers, magnets and Hall effect sensors or pickup coils, and other types of motion detection apparatus.
The sensor and detector attachment instrumentation currently in use or proposed for jaw motion analysis, however, can be somewhat bulky and awkward to use. Significant preparation time can be required for setting up the sensor/detector arrangement, including registering the instrumentation devices to the patient and to each other. The required instrumentation and harnessing can be costly and cumbersome and may make it difficult for the patient to move the mouth in a normal fashion, thus adversely affecting the motion data that is obtained.
Reference is also made to U.S. Pat. No. 4,836,778 to Baumrind et al., which shows a detector/sensor arrangement that employs infrared LEDs paired with photodiodes for measuring mandibular movement. U.S. Patent Application Publication No. 2013/0157218 by Brunner et al. shows another detector/sensor configuration for detecting jaw motion.
Video imaging of mandibular motion using three cameras in different positions about the patient has been described, for example, in U.S. Pat. No. 5,340,309 to Robertson.
International patent application publication WO 2013/175018 describes a method for generating a virtual jaw image.
Although various attempts have been made to provide sensing mechanisms that measure jaw motion, there is a need for improved methods that not only accurately profile jaw movement without significant cost or patient discomfort, but are also able to integrate this information with radiographic volume images from CBCT and related systems. The capability to relate jaw motion information with volume image content for the internal jaw structure can give the dental practitioner a useful tool for assessing a patient's condition, for providing an appropriate treatment plan, and for monitoring the status of a particular treatment approach.
It is an object of this application to advance the art of volume imaging and visualization used in medical and dental applications.
Another object of this application is to address, in whole or in part, at least the foregoing and other deficiencies in the related art.
It is another object of this application to provide, in whole or in part, at least the advantages described herein.
Method and/or apparatus embodiments of this application address the particular need for improved visualization and assessment of jaw motion, wherein internal structures obtained using CBCT and other radiographic volume imaging techniques can be correlated to motion data obtained from the patient. By combining volume image data with data relating to motion of the patient's jaw or other feature, method and/or apparatus embodiments of the present disclosure can help the medical or dental practitioner to more effectively characterize a patient's condition relative to jaw movement and to visualize the effects of a treatment procedure for improving or correcting particular movement-related conditions.
These objects are given only by way of illustrative example, and such objects may be exemplary of one or more embodiments of the invention. Other desirable objectives and advantages inherently achieved by the disclosed methods and/or apparatus may occur or become apparent to those skilled in the art. The invention is defined by the appended claims.
According to one aspect of the disclosure, there is provided an apparatus for imaging the head of a patient, that can include a transport apparatus that moves an x-ray source and an x-ray detector in at least partial orbit about a head supporting position for the patient and configured for acquiring, at each of a plurality of angles about the supporting position, a 2-D radiographic projection image of the patient's head; one or more head marker retaining elements that hold a first plurality of markers in position relative to the skull of the patient; one or more jaw marker retaining elements that hold a second plurality of markers in position relative to the jaw bone of the patient; at least one camera that is disposed and energizable to acquire a jaw motion study comprising a set of a plurality of reflectance images from the head supporting position; a control logic processor that is in signal communication with the x-ray source, x-ray detector, and the at least one camera and that is configured by programmed instructions to reconstruct volume image content using the 2-D radiographic projection images acquired, to segment the jaw bone structure from the skull bone structure, to register the reconstructed volume image content to the first and second plurality of markers, and to generate animated display content according to jaw bone structure motion acquired from the acquired set of reflectance images; and a display that is in signal communication with the control logic processor and is energizable to display the generated animated display content.
According to one aspect of the disclosure, there is provided a method for recording movement of a jaw relative to a skull of a patient that can include orbiting an x-ray source and an x-ray detector in at least partial orbit about a head supporting position for the patient; acquiring, at each of a plurality of angles about the supporting position, a 2-D radiographic projection image of the patient's head; reconstructing a volume image from the acquired 2-D radiographic projection images; segmenting the jaw of the patient and the skull of the patient from the reconstructed volume image to form a reconstructed, segmented volume image; energizing a light source and recording a series of a plurality of reflectance images of the head during movement of the jaw of the patient; registering the recorded reflectance images to the reconstructed, segmented volume image according to a first set of markers that are coupled to a skull of the patient and a second set of markers that are coupled to the jaw of the patient; and generating and displaying an animation showing movement of the jaw bone relative to the skull according to the recorded series of reflectance images.
The foregoing and other objects, features, and advantages of the invention will be apparent from the following more particular description of the embodiments of the invention, as illustrated in the accompanying drawings. The elements of the drawings are not necessarily to scale relative to each other.
The following is a detailed description of the preferred embodiments, reference being made to the drawings in which the same reference numerals identify the same elements of structure in each of the several figures.
Where they are used, the terms “first”, “second”, and so on, do not necessarily denote any ordinal or priority relation, but may be used for more clearly distinguishing one element or time interval from another.
As used herein, the term “energizable” relates to a device or set of components that perform an indicated function upon receiving power and, optionally, upon receiving an enabling signal. The opposite state of “energizable” is “disabled”.
The term “actuable” has its conventional meaning, relating to a device or component that is capable of effecting an action in response to a stimulus, such as in response to an electrical signal, for example.
The term “modality” is a term of art that refers to types of imaging. Modalities for an imaging system may be conventional x-ray, fluoroscopy, tomosynthesis, tomography, ultrasound, nuclear magnetic resonance (NMR), contour imaging, color reflectance imaging using reflected visible light, reflectance imaging using infrared light, or other types of imaging. The term “subject” refers to the patient who is being imaged and, in optical terms, can be considered equivalent to the “object” of the corresponding imaging system.
In the context of the present disclosure, the term “coupled” is intended to indicate a mechanical association, connection, relation, or linking, between two or more components, such that the disposition of one component affects the spatial disposition of a component to which it is coupled. For mechanical coupling, two components need not be in direct contact, but can be linked through one or more intermediary components or fields.
It will be understood that when an element is referred to as being “connected,” or “coupled,” to another element, it can be directly connected or coupled to the other element or intervening elements or magnetic fields may be present. In contrast, when an element is referred to as being “directly connected,” or “directly coupled,” to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between,” versus “directly between,” “adjacent,” versus “directly adjacent,” etc.).
The term “exemplary” indicates that the description is used as an example, rather than implying that it is an ideal.
The phrase “in signal communication” as used in the application means that two or more devices and/or components are capable of communicating with each other via signals that travel over some type of signal path. Signal communication may be wired or wireless. The signals may be communication, power, data, or energy signals which may communicate information, power, and/or energy from a first device and/or component to a second device and/or component along a signal path between the first device and/or component and second device and/or component. The signal paths may include physical, electrical, magnetic, electromagnetic, optical, wired, and/or wireless connections between the first device and/or component and second device and/or component. The signal paths may also include additional devices and/or components between the first device and/or component and second device and/or component.
In the context of the present disclosure, the terms “pixel” and “voxel” may be used interchangeably to describe an individual digital image data element, that is, a single value representing a measured image signal intensity. Conventionally, an individual digital image data element is referred to as a voxel for three-dimensional (3-D) or volume images and as a pixel for two-dimensional (2-D) images. Volume images, such as those from CT or CBCT apparatus, are formed by obtaining multiple 2-D images of pixels, taken at different relative angles, then combining the image data to form corresponding 3-D voxels. For the purposes of the description herein, the terms voxel and pixel can generally be considered equivalent, describing an image elemental datum that is capable of having a range of numerical values. Voxels and pixels have attributes of both spatial location and image data code value.
In the context of the present disclosure, the term “code value” refers to the value that is associated with each volume image data element or voxel in the reconstructed 3-D volume image. The code values for CT images are often, but not always, expressed in Hounsfield units (HU).
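As an illustration of the relation between reconstructed attenuation data and HU code values, the conventional definition maps water to 0 HU and air to about -1000 HU. A minimal sketch follows, in which the water attenuation coefficient is an assumed illustrative value rather than one taken from this disclosure:

```python
import numpy as np

def to_hounsfield(mu, mu_water=0.0195):
    """Convert linear attenuation coefficients (1/mm) to Hounsfield units.

    HU = 1000 * (mu - mu_water) / mu_water, so water maps to 0 HU
    and air (mu approximately 0) maps to about -1000 HU.
    The default mu_water is an illustrative value, not a value
    specified by this disclosure.
    """
    return 1000.0 * (np.asarray(mu, dtype=float) - mu_water) / mu_water
```

In practice, reconstruction software applies a rescale of this general form so that voxel code values are comparable across scans and scanners.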
“Static” imaging relates to images of a subject without consideration for movement. “Patterned light” is used to indicate light that has a predetermined spatial pattern, such that the light has one or more features such as one or more discernable parallel lines, curves, a grid or checkerboard pattern, or other features having areas of light separated by areas without illumination. In the context of the present disclosure, the phrases “patterned light” and “structured light” are considered to be equivalent, both used to identify the light that is projected onto the head of the patient in order to derive contour image data.
In the context of the present disclosure, the terms “interference filter” and “dichroic filter” are considered to be synonymous.
In the context of the present disclosure, the terms “digital sensor” or “sensor panel” and “digital x-ray detector” or simply “digital detector” are considered to be equivalent. These describe the panel that obtains image data in a digital radiography system. The term “revolve” has its conventional meaning, describing movement along a curved path or orbit around a center point.
In the context of the present disclosure, the terms “viewer”, “operator”, and “user” are considered to be equivalent and refer to the viewing practitioner, technician, or other person who views and manipulates an x-ray image or a volume image that is formed from a combination of multiple x-ray images, on a display monitor. A “viewer instruction” or “operator command” can be obtained from explicit commands entered by the viewer or may be implicitly obtained or derived based on some other user action, such as making a collimator setting, for example. With respect to entries on an operator interface, such as an interface using a display monitor and keyboard, for example, the terms “command” and “instruction” may be used interchangeably to refer to an operator entry.
In the context of the present disclosure, a single projected line of light is considered a “one dimensional” pattern, since the line has an almost negligible width, such as when projected from a line laser, and has a length that is its predominant dimension. Two or more of such lines projected side by side, either simultaneously or in a scanned arrangement, provide a two-dimensional pattern.
The schematic diagram of
It should be noted that the orbit of generator apparatus 24 and detector 22 about the head of patient 14 is typically a circular orbit, but may alternately have a different shape, which may itself be fixed or varying. For example, the orbit can be in the shape of a polygon or ellipse or some other shape. Different portions of the orbit can be used for the different types of image acquisition that are performed. In practice, radiographic imaging for CBCT reconstruction is performed over a range of angles. Reflectance imaging, however, can use anywhere from a small portion of the orbit, such as imaging from a single angle for acquiring some types of reflectance images, to a substantial portion of the orbit, such as acquiring reflectance images at numerous incremental angles about the head, over a portion of the orbit that extends from one side of the head to the other.
Method and/or apparatus embodiments of the present disclosure can adapt the basic CBCT imaging system described with reference to
The perspective views of
Referring first to
In the alternate embodiment of
Referring to
The schematic diagram of
In order to accurately identify marker C1-C6 location and movement within the 3-D coordinate system, cameras 38 (
Alternately, for the equipment configuration shown in the example of
For the structured light imaging system of
Capturing jaw motion for analysis can employ the basic reflectance image capture mechanism described with reference to
In order to provide accurate motion tracking, markers C1-C6 must be positioned so that they correspond to appropriate positions along the skull and jaw and so that their own movement can be readily detected over the full range of jaw movement. Three non-collinear points in space define a plane. For skull marker C1-C3 placement, the first plane that is defined is non-parallel to the imaging plane of the camera 38 sensor. Similarly, for jaw marker C4-C6 placement, the second plane that is defined is also non-parallel to the imaging plane of the camera 38 sensor.
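The non-parallel placement constraint can be verified numerically: the plane through three markers has a normal given by a cross product, and that plane is parallel to the camera imaging plane exactly when the normal aligns with the optical axis. The following sketch, with illustrative marker coordinates, is an assumption about how such a check could be implemented, not a method from the disclosure:

```python
import numpy as np

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear 3-D points."""
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
    return n / np.linalg.norm(n)

def marker_plane_tilt_deg(markers, optical_axis):
    """Angle in degrees between the plane through three markers and the
    camera imaging plane (the plane perpendicular to optical_axis).

    0 degrees means the marker plane is parallel to the image plane,
    which is the placement the text requires avoiding.
    """
    n = plane_normal(*markers)
    a = np.asarray(optical_axis, dtype=float)
    a = a / np.linalg.norm(a)
    c = abs(float(np.dot(n, a)))
    return float(np.degrees(np.arccos(np.clip(c, 0.0, 1.0))))
```

A placement routine could reject marker configurations whose tilt falls below some minimum angle, ensuring out-of-plane motion remains detectable.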
Each marker C1-C6 can be featured in some way that facilitates identification of its position and orientation. Identifying marker features can include color, reflective or retro-reflective portions, imprinted or formed shapes, markings such as geometric markings, alphanumeric labels, or symbols, etc. For structured light embodiments that use the component arrangement of
The perspective view of
In one exemplary embodiment, more than three markers are used for locating the skull position, and more than three markers are used for locating the jaw position, but at least three markers for each of the jaw position and skull position are arranged to be visible to tracking mechanisms (e.g. cameras) at any point in the respective scan or jaw motion analysis.
The logic flow diagram of
Referring to
Continuing with the
During jaw motion study S120, the patient follows instructions to execute a given sequence that can include jaw movement functions such as chewing, upwards/downwards and side-to-side motion, jutting motion, and retraction of the jaw, and other jaw movement. During jaw motion study S120, a succession of reflectance images is acquired from a single camera 38 or from two or more cameras 38 that are angled toward patient 14 and have markers C1-C6 in their field of view, as described previously with respect to
The
According to an alternate embodiment, the camera 38 can also move along an orbit or other trajectory during jaw movement if it is fully calibrated (both intrinsic and extrinsic parameters) over the whole trajectory.
By way of illustration, the simplified schematic diagrams of
A correlation step S130 associates the reference marker positions that are shown in acquired reflectance images with corresponding spatial positions in the reconstructed volume image content. This correlation can be executed in a number of ways, using methods well known to those skilled in the volume imaging arts. Correlation of reference marker C1-C6 positions can be performed by obtaining one or more radiographic images of the patient with the markers in place, for example, at a given angular position. Markers can alternately or additionally be located according to relative proximity to discernable structural features, for example.
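The disclosure does not specify a particular correlation algorithm; one well-known option for relating two sets of corresponding marker positions is a least-squares rigid alignment (the Kabsch method), sketched here as an illustrative assumption:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (Kabsch method) so dst ~= src @ R.T + t.

    src, dst: (N, 3) arrays of corresponding marker coordinates, e.g.
    marker positions seen in the reflectance images (src) and the same
    markers located in the reconstructed volume (dst).
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With at least three non-collinear correspondences the rotation and translation are determined uniquely, which is consistent with the three-marker minimum stated for the skull and jaw marker sets.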
Continuing with
Segmentation of the jaw can be provided using techniques familiar to those skilled in the image analysis and diagnostic imaging arts. Various segmentation methods could be used. According to an embodiment of the present disclosure, a template that coarsely outlines the jaw area is used; the segmentation result is then refined and optimized. A facial surface or 3-D tooth surface model can alternately be employed to help initiate or guide segmentation processing.
The reflectance images that are obtained can be used to show marker C1-C6 position and the overall surface of the patient's face during motion of the jaw. The components and methods that are used to acquire the reflectance images differ according to whether the imaging apparatus uses stereoscopic imaging as described with reference to
Each reflectance imaging assembly 80a, 80b has at least one camera 38 and associated illumination components. As noted previously for stereoscopic imaging embodiments such as those described with reference to
As noted previously for structured light embodiments as described with reference to
According to an embodiment of the present disclosure, projector 52 uses a near infrared (NIR) laser of Class 1, with a nominal emission wavelength of 780 nm, well outside the visible spectrum. Light from this type of light source can be projected onto the patient's face without awareness of the patient and without concern for energy levels that are considered to be perceptible or harmful at Class 1 emission levels. Infrared or near infrared light in the 700-900 nm spectral region appears to be particularly suitable for surface contour imaging of the head and face, taking advantage of the resolution and accuracy advantages offered by the laser, with minimal energy requirements. It can be appreciated that other types of lasers and light sources, at suitable power levels and wavelengths, can alternately be used.
Light source 34 is shown coupled to generator apparatus 24 in the embodiments shown in
In surface contour imaging, according to an embodiment of the present disclosure, projector 52 projects one 1-D line of light at a time onto the patient or, at most, a few lines of light at a time, at a particular angle, and acquires an image of the line as reflected from the surface of the patient's face or head. This process is repeated, so that a succession of lines is obtained for processing as transport apparatus 20 moves to different angular positions. Other types of patterns can be projected, including irregularly shaped patterns or patterns having multiple lines. Projector 52 can be provided with an appropriate lens for forming a line, such as a cylindrical lens or an aspheric lens such as a Powell lens, for example. Additional optical components can be provided for shaping the laser output appropriately for contour imaging accuracy. The laser light can also be scanned across the face surface, such as by using a rotating reflective scanner, for example. Scanning can be along the line or orthogonal to the line direction.
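The depth recovery behind line-based contour imaging can be sketched with simple triangulation: the camera and projector are separated by a known baseline, so the image position of the reflected line determines the distance to the surface. The 2-D geometry and parameter names below are illustrative assumptions, not specifics from the disclosure:

```python
import math

def triangulate_depth(u, f, baseline, alpha):
    """Depth z of a surface point lit by a line laser (2-D sketch).

    Camera at the origin looking along +z; laser offset by `baseline`
    along x, aimed inward at angle `alpha` (radians) from the z axis.
    `u` is the image coordinate of the laser line, `f` the focal length
    (same units as u).  Intersecting the camera ray x = z*u/f with the
    laser ray x = baseline - z*tan(alpha) gives the depth below.
    """
    return baseline / (u / f + math.tan(alpha))
```

Sweeping the line across the face and repeating this computation per image column yields the surface contour point cloud.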
The use of light outside the visible spectrum for forming lines 44 or other laser light pattern can be advantageous from a number of aspects. Lines 44 can be detected on a camera 38 that is sensitive to light at a particular wavelength, such as using one or more filters in the imaging light path.
For display, the surface or contour information obtained from the reflectance imaging can have variable appearance when showing jaw motion. According to an embodiment of the present disclosure, the outline of the surface is shown, as shown in
With respect to the camera(s) used to capture reflectance image content, it is useful to have calibration information that relates to the optical geometry of image acquisition. This type of intrinsic information includes data describing parameters such as focal length, imaging length (depth of field), image center, field of view, and related metrics. An initial calibration of the camera can be performed to identify optical characteristics, such as using a set of targets and executing an imaging sequence that captures image data from representative angles, for example.
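The intrinsic parameters listed above are conventionally collected into a pinhole camera matrix that maps 3-D camera coordinates to pixel coordinates. A minimal sketch follows, with assumed focal length and image center values:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """3x3 pinhole intrinsic matrix from focal lengths and image center."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(K, point_3d):
    """Project a 3-D point in camera coordinates to pixel coordinates."""
    p = K @ np.asarray(point_3d, dtype=float)
    return p[:2] / p[2]
```

A calibration procedure of the kind described above estimates the entries of this matrix (plus lens distortion terms, omitted here for brevity) from images of known targets.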
Additional extrinsic information relates the position of the imaging subject at head supporting position 16 (
In the camera model at a position during jaw movement, logic processing that is executed by control logic processor 30 (
Consistent with one embodiment, the present invention utilizes a computer program with stored instructions that control system functions for image acquisition and image data processing for image data that is stored and accessed from an electronic memory. As can be appreciated by those skilled in the image processing arts, a computer program of an embodiment of the present invention can be utilized by a suitable, general-purpose computer system, such as a personal computer or workstation that acts as an image processor, when provided with a suitable software program so that the processor operates to acquire, process, and display data as described herein. Many other types of computer system architectures can be used to execute the computer program of the present invention, including an arrangement of networked processors, for example.
The computer program for performing the method of the present invention may be stored in a computer readable storage medium. This medium may comprise, for example: magnetic storage media such as a magnetic disk (e.g., a hard drive or removable device) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine readable optical encoding; solid state electronic storage devices such as random access memory (RAM) or read only memory (ROM); or any other physical device or medium employed to store a computer program. The computer program for performing the method of the present invention may also be stored on a computer readable storage medium that is connected to the image processor by way of the internet or other network or communication medium. Those skilled in the image data processing arts will further readily recognize that the equivalent of such a computer program product may also be constructed in hardware.
It is noted that the term “memory”, equivalent to “computer-accessible memory” in the context of the present disclosure, can refer to any type of temporary or more enduring data storage workspace used for storing and operating upon image data and accessible to a computer system, including a database. The memory could be non-volatile, using, for example, a long-term storage medium such as magnetic or optical storage. Alternately, the memory could be of a more volatile nature, using an electronic circuit, such as random-access memory (RAM) that is used as a temporary buffer or workspace by a microprocessor or other control logic processor device. Display data, for example, is typically stored in a temporary storage buffer that is directly associated with a display device and is periodically refreshed as needed in order to provide displayed data. This temporary storage buffer can also be considered to be a memory, as the term is used in the present disclosure. Memory is also used as the data workspace for executing and storing intermediate and final results of calculations and other processing. Computer-accessible memory can be volatile, non-volatile, or a hybrid combination of volatile and non-volatile types.
It is understood that the computer program product of the present invention may make use of various image manipulation algorithms and processes that are well known. It will be further understood that the computer program product embodiment of the present invention may embody algorithms and processes not specifically shown or described herein that are useful for implementation. Such algorithms and processes may include conventional utilities that are within the ordinary skill of the image processing arts. Additional aspects of such algorithms and systems, and hardware and/or software for producing and otherwise processing the images or co-operating with the computer program product of the present invention, are not specifically shown or described herein and may be selected from such algorithms, systems, hardware, components and elements known in the art.
In this document, the terms “a” or “an” are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of “at least one” or “one or more.” In this document, the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated. In this document, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Also, in the following claims, the terms “including” and “comprising” are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim.
Exemplary embodiments according to the application can include various features described herein (individually or in combination).
While the invention has been illustrated with respect to one or more implementations, alterations and/or modifications can be made to the illustrated examples without departing from the spirit and scope of the appended claims. In addition, while a particular feature of the invention can have been disclosed with respect to one of several implementations, such feature can be combined with one or more other features of the other implementations as can be desired and advantageous for any given or particular function. The term “at least one of” is used to mean one or more of the listed items can be selected. The term “about” indicates that the value listed can be somewhat altered, as long as the alteration does not result in nonconformance of the process or structure to the illustrated embodiment. Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims, and all changes that come within the meaning and range of equivalents thereof are intended to be embraced therein.