This disclosure relates generally to a method and system for controlling a position of a virtual light source when displaying a volume-rendered image.
Volume-rendered images are very useful for representing 3D medical imaging datasets. Volume-rendered images are typically 2D representations of a 3D medical imaging dataset. There are currently many different techniques for generating a volume-rendered image. One such technique, ray-casting, includes projecting a number of rays through the 3D medical imaging dataset. Each sample in the 3D medical imaging dataset is mapped to a color and a transparency. Data is accumulated along each of the rays. According to one common technique, the accumulated data along each of the rays is displayed as a pixel in the volume-rendered image. In order to gain an additional sense of depth and perspective, volume-rendered images are oftentimes shaded and illuminated based on the position of a virtual light source with respect to the rendered object shown in the volume-rendered image. Shading and illumination may be used in order to help convey the relative positioning of structures or surfaces shown in the volume-rendered image. The shading and illumination helps a viewer to more easily visualize the three-dimensional shape of the rendered object represented in the volume-rendered image.
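By way of illustration only, the following sketch shows one common way the accumulation along a single ray may be carried out (front-to-back alpha compositing); the transfer function, the sample values, and the early-termination threshold are illustrative assumptions and not part of this disclosure.

```python
import numpy as np

def cast_ray(samples, transfer_function):
    """Accumulate color and opacity along one ray (front-to-back compositing).

    samples: 1D array of scalar values sampled along the ray.
    transfer_function: maps a scalar sample to ((r, g, b), alpha).
    Returns the accumulated RGB value for the corresponding pixel.
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        rgb_s, alpha_s = transfer_function(s)
        # Front-to-back compositing: nearer samples occlude later ones.
        color += (1.0 - alpha) * alpha_s * np.asarray(rgb_s)
        alpha += (1.0 - alpha) * alpha_s
        if alpha >= 0.99:          # early ray termination once nearly opaque
            break
    return color

# Illustrative transfer function: brighter and more opaque for higher intensities.
def simple_tf(value):
    a = np.clip(value / 255.0, 0.0, 1.0)
    return (a, a, a), a * 0.1

ray_samples = np.random.default_rng(0).integers(0, 256, size=200)
pixel = cast_ray(ray_samples, simple_tf)
```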
In order to fully understand the three-dimensional shape of the rendered object in the volume-rendered image, it may be desired for the user to adjust the position of the virtual light source used to calculate the shading and illumination of the volume-rendered image. Viewing the volume-rendered image while it is illuminated and shaded with the virtual light source in different positions will help the user to more fully understand the three-dimensional shape and geometry of the rendered object in the volume-rendered image.
Conventional solutions are configured to adjust the angular position of a virtual light source with respect to the rendered object. However, depending on the shape of the rendered object represented in the volume-rendering, this oftentimes results in different distances between the virtual light source and the rendered object. Conventional volume-rendering techniques typically model the intensity of light from the light source to decrease with distance. This results in lower levels of illumination when the virtual light source is a greater distance from the rendered object.
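For illustration, one common attenuation model that produces this behavior is sketched below; the attenuation constants and distances are illustrative assumptions, not values specified by this disclosure.

```python
def attenuated_intensity(base_intensity, distance, k_linear=0.1, k_quadratic=0.05):
    """Light intensity falloff with distance (constant/linear/quadratic attenuation)."""
    return base_intensity / (1.0 + k_linear * distance + k_quadratic * distance ** 2)

# The same light appears noticeably dimmer when it ends up farther from the surface.
near = attenuated_intensity(1.0, distance=2.0)    # ~0.71
far = attenuated_intensity(1.0, distance=10.0)    # ~0.14
```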
According to other conventional solutions, the user needs to control the position of the virtual light source or sources in three-dimensional space. It can be difficult for the user to quickly and accurately position the virtual light source in the desired position due to the challenges of controlling the three-dimensional position of the virtual light source with respect to the rendered object.
Therefore, for these and other reasons, an improved system and method for controlling the position of a virtual light source is desired.
The above-mentioned shortcomings, disadvantages, and problems are addressed herein, as will be understood by reading and understanding the following specification.
In an embodiment, a method of volume-rendering includes accessing a 3D medical imaging dataset and generating a volume-rendered image including a rendered object from the 3D medical imaging dataset. The method includes controlling a shading and an illumination of the volume-rendered image based on a position of a virtual light source with respect to the rendered object. Controlling the shading and the illumination includes performing the following steps: a) receiving a control input from a user interface; b) automatically moving the virtual light source to an updated position along a height contour established with respect to a surface of a rendered object in the volume-rendered image in response to the control input; c) calculating an updated shading and an updated illumination based on the updated virtual light source position; d) displaying the volume-rendered image with the updated shading and the updated illumination; and e) repeating steps a), b), c), and d), as the virtual light source is being moved in response to the control input.
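The following is a minimal, non-limiting sketch of the loop formed by steps a) through e); the callables and the VirtualLight structure are hypothetical placeholders standing in for the user interface, the height-contour constraint, the lighting computation, and the display recited in the method.

```python
from dataclasses import dataclass

@dataclass
class VirtualLight:
    position: tuple  # (x, y, z) position in volume coordinates

def light_positioning_loop(poll_input, move_along_contour, compute_lighting,
                           display_image, light, max_iterations=100):
    """Sketch of steps a) through e); the callables stand in for components
    described in this embodiment, not a specific implementation of them."""
    for _ in range(max_iterations):
        control_input = poll_input()                               # a) receive a control input
        if control_input is None:
            break
        light.position = move_along_contour(light.position,       # b) move the light along
                                            control_input)        #    the height contour
        shading, illumination = compute_lighting(light.position)  # c) recompute lighting
        display_image(shading, illumination)                      # d) display the image
        # e) steps a) through d) repeat while the light source is being moved

# Stand-in run: two drag events, then the user releases the control.
events = iter([(5, 0), (3, -2), None])
light = VirtualLight(position=(0.0, 0.0, 10.0))
light_positioning_loop(
    poll_input=lambda: next(events),
    move_along_contour=lambda pos, delta: (pos[0] + delta[0], pos[1] + delta[1], pos[2]),
    compute_lighting=lambda pos: ("shading", "illumination"),
    display_image=lambda s, i: None,
    light=light,
)
```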
In an embodiment, a system for interacting with a 3D medical imaging dataset includes a display device, a user interface, and a processor communicatively connected to the display device and the user interface. The processor is configured to access the 3D medical imaging dataset and generate a volume-rendered image including a rendered object from the 3D medical imaging dataset. The processor is configured to receive a control input from the user interface to adjust a position of a virtual light source with respect to the rendered object, wherein the position of the virtual light source is used to calculate an illumination and a shading of the volume-rendered image. The processor is configured to automatically move the virtual light source to an updated position along a height contour established with respect to the surface of the rendered object in the volume-rendered image in response to the control input. The processor is configured to display the volume-rendered image with the illumination and the shading determined based on a real-time position of the virtual light source during the process of automatically moving the virtual light source to the updated position.
In an embodiment, an ultrasound imaging system includes an ultrasound probe, a display device, a memory, a user interface, and a processor communicatively connected to the ultrasound probe, the display device, the memory, and the user interface. The processor is configured to control the ultrasound probe to acquire a 3D ultrasound dataset and generate a volume-rendered image including a rendered object from the 3D ultrasound dataset. The processor is configured to receive a control input from the user interface to adjust a position of a virtual light source with respect to the rendered object, wherein the position of the virtual light source is used to calculate an illumination and a shading of the volume-rendered image. The processor is configured to automatically move the virtual light source to an updated position along a height contour established with respect to a surface of a rendered object in the volume-rendered image in response to the control input. The processor is configured to display the volume-rendered image with the illumination and the shading determined based on a real-time position of the virtual light source during the process of automatically moving the virtual light source to the updated position.
Various other features, objects, and advantages of the invention will be made apparent to those skilled in the art from the accompanying drawings and detailed description thereof.
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific embodiments that may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the embodiments. The following detailed description is, therefore, not to be taken as limiting the scope of the invention.
A user interface 106 is communicatively connected to the processor 104. The user interface 106 may be communicatively connected in either a wired or a wireless fashion. The user interface 106 may include a trackball and one or more buttons according to an exemplary embodiment. However, according to other embodiments, the user interface 106 may include one or more of a mouse, a track pad, a touch screen, rotary controls, or an assortment of hard or soft keys with defined or definable functions. The display device 108 is communicatively connected to the processor 104 as well. The display device 108 may include a monitor or display screen, such as an LCD screen, an LED screen, a projector, or any other device suitable for displaying a volume-rendered image. Other embodiments may include multiple display devices.
The ultrasound imaging system 120 also includes a processor 136 to control the transmit beamformer 121, the transmitter 122, the receiver 128 and the receive beamformer 130. The processor 136 is communicatively connected to the transmit beamformer 121, the transmitter 122, the receiver 128, and the receive beamformer 130 by either wired or wireless techniques. The processor 136 is in electronic communication with the ultrasound probe 126. The processor 136 may control the ultrasound probe 126 to acquire data. The processor 136 controls which of the elements 124 are active and the shape of a beam emitted from the ultrasound probe 126. The processor 136 is also in electronic communication with a display device 138, and the processor 136 may process the data into images or values for display on the display device 138. The display device 138 may comprise a monitor, an LED display, a cathode ray tube, a projector display, or any other type of apparatus configured for displaying an image. Additionally, the display device 138 may include one or more separate devices. For example, the display device 138 may include two or more monitors, LED displays, cathode ray tubes, projector displays, etc. The display device 138 may also be a touchscreen. For embodiments where the display device 138 is a touchscreen, the touchscreen may function as an input device and it may be configured to receive touch or touch gesture inputs from a user. For purposes of this disclosure, the term “electronic communication” may be defined to include both wired and wireless connections. The processor 136 may include a central processor (CPU) according to an embodiment. According to other embodiments, the processor 136 may include other electronic components capable of carrying out processing functions, such as a digital signal processor, a field-programmable gate array (FPGA), or a graphic board. According to other embodiments, the processor 136 may include multiple electronic components capable of carrying out processing functions. For example, the processor 136 may include two or more electronic components selected from a list of electronic components including: a central processor, a digital signal processor, an FPGA, and a graphic board. According to another embodiment, the processor 136 may also include a complex demodulator (not shown) that demodulates the RF data and generates raw data. In another embodiment the demodulation can be carried out earlier in the processing chain. The processor 136 may be adapted to perform one or more processing operations according to a plurality of selectable ultrasound modalities on the data. The data may be processed in real-time during a scanning session as the echo signals are received. For the purposes of this disclosure, the term “real-time” is defined to include a procedure that is performed without any intentional delay. For purposes of this disclosure, the term “real-time” will additionally be defined to include an action occurring within 2 seconds. For example, if data is acquired, a real-time display of that data would occur within 2 seconds of the acquisition. Those skilled in the art will appreciate that most real-time procedures/processes will be performed in substantially less time than 2 seconds. The data may be stored temporarily in a buffer (not shown) during a scanning session and processed in less than real-time in a live or off-line operation.
Some embodiments of the invention may include multiple processors (not shown) to handle the processing tasks. For example, a first processor may be utilized to demodulate and decimate the RF signal while a second processor may be used to further process the data prior to displaying an image. It should be appreciated that other embodiments may use a different arrangement of processors.
The ultrasound imaging system 120 may continuously acquire data at a given frame-rate or volume-rate. Images generated from the data may be refreshed at a similar frame-rate or volume-rate. A memory 140 is included for storing processed frames of acquired data. In an exemplary embodiment, the memory 140 is of sufficient capacity to store at least several seconds' worth of frames of ultrasound data. The frames of data are stored in a manner to facilitate retrieval thereof according to their order or time of acquisition. The memory 140 may comprise any known data storage medium.
Optionally, embodiments of the present invention may be implemented utilizing contrast agents. Contrast imaging generates enhanced images of anatomical structures and blood flow in a body when using ultrasound contrast agents including microbubbles. After acquiring data while using a contrast agent, the image analysis includes separating harmonic and linear components, enhancing the harmonic component and generating an ultrasound image by utilizing the enhanced harmonic component. Separation of harmonic components from the received signals is performed using suitable filters. The use of contrast agents for ultrasound imaging is well-known by those skilled in the art and will therefore not be described in further detail.
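Although these details are well-known and not required for this disclosure, a brief illustrative sketch of one such separation is included for orientation; the disclosure states only that suitable filters are used, so the pulse-inversion step, the filter design, and the simulated signals below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def separate_harmonic(echo_pos, echo_neg, fs, f0):
    """Pulse-inversion separation: echoes from a pulse and its polarity-inverted
    copy are summed, cancelling the linear component and keeping even harmonics."""
    harmonic = echo_pos + echo_neg
    linear = echo_pos - echo_neg
    # Bandpass around the second harmonic to further isolate it (a "suitable filter").
    b, a = butter(4, [1.5 * f0, 2.5 * f0], btype="bandpass", fs=fs)
    return filtfilt(b, a, harmonic), linear

# Illustrative echoes: a fundamental at f0 plus a small second-harmonic term.
fs, f0 = 40e6, 3e6
t = np.arange(2048) / fs
echo_pos = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 2 * f0 * t)
echo_neg = -np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 2 * f0 * t)
harmonic, linear = separate_harmonic(echo_pos, echo_neg, fs, f0)
```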
In various embodiments of the present invention, data may be processed by other or different mode-related modules of the processor 136 (e.g., B-mode, Color Doppler, M-mode, Color M-mode, spectral Doppler, Elastography, TVI, strain, strain rate, and the like) to form 2D or 3D data. For example, one or more modules may generate B-mode, color Doppler, M-mode, color M-mode, spectral Doppler, Elastography, TVI, strain, strain rate, combinations thereof, and the like. The image beams and/or frames are stored, and timing information indicating a time at which the data was acquired may be recorded in memory. The modules may include, for example, a scan conversion module to perform scan conversion operations to convert the image frames from beam space coordinates to display space coordinates. A video processor module may be provided that reads the image frames from a memory and displays the image frames in real time while a procedure is being carried out on a patient. The video processor module may store the image frames in an image memory, from which the images are read and displayed.
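For illustration, a minimal nearest-neighbor scan conversion of a 2D sector from beam space (beam index, sample along the beam) to display space (x, z) might look like the following sketch; the sector geometry, sampling, and output grid are illustrative assumptions.

```python
import numpy as np

def scan_convert(beam_data, angles_rad, max_depth, out_shape=(256, 256)):
    """Nearest-neighbor scan conversion from (beam, sample) space to an x/z image.

    beam_data: 2D array indexed as [beam, sample-along-beam].
    angles_rad: steering angle of each beam, in radians (increasing).
    max_depth: depth covered by the samples, in the same units as x/z.
    """
    n_beams, n_samples = beam_data.shape
    zs = np.linspace(0.0, max_depth, out_shape[0])
    xs = np.linspace(-max_depth, max_depth, out_shape[1])
    x_grid, z_grid = np.meshgrid(xs, zs)

    radius = np.hypot(x_grid, z_grid)               # distance of each pixel from the probe
    theta = np.arctan2(x_grid, z_grid)              # angle of each pixel from the probe axis

    beam_idx = np.interp(theta, angles_rad, np.arange(n_beams))
    samp_idx = radius / max_depth * (n_samples - 1)
    inside = (theta >= angles_rad[0]) & (theta <= angles_rad[-1]) & (radius <= max_depth)

    image = np.zeros(out_shape)
    bi = np.clip(np.rint(beam_idx), 0, n_beams - 1).astype(int)
    si = np.clip(np.rint(samp_idx), 0, n_samples - 1).astype(int)
    image[inside] = beam_data[bi[inside], si[inside]]
    return image

# Example: 96 beams over a 60-degree sector, 512 samples per beam.
beams = np.random.default_rng(1).random((96, 512))
angles = np.linspace(np.radians(-30), np.radians(30), 96)
display_frame = scan_convert(beams, angles, max_depth=80.0)
```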
Referring to both
The volume-rendered image may be shaded and illuminated in order to present the user with a better perception of depth of the rendered object represented in the volume-rendered image. This may be performed in several different ways according to various embodiments. For example, a surface of a rendered object may be defined based on the volume-rendering of the 3D medical imaging dataset. According to an exemplary embodiment, a gradient may be calculated at each of the pixels. The processor 104 (shown in
Referring to
The 3D medical imaging dataset may include voxel data where each voxel is assigned a value and an opacity. The value and opacity may correspond to the intensity of the voxel. At step 304, the processor 104 generates a volume-rendered image from the 3D medical imaging dataset. According to an embodiment, the processor 104 may generate the volume-rendered image according to one of the techniques previously described with respect to
At step 308, the processor 104 displays the volume-rendered image, including the shading and the illumination calculated at step 306, on the display device 108. The position of the virtual light source will therefore directly affect the appearance of the volume-rendered image displayed on the display device 108.
Step 310 is a decision step in the method 300. If the processor 104 does not receive a control input from the user interface at step 310, then the method 300 returns to step 308 and the volume-rendered image with the previously calculated illumination and shading is displayed. If, however, at step 310, the processor 104 receives a control input from the user interface 106 to adjust the position of one or more of the virtual light sources, then the method 300 advances to step 312.
At step 312, the processor 104 automatically moves the virtual light source along a height contour in response to the control input. The height contour is calculated by the processor 104 at a generally fixed distance from a surface, such as the outer surface 407 of the rendered object 403 represented in the volume-rendered image 402. According to an example, the height contour may be at a constant height with respect to the outer surface 407 of the rendered object in the volume-rendered image. According to another example, the height contour may be at a constant height with respect to a smoothed version of the outer surface 407 of the rendered object in the volume-rendered image 402. The processor 104 may, for instance, apply a smoothing function to the outer surface of the rendered object represented in the volume-rendered image and then calculate the height contour so that it is at a constant height with respect to the smoothed surface. The smoothing function may include an averaging function, a curve-fitting function, or a moving-average function, for instance.
The height contour is determined by the processor 104 to be either a fixed distance (height) from the outer surface of the rendered object in the volume-rendered image or a generally fixed distance (height) from the outer surface of the rendered object in the volume-rendered image. The height of the height contour may be manually adjusted by the user. According to an example, the user may use a rotary control to adjust a height of the height contour from the surface of the rendered object in the volume-rendered image. According to another example, the user may use a slider or a virtual slide bar to adjust the height of the height contour from the surface of the rendered object or objects in the volume-rendered image. The height contour may be positioned at a positive height above the surface of the rendered object, at a negative height below the surface of the rendered object, or exactly at the surface of the rendered object. By convention, in this disclosure, a height of zero will be defined to be exactly at the surface of the rendered object, a positive height will be defined to be in a direction away from a center of the rendered object from the surface, and a negative height will be defined to be in a direction towards the center of the rendered object from the surface. In response to an input entered through the user interface 106, the processor 104 may therefore adjust a height of the height contour.
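One non-limiting way to realize such a height contour is to treat the outer surface of the rendered object as a depth map seen from the rendering viewpoint, smooth it, and offset it by the signed height; the depth-map representation, the Gaussian smoothing kernel, and the offset along the depth axis (rather than along the local surface normal) are simplifying assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def height_contour(surface_depth, height, smoothing_sigma=5.0):
    """Return a depth map lying a (signed) height away from the object surface.

    surface_depth: 2D array; depth of the object's outer surface at each pixel.
    height: offset from the surface; zero lies on the surface, positive values
            move away from the object (toward the viewer), negative values move into it.
    """
    smoothed = gaussian_filter(surface_depth, sigma=smoothing_sigma)  # smoothed surface
    return smoothed - height     # smaller depth = closer to the viewer = above the surface

# Illustrative surface: a bump in the middle of the field of view.
y, x = np.mgrid[-64:64, -64:64]
surface = 100.0 - 30.0 * np.exp(-(x**2 + y**2) / (2 * 20.0**2))

contour_above = height_contour(surface, height=10.0)      # constant 10 units above the surface
contour_on_surface = height_contour(surface, height=0.0)  # height of zero: on the surface
contour_below = height_contour(surface, height=-5.0)      # negative height: below the surface
```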
After implementing step 312, the method 300 returns to step 306, where the processor 104 calculates the shading and illumination for the volume-rendered image based on the updated position of the virtual light source. Or, for examples using multiple light sources, the processor 104 calculates the shading and illumination for the volume-rendered image based on the current, or real-time, positions of the multiple virtual light sources, one or more of which may be an updated position.
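For illustration, the shading recomputed at step 306 may combine the local surface orientation with the direction and distance to a virtual light source; the diffuse (Lambertian) term and the attenuation constants below are assumptions and not the only lighting model that could be used.

```python
import numpy as np

def diffuse_shading(point, normal, light_position, light_intensity=1.0,
                    k_linear=0.1, k_quadratic=0.05):
    """Diffuse shading of one surface point from one virtual light source."""
    to_light = np.asarray(light_position, float) - np.asarray(point, float)
    distance = np.linalg.norm(to_light)
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    lambert = max(np.dot(n, to_light / distance), 0.0)     # cosine of the incidence angle
    attenuation = 1.0 / (1.0 + k_linear * distance + k_quadratic * distance ** 2)
    return light_intensity * lambert * attenuation

# Moving the light along a contour at a roughly constant distance keeps the
# attenuation term steady, so mainly the incidence angle (and any cast shadows) changes.
shade_a = diffuse_shading(point=(0, 0, 0), normal=(0, 0, 1), light_position=(0, 0, 10))
shade_b = diffuse_shading(point=(0, 0, 0), normal=(0, 0, 1), light_position=(7, 0, 7.14))
```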
According to an embodiment, the control input that is optionally received at step 310 may be a control input received from a mouse, a control input received from a trackball, or a control input received from a touchscreen or a touchpad. The control input or inputs may optionally trace the path 420 shown in
According to an embodiment, the user may define a height of a height contour used to define the height of the virtual light source with respect to the surface shown in the volume-rendered image. The user may, for instance, adjust a height of the virtual light source using a rotary control, a slide bar, a manual increment/decrement controller, or any other type of user interface control. The user may, for instance, control the height of the virtual light source with the user interface 106, and the processor 104 may optionally show the height of the virtual light source above the surface of the volume-rendered image 402. The processor 104 may optionally display the height of the virtual light source on the display device 108 in millimeters (mm), centimeters (cm), or inches.
According to an embodiment, at step 312, the processor 104 automatically moves the virtual light source along a height contour that is either at a constant height from a surface of the rendered object in the volume-rendered image 402 or at a constant height from a smoothed version of the outer surface 407 of the rendered object in the volume-rendered image 402. The height of the height contour may be determined in a direction perpendicular to the outer surface of the rendered object, for instance. The processor 104 receives a control input at step 310 to adjust a position of one or more virtual light sources. As described previously, this may, for example, be an input through a mouse, a trackball, a touchscreen, or a touchpad. Upon receiving this control input at step 310, the processor 104 automatically moves the virtual light source along the height contour. Until a separate height control adjustment input is received through the user interface 106, the processor 104 automatically maintains the virtual light source along the height contour as the method 300 repeats steps 306, 308, 310, and 312 and additional control inputs are received. This permits the user to easily position the virtual light source at any position with respect to the rendered object shown in the volume-rendered image 402 while maintaining the virtual light source at a fixed height from a surface, such as the outer surface 407, of the rendered object 403. Advantageously, the technique described with respect to the method 300 allows the user to easily control the position of the virtual light source in three dimensions using only a two-dimensional control input. The processor 104 automatically keeps the virtual light source at a generally constant height, defined by the height contour, from the surface of the rendered object 403. This makes it much easier and quicker for the clinician to reposition the virtual light source to a different location while keeping the virtual light source at a generally constant distance from the surface of the rendered object 403. Keeping the virtual light source at a generally constant distance helps ensure relatively similar intensity levels from the virtual light source at different positions. Additionally, keeping the virtual light source at a generally constant distance helps ensure that the illumination, the shading, and any shadows cast from structures of the rendered object will vary smoothly as the position of the virtual light source is adjusted.
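One non-limiting way for a two-dimensional control input to drive the three-dimensional light position is to accumulate the input in the two screen-plane axes and let the height contour supply the third coordinate; the depth-map form of the contour follows the earlier sketch and is an illustrative assumption.

```python
import numpy as np

def move_light_on_contour(light_xy, drag_dx, drag_dy, contour_depth, bounds):
    """Update the light's screen-plane position from a 2D drag and re-derive its
    depth from the height contour, so the user never controls depth directly.

    light_xy: current (column, row) position of the light in image coordinates.
    contour_depth: 2D depth map of the height contour (see the earlier sketch).
    bounds: (width, height) of the image, used to clamp the position.
    """
    x = float(np.clip(light_xy[0] + drag_dx, 0, bounds[0] - 1))
    y = float(np.clip(light_xy[1] + drag_dy, 0, bounds[1] - 1))
    z = contour_depth[int(round(y)), int(round(x))]   # depth supplied by the contour
    return (x, y), (x, y, z)

# Illustrative contour over a 128 x 128 image (flat here for simplicity).
contour = np.full((128, 128), 90.0)
xy = (64.0, 64.0)
xy, light_position_3d = move_light_on_contour(xy, drag_dx=12, drag_dy=-5,
                                              contour_depth=contour, bounds=(128, 128))
```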
The path 420 shown in
According to an embodiment, the processor 104 may also be used to control a reference plane light source indicator, such as either the first reference plane light source indicator 414 or the second reference plane light source indicator 416. The processor 104 may be configured to control one or more display properties of each reference plane light source indicator to indicate the distance of the virtual light source from the plane represented in the reference plane. For example, the processor 104 may use the intensity, the color, or the size of the reference plane light source indicator, or a combination of intensity, color, and size, to indicate the distance of the virtual light source from the plane represented in a particular reference plane.
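For illustration only, one possible mapping from the distance between the virtual light source and a reference plane to the display properties of the corresponding reference plane light source indicator is sketched below; the size, intensity, and color ranges are illustrative assumptions.

```python
def indicator_appearance(distance_to_plane, max_distance=50.0):
    """Map the light's distance from a reference plane to indicator properties.

    Nearer lights are drawn larger and brighter; the return value is an
    illustrative (size_px, intensity, rgb_color) triple.
    """
    closeness = max(0.0, 1.0 - min(abs(distance_to_plane), max_distance) / max_distance)
    size_px = 4 + int(12 * closeness)          # 4 px far away, 16 px when in-plane
    intensity = 0.2 + 0.8 * closeness          # never fully invisible
    color = (1.0, 1.0 * closeness, 0.0)        # yellow in-plane, red far away
    return size_px, intensity, color

in_plane = indicator_appearance(0.0)     # (16, 1.0, (1.0, 1.0, 0.0))
far_away = indicator_appearance(40.0)    # (6, 0.36, (1.0, 0.2, 0.0))
```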
As previously described, the first reference plane 404 represents a first plane along line 410 and the second reference plane 406 represents a second plane along line 412. According to the embodiment shown in
At the second time, represented in
In the example described with respect to
According to an embodiment, the user may select to control the position of either virtual light source independently, or the user may move both of the virtual light sources based on a single user input. According to an exemplary embodiment, the user may select one of the virtual light sources, such as by clicking on or selecting either the first light source indicator 409 or the second light source indicator 411. Then, the user may adjust the position of the selected virtual light source according to the technique described with respect to the method 300.
In the example shown in
This written description uses examples to disclose the invention, including the best mode, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if they have structural elements that do not differ from the literal language of the claims, or if they include equivalent structural elements with insubstantial differences from the literal language of the claims.