The invention relates to methods and apparatus for producing records of use of medical ultrasound imaging systems. Some embodiments produce records containing ultrasound image video signals and synthetic display element signals conveying information that may be displayed to an ultrasound technician during use of medical ultrasound imaging systems.
Ultrasound examinations may be performed on animals, such as humans, to acquire images useful in the treatment or diagnosis of a wide range of medical conditions. In ultrasound imaging, ultrasound pulses are transmitted into a body and reflected off of structures in the body (e.g., interfaces where there is a density change in the body). Reflected ultrasound pulses (echoes) are detected at a transducer. Timing and strength information for reflected pulses is used to construct images. Two-dimensional ultrasound images show a cross-sectional “slice” of anatomy traversed by the ultrasound pulses.
Some ultrasound apparatus comprise freehand ultrasound probes, which ultrasound technicians may move freely over the skin of a subject to acquire images of different slices of the subject's anatomy. An ultrasound technician may maneuver such probes to acquire images that show particular views of particular structures (e.g., views of structures prescribed by a physician).
Ultrasound images may be analyzed by medical personnel, such as radiologists, physicians and the like. Typically, ultrasound images are provided to medical personnel with indications of the anatomical structure(s) or region(s) depicted in the images and, optionally, the views thereof. An individual analyzing an ultrasound image may have difficulty determining whether the particular appearance of a structure in the ultrasound image is due to a characteristic of the subject or some aspect of the procedure by which the ultrasound image was acquired (e.g., difficulty experienced by the ultrasound technician in obtaining a suitable image, the orientation of the ultrasound probe relative to the structure, the configuration of the ultrasound apparatus, etc.). For instance, if an ultrasound probe is oriented such that it images a cylindrical structure at a skew angle, the cross-section of the structure depicted in the image will be oval, rather than circular as would be the case if the probe was oriented to image the structure transversely.
Apparatus for acquiring ultrasound images are becoming both more common and more user-friendly. As a result, ultrasound apparatus may now be used by operators having little or no formal training in the use of ultrasound apparatus and/or in anatomy. Where this is the case, it may be desirable to have a record of the use of the ultrasound apparatus as it was used by an operator to acquire an ultrasound image. Since the subject matter of ultrasound examinations may be, without exaggeration, a matter of life and death, it is desirable that such a record can be made without difficulty and, once made, is readily accessible, comprehensive and reliable.
Often serious consequences turn on diagnoses of medical conditions made using ultrasound images. Where missed or incorrect diagnoses lead to adverse consequences, litigation is often pursued by the adversely affected parties. In such litigation, having a record of the use of the ultrasound apparatus that acquired an ultrasound image used in making a diagnosis may be useful to the parties in discovering the truth of the matter. It may be desirable for doctors, hospitals and liability insurers to have ready access to such records (e.g., from a repository of such records) in the event that such records can demonstrate proper medical conduct. Such records may also be useful in defending or prosecuting lawsuits relating to sexual harassment and termination of employment.
There is a need for practical and cost effective methods and apparatus for providing information useful in the interpretation of ultrasound images.
In drawings that show non-limiting example embodiments:
Throughout the following description specific details are set forth in order to provide a more thorough understanding to persons skilled in the art. However, well known elements may not have been shown or described in detail to avoid unnecessarily obscuring the disclosure. Accordingly, the description and drawings are to be regarded in an illustrative, rather than a restrictive, sense.
User interface 20 is coupled to processing apparatus 22, and is operable to control the operation of system 12. In the illustrated embodiment, user interface 20 comprises a customized control panel 20A and display 16. User interface 20 may comprise user input interface apparatus such as a keyboard, a computer mouse, a touch screen display, a joystick, a trackball, combinations thereof, or the like, for example. In some embodiments, user interface 20 comprises a graphical user interface displayed on display 16. In some embodiments, user interface 20 comprises user input interface apparatus located on probe 14 (e.g., buttons, switches, a touch screen, and the like). User interface 20 may be operable to configure ultrasound imaging parameters of system 12, such as frequency, depth, gain, focus, compounding, and the like. User interface 20 may provide indications of ultrasound imaging parameters of system 12 (e.g., by displaying visual indications of such parameters on display 16).
In some embodiments, ultrasound system 12 is operable in one of a plurality of operating modes, each such mode characterized by a plurality of defined ultrasound imaging parameters. In some such embodiments, user interface 20 may be operated to select an operating mode, and may provide an indication of the selected operating mode and/or the ultrasound imaging parameters of system 12 associated therewith.
In use, an ultrasound transducer in probe 14 emits ultrasound pulses which reflect off of structures in the body of subject S (e.g., interfaces where there is a density change in the body of subject S). Reflected ultrasound pulses (echoes) are detected at the ultrasound transducer in probe 14. Signals generated by the transducer of probe 14 in response to echoes are communicated to processing apparatus 22 inside housing 18. The signals generated in response to echoes comprise timing and strength information, from which processing apparatus 22 constructs images of the structures which caused the echoes. Technician T may maneuver probe 14 over the skin of subject S to acquire different images of structures in the body of subject S.
Ultrasound images generated by processing apparatus 22 may be displayed on display 16. In some embodiments, processing apparatus 22 is configured to provide ultrasound images in an ultrasound image video signal that is provided to display 16. For example, processing apparatus 22 may be configured to generate a video signal comprising video frames that include ultrasound images. In some embodiments, system 12 may be operable to acquire and display ultrasound images in real-time or near real-time.
The appearance of ultrasound image 42 may change during operation of system 12. For example, where system 12 acquires and displays ultrasound images in real-time, ultrasound image 42 will change when technician T moves probe 14 to image different parts of the anatomy of subject S. For another example, ultrasound image 42 may be affected by ultrasound imaging parameters controllable via user interface 20.
The appearance of one or more of synthetic display elements 44 may change during operation of ultrasound system 12. For example:
In some embodiments, processing apparatus 22 is configured to produce a video signal comprising display image 40, and the video signal is provided to display 16, which renders the signal as display image 40. Processing apparatus 22 may be configured to produce the video signal by combining an ultrasound image video signal and one or more synthetic display element signals. Combining an ultrasound image video signal and a synthetic display element signal may comprise compositing synthetic display elements and ultrasound images in video frames (e.g., overlaying synthetic display elements on ultrasound images, or vice versa, or a combination of these).
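By way of non-limiting illustration, the following sketch shows one way such compositing might be implemented, assuming ultrasound frames are BGR numpy arrays and display elements are pre-rendered BGRA rasters; the function name and array conventions are assumptions for illustration, not part of the embodiments described above.

```python
import numpy as np

def composite_display_element(frame: np.ndarray, element: np.ndarray,
                              x: int, y: int) -> np.ndarray:
    """Alpha-blend a BGRA display element raster onto a BGR video frame
    with its top-left corner at (x, y). Assumes the element lies wholly
    within the frame."""
    h, w = element.shape[:2]
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    color = element[..., :3].astype(np.float32)
    alpha = element[..., 3:4].astype(np.float32) / 255.0  # per-pixel opacity
    frame[y:y + h, x:x + w] = (alpha * color + (1.0 - alpha) * roi).astype(np.uint8)
    return frame
```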
In some embodiments, display element processor 70 and video signal processor 80 are combined, and display element signal 74 is generated and combined with ultrasound image video signal 64 by modifying ultrasound image video signal 64. In some embodiments, ultrasound image processor 60 and video signal processor 80 are combined, and ultrasound image video signal 64 is generated and combined with display element signal 74 by modifying display element signal 74.
In some embodiments, a plurality of different synthetic display elements are comprised in a corresponding plurality of different synthetic display element signals, and video signal processor 80 combines the plurality of different synthetic display element signals with an ultrasound image video signal to produce a video signal.
In some embodiments, ultrasound system 12 is operable to generate a video record of its use. For example, processing apparatus 22 may be configured to store information pertaining to ultrasound exams conducted using probe 14 in a multimedia container file. In some embodiments, ultrasound system 12 is operable to store an ultrasound image video signal (e.g., ultrasound image video signal 64) and a synthetic display element signal (e.g., synthetic display element signal 74) as one or more data streams in a multimedia container file (e.g., a multimedia container file suitable for storing one or more different data streams, such as one or more video streams, audio streams, or the like, as well as synchronization information useful in playing the various streams together) for synchronous playback of the ultrasound image video signal and the synthetic display element signal.
Processing apparatus 22 may comprise a data store that can be used for storing video, such as a non-transitory data recording medium, a memory, a disk, or the like. In some embodiments, ultrasound system 12 comprises a data store external to processing apparatus 22 (e.g., removable media, storage on a server, etc.), and processing apparatus 22 is communicatively coupled to the external data store.
In some embodiments, ultrasound system 12 is operable to store an ultrasound image video signal and a synthetic display element signal as a single data stream in a multimedia container file by combining the ultrasound image video signal and the synthetic display element signal to produce a video signal, encoding the video signal as a video data stream, and storing the video data stream in the multimedia container file. For example, system 12 may be operable to store a video signal provided to display 16 (e.g., a video signal 84 produced by video signal processor 80), which comprises an ultrasound image video signal and a synthetic display element signal. In these embodiments, the stored video signal may comprise the same display images that were viewable by technician T during use of ultrasound system 12. When viewed by medical personnel, the stored video signal may provide information that elucidates the ultrasound images contained in the video signal (e.g., ultrasound imaging parameters indicated by a displayed synthetic display element, ultrasound image video showing a pattern of ultrasound images acquired, etc.).
In block 98, the display video signal is encoded as a video data stream. Block 98 may comprise encoding the display video signal with or without data compression. Block 98 may comprise encoding the display video signal using a video codec such as MPEG-4 Part 2 (e.g., DivX™, Xvid™, etc.), H.264/MPEG-4 AVC, MPEG-1 Part 2, H.262/MPEG-2 Part 2, JPEG 2000, Motion JPEG, Windows™ Media Video, Theora™, or the like. In some embodiments, blocks 97 and 98 are combined (e.g., the display video signal provided to the display may be an uncompressed video data stream).
In block 99, the video data stream produced in block 98 is stored. Block 99 may comprise storing the video data stream in its native format. In some embodiments, block 99 comprises storing the video data stream in a multimedia container file, such as an audio-video interleave (AVI) container file, a Matroska (MKV) container file, an MPEG program stream container file, an MPEG transport stream container file, a QuickTime™ multimedia container file, an Adobe™ Flash™ (FLV) container file, an MP4 (or other ISO/IEC 14496-14 based) container file, an Ogg™ container file, a DivX™ media format (DMF) container file, or the like. Block 99 may comprise storing the video data stream in a streaming multimedia container file (i.e., a type of multimedia container file suitable for delivering the video data stream as streaming media). In some embodiments, blocks 98 and 99 are combined. Block 99 may be performed simultaneously with block 98 or after block 98 is complete.
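As a non-limiting sketch of how blocks 98 and 99 might be realized together, the following example uses OpenCV to encode BGR frames with an MPEG-4 Part 2 codec and write them into an MP4 container; the file name, codec choice and frame parameters are illustrative assumptions.

```python
import cv2

def store_display_video(frames, path="exam_record.mp4", fps=30.0,
                        size=(1024, 768)):
    """Encode a display video signal (block 98) and store it in a
    multimedia container file (block 99). `frames` is an iterable of
    BGR frames already sized to `size` (width, height)."""
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")  # MPEG-4 Part 2 encoder
    writer = cv2.VideoWriter(path, fourcc, fps, size)
    for frame in frames:
        writer.write(frame)  # encoding and storing proceed together
    writer.release()  # finalize the container file
```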
In some circumstances, technician T may wish to record ultrasound images displayed on display 16 without certain synthetic display elements (e.g., display elements that might obscure ultrasound images in the recorded video) or with certain synthetic display elements not displayed to technician T (e.g., display elements more useful in interpreting recorded ultrasound images than in assisting technician T in acquiring the images). In some embodiments, user interface 20 may provide a control for selecting a set of synthetic display elements to be recorded.
In block 112, an identification of select synthetic display elements to be included in a record copy video signal is obtained. Block 112 may comprise obtaining an identification of select synthetic display elements provided by technician T through user interface 20 (e.g., before the ultrasound image video signal has been acquired, while the ultrasound image video signal is being acquired, or after the ultrasound image video signal has been acquired). The record copy video signal may contain the same display elements displayed during an ultrasound examination or different display elements than were displayed during an ultrasound examination (e.g., less than all display elements displayed during an ultrasound examination, display elements not displayed during an ultrasound examination, etc.).
In block 114, a record display element signal comprising the select synthetic display elements is combined with the ultrasound image video signal to produce a record copy video signal. Block 114 may comprise combining select synthetic display elements with ultrasound images in a manner that replicates or approximates the combining performed in block 110 (e.g., so that display elements common to the display copy video signal and the record copy video signal are rendered identically or similarly). In some embodiments, block 114 comprises combining the select synthetic display elements with the ultrasound image in a manner that is different from the combining performed in block 110. For example, display elements combined in block 110 to appear adjacent to an ultrasound image in the display copy video signal may be combined in block 114 to appear overlaid on the ultrasound image in the record copy video signal.
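A non-limiting sketch of blocks 112 and 114 follows, reusing the hypothetical composite_display_element helper sketched earlier; the element identifiers and the mapping of elements to rasters and positions are assumptions for illustration.

```python
def make_record_frame(ultrasound_frame, elements, selected_ids):
    """`elements` maps an element id to (raster, x, y). Only elements
    identified for the record copy (block 112) are composited with the
    ultrasound image (block 114)."""
    frame = ultrasound_frame.copy()
    for elem_id, (raster, x, y) in elements.items():
        if elem_id in selected_ids:
            frame = composite_display_element(frame, raster, x, y)
    return frame
```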
In block 116, the record copy video signal is encoded as a video data stream. Block 116 may comprise encoding the record copy video signal using a video codec, including without limitation any of the video codecs mentioned in connection with block 98 of method 90. In some embodiments, blocks 114 and 116 are combined.
In block 118, the video data stream is stored. Block 118 may comprise storing the video data stream in its native format and/or in a multimedia container file, including in any of the multimedia container files mentioned in connection with block 99 of method 90. Block 118 may be performed simultaneously with block 116 or after block 116 is complete.
In block 146, a synthetic display element signal is generated. In block 148, the display element signal is encoded as a video data stream. Block 148 may comprise rendering the display element signal as raster images and encoding the rendered images as a video data stream. For example, block 148 may comprise rendering display elements contained in the display element signal as pre-rendered subtitles. In some embodiments, block 148 comprises rendering display elements contained in the display element signal as raster images having at least one alpha channel (e.g., a transparent background). For example, block 148 may comprise rendering display elements contained in a synthetic display element signal as non-transparent image features over an alpha channel background. Block 148 may comprise encoding the display element raster images using a video codec, including any one of the video codecs mentioned in connection with block 98 of method 90.
Blocks 146 and 148 may comprise encoding the ultrasound image video signal and display element signal, respectively, using the same timescale, such that an ultrasound image and associated display elements (e.g., those displayed on display 16 simultaneously with the ultrasound image), will be displayed synchronously when the ultrasound image and display element video data streams are played together.
In block 150, the ultrasound image video data stream and the display element video data stream are stored in a common multimedia container file. Block 150 may be performed simultaneously with block 146 and/or block 148, or may be performed after block 146 and/or block 148 is complete. Block 150 may comprise storing the ultrasound image video data stream and the display element video data stream in any of the multimedia container files mentioned in connection with block 99 of method 90.
Storing separate video streams in a common multimedia container file may comprise assembling the separate video streams according to the format of the multimedia container file. Assembling separate video streams into a multimedia container file may comprise dividing the streams into “chunks” (also known as “atoms”, “packets” and “segments”), and interleaving the chunks. Assembling separate video streams into a multimedia container file may comprise adding metadata, headers and/or synchronization information, for example.
Block 150 comprises sub-blocks 152 and 154. In sub-block 152, the ultrasound image video data stream is stored as a first video track in the multimedia container file. In sub-block 154, the display element video data stream is stored as a second video track in the multimedia container file. In blocks 152 and 154, the second video track may be designated as a higher layer track than the first video track, such that when the tracks are layered for display, the display element video data stream is layered on top of the ultrasound image video data stream.
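For example, sub-blocks 152 and 154 might be realized with the ffmpeg command-line tool, which can mux multiple video tracks into a Matroska container. The following sketch assumes the two streams were already encoded to the named files (the names are illustrative).

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "ultrasound.mp4",        # encoded ultrasound image video data stream
    "-i", "display_elements.mkv",  # encoded display element video data stream
    "-map", "0:v:0",               # first video track (sub-block 152)
    "-map", "1:v:0",               # second video track (sub-block 154)
    "-c", "copy",                  # store both streams without re-encoding
    "exam_record.mkv",
], check=True)
```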
In some embodiments, block 148 of method 140 comprises encoding the synthetic display element signal as a plurality of different display element video data streams corresponding to the different synthetic display elements contained in the signal. In some such embodiments, block 154 comprises storing the plurality of different display element video data streams in a corresponding plurality of different tracks in the multimedia container file.
Since method 140 provides the ultrasound image video signal and display element signal in different data streams, medical personnel may view the ultrasound image video signal without viewing the display element signal, or may combine (e.g., layer) the ultrasound image video signal and display element signal and view the ultrasound image video signal and display element signal together. For example, a display element video track may be overlaid on an ultrasound image video track to replicate or approximate the appearance of display 16 during an ultrasound examination. It is not necessary that a display element video track be generated such that the display elements thereof are rendered identically or similarly to how they are rendered on the display 16.
Where a multimedia container file comprises a plurality of display element video tracks, medical personnel may combine the ultrasound image video track and select display element video tracks to view the ultrasound image video and select display elements together. In some embodiments, block 150 comprises storing metadata indicative of the display elements displayed on display 16 during an ultrasound examination, and this metadata may be used, for example, to determine a set of select display element video tracks that when overlaid on the ultrasound image video track produces a video that replicates or approximates the appearance of display 16 during the ultrasound examination.
In block 166, a synthetic display element signal is generated. In block 168, the display element signal is encoded as an adjunct data stream, such as a raster image stream (e.g., a subpicture stream), a text stream (e.g., a text-based subtitle script), or the like. For example, block 168 may comprise extracting text elements from a display element signal and encoding the text elements as a text-based soft subtitle stream (such as in SubRip Text, Advanced SubStation Alpha, Ogg™ Writ, Structured Subtitle Format, Universal Subtitle Format, XSUB, Binary Information For Scenes (BIFS), 3GPP Timed Text, MPEG-4 Timed Text (TTXT), or like formats). For another example, block 168 may comprise obtaining raster images from a display element signal (e.g., by rendering display elements contained in the display element signal as raster images, by extracting text elements from a display element signal and rendering the text elements as raster images, etc.) and encoding the raster images as a raster image stream (such as subpicture units (e.g., DVD subtitles), a Blu-Ray™ Presentation Graphic Stream (PGS), the VobSub format, the subpicture file format (SUP), or the like).
Blocks 166 and 168 may comprise encoding the ultrasound image video signal and synthetic display element signal, respectively, using the same timescale, such that an ultrasound image and associated display elements (e.g., the display elements displayed on display 16 simultaneously with the ultrasound image), will be displayed synchronously when the ultrasound image video stream and display element adjunct data streams are played together.
In block 170, the ultrasound image video data stream and the display element adjunct data stream are stored in a single multimedia container file. Block 170 may comprise storing the ultrasound image video data stream and the display element adjunct data stream in any of the multimedia container files mentioned in connection with block 99 of method 90. Block 170 comprises sub-blocks 172 and 174. In sub-block 172, the ultrasound image video data stream is stored as a video track in the multimedia container file. In sub-block 174, the display element adjunct data stream is stored as an adjunct track in the multimedia container file. Sub-block 174 may comprise storing the display element adjunct data stream as a subtitle track, a subpicture track, a presentation graphics track, or the like.
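By way of non-limiting illustration, the following sketch encodes timed display element text as a SubRip (SRT) soft subtitle stream (block 168) and stores it as a subtitle track alongside the ultrasound image video track (sub-blocks 172 and 174); the event content and file names are assumptions.

```python
import subprocess

def srt_time(seconds: float) -> str:
    ms = int(seconds * 1000)
    return "{:02d}:{:02d}:{:02d},{:03d}".format(
        ms // 3600000, ms // 60000 % 60, ms // 1000 % 60, ms % 1000)

def write_srt(events, path="display_elements.srt"):
    """events: list of (start_s, end_s, text) derived from the display
    element signal, e.g. imaging-parameter readouts."""
    with open(path, "w") as f:
        for i, (start, end, text) in enumerate(events, 1):
            f.write(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n\n")

write_srt([(0.0, 5.0, "Depth: 6 cm  Gain: 45%  Freq: 5 MHz")])
subprocess.run([
    "ffmpeg", "-i", "ultrasound.mp4", "-i", "display_elements.srt",
    "-map", "0", "-map", "1",
    "-c", "copy", "-c:s", "srt",  # subtitle stream stored as the adjunct track
    "exam_record.mkv",
], check=True)
```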
In some embodiments, block 168 comprises encoding information specifying the appearance and/or location of display elements in a frame. Such appearance and location information may be configured to replicate or approximate the appearance and location of display elements on display 16 during an ultrasound examination. For example, block 168 may comprise encoding a raster image of a menu panel and information specifying a location of the raster image in a video frame that corresponds to the location of the menu panel on display 16 during an ultrasound examination.
In some embodiments, block 168 comprises encoding the synthetic display element signal as a plurality of different display element adjunct data streams corresponding to the different synthetic display elements contained in the display element signal. In some such embodiments, block 174 comprises storing the plurality of different display element adjunct data streams in a corresponding plurality of different adjunct tracks in the multimedia container file.
Since method 160 provides the ultrasound image video signal and display element signal in different data streams, medical personnel may view the ultrasound image video signal without viewing the display element signal, or may combine (e.g., layer) the ultrasound image video signal and display element signal and view the ultrasound image video signal and display element signal together. For example, a display element adjunct track may be overlaid on an ultrasound image video track to replicate or approximate the appearance of display 16 during an ultrasound examination. It is not necessary that a display element adjunct track be generated such that the display elements thereof are rendered identically or similarly to how they are rendered on the display 16.
Where a multimedia container file comprises a plurality of display element adjunct tracks, medical personnel may combine the ultrasound image video track and select display element adjunct tracks to view the ultrasound image video and select display elements together. In some embodiments, block 170 comprises storing metadata indicative of the display elements displayed on display 16 during an ultrasound examination, and this metadata may be used, for example, to determine a set of select display element adjunct tracks that when overlaid on the ultrasound image video track produces a video that simulates the appearance of display 16 during the ultrasound examination. For example, an association between display elements and display element adjunct tracks may be pre-defined, and a set of select display element adjunct tracks determined from metadata indicative of the display elements displayed on display 16 during an ultrasound examination using a lookup table that embodies the predefined association. For another example, metadata may identify display elements displayed on display 16 during an ultrasound examination by identifiers used to denote corresponding display element adjunct tracks in a multimedia container file.
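A minimal sketch of such a lookup table follows; the element identifiers and track numbers are purely illustrative assumptions.

```python
# Hypothetical pre-defined association of display elements to adjunct tracks.
ELEMENT_TO_TRACK = {
    "depth_readout": 2,
    "gain_readout": 3,
    "mode_banner": 4,
}

def tracks_for_exam(displayed_elements):
    """Given metadata naming the elements shown on display 16 during an
    exam, return the adjunct tracks to overlay on the video track."""
    return sorted(ELEMENT_TO_TRACK[e] for e in displayed_elements
                  if e in ELEMENT_TO_TRACK)
```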
Returning to
Non-limiting aspects of the invention include medical ultrasound machines comprising interfaces for receiving video signals from video cameras connected to the interfaces and medical ultrasound machines comprising video cameras and video recording apparatus for recording video images concurrently with performing ultrasound examinations.
In the illustrated ultrasound environment 10, field of view 26 encompasses probe 14, a portion of subject S that includes structures imaged by probe 14, and the hand of technician T used to maneuver probe 14. In some embodiments, field of view 26 is adjustable. For example, camera 24 may be manually repositionable and/or may comprise repositioning actuators (e.g., servo motors, etc.). Camera 24 may comprise zoom functionality (e.g., digital zoom, a zoom lens, etc.).
Camera 24 may be controllable by technician T. In some embodiments, user interface 20 provides a control for the operation of camera 24. For example, user interface 20 may provide a control to change field of view 26 of camera 24, such as by panning and zooming. In some embodiments, probe 14 may comprise a position sensor and system 12 may comprise a position monitoring system (not shown) operable to determine position information indicative of the position of probe 14 in space, and camera 24 may be configured to track the position of probe 14 automatically using the position information determined by the position monitoring system, such that field of view 26 follows probe 14.
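One non-limiting way camera 24 might track probe 14 is a simple proportional controller that steps the camera's pan axis toward the probe bearing reported by the position monitoring system, as sketched below; the control gain, step limit and units are assumptions.

```python
def pan_adjustment(probe_bearing_deg: float, camera_pan_deg: float,
                   gain: float = 0.5, max_step_deg: float = 2.0) -> float:
    """Return a bounded pan step (degrees) that moves field of view 26
    toward the reported bearing of probe 14."""
    step = gain * (probe_bearing_deg - camera_pan_deg)
    return max(-max_step_deg, min(max_step_deg, step))
```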
In some embodiments, ultrasound system 12 generates a video record of its use that includes subject images acquired by camera 24. For example, processing apparatus 22 may be configured to store an ultrasound image video signal and a subject image video signal as one or more data streams in a multimedia container file for synchronous playback of the ultrasound image video signal and the subject image video signal. When viewed by medical personnel, the one or more data streams may provide information (e.g., position and motion of ultrasound probe 14 relative to subject S in time with ultrasound images, etc.) that elucidates the ultrasound images contained in the data stream(s).
In block 208, a subject image video signal is acquired. In optional block 212, an identification of select synthetic display elements to be included in a record copy video signal is obtained. Block 212 may comprise obtaining an identification of select synthetic display elements provided by technician T through user interface 20 (e.g., before recording of video is commenced). In block 214, a record display element signal comprising the select synthetic display elements is combined with the ultrasound image video signal and the subject image video signal to produce a record copy video signal (e.g., by compositing ultrasound images, synthetic display elements and subject images in video frames). Block 214 may comprise overlaying a subject image video signal on an ultrasound image track as a picture-in-picture element, or vice versa, for example.
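A non-limiting sketch of the picture-in-picture combining contemplated in block 214 follows, assuming BGR numpy frames; the inset scale and placement are illustrative.

```python
import cv2

def picture_in_picture(ultrasound_frame, subject_frame, scale=0.25, margin=10):
    """Inset a downscaled subject image in the top-right corner of an
    ultrasound frame."""
    h, w = ultrasound_frame.shape[:2]
    pip = cv2.resize(subject_frame, (int(w * scale), int(h * scale)))
    ph, pw = pip.shape[:2]
    out = ultrasound_frame.copy()
    out[margin:margin + ph, w - margin - pw:w - margin] = pip
    return out
```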
In embodiments that lack optional block 212, the record display element signal may be the same as the output display element signal, and block 214 comprises combining the output display element signal, the ultrasound image video signal and the subject image video signal. Some such embodiments lack block 210, and the record copy video signal produced in block 214 may be provided to a display.
In block 216, the record copy video signal is encoded as a video data stream. Block 216 may comprise encoding the record copy video signal using a video codec, including without limitation any of the video codecs mentioned in connection with block 98 of method 90. In block 218, the video data stream is stored. Block 218 may comprise storing the video data stream in its native format and/or in a multimedia container file, including in any of the multimedia container files mentioned in connection with block 99 of method 90.
In block 243, a synthetic display element signal is generated. In block 244, the display element signal is encoded as a video data stream. Block 244 may comprise rendering display elements contained in the display element signal as one or more raster images and encoding the rendered images as a video data stream. For example, block 244 may comprise rendering display elements contained in the display element signal as pre-rendered subtitles. In some embodiments, block 244 comprises rendering display elements contained in the display element signal as one or more raster images over an alpha channel (transparent) background. Block 244 may comprise rendering display elements contained in the display element signal as one or more raster images that replicate or approximate the appearance of the display elements on display 16. Block 244 may comprise encoding the display element signal using a video codec, including any one of the video codecs mentioned in connection with block 98 of method 90.
In block 245, a subject image video signal is acquired. In block 246, the subject image video signal is encoded as a video data stream. Block 246 may comprise encoding the subject image video signal acquired in block 245 using a video codec, including without limitation any of the video codecs mentioned in connection with block 98 of method 90.
Blocks 242, 244 and 246 may comprise encoding the ultrasound image video signal, display element signal and subject image video signal, respectively, using the same timescale, such that an ultrasound image, associated display elements (e.g., those displayed on display 16 simultaneously with the ultrasound image) and an associated subject image (e.g., an image of subject S, technician T and probe 14 acquired at the same time as the ultrasound image was acquired), will be displayed synchronously when the ultrasound image, display element and subject image video data streams are played together.
In block 250, the ultrasound image video data stream, the display element video data stream and the subject image video data stream are stored in a multimedia container file. Block 250 may comprise storing these video streams in any of the multimedia container files mentioned in connection with block 99 of method 90. Block 250 comprises sub-blocks 252, 254 and 256. In sub-block 252, the ultrasound image video data stream is stored as a first video track in the multimedia container file. In sub-block 254, the display element video data stream is stored as a second video track in the multimedia container file. In sub-block 256, the subject image video data stream is stored as a third video track in the multimedia container file. In blocks 252, 254 and 256, the second and third video tracks may be designated as higher layer tracks than the first video track, such that when the video tracks are layered, the display element video data stream and subject image video data stream are layered on top of the ultrasound image video data stream.
In some embodiments, block 244 of method 240 comprises encoding a plurality of different synthetic display elements as a plurality of corresponding different display element video data streams, and sub-block 254 comprises storing the plurality of different display element video data streams in a corresponding plurality of different tracks in the multimedia container file.
Since method 240 provides the ultrasound image video signal, display element video signal and subject image video signal as different data streams, medical personnel may view the component videos separately (e.g., at different times or simultaneously on different displays), or may combine two or more of the component data streams (e.g., by layering) and view the component videos together on the same display. For example, a display element video track may be overlaid on an ultrasound image video track to simulate the appearance of display 16 during an ultrasound examination. For another example, a subject image video track may be overlaid on an ultrasound image track as a picture-in-picture element.
In block 266, a synthetic display element signal is generated. In block 268, the display element signal is encoded as an adjunct data stream, such as a raster image stream (e.g., a subpicture stream), a text stream (e.g., a text-based subtitle script), or the like. For example, block 268 may comprise extracting and/or generating text from the synthetic display element signal and encoding the text as a text-based soft subtitle stream. For another example, block 268 may comprise rendering the display element signal as raster images and encoding the graphics as a raster image stream.
In some embodiments, block 268 comprises encoding information specifying the appearance and location of graphics elements in a frame. Such appearance and location information may be configured to replicate the appearance and location of display elements on display 16 during an ultrasound examination (e.g., relative to a simultaneously displayed ultrasound image).
Blocks 264 and 268 may comprise encoding the combined image signal and display element signal, respectively, using the same timescale, such that a combined image (e.g., a composite of simultaneous ultrasound and subject images) and associated display elements (e.g., those displayed on display 16 simultaneously with the ultrasound image) will be displayed synchronously when the combined image video data stream and the display element adjunct data stream are played together.
In block 270, the combined image video data stream and the display element adjunct data stream are stored in a multimedia container file. Block 270 comprises sub-blocks 272 and 274. In sub-block 272, the combined image video data stream is stored as a video track in the multimedia container file. In sub-block 274, the display element adjunct data stream is stored as an adjunct track in the multimedia container file. Sub-block 274 may comprise storing the display element data stream as a subtitle track, a subpicture track, a presentation graphics track, or the like.
In some embodiments, block 268 comprises encoding a plurality of different display element signals as a plurality of corresponding different adjunct data streams, and block 274 comprises storing the plurality of different adjunct data streams in a corresponding plurality of different adjunct tracks in the multimedia container file.
As compared with method 240, method 260 may provide output having a smaller memory footprint, since the output of method 260 comprises only one video data stream, rather than two.
Method 280 further differs from method 240 in that the step of storing the ultrasound image, display element and subject image streams in the same container file (block 290 in method 280) comprises storing the subject image adjunct data stream as an adjunct track of the multimedia container file (sub-block 294), rather than as a video track (sub-block 256 of method 240).
As compared with method 240, method 280 may provide output having a smaller memory footprint, since the output of method 280 comprises only one video data stream, rather than two.
In block 306, a subject image video signal is acquired. In block 307, a synthetic display element signal is generated. In block 308, the subject image video signal and the synthetic display element signal are combined. In some embodiments, block 308 comprises rendering the display elements contained in the display element signal as one or more raster images. In some such embodiments, block 308 may comprise overlaying one or more of the raster images on the subject image video signal and/or non-overlappingly compositing one or more of the raster images with the subject image video signal (e.g., as a split-screen image signal). In block 309, the combined subject image and display element signal is encoded as an adjunct data stream (e.g., a subpicture stream, a presentation graphics stream, etc.).
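By way of non-limiting illustration, the non-overlapping (split-screen) compositing mentioned for block 308 might be as simple as the following, assuming equal-height, 3-channel BGR frames:

```python
import numpy as np

def split_screen(subject_frame, element_raster):
    """Place the display element raster beside, rather than over, the
    subject image (frames must share the same height and channel count)."""
    return np.hstack([subject_frame, element_raster])
```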
Blocks 304 and 309 may comprise encoding the ultrasound image video signal and the combined subject image video signal and display element signal, respectively, using the same timescale, such that an ultrasound image and an associated subject image and display elements (e.g., a subject image acquired simultaneously with the ultrasound image and display elements displayed on display 16 simultaneously with the ultrasound image) will be displayed synchronously when the ultrasound image video data stream and the combined subject image and display element adjunct data stream are played together.
In block 310, the ultrasound image video data stream and the combined subject image and display element adjunct data stream are stored in a multimedia container file. Block 310 comprises sub-blocks 312 and 314. In sub-block 312, the ultrasound image video stream is stored as a video track in the multimedia container file. In sub-block 314, the combined subject image and display element adjunct data stream is stored as an adjunct track in the multimedia container file. Sub-block 314 may comprise storing the combined subject image and display element adjunct data stream as a subpicture track, a presentation graphics track, or the like.
As compared with method 280, method 300 may provide output having a smaller memory footprint, since the output of method 300 comprises only one adjunct data stream, rather than two.
Controller 402 is configured to communicate with user interface 20 via user interface signal 402A. Controller 402 is also configured to control the operation of the components of processing apparatus 400 (e.g., to coordinate performance of steps of methods described herein). Ultrasound image processor 404 is configured to construct ultrasound images from ultrasound echo data 404A, and to generate an ultrasound image video signal containing ultrasound images. Camera image processor 406 is configured to receive image data 406A (e.g., subject image video data) from camera 24, and to produce a subject image video signal. Camera image processor 406 may be configured to process received image data, such as by cropping, zooming, downsampling, modifying color depth, changing frame rate and the like. Display element generator 408 is configured to generate a synthetic display element signal based on information provided by controller 402.
Apparatus 400 comprises a first signal combiner 410. Ultrasound image processor 404 may be configured to provide an ultrasound image video signal to signal combiner 410. Camera image processor 406 may be configured to provide a subject image video signal to signal combiner 410. Display element generator 408 may be configured to provide a display element signal to signal combiner 410. Signal combiner 410 is configured to produce one or more combined video signals by combining any two or more of a received ultrasound image video signal, subject image video signal and display element signal.
Signal combiner 410 may be configured to combine received signals by layering signals (or portions thereof), by non-overlappingly compositing signals (or portions thereof), by doing combinations of these, and the like. Signal combiner 410 may be configured to convert display elements in a received display element signal into raster images for combining. Signal combiner 410 may be configured to downsample, reduce the color depth and/or decrease the frame rate of received signals and/or images derived from received signals.
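A minimal sketch of such signal conditioning follows; the output size, decimation factor and color depth are illustrative assumptions.

```python
import cv2

def condition_frames(frames, out_size=(640, 480), keep_every=2, bits=6):
    """Yield every `keep_every`-th frame (decreasing the frame rate),
    resized to `out_size` (downsampling), with color depth reduced to
    `bits` bits per channel."""
    shift = 8 - bits
    for i, frame in enumerate(frames):
        if i % keep_every:
            continue
        small = cv2.resize(frame, out_size)
        yield (small >> shift) << shift  # quantize each 8-bit channel
```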
Apparatus 400 comprises a second signal combiner 412. Camera image processor 406 may be configured to provide a subject image video signal to signal combiner 412. Display element generator 408 may be configured to provide a display element signal to signal combiner 412. Signal combiner 412 is configured to produce a combined adjunct signal by combining a received subject image video signal and a display element signal. Signal combiner 412 may be configured to combine received signals by layering signals (or portions thereof), by non-overlappingly compositing signals (or portions thereof), by doing combinations of these, and the like. Signal combiner 412 may be configured to convert display elements in a received display element signal into raster images for combining. Signal combiner 412 may be configured to downsample, reduce the color depth and/or decrease the frame rate of received signals and/or images derived from received signals.
In some embodiments, signal combiners 410 and 412 are combined.
Apparatus 400 comprises video data stream encoder 414. Ultrasound image processor 404, camera image processor 406, display element generator 408 and/or signal combiner 410 may be configured to provide video signals to encoder 414. Encoder 414 is configured to encode received video signals as video data streams. Encoder 414 may comprise any of the codecs mentioned in connection with block 98 of method 90, for example. Encoder 414 may be configured to downsample, reduce the color depth and/or decrease the frame rate of received video signals.
Apparatus 400 comprises adjunct data stream encoder 416. Camera image processor 406, display element generator 408 and/or signal combiner 412 may be configured to provide adjunct data signals to encoder 416. Encoder 416 is configured to encode received adjunct data signals as adjunct data streams. Encoder 416 may be configured to encode adjunct data signals containing text or raster images. For example, encoder 416 may be configured to encode a combined subject image and display element adjunct data signal from signal combiner 412 as a raster image stream (e.g., a subpicture stream). For another example, encoder 416 may be configured to encode a display element signal received from display element generator 408 as a text stream (e.g., a text-based subtitle script).
In some embodiments, encoder 416 is configured to extract text elements from a display element signal and encode the text elements as a text-based soft subtitle stream (such as subtitle stream according to any of the formats mentioned in connection with block 168 of method 160). In some embodiments, encoder 416 is configured to obtain raster images from a display element signal (e.g., by rendering display elements contained in the display element signal as raster images, by extracting text elements from a display element signal and rendering the text elements as raster images, etc.) and to encode the raster images as a raster image stream (such as raster stream according to any of the formats mentioned in connection with block 168 of method 160). Encoder 416 may be configured to downsample, reduce the color depth and/or decrease the frame rate of received signals and/or images derived from received signals.
In some embodiments, user interface 20 is operable to acquire narration (e.g., narration provided by an ultrasound technician via a microphone, keyboard, and/or the like) and to provide a narration signal embodying the narration (e.g., in audio and/or text formats). In some such embodiments, such a narration signal may be provided via controller 402 to encoder 416, and encoder 416 is configured to encode the narration signal as an adjunct data stream. For example, encoder 416 may be configured to encode a text-based narration signal as a subtitle data stream. For another example, encoder 416 may be configured to encode an audio narration signal as an audio data stream. Narration provided by an ultrasound technician to explain her actions during an ultrasound examination (e.g., for instructional, diagnostic or other purposes) may thus be encoded for inclusion along with an associated ultrasound image video stream.
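For example (a non-limiting sketch; the file names and codec choices are assumptions), audio and text narration captured during an exam might be muxed into the exam's container file with ffmpeg:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "exam_record.mkv",  # ultrasound image video data stream(s)
    "-i", "narration.wav",    # spoken narration from the technician
    "-i", "narration.srt",    # typed narration as timed text
    "-map", "0", "-map", "1:a", "-map", "2:s",
    "-c:v", "copy", "-c:a", "aac", "-c:s", "srt",
    "exam_with_narration.mkv",
], check=True)
```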
Apparatus 400 comprises multimedia container file packager 418. Multimedia container file packager 418 is configured to package encoded data streams received from video data stream encoder 414 and/or adjunct data stream encoder 416. Multimedia container file packager 418 is configured to package one or more video data streams and/or one or more adjunct data streams in a multimedia container file. Multimedia container file packager 418 may be configured to package video data streams and/or adjunct data streams into any of the container files mentioned in connection with block 99 of method 90, for example.
Apparatus 400 comprises memory 420. Memory 420 may store video data streams, adjunct data streams, and multimedia container files containing one or more video data streams or adjunct data streams.
Variations on the example embodiments disclosed herein are within the scope of the invention, including without limitation:
Providing an ultrasound image video signal and a subject image video signal in a single multimedia container file may provide one or more of the following advantages:
Providing an ultrasound image video signal and a synthetic display element signal in a single multimedia container file may provide one or more of the following advantages:
A single multimedia container file containing an ultrasound image video signal and one or more of a synthetic display element signal and a subject image video signal provides a portable and accessible record of an ultrasound examination, which may be stored using robust and commonly available information management systems and/or streamed over communication networks. Such a record may be useful for:
Systems and apparatus described herein may comprise software, firmware, hardware, or any combination(s) of software, firmware, or hardware suitable for the purposes described herein. Software and other modules may reside on servers, workstations, personal computers, computerized tablets, and other devices suitable for the purposes described herein. Those skilled in the relevant art will appreciate that aspects of the system can be practiced with other communications, data processing, or computer system configurations, including: hand-held devices (including personal digital assistants (PDAs)), multi-processor systems, microprocessor-based or programmable consumer electronics, mini-computers, mainframe computers, and the like. Furthermore, aspects of the system can be embodied in a special purpose computer or data processor that is specifically programmed, configured, or constructed to implement one or more of the methods, or parts thereof, disclosed herein.
Software and other modules may be accessible via local memory, via a network, via a browser or other application in an ASP context, or via other means suitable for the purposes described herein. Examples of the technology can also be practiced in distributed computing environments where tasks or modules are performed by remote processing devices, which are linked through a communications network, such as a Local Area Network (LAN), Wide Area Network (WAN), or the Internet. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Data structures described herein (e.g., data streams, container files, etc.) may comprise computer files, variables, programming arrays, programming structures, or any electronic information storage schemes or methods, or any combinations thereof, suitable for the purposes described herein. User interface elements described herein may comprise elements from graphical user interfaces, command line interfaces, and other interfaces suitable for the purposes described herein.
Image processing and processing steps as described above may be performed in hardware, software or suitable combinations of hardware and software. For example, such image processing may be performed by a data processor (such as one or more microprocessors, graphics processors, digital signal processors or the like) executing software and/or firmware instructions which cause the data processor to implement one or more methods, or parts thereof, disclosed herein. Such methods may also be performed by logic circuits which may be hard configured or configurable (such as, for example, logic circuits provided by a field-programmable gate array (“FPGA”)).
Certain implementations of the invention comprise computer processors which execute software instructions which cause the processors to perform a method of the invention. For example, one or more processors in a processing apparatus or the like may implement methods as described herein by executing software instructions in a program memory accessible to the processors.
The invention may also be provided in the form of a program product. The program product may comprise any non-transitory medium which carries a set of computer-readable signals comprising instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of forms. The program product may comprise, for example, physical media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, hardwired or preprogrammed chips (e.g., EEPROM semiconductor chips), nanotechnology memory, or the like. The computer-readable signals on the program product may optionally be compressed or encrypted. Computer instructions, data structures, and other data used in the practice of the technology may be distributed over the Internet or over other networks (including wireless networks), on a propagated signal on a propagation medium (e.g., an electromagnetic wave(s), a sound wave, etc.) over a period of time, or they may be provided on any analog or digital network (packet switched, circuit switched, or other scheme).
Where a component (e.g. processing apparatus, processor, image processor, signal combiner, data stream encoder, multimedia container file packager, controller, etc.) is referred to above, unless otherwise indicated, reference to that component (including a reference to a “means”) should be interpreted as including as equivalents of that component any component which performs the function of the described component (i.e., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated exemplary embodiments of the invention.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense, as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to.” As used herein, the terms “connected,” “coupled,” or any variant thereof mean any connection or coupling, either direct or indirect, between two or more elements; the coupling or connection between the elements can be physical, logical, or a combination thereof. Where the context permits, words in the above Detailed Description using the singular or plural number may also include the plural or singular number respectively. The word “or,” in reference to a list of two or more items, covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
The above detailed description of examples of the technology is not intended to be exhaustive or to limit the system to the precise form disclosed above. While specific examples of, and examples for, the system are described above for illustrative purposes, various equivalent modifications are possible within the scope of the system, as those skilled in the relevant art will recognize. For example, while processes or blocks are presented in a given order, alternative examples may perform routines having steps, or employ systems having blocks, in a different order, and some processes or blocks may be deleted, moved, added, subdivided, combined, and/or modified to provide alternatives or subcombinations. Each of these processes or blocks may be implemented in a variety of different ways. Also, while processes or blocks are at times shown as being performed in series, these processes or blocks may instead be performed in parallel, or may be performed at different times.
The teachings of the technology provided herein can be applied to other systems, not necessarily the system described above. The elements and acts of the various examples described above can be combined to provide further examples. Aspects of the system can be modified, if necessary, to employ the systems, functions, and concepts of the various references described above to provide yet further examples of the technology.
These and other changes can be made to the system in light of the above Detailed Description. While the above description describes certain examples of the system, and describes the best mode contemplated, no matter how detailed the above appears in text, the system can be practiced in many ways. Details of the system may vary considerably in implementation, while still being encompassed by the system disclosed herein. As noted above, particular terminology used when describing certain features or aspects of the system should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the system with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the system to the specific examples disclosed in the specification, unless the above Detailed Description section explicitly defines such terms. Accordingly, the actual scope of the system encompasses not only the disclosed examples, but also all equivalent ways of practicing or implementing the technology under the claims.
Those skilled in the art will appreciate that certain features of embodiments described herein may be used in combination with features of other embodiments described herein, and that embodiments described herein may be practiced or implemented without all of the features ascribed to them herein. Such variations on described embodiments that would be apparent to the skilled addressee, including variations comprising mixing and matching of features from different embodiments, are within the scope of this invention.
As will be apparent to those skilled in the art in the light of the foregoing disclosure, many alterations, modifications, additions and permutations are possible in the practice of this invention without departing from the spirit or scope thereof. The embodiments described herein are only examples. Other example embodiments may be obtained, without limitation, by combining features of the disclosed embodiments. It is therefore intended that the following appended claims and claims hereafter introduced are interpreted to include all such alterations, modifications, permutations, additions, combinations and sub-combinations as are within their true spirit and scope.
This application claims the benefit under 35 U.S.C. §119 of U.S. Patent Application No. 61/430,806 filed on 7 Jan. 2011 and entitled METHODS AND APPARATUS FOR PRODUCING VIDEO RECORDS OF USE OF MEDICAL ULTRASOUND IMAGING SYSTEMS, which is hereby incorporated by reference.