The present disclosure relates to ultrasound imaging and, more particularly, to a method and system for improving ultrasound image quality by reducing speckle.
Ultrasound imaging is an important and attractive tool for a wide variety of applications (e.g., diagnostic medical imaging, non-diagnostic medical imaging, etc.). However, the quality of ultrasound images is usually degraded by coherent wave interference, known as speckle, which shows up as small-scale brightness fluctuations or mottling superimposed on parts of the image. Compounding is a speckle-reduction and contrast-enhancing technique. Beneficially, compounding increases the signal-to-noise ratio of the image, which improves its image quality (e.g., contrast resolution). Compounding techniques include spatial compounding and frequency compounding. Compared to spatial compounding, frequency compounding is more robust against tissue motion because sequential vectors, rather than frames, are summed together for compounding. In frequency compounding, images with different characteristics are summed incoherently. The drawback of frequency compounding is resolution degradation.
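As a simple illustration of why incoherent compounding reduces speckle, the following sketch (not part of the disclosure; all values are illustrative) simulates fully developed speckle as a Rayleigh-distributed envelope and shows that incoherently averaging N independent looks raises the envelope signal-to-noise ratio by roughly a factor of √N.

```python
import numpy as np

# Illustrative sketch: incoherent compounding of N independent speckle "looks".
# A fully developed speckle envelope is Rayleigh distributed (SNR ~ 1.91);
# averaging N uncorrelated looks raises that SNR by roughly sqrt(N).
rng = np.random.default_rng(0)

def speckle_look(shape):
    # Complex circular Gaussian scattering -> Rayleigh-distributed envelope.
    return np.hypot(rng.normal(size=shape), rng.normal(size=shape))

shape = (256, 256)
single = speckle_look(shape)
compounded = np.mean([speckle_look(shape) for _ in range(4)], axis=0)

snr = lambda img: img.mean() / img.std()
print(f"single-look SNR:       {snr(single):.2f}")      # ~1.9
print(f"4-look compounded SNR: {snr(compounded):.2f}")  # roughly 2x higher
```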
Examples of high resolution frequency compounding methods include wide-band frequency compounding and two-firing harmonic frequency compounding. A two-firing harmonic frequency compounding method is used in the EDAN U50 portable color Doppler diagnostic system, which is illustrated in
Harmonic signals 108 in the system 100 are isolated by summing the beam from the first firing 102 with the beam from the second firing 104. The summation cancels out the linear signal. The summed signal is then provided to a depth-dependent, band-pass filter 112 that is configured to pass the harmonic frequency (approximately twice the transmission frequency of the fundamental signal) while rejecting other frequencies. The depth-dependent, band-pass filter 112 is modified as a function of depth so that the filter adjusts to the reduction in signal frequency caused by attenuation. Following filtration, the signal is envelope detected 116 using a Hilbert filter (to produce a phase shift).
The fundamental signal 110 is isolated from the harmonic signal by taking the difference between the first beam from the first firing 102 (stored in a buffer) and the second beam from the second firing 104. Because the transmissions of the first and second firings 102 and 104 are inverted, subtracting the received signals cancels the non-linear signals and improves the signal-to-noise ratio of the beam through averaging. The resulting signal from the subtraction is provided to a depth-dependent, band-pass filter 114 that is configured to pass the fundamental signal while rejecting other frequencies. Analogous to the filter for isolating the harmonic component, this filter is depth dependent to adjust for attenuation. Following filtering, the signal is envelope detected using a Hilbert filter to produce a phase shift.
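The sum/difference isolation in the system 100 can be sketched as follows; the two-component signal model, pulse shape, and amplitudes below are assumptions made for illustration and are not taken from the system 100.

```python
import numpy as np

# Hypothetical two-firing pulse-inversion model (values are illustrative):
#   r1 =  fundamental + harmonic + noise
#   r2 = -fundamental + harmonic + noise  (inverted transmit flips the linear
#                                          part but not the second harmonic)
fs, f0 = 40e6, 3.0e6                      # assumed sampling / transmit frequencies
t = np.arange(0, 20e-6, 1 / fs)
env = np.exp(-((t - 10e-6) ** 2) / (2 * (2e-6) ** 2))   # toy pulse envelope

fund = env * np.cos(2 * np.pi * f0 * t)
harm = 0.1 * env * np.cos(2 * np.pi * 2 * f0 * t)
rng = np.random.default_rng(1)
r1 = fund + harm + 0.01 * rng.normal(size=t.size)
r2 = -fund + harm + 0.01 * rng.normal(size=t.size)

harmonic_component = 0.5 * (r1 + r2)      # linear parts cancel; harmonic remains
fundamental_component = 0.5 * (r1 - r2)   # harmonic cancels; fundamental remains,
                                          # with noise reduced by the averaging
```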
The detected harmonic and fundamental signals are weighted 122 using depth-dependent gain elements 118 and 120, respectively. For shallow depths, the gain elements 118 and 120 are set to emphasize the harmonic signal, while at deep depths the fundamental signal is given more weight. This allows the image to benefit from the increased resolution and reduced clutter of the harmonic signal near the transducer, and from the increased signal strength and reduced noise of the fundamental signal at deep depths. After a weighted combination 122 of the detected signals, further processing 124 is done on the combined signal to create an image. The two-firing harmonic frequency compounding method is used to reduce image speckle. However, there is still a need for improved systems and methods for reducing speckle in ultrasound images.
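A minimal sketch of the depth-dependent weighting 118/120 and combination 122 is shown below, assuming a simple linear crossfade from the harmonic envelope at shallow depths to the fundamental envelope at deep depths; the actual gain profiles are not specified here.

```python
import numpy as np

def depth_weighted_compound(harm_env, fund_env):
    """Blend detected harmonic and fundamental envelopes along depth (axis 0).

    Illustrative linear crossfade: the harmonic envelope dominates near the
    transducer and the fundamental envelope dominates at deep depths.
    """
    w_fund = np.linspace(0.0, 1.0, harm_env.shape[0])[:, None]  # 0 shallow -> 1 deep
    return (1.0 - w_fund) * harm_env + w_fund * fund_env

# Usage with toy envelopes shaped (depth_samples, scan_lines).
rng = np.random.default_rng(2)
harm_env = np.abs(rng.normal(size=(512, 128)))
fund_env = np.abs(rng.normal(size=(512, 128)))
compounded = depth_weighted_compound(harm_env, fund_env)
```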
One embodiment relates to an ultrasound machine. The ultrasound machine includes an image acquisition device structured to acquire image data corresponding to an object of interest, wherein the image data includes channel data corresponding to each of a plurality of at least three firings from the image acquisition device, and wherein the image data includes a fundamental component and a harmonic component. The ultrasound machine also includes an image processing system communicably coupled to the image acquisition device, the image processing system structured to isolate the fundamental component from the harmonic component by summing channel data from a set of firings in the at least three firings and subsequently combining the isolated fundamental and harmonic components. The ultrasound machine further includes an image output device structured to provide an ultrasound image from the combined harmonic and fundamental components.
Another embodiment relates to an image processing system. The image processing system includes a beamformer module structured to receive channel data from each of at least three firings; and, a synthesis module communicably coupled to the beamformer module, the synthesis module structured to: combine channel data corresponding to two inverted firings to isolate a harmonic component; combine channel data from one of the two inverted firings with channel data from a third firing to isolate a fundamental component; and combine the fundamental component with the harmonic component incoherently.
Still another embodiment relates to a method for reducing speckle in an ultrasound image. According to one embodiment, the method includes: receiving, by an image processing system, channel data specific to each of at least three firings from an image acquisition device; combining, by the image processing system, channel data from two inverted firings to isolate a harmonic component; combining, by the image processing system, channel data from one of the two inverted firings with channel data from a third firing to isolate a fundamental component; log compressing, by the image processing system, each of the isolated harmonic and fundamental components separately; and combining the log compressed isolated harmonic and fundamental components to form an image.
Harmonic imaging and conventional imaging are techniques used in ultrasonography. Compared to conventional imaging, harmonic imaging provides images with better quality, but with limited depth. In general, a conventional ultrasound image is formed by sending out a sound pulse (i.e., a firing) to structures in the body and listening (i.e., receiving) for the transmitted pulse to echo off of one or more various structures. A harmonic image is formed by sending out a sound pulse to structures (e.g., tissue, bones, etc.) in the body, receiving the transmitted sound pulse that echoes off of the structures, and also receiving a harmonic pulse (e.g., at twice the transmission frequency) generated by the structures. Therefore, the signal returned by the structures includes not only the transmitted frequency (i.e., the “fundamental” frequency), but also signals of other frequencies, most notably the “harmonic” frequency, which is twice the fundamental frequency. Because of the differences in frequencies, different characteristics are attributable to each frequency (i.e., the fundamental frequency is able to penetrate to deeper depths than the weaker harmonic frequency), and those characteristics may be leveraged by personnel to obtain relatively more detailed images depending on the object of interest (e.g., a technician may rely on the harmonic image for a shallow object and on the fundamental image when the object is at a deeper depth within the body).
The system and method of the present disclosure are structured to reduce speckle noise without sacrificing resolution. Compared to other image compounding systems and methods, the present disclosure is relatively more robust against tissue motion because sequential vectors rather than frames are summed together for compounding. As described more fully herein, the method and system of the present disclosure are implemented by transmitting two or more firings, combining the two or more firings coherently to extract the harmonic and fundamental components, filtering the harmonic and fundamental components at baseband, detecting the filtered harmonic and fundamental components, applying log compression to both the harmonic and fundamental signals, and combining the compressed signals to form a compounded harmonic image. Unlike other frequency compounding systems, according to the present disclosure both the fundamental and harmonic components are created through a weighted combination of firings with different frequencies in order to extract and emphasize the signals at the frequencies of interest (e.g., firings with higher frequency for creating fundamental images) and suppress the undesired frequency signals. According to one embodiment, both the harmonic and fundamental components of the firing are processed at baseband in order to obtain a relatively greater rejection of out-of-band signals. Following the baseband processing, the processed harmonic and fundamental components are compounded (e.g., summed with gains) to create images that have relatively lower amounts of speckle and achieve higher quality (i.e., higher resolution and contrast) relative to conventional systems. Accordingly, the generated high quality images allow users (e.g., radiologists, ultrasonography technicians, etc.) to observe a relatively greater amount of detail of the targeted objects, which improves the accuracy of ultrasound imaging systems.
Before turning to the Figures, which illustrate the exemplary embodiments in detail, it should be understood that the present application is not limited to the details or methodology set forth in the description or illustrated in the figures. It should also be understood that the terminology is for the purpose of description only and should not be regarded as limiting. For illustrative purposes, imaging systems using four-firing and three-firing harmonic frequency processes are shown according to various example embodiments herein.
Referring to the Figures generally, a system and method for smoothing the speckle pattern and increasing contrast resolution in ultrasound images is shown according to various embodiments herein. While the present disclosure is largely explained in regard to B-mode imaging, it should be understood that the systems and methods described herein are widely applicable. For example, the systems and methods described herein may be used with multiple other imaging modes, such as B-mode, Doppler mode (e.g., Color Doppler, Pulsed wave (PW) Doppler, etc.), Contrast, Elastography, Photoacoustics, Shear wave, Acoustic radiation force imaging mode, etc.
Referring now to
As shown, the imaging system 200 includes an image processing system 204 communicably coupled to an image acquisition device 202 and an image output device 206. Communication between and among the components of
The image acquisition device 202 is structured as any type of image acquisition device utilized in ultrasonography systems. For example, the image acquisition device 202 may include, but is not limited to, an ultrasound transducer 207. The ultrasound transducer 207 may be configured as at least one of a probing type transducer (e.g., structured to be received in an opening or orifice of a patient and inserted inside the patient), a non-probing type transducer (e.g., structured to be passed over a surface of the body of a patient), or a combination of probing and non-probing type transducers. In some embodiments, the transducer 207 may be a combination of multiple transducers. In other embodiments, the transducer 207 may have multiple elements with different configurations. The transducer 207 is structured to generate and transmit a firing towards an object of interest in order to obtain image data regarding the object of interest. In one embodiment, the firing is structured as a sound wave. In this configuration, the transducer 207 is structured to convert high voltage pulses into sound waves that travel into the object of interest during transmission. In operation, the sound waves reflect off of one or more objects. The transducer 207 is structured to receive at least some of those reflections or echoes. Accordingly, each firing corresponds to specific channel data. The channel data includes the amplitude, frequency, and any other characteristic information regarding the particular firing. Image data refers to the totality of all the channel data.
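As a hypothetical way to organize these terms in software, the channel data for one firing and the image data for an acquisition could be modeled as simple records; the field names below are illustrative assumptions, not structures defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

import numpy as np

@dataclass
class ChannelData:
    """Per-firing record: raw per-element samples plus firing characteristics."""
    firing_index: int
    transmit_frequency_hz: float
    samples: np.ndarray          # shape: (num_elements, num_depth_samples)
    inverted: bool = False       # True for a pulse-inversion firing

@dataclass
class ImageData:
    """The totality of channel data collected for one acquisition."""
    firings: List[ChannelData] = field(default_factory=list)
```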
The image acquisition device 202 is also shown to include a buffer 210. The buffer 210 is structured to store beams generated from the transducer 207. According to one embodiment, a first firing created by the transducer 207 may be stored in the buffer 210 until a second firing is created by the transducer 207. According to another embodiment, a first firing and a third firing may be stored in the buffer until a second and/or a fourth firing are created, so that harmonic and fundamental signals may be extracted from all four beams.
The image output device 206 is structured to provide the created images (e.g., to a user, radiologist, technician, other personnel, etc.). Accordingly, the image output device 206 may include, but is not limited to, a display device 209 which may be a monitor, a display screen on a computing device (e.g., a phone, tablet, etc.), a printer, a combination of these, etc. In some embodiments, the image output device 206 may include a user interface 211 configured to link to an image processing module to post-process the provided images. For example, the provided images may be adjusted with colors, contrasts, and/or focus areas through the user interface 211.
While shown as included in the image processing system 204, in some embodiments the buffer 210 may be excluded from the image processing system 204 (e.g., included as a part of the image acquisition device 202). Thus, the imaging system 200 may have a layout of devices and modules different from that illustrated in
The image processing system 204 is structured to receive the beams generated from the firings and to generate an ultrasound image(s). The image processing system 204 is structured to apply harmonic frequency compounding to reduce speckle and create high-resolution, high-quality images. Two example flow diagrams of harmonic frequency compounding systems are shown in regard to
An example structure of the image processing system 204 is shown in
As shown, the image processing system 204 includes a synthesis module 216, a beamformer module 208, a detection module 222, a log compression module 224, a gain module 226, and a post image processing module 228. The synthesis module 216 is structured to isolate signals of interest. In one embodiment, the synthesis module 216 may be configured to filter out harmonic signals from the received signals. In another embodiment, the synthesis module 216 may be configured to filter out the fundamental signals from the received signals. In an alternate embodiment, the synthesis module 216 may be structured to isolate any other frequency of interest in the beam (i.e., different from either the harmonic or the fundamental frequencies).
As shown, a beamformer module 208 is configured to receive the firings, which correspond to specific channel data, and form beams. The processing may include amplifying, digitizing, and coherently combining the firings within a predefined angle. In some embodiments, the beamformer module 208 may be configured differently for each firing (e.g., adjusting beam angles or scan-line time intervals). According to one embodiment, the beamformer module 208 may be structured as one or more algorithms, processes, formulas, etc. Accordingly, the beamformer module 208 may be implemented in machine-readable media. In other embodiments, the beamformer module 208 may include one or more hardware components (e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), or a combination of these, etc.). In still other embodiments, the beamformer module 208 may be a combination of multiple beamformers, and each beamformer may have a specific configuration according to each firing.
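Beamforming of the per-element channel data can be sketched as a conventional delay-and-sum, as in the following illustrative function; the single-scan-line geometry, speed of sound, and nearest-sample interpolation are simplifying assumptions rather than the behavior of the beamformer module 208.

```python
import numpy as np

def delay_and_sum(channel_data, element_x, depths, c=1540.0, fs=40e6, apod=None):
    """Minimal delay-and-sum for one scan line directly below x = 0.

    channel_data: (num_elements, num_samples) RF samples per element
    element_x:    (num_elements,) lateral element positions in meters
    depths:       (num_points,) imaging depths in meters
    """
    n_el, n_samp = channel_data.shape
    apod = np.ones(n_el) if apod is None else apod
    line = np.zeros(len(depths))
    for k, z in enumerate(depths):
        # Two-way travel time: down to depth z, back to each element.
        t = (z + np.sqrt(z ** 2 + element_x ** 2)) / c
        idx = np.clip(np.round(t * fs).astype(int), 0, n_samp - 1)
        line[k] = np.sum(apod * channel_data[np.arange(n_el), idx])
    return line
```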
As shown, the synthesis module 216 includes a quadrature demodulation module 218 and a base-band filter 220. The quadrature demodulation module 218 is configured to demodulate the received signals (i.e., harmonic signals and/or fundamental signals on a radio frequency band) to baseband signals. The baseband signals may be used to generate a relatively clear image (e.g., to depict lesions). According to one embodiment, the quadrature demodulation module 218 may be a dynamic demodulator to account for the requirements of penetration depth and signal-to-noise ratio (SNR) and to reduce processing time. Accordingly, in some embodiments, the quadrature demodulation module 218 may be configured to down-mix the received signals on a radio frequency (RF) band with cosine and sine values to obtain in-phase components (I) and quadrature components (Q). The Euclidean norm, √(I² + Q²), is the magnitude of the signal, while the phase is represented as arctan(Q/I).
According to one embodiment, the quadrature demodulation module 218 varies the down-mixing along the ultrasound penetration depth to account for the change in the signals caused by depth-dependent attenuation of tissues.
The base-band filter 220 is configured to remove signals at frequencies that are not of interest from the images. The frequencies that are not of interest may be predefined by a user of the imaging system 200. For example, the base-band filter 220 may be structured to remove all signals in a non-harmonic frequency band (e.g., the fundamental signals) from the received combined signals in order to obtain harmonic signals. According to one embodiment, the baseband filter 220 may be structured as a low-pass filter to isolate the baseband signals, such that the in-phase and quadrature components from the quadrature demodulation module 218 may pass through the baseband filter 220. In one embodiment, the baseband filter 220 is depth-dependent to account for the change of bandwidth caused by the depth-dependent attenuation. In another embodiment, the baseband filter 220 is a dynamic filter to account for the requirements of penetration depth and signal-to-noise ratio (SNR).
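Quadrature demodulation followed by baseband low-pass filtering might be sketched as follows; the fixed demodulation frequency and FIR design are illustrative simplifications of the dynamic, depth-dependent variants described above.

```python
import numpy as np
from scipy.signal import firwin

def quadrature_demodulate(rf, f_demod, fs, bandwidth_hz=2e6, numtaps=63):
    """Mix a real RF line down to baseband and low-pass filter the I/Q pair.

    Returns the complex baseband signal I + jQ.
    """
    t = np.arange(rf.size) / fs
    i = rf * np.cos(2 * np.pi * f_demod * t)
    q = -rf * np.sin(2 * np.pi * f_demod * t)
    lowpass = firwin(numtaps, bandwidth_hz / 2, fs=fs)   # baseband low-pass filter
    return np.convolve(i + 1j * q, lowpass, mode="same")
```

A depth-dependent variant could, for example, sweep `f_demod` downward along the line to track the attenuation-induced shift described above.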
The detection module 222 is configured to detect the peaks of the filtered signals. The envelope of the detected signals is used for compounding images. In one embodiment, the detection module 222 may be structured as a Hilbert filter. The Hilbert filter may be configured to produce a phase-shifted copy of the input signal, with the envelope obtained from the square root of the sum of the squares of the original and phase-shifted signals (i.e., the amplitude of the combination of the original and phase-shifted signals). In another embodiment, the detection module 222 may be structured as a complex rotator. The complex rotator may be further configured to detect the peak frequency of the filtered baseband signals, and the detected peak frequency may be used as the center frequency for the rotator. The amplitude of the complex signal (i.e., the square root of the sum of the squares of the in-phase and quadrature components) may be used as the detected signal.
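Both detection variants can be illustrated compactly: the Hilbert-based path forms the analytic signal from real RF data, while the baseband path simply takes the magnitude of the complex I/Q signal. This is a sketch, not the detection module 222 itself.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_hilbert(rf):
    """Envelope of a real RF line: magnitude of the analytic signal
    (the original signal plus j times its 90-degree phase-shifted copy)."""
    return np.abs(hilbert(rf))

def envelope_iq(iq):
    """Envelope of a complex baseband line: sqrt(I^2 + Q^2)."""
    return np.abs(iq)
```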
The log compression module 224 is structured to reduce the dynamic range of the detected signals received from the detection module 222 for efficient display. The log compression module 224 may be applied to the signals from the detection module 222 before the compounding in order to provide a relatively greater compounding effect. In some embodiments, the log compression module 224 may include parameters configured to adjust the brightness of the images.
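A common form of log compression is sketched below; the 60 dB dynamic range is an assumed display parameter that plays the role of the brightness/contrast adjustment mentioned above.

```python
import numpy as np

def log_compress(envelope, dynamic_range_db=60.0, eps=1e-12):
    """Map a detected envelope onto a display-friendly logarithmic scale.

    Normalizes to the maximum, converts to dB, and clips to the chosen
    dynamic range so the output spans [0, 1].
    """
    db = 20.0 * np.log10(envelope / (np.max(envelope) + eps) + eps)
    return np.clip(db + dynamic_range_db, 0.0, dynamic_range_db) / dynamic_range_db
```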
The gain module 226 is structured to weight the signals (i.e., channel data). In some embodiments, the gain module 226 may be configured to weight the channel data to emphasize signals at a frequency of interest or a selected frequency. In other embodiments, the gain module 226 is structured to weight signals from different transducers or different transducer elements in order to control the aperture (i.e., the width of the firing). In some embodiments, the gain module 226 may also be configured to apply an apodization function to the signals to suppress sidelobes that could cause echoes to be placed in the wrong location in the displayed image (e.g., appearing as bright, rounded lines). The gain module 226 may include multiple gain components, and each gain component may be structured to control the weight of its respective signal. In some embodiments, each of the gain components in the gain module 226 may include dynamic gains and be independent from the others. In other embodiments, some gain components may be relative to each other. For example, a second beam and a third beam may use the same gain value to weight a fundamental signal.
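The depth-dependent gains and aperture apodization might be sketched as follows; the linear crossfade and Hann window are illustrative choices, not the weighting used by the gain module 226.

```python
import numpy as np

def depth_gains(num_depth_samples, crossover=0.5):
    """Illustrative depth-dependent weights: the harmonic component is
    emphasized at shallow depths and the fundamental at deep depths."""
    depth = np.linspace(0.0, 1.0, num_depth_samples)
    g_fund = np.clip((depth - crossover) / (1.0 - crossover), 0.0, 1.0)
    return 1.0 - g_fund, g_fund          # (g_harm, g_fund)

def hann_apodization(num_elements):
    """Aperture weighting that tapers the outer elements to suppress sidelobes."""
    return np.hanning(num_elements)
```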
The post image processing module 228 is structured to process the images before displaying images to users in order to further reduce speckle and increase image quality. Accordingly, the post image processing module 228 may include, but is not limited to, spatial compounding processes, digital scan conversion processes, additional speckle reduction processes, etc. In some embodiments, the post image processing module 228 may be linked to the user interface in the image output device 206 to post process the provided images as commanded by the users.
Referring now to
In one embodiment, Golay codes may be used for firings 305 and 306. In other embodiments, Golay codes may be used for any of the firings described herein. The Golay codes may include any type of Golay code, such as a binary Golay code, an extended binary Golay code, a perfect binary Golay code, etc. In still further embodiments, any other type of error correcting code may be used for the firings (e.g., forward error correction, etc.). The utilization of error-correcting codes (e.g., Golay codes) may be beneficial to the detection and correction of errors within the firing. All such variations are intended to fall within the spirit and scope of the present disclosure.
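For context, Golay complementary pairs (one common use of Golay sequences in ultrasound coded excitation) have the property that the sum of their autocorrelations is an impulse. The recursive construction below is a standard one and is shown only as an illustration; the disclosure refers to binary Golay and other error-correcting codes more generally and does not specify which codes or code lengths are used.

```python
import numpy as np

def golay_pair(n_doublings):
    """Standard recursive construction of a Golay complementary pair
    of length 2**n_doublings."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(4)                             # length-16 pair
acf = lambda x: np.correlate(x, x, mode="full")
combined = acf(a) + acf(b)                       # 2*N at zero lag, 0 elsewhere
print(combined)
```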
Each firing is structured to obtain channel data, such that the acquired data is specific to a particular firing. The channel data is received by the transducer and provided to beamformers 308, 310, 312, and 314. As shown, there is one beamformer for each firing. Accordingly, each beamformer receives channel data specific to one of the first, second, third, and fourth firings. The beamformers 308, 310, 312, and 314 are structured to amplify, digitize, and coherently combine the received channel data. In some embodiments, a single beamformer may be used but adapted differently for each firing.
The beams generated from beamformers 308 and 310 are summed together at summation element 316 to isolate the harmonic signals. The summation element 316 cancels out the linear signal components because the linear signal components in firing 302 and firing 304 are inverted. The beams generated from beamformers 312 and 314 are summed coherently at summation element 318 to reduce random noise. As shown, both the harmonic and fundamental components are demodulated into baseband through the quadrature demodulation module 218 with different demodulation frequencies. The baseband harmonic and fundamental components are filtered through the baseband filter module 220 to further remove unwanted frequencies. The filtered baseband harmonic and fundamental components are provided to the detection module 222 to generate detected harmonic and fundamental signals. The detected harmonic and fundamental components are compressed through the log-compression module 224. The compressed detected signals are then combined together, with a weighting function applied through gains 330 and 332, to form a compounded image. Gains 330 and 332 may be gain components in the gain module 226. In one embodiment, the weighting may emphasize the harmonic signal at shallow depths and the fundamental signal at deep depths through gains 330 and 332. For example, at shallow depths, the gain 330 for the harmonic signal may be larger than the gain 332 for the fundamental signal, and at deep depths, the gain 330 may be smaller than the gain 332 for the fundamental signal. In some embodiments, the gains are programmable to adjust with depth. For example, at some shallow depths, gain 332 may be larger than gain 330 (i.e., the weighting may favor the fundamental at certain shallow depths as well). In some other embodiments, the gains 330 and 332 may have the same value.
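Putting the four-firing flow together, a hedged end-to-end sketch might look like the following; the complex-mixing demodulation, the crude moving-average low-pass filter, and all parameter values are illustrative stand-ins for the modules described above, not the actual implementation.

```python
import numpy as np

def demod_filter_detect(rf, f_demod, fs, ntaps=41):
    """Complex-mix to baseband, apply a crude moving-average low-pass, detect."""
    t = np.arange(rf.shape[-1]) / fs
    baseband = rf * np.exp(-2j * np.pi * f_demod * t)
    lowpass = np.ones(ntaps) / ntaps
    baseband = np.apply_along_axis(lambda x: np.convolve(x, lowpass, mode="same"),
                                   -1, baseband)
    return np.abs(baseband)                      # envelope

def four_firing_compound(b1, b2, b3, b4, f0, fs, g_harm, g_fund, eps=1e-12):
    """b1/b2: inverted pair (harmonic path); b3/b4: matched pair (fundamental path)."""
    harmonic = b1 + b2                 # summation 316: linear components cancel
    fundamental = b3 + b4              # summation 318: coherent sum reduces noise
    h_env = demod_filter_detect(harmonic, 2.0 * f0, fs)   # demodulate near 2*f0
    f_env = demod_filter_detect(fundamental, f0, fs)      # demodulate near f0
    log = lambda e: 20.0 * np.log10(e + eps)              # log compression 224
    return g_harm * log(h_env) + g_fund * log(f_env)      # gains 330/332; combination 334
```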
After forming a compounded image by combining the compressed harmonic and fundamental signals through weighting 334, the compounded image is provided to the post processing module 228.
Referring now to
In conventional frequency compounding systems, the harmonic signal components are usually emphasized at shallow depths and the fundamental signal components at deep depths. Compared to the conventional system, the present disclosure may further reduce speckle at both shallow and deep depths by improving the quality of the fundamental signal components. According to the present disclosure, the fundamental and harmonic signal components are generated using different sets of firings, such as firings 305 and 306 in the four-firing system and firings 406 and 404 in the three-firing system. The present system provides a relatively better rejection of out-of-band components by demodulating both the harmonic and fundamental signal components prior to compounding. In addition, the present system improves the compounding effect by applying the log-compression prior to compounding.
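Under the three-firing combination described in the embodiments above (two inverted firings for the harmonic component; one of those firings combined with a third firing for the fundamental component), a minimal sketch might be written as follows; the equal weights are placeholders, as the actual weighting is not specified here.

```python
import numpy as np

def three_firing_components(b1, b2_inverted, b3, w=(0.5, 0.5)):
    """Illustrative three-firing combination.

    harmonic:    b1 + b2_inverted -> linear parts cancel, harmonic remains
    fundamental: weighted combination of one firing of the inverted pair with
                 a third firing (the weights here are placeholders)
    """
    harmonic = b1 + b2_inverted
    fundamental = w[0] * b1 + w[1] * b3
    return harmonic, fundamental
```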
It should be understood that while
It should also be understood that in still further embodiments, a user may combine parts of systems 300 and 400. As such, the user may tailor the image processing system to their specific needs by advantageously identifying and selecting which components from system 300 and system 400 to use in the image formation process.
As such, as will be readily appreciated by those of skill in the art, the present disclosure is widely applicable with a high degree of configurability. While many examples are described in isolation, this description is meant for clarity and not meant to be limiting. Accordingly, many different implementation embodiments are contemplated by the present disclosure, with all such embodiments intended to fall within the spirit and scope of the present disclosure.
It should be understood that the foregoing embodiments could be extended to other multiple-firing harmonic frequency compounding combinations that can be generally expressed as follows. First, two or more ultrasound firings would be made. These firings would be divided into several groups, and each group may share firings with other groups. For groups with more than one firing, the firings would be coherently combined to form tissue-generated harmonic signals. The harmonic signals can include sub-harmonics, ultra-harmonics, second-order harmonics, and higher-order harmonics. The number of groups with more than one firing could be one or more, and the number of groups with one firing could also be one or more. For each group, the output of the coherent sum is detected in the case of two or more firings, and the firing is directly detected in the case of one firing. All detected outputs are combined to form the compounded image as described above. The fundamental signals can be generated by combining the firings in a plurality of ways, and may also contain subgroups similar to the harmonic signals.
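The generalization above can be captured in a small, hedged sketch: firings are assigned to possibly overlapping groups, each group is coherently summed and detected, and the detected outputs are combined with per-group weights. The magnitude detection assumes complex baseband inputs, and the group indices and weights are placeholders.

```python
import numpy as np

def grouped_compound(firings, groups, weights):
    """Generic multi-firing compounding.

    firings: list of beamformed lines (complex baseband arrays of equal shape)
    groups:  list of index tuples; groups may share firings, and a group with a
             single firing is detected directly
    weights: one weight (scalar or depth profile) per group
    """
    detected = []
    for idx in groups:
        coherent = np.sum([firings[i] for i in idx], axis=0)   # coherent combination
        detected.append(np.abs(coherent))                      # detection (magnitude)
    return np.sum([w * d for w, d in zip(weights, detected)], axis=0)

# e.g., four firings with group (0, 1) for the harmonic path and (2, 3) for the
# fundamental path:
# image = grouped_compound([b1, b2, b3, b4], groups=[(0, 1), (2, 3)], weights=[0.6, 0.4])
```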
Although the figures show a specific order of method/system steps, the order of the steps may differ from what is depicted. Also, two or more steps may be performed concurrently or with partial concurrence. Such variation will depend on the software and hardware systems chosen and on designer choice. All such variations are within the scope of the disclosure. Likewise, software implementations could be accomplished with standard programming techniques, with rule-based logic and other logic to accomplish the various connection steps, processing steps, comparison steps, and decision steps.
Additionally, the format and symbols employed are provided to explain the logical steps of the schematic diagrams and are understood not to limit the scope of the methods/systems illustrated by the diagrams. Although various arrow types and line types may be employed in the schematic diagrams, they are understood not to limit the scope of the corresponding methods/systems. Indeed, some arrows or other connectors may be used to indicate only the logical flow of a method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of a depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown. It will also be noted that each block of the block diagrams and/or flowchart diagrams, and combinations of blocks in the block diagrams and/or flowchart diagrams, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and program code.
Many of the functional units described in this specification have been labeled as modules, in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips, transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices or the like.
Modules may also be implemented in a machine-readable medium for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions, which may, for instance, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve the stated purpose for the module.
Indeed, a module of computer readable program code may be a single instruction, or many instructions, and may even be distributed over several different code segments, among different programs, and across several memory devices. Similarly, operational data may be identified and illustrated herein within modules, and may be embodied in any suitable form and organized within any suitable type of data structure. The operational data may be collected as a single data set, or may be distributed over different locations including over different storage devices, and may exist, at least partially, merely as electronic signals on a system or network. Where a module or portions of a module are implemented in a machine-readable medium (or computer-readable medium), the computer readable program code may be stored and/or propagated on one or more computer readable medium(s).
The computer readable medium may be a tangible computer readable storage medium storing the computer readable program code. The computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, holographic, micromechanical, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
More specific examples of the computer readable medium may include but are not limited to a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), an optical storage device, a magnetic storage device, a holographic storage medium, a micromechanical storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, and/or store computer readable program code for use by and/or in connection with an instruction execution system, apparatus, or device.
The computer readable medium may also be a computer readable signal medium. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electrical, electro-magnetic, magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport computer readable program code for use by or in connection with an instruction execution system, apparatus, or device. Computer readable program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, Radio Frequency (RF), or the like, or any suitable combination of the foregoing.
In one embodiment, the computer readable medium may comprise a combination of one or more computer readable storage mediums and one or more computer readable signal mediums. For example, computer readable program code may be both propagated as an electro-magnetic signal through a fiber optic cable for execution by a processor and stored on a RAM storage device for execution by the processor.
Computer readable program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
The program code may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks.
Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
Accordingly, the present disclosure may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the disclosure is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application claims the benefit of U.S. Provisional Patent Application No. 62/131,673, filed Mar. 11, 2015, which is incorporated herein by reference in its entirety.