The subject matter disclosed herein relates generally to audible messages, and more particularly to methods and systems for providing audible notifications for medical devices.
In medical environments, especially complex medical environments where multiple patients may be monitored for multiple medical conditions, standardization of alarms and/or warnings creates significant potential for confusion and inefficiency on the part of users (e.g., clinicians or patients) in responding to specific messages. For example, it is sometimes difficult for clinicians and/or users of medical devices to distinguish or quickly identify the source and condition of a particular audible alarm or warning. Accordingly, the effectiveness and efficiency with which users respond to medical messaging can be adversely affected, which can lead to delays in responding to the medical or system conditions associated with these audible alarms or warnings.
In particular, medical facilities typically include rooms to enable surgery to be performed on a patient, to enable a patient's medical condition to be monitored, and/or to enable a patient to be diagnosed. At least some of these rooms include multiple medical devices that enable the clinician to perform the operation, monitoring, and/or diagnosis. During operation of these medical devices, at least some of the devices are configured to emit audible indications, such as audible alarms and/or warnings that are utilized to inform the clinician of a medical condition being monitored. For example, a heart monitor and a ventilator may be attached to a patient. When a medical condition arises, such as low heart rate or low respiration rate, the heart monitor or ventilator emits an audible indication that alerts and prompts the clinician to perform some action.
Under certain conditions or in certain medical environments, multiple medical devices may concurrently generate audible indications. In some instances, two different medical devices may generate the same audible indication or an indistinguishably similar audible indication. For example, the heart monitor and the ventilator may both generate a similar high-frequency sound when an urgent condition is detected with the patient, which is output as the audible indication. Therefore, under certain conditions, the clinician may not be able to distinguish whether the alarm condition is being generated by the heart monitor or the ventilator. In this case, the clinician visually observes each medical device to determine which medical device is generating the audible indication. Moreover, when three, four, or more medical devices are being utilized, it is often difficult for the clinician to easily determine which medical device is currently generating the audible indication. Thus, delay in taking action may result from the inability to distinguish the audible indications from the different devices. Additionally, in some instances the clinician is not able to associate the audible indication with a specific condition and accordingly must visually view the medical device to assess a course of action.
Moreover, in some instances, no alarms and/or warnings exist for certain conditions, which can lead to adverse outcomes, such as injury to patients. For example, movement of major parts of medical equipment (e.g., CT/MR table and cradle, interventional system table/C-arm, etc.) is known to create a potential for pinch points and collisions. In the majority of these cases, the only indication of these movements, especially for users not controlling the movements and for the patients, is direct visual contact, which is not always possible.
In one embodiment, a method for generating an audible medical message is provided. The method includes receiving semantic rating scale data corresponding to a plurality of sounds and medical message descriptions and performing semantic mapping using the received semantic rating scale data. The method also includes determining profiles for audible medical messages based on the semantic mapping and generating audible medical messages based on the determined profiles.
In another embodiment, a method for generating an audible medical message is provided. The method includes defining an audible signal to include an acoustical property based on a semantic sound profile that corresponds to a medical message for a medical device. The method also includes broadcasting the audible signal using the medical device.
In yet another embodiment, a medical arrangement is provided that includes a plurality of medical devices capable of generating different medical messages. The medical arrangement also includes a processor in each of the medical devices configured to generate an audible signal that includes an acoustical property based on a semantic sound profile that corresponds to one of the medical messages.
The following detailed description of certain embodiments will be better understood when read in conjunction with the appended drawings. The figures illustrate diagrams of the functional blocks of various embodiments. The functional blocks are not necessarily indicative of the division between hardware circuitry. Thus, for example, one or more of the functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware (e.g., a general purpose signal processor or a block of random access memory, hard disk, or the like) or multiple pieces of hardware. Similarly, the programs may be stand-alone programs, may be incorporated as subroutines in an operating system, may be functions in an installed software package, and the like. It should be understood that the various embodiments are not limited to the arrangements and instrumentality shown in the drawings.
Various embodiments provide methods and systems for providing audible indications or messages, particularly audible alarms and warnings for devices, especially medical devices. For example, a classification system may be provided, as well as a semantic mapping for these audible indications or messages.
As described in more detail herein, the various embodiments provide for the differentiation of audible notifications or messages, such as alarms or warnings, based on acoustical and/or musical properties that convey specific semantic character(s). Additionally, these audible notifications or messages also may be used to provide an auditory means to indicate device movements, such as movement of major equipment pieces. It should be noted that although the various embodiments are described in connection with medical systems having particular medical devices, the various embodiments may be implemented in connection with medical systems having different devices or non-medical systems. The various embodiments may be implemented generally in any environment or in any application to distinguish between different audible indications or messages associated with or corresponding to a particular event or condition for a device or process.
Moreover, as used herein, an audible indication or message refers to any sound that may be generated and emitted by a machine or device. For example, audible indications or alarms may include auditory alarms or warnings that are specified in terms of frequency, duration and/or volume of sound.
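For purposes of illustration only, the following minimal sketch synthesizes a single audible indication specified in exactly these terms, frequency, duration, and volume. The function name, parameter choices, and use of the numpy library are assumptions for the example and are not part of the disclosure.

```python
# Hypothetical sketch: synthesize a simple audible indication specified by
# frequency (Hz), duration (s), and volume (0..1), and write it to a WAV file.
import wave
import numpy as np

def make_indication(frequency_hz, duration_s, volume, path="indication.wav",
                    sample_rate=44100):
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s),
                    endpoint=False)
    samples = volume * np.sin(2.0 * np.pi * frequency_hz * t)
    pcm = (samples * 32767).astype(np.int16)  # 16-bit PCM
    with wave.open(path, "wb") as f:
        f.setnchannels(1)           # mono
        f.setsampwidth(2)           # 2 bytes = 16 bits
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# Example: a 0.5-second, 880 Hz tone at half volume.
make_indication(880.0, 0.5, 0.5)
```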
In the exemplary embodiment, the facility 10 includes at least one room 12, illustrated as a plurality of rooms 40, 42, 44, 46, 48, and 50. At least one of the rooms 12 may include different medical systems or devices, such as a medical imaging system 14 or one or more medical devices 16 (e.g., a life support system). The medical systems or devices may be, for example, any type of monitoring device, treatment delivery device or medical imaging device, among other devices. For example, different types of medical imaging devices or medical monitors include a Computed Tomography (CT) imaging system, an ultrasound imaging system, a Magnetic Resonance Imaging (MRI) system, a Single-Photon Emission Computed Tomography (SPECT) system, a Positron Emission Tomography (PET) system, an Electro-Cardiograph (ECG) system, an Electroencephalography (EEG) system, etc. It should be realized that the systems are not limited to the imaging and/or monitoring systems described above, but may be utilized with any medical device configured to emit a sound as an indication to an operator.
Thus, at least one of the rooms 12 may include a medical imaging device 14 and a plurality of medical devices 16. The medical devices 16 may include, for example, a heart monitor 18, a ventilator 20, anesthesia equipment 22, and/or a medical imaging table 24. It should be realized that the medical devices 16 described herein are exemplary only, and that the various embodiments described herein are not limited to the medical devices shown in the figures.
In operation, the audible indications/messages generated by the medical imaging systems 14 and/or each medical device 16 create an audible landscape that enables a clinician to audibly identify which medical device 16 is generating the audible indication and/or message and/or the type of message (e.g., the severity of the message) without viewing the particular medical device 16. The clinician may then directly respond to the audible indication and/or message by visually observing the medical imaging system 14 or device 16 that is generating the audible indication, without the need to observe, for example, several of the other medical devices 16.
In various embodiments, the audible indication 34, which may be a complex auditory indication, is semantically related to a particular medical message, such as corresponding to a specific medical alarm or warning, or to indicate movement of a piece of equipment, such as a scanning portion of the medical imaging system 14. The audible indication 34 in various embodiments enables two or more medical systems or devices, such as the heart monitor 18 and the ventilator 20, to be concurrently monitored audibly by the operator, such that different alarms and/or warning sounds may be differentiated on the basis of acoustical and/or musical properties that convey a specific semantic character. Thus, the various audible indications 34 generated by the medical imaging system 14 and/or the various medical devices 16 provide a set of indications and/or messages that operate with each other to provide a soundscape for the particular environment. The set of sounds, which may include multiple audible indications 34, may be customized for a particular environment. For example, the audible indications 34 that produce the set of sounds for an operating room may be different than the audible indications 34 that produce the set of sounds for a monitoring room.
Additionally, the audible indications 34 may be utilized to inform a clinician that a medical device is being repositioned. For example, an audible indication 34 may indicate that the table of a medical imaging device is being repositioned, or that a portable respiratory monitor is being repositioned, etc. In each case, the audible indication 34 generated for each piece of equipment may be differentiated to enable the clinician to audibly determine that the table, the respiratory monitor, or some other medical device is being repositioned. Other medical devices that may generate a distinct audible indication 34 include, for example, a radiation detector, an x-ray tube, etc. Thus, each medical device 16 may be programmed to emit an audible indication/message based on an alarm condition, a warning condition, a status condition, or a movement of the medical device 16 or medical imaging system 14.
In various embodiments, the audible indication 34 is designed and/or generated based on different criteria, such as different acoustical and/or musical properties that convey a specific semantic character. In general, a set of medical messages or audible indications 34 that are desired to be broadcast to a clinician may be determined, for example, by initial selection. In one embodiment, the audible indications 34 may be used to inform listeners that a particular medical condition exists and/or to inform the clinician that some action potentially needs to be performed. Thus, each audible indication 34 may include different elements or acoustical properties. For example, one of the acoustical properties may enable the clinician to audibly identify the medical device generating the audible message, and a different second acoustical property may enable the clinician to identify the type of the audible alarm/warning or movement, or when any operator interaction is required. Moreover, other acoustical properties may communicate the medical condition (or patient status) to the clinician. For example, how the audible indication/message is broadcast, and the tone, frequency, and/or timbre of the audible indication may provide information regarding the severity of the alarm or warning, such as that a patient's heart has stopped, breathing has ceased, the imaging table is moving, etc.
In particular, various embodiments provide a conceptual framework and a perceptual framework for defining audible indications or messages. In some embodiments, sound profiles for medical messages are defined that are used to generate the audible indications 34. The sound profiles map different audible messages to sounds corresponding to the audible indications 34, such as to indicate a particular condition or operation. For example, the sound profiles may be generated using an auditory message profile generation module 60, as illustrated in the figures.
The auditory message profile generation module 60 receives as an input defined message categories, which may correspond, for example, to medical alarms or indications. The auditory message profile generation module 60 also receives as an input a plurality of defined quality differentiating scales. The inputs are based on a semantic rating scale as described in more detail herein and are processed or analyzed to define or generate a plurality of sound profiles that may be used to generate, for example, audible alarms or warnings. In various embodiments, the auditory message profile generation module 60 uses at least one of a hierarchical cluster analysis or a principal components factor analysis to define or generate the plurality of sound profiles.
For example, various embodiments classify medical auditory messages into a plurality of categories, which may correspond to the conceptual model of clinicians working in ICU environments. In one embodiment, the medical auditory messages are classified into seven categories, which include the following auditory message types:
1. Non-critical Device message;
2. Extreme high urgency condition;
3. Extreme high urgency message;
4. International Electrotechnical Commission (IEC) high urgency alarm;
5. Device info./feedback;
6. Device process began; and
7. IEC low urgency alarm.
It should be noted that the conceptual model may result in categories not related to medical messages and that may be utilized for additional purposes in clinical environments.
In various embodiments, a set of sound quality differentiating scales that describe the medical auditory design space is also defined. For example, in one embodiment, a set of four sound quality differentiating scales may define sound quality axes as follows:
1. Discordance . . . Concordance;
2. Resolved . . . Unresolved;
3. Hard attack . . . Soft attack; and
4. Novelty . . . Familiarity.
Thus, in this embodiment, the seven different categories of medical auditory messages may be mapped to the four sound quality differentiating scales to generate the plurality of sound profiles. For example, each category may be characterized by a position along each of the four scales, as illustrated in the figures.
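For purposes of illustration only, such a mapping may be represented as a simple lookup structure relating each of the seven message categories to a position on each of the four scales. This is a hypothetical sketch: all numeric values below are placeholders and are not taken from the disclosure.

```python
# Hypothetical sketch: map the seven message categories to positions on the
# four sound quality scales. Each value runs from -1.0 (left descriptor,
# e.g., Discordance) to +1.0 (right descriptor, e.g., Concordance). All
# numbers are illustrative placeholders, not data from the disclosure.
SCALES = (
    "discordance_concordance",
    "resolved_unresolved",
    "hard_attack_soft_attack",
    "novelty_familiarity",
)

SOUND_PROFILES = {
    "non_critical_device_message":    (0.6, -0.4, 0.7, 0.5),
    "extreme_high_urgency_condition": (-0.9, 0.8, -0.9, 0.2),
    "extreme_high_urgency_message":   (-0.8, 0.7, -0.8, -0.7),
    "iec_high_urgency_alarm":         (-0.5, 0.5, -0.5, 0.8),
    "device_info_feedback":           (0.4, -0.2, 0.3, 0.4),
    "device_process_began":           (0.3, -0.3, 0.2, 0.3),
    "iec_low_urgency_alarm":          (0.2, 0.1, 0.4, 0.9),
}

def profile_for(category):
    """Return the scale-name-to-value mapping defining a category's profile."""
    return dict(zip(SCALES, SOUND_PROFILES[category]))
```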
Various embodiments provide a method 90 as shown in the figures.
The method 90 generally provides a semantic mapping of different message types to define sound profiles for use in generating audible alarms or warnings. Specifically, the method 90 includes determining a plurality of sounds for auditory messages at 92. For example, different sounds may be provided based on defined standards, known alarm or warning sounds, or arbitrary sounds or sound combinations. In one embodiment, thirty sounds are determined, including (i) an IEC low-urgency alarm, (ii) an IEC high-urgency alarm, (iii) variations of the IEC standards for low, medium, and high urgency alarms obtained by manipulating musical properties such as timbre, attack, sustain, decay, and release, and (iv) arbitrary sounds, such as new sound creations of a sound designer.
The method 90 also includes identifying messages communicated using auditory signals at 94. For example, different messages may be identified based on the particular application or environment. In one embodiment, the messages are medical messages, such as thirty medical messages typically communicated using auditory signals, determined based on messages used for ventilators, monitors, and infusion pumps, among other devices. The medical messages may include, for example, patient and device issues spanning a range of severity/urgency.
Thereafter, rating data is received at 96 based on an evaluation of semantic perception. For example, sounds may be presented to a group, such as a group of nurses, using any suitable auditory means (e.g., computer with headphones) for rating. Additionally, semantic differential rating scales may be provided which, in one embodiment, include eighteen word pairs that span or encompass a range of semantic content, including the key alarm attribute of urgency. The rating data may be collected and/or received using, for example, an online data collection tool accessed via a laptop computer. Accordingly, medical messages may be displayed within a rating tool and sounds presented independently.
The data may be received from small groups, such as groups of four or five subjects. Different methods may be used, such as presenting the sounds and medical messages in separate blocks, with half of the groups hearing the sounds first. In some embodiments, sounds and medical messages are presented in quasi-counterbalanced orders across groups, for example, in four quasi-counterbalanced orders. It should be noted that in various embodiments, each sound and each message appears equally often in the first, second, third, and fourth quarter of the sequence. In some embodiments, the order of stimuli in each quarter of the sequence may be reversed for two of the four sequences. Additionally, in various embodiments, all participants are allowed to complete ratings of a given sound before the next sound in the sequence is presented. It should be noted that the rating data may be acquired in different ways and may be based on previously acquired data.
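For illustration, the collected ratings may be organized as a stimuli-by-scale table suitable for the analyses described below. This is a sketch under stated assumptions: the word pairs shown are hypothetical examples (the eighteen actual pairs are not enumerated here), and the pandas library is an implementation choice, not part of the disclosure.

```python
# Hypothetical sketch: structure semantic differential ratings so they can be
# averaged into one semantic profile per stimulus (sound or medical message).
import pandas as pd

# Each row: one participant's rating of one stimulus on one bipolar word-pair
# scale, e.g. 1 (calm) .. 7 (urgent). The scale names are invented examples.
ratings = pd.DataFrame([
    {"participant": 1, "stimulus": "iec_high_urgency_alarm",
     "scale": "calm_urgent", "rating": 6},
    {"participant": 1, "stimulus": "iec_high_urgency_alarm",
     "scale": "reassuring_disturbing", "rating": 5},
    # ... one row per participant x stimulus x word pair ...
])

# Average over participants to obtain one semantic profile per stimulus.
profiles = ratings.pivot_table(index="stimulus", columns="scale",
                               values="rating", aggfunc="mean")
```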
Thereafter, the received rating data is processed or analyzed, which in various embodiments includes performing semantic mapping at 98. In one embodiment, the rating data is processed using (i) a hierarchical cluster analysis of sound and message ratings using an unweighted pair-group average linkage and (ii) a principal components factor analysis of sound and message ratings. It should be noted that the various steps and methods described herein for various embodiments may be performed using any suitable processor or computing machine.
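A minimal sketch of such a cluster analysis follows, assuming the open-source scipy library (an implementation choice not specified by the disclosure; unweighted pair-group average linkage corresponds to scipy's method="average", also known as UPGMA) and random placeholder data standing in for the real rating profiles.

```python
# Hypothetical sketch: hierarchical cluster analysis of stimulus rating
# profiles using unweighted pair-group average (UPGMA) linkage.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder data: 60 stimuli (e.g., 30 sounds + 30 messages) rated on 18
# bipolar word-pair scales; real data would come from the ratings at 96.
rng = np.random.default_rng(0)
profiles = rng.uniform(1.0, 7.0, size=(60, 18))

# UPGMA linkage over Euclidean distances between rating profiles.
tree = linkage(profiles, method="average", metric="euclidean")

# Cut the tree into, e.g., seven clusters (one per message category).
labels = fcluster(tree, t=7, criterion="maxclust")
```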
Additionally, a principal components factor analysis is also performed on the combined rating data for sounds and messages received at 96. The principal components factor analysis in one embodiment uses the Varimax rotation. It should be noted that the eigenvalues for the four-factor solution in one analysis exceeded the critical value of 1.00, with the solution accounting for 65.46% of the variance in ratings. The table 140, shown in the figures, presents the attribute pairs defining the four factors, which may be summarized as follows (an illustrative sketch of such an analysis appears after this list):
F1: Disturbing . . . Reassuring;
F2: Unusual . . . Typical;
F3: Elegant . . . Unpolished; and
F4: Precise . . . Vague.
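As referenced above, a minimal sketch of the four-factor analysis is given below, assuming the scikit-learn library (version 0.24 or later, which supports rotation="varimax"). Note that scikit-learn's FactorAnalysis is a maximum-likelihood variant used here only as an approximation of the principal components factor analysis described above, and the data are random placeholders.

```python
# Hypothetical sketch: four-factor solution with Varimax rotation over the
# combined sound and message ratings (placeholder data below).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.uniform(1.0, 7.0, size=(60, 18))  # 60 stimuli x 18 word pairs

fa = FactorAnalysis(n_components=4, rotation="varimax")
scores = fa.fit_transform(ratings)   # stimulus scores on F1..F4

# fa.components_ holds the loading of each word-pair scale on the four
# factors; sorting by absolute loading labels the factors (e.g., F1 as
# Disturbing...Reassuring).
loadings = fa.components_.T          # shape: (18 word pairs, 4 factors)
```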
It should be noted that the table 140 shows attribute pairs sorted according to highest load factors. In particular, attributes loading highest on Factor 1 reflect variation in the Disturbing (Tense, Sick, Assertive) quality of sounds and messages. Accordingly, in some embodiments, sounds nearest the Disturbing end of Factor 1 are most discordant whereas sounds nearest the Reassuring end of Factor 1 are most harmonious. Attributes loading highest on Factor 2 reflect variation in the Unusual (Rare, Unexpected, Imaginative) quality of sounds and messages. Sounds nearest the Typical end of Factor 2 are traditional alarms whereas sounds nearest the Unusual end of Factor 2 are most unlike typical alarms. It should be noted that many messages tend to be Typical. Attributes loading highest on Factor 3 reflect variation in the Elegant (Harmonious, Satisfying, Calm) quality of sounds and messages. Accordingly, in some embodiments, sounds nearest the Elegant end of Factor 3 are most resolved (i.e., sound musically complete) whereas sounds nearest the Unpolished end of Factor 3 are most unresolved (i.e., musically incomplete). Attributes loading highest on Factor 4 reflect variation in the Precise (Trustworthy, Urgent, Firm, Distinct, Strong) quality of sounds and messages. Accordingly, in some embodiments, sounds nearest the Precise end of Factor 4 have the hardest “attack”, a musical quality describing the force with which a note is struck, whereas sounds nearest the Vague end of Factor 4 have the softest attack. It should be noted that the attribute of Urgency traditionally associated with alarm quality loads on Factor 4. Additionally, it should be noted that Perceived Urgency is shown to relate to the force with which a sound is presented and is independent of the Disturbing quality reflected in Factor 1 in the illustrated embodiment.
Referring again to the figures, a graph 160 illustrates the sound profiles 166 determined for the message categories, including profiles 166a associated with patient conditions and profiles 166b associated with device information/status.
The profiles 166a represent the four clusters associated with “Patient Conditions”. As can be seen, with one exception, these profiles 166a are characteristically Disturbing, Typical, Unpolished, and Precise. The exception is the “Extreme High Urgency Message”, which is defined as highly Unusual. Also, as the criticality of messages increases, the profiles 166a shift toward being more Disturbing, Unusual, and Precise. The profiles 166a for Low-urgency and High-urgency patient messages correspond to IEC standards. However, there is no IEC sound for “Extreme high-urgency message”, indicating that a more Disturbing (discordant) and Precise (hard attack) sound may be used to accommodate this level of criticality. The sound for “critical alarm turned off” also does not correspond to an IEC standard and is highly Unusual in sound. It should be noted that the capitalized terms correspond to the scale descriptors. In various embodiments, sound properties included with or within one or more standards, for example IEC standards, may be instantiated in other sounds that are not standards.
The profiles 166b represent the three clusters associated with “Device Info/Status”. As can be seen, compared to Patient Conditions, these profiles 166b tend to be more Reassuring, Elegant and Vague. It should be noted that the profile 166b for “Non-critical device info” is another message for which there are no associated sounds. A sound fitting this profile may be highly Reassuring (harmonious), as Typical as the Low-urgency alarm sound, more Elegant (resolved) than current alarms and more Vague (softer attack) than all but the low-urgency alarm. The profile 166b for the cluster Device Info/Status tends to be more Precise (harder attack) than the other two profiles 166b.
Thus, the graph 160 illustrates a conceptual framework for defining medical messages wherein the qualities of sounds map to each of the categories of medical messages, which in the illustrated embodiment is seven messages. The graph 160 shows that various embodiments use conceptual categories (illustrated as terms 168) wherein descriptive qualities describe sounds and different musical qualities can be associated with these terms. It should be noted that different sound qualities may be used as desired or needed or as defined. Accordingly, the sound profiles 166 provide for the sounds to be described in four dimensions, namely four independent and inherently meaningful semantic dimensions. Using the sound profiles 166, sounds may be created for different audible notifications, such as audible alarms or warnings.
In operation or implementation, the audible indications/messages may be selected and implemented on a medical-device-by-medical-device basis. Thus, in one embodiment, a suite of medical devices all installed in the same room will produce a distinct set of sounds that enables the clinician to immediately identify the medical device, the urgency of the alarm, and/or the medical reason the alarm is being generated.
In the various embodiments, a set of candidate audible indications/messages, spanning a range of acoustical/musical properties that may be used for messaging, is implemented for each selected medical device 16. Each sound produced by each medical device 16 may have a different acoustic property that identifies the medical device 16 generating the sound. As discussed above, the acoustic properties may include, for example, timbre, frequency, tonal sequence, or various other sound properties. The sound properties may be selected based on the audible perception of the clinicians who will hear the sounds. For example, an urgent alarm condition may be indicated by generating a sound that has a relatively high frequency, whereas a sound used to indicate a status condition may have a relatively low frequency, etc.
Thus, each audible indication 34 generated by a medical device 16 may be described using a vocabulary of attribute words that describe the semantic qualities of audible indications. Accordingly, each audible indication 34 may be selected to have a specific meaning to the clinician, for example, which medical device is generating the audible indication 34 and what medical condition is indicated by the audible indication 34. Each audible indication/message or sound therefore may be tailored to human perception such that the sound communicates to the clinician what problem has occurred. For example, a high frequency sound may have a first effect on the listener, and a low frequency sound may have a different effect on the listener. Therefore, as discussed above, a high frequency sound may indicate that urgent or immediate action is required, whereas a low frequency sound may indicate that a patient needs to be monitored.
Because each sound has multiple properties, humans may perceive multiple properties simultaneously. Therefore, each sound can communicate at least two pieces of information to the clinician. For example, a first audible indication may have a first frequency and a first tone indicating that an urgent action is required at the heart monitor. Moreover, a second different audible indication may have the first frequency and a second tone indicating that an urgent action is required at the respiratory monitor, etc. Thus, a portion of some of the audible indications may be similar to each other, but also include different characteristics to identify the specific medical device, urgency, condition, etc.
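For illustration, the following sketch encodes two pieces of information in a single sound: urgency in the fundamental frequency and device identity in the timbre (the harmonic mix). All mappings and values are hypothetical assumptions, not specified by the disclosure.

```python
# Hypothetical sketch: one sound carrying two channels of information,
# urgency (frequency) and device identity (timbre via harmonic content).
import numpy as np

URGENCY_HZ = {"status": 440.0, "warning": 660.0, "urgent": 880.0}
DEVICE_HARMONICS = {              # relative amplitudes of harmonics 1..3
    "heart_monitor": (1.0, 0.0, 0.0),   # pure tone
    "ventilator":    (1.0, 0.5, 0.25),  # brighter timbre
}

def encode(device, urgency, duration_s=0.5, sample_rate=44100):
    f0 = URGENCY_HZ[urgency]
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s),
                    endpoint=False)
    signal = sum(a * np.sin(2.0 * np.pi * f0 * (k + 1) * t)
                 for k, a in enumerate(DEVICE_HARMONICS[device]))
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

# Same frequency (same urgency), different timbre (different device):
heart = encode("heart_monitor", "urgent")
vent = encode("ventilator", "urgent")
```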
As described in more detail herein, the audible indications 34 may be defined and/or tested prior to implementation using a sample of potential users to quantify the semantic qualities of each medical message. The semantic qualities of each sound may be measured using measurement scales based upon attribute words, which may include, for example, tone, timbre, frequency, etc. The attribute words describing each sound may then be correlated with one another to produce clusters of words that represent common underlying semantic concepts, for example, urgency. Each medical message, or audible indication 34, is measured with respect to each semantic concept, producing a multi-dimensional profile for each message. Clustering the attribute words also reduces the quantity of words and the quantity of clusters that represent common underlying semantic concepts. Acoustical/musical properties correlated with each concept may then be identified, as may medical messages and sounds that share common semantic profiles. Additionally, the musical/acoustical properties that characterize each semantic concept may be identified and used to create new sounds that communicate similar medical messages.
The sounds defined by the profiles 166 may be used to generate audible messages. For example, a flowchart of a method 170 for generating audible messages in accordance with various embodiments is shown in the figures. In general, the method 170 includes defining an audible signal to include an acoustical property based on a semantic sound profile that corresponds to a medical message for a medical device, and broadcasting the audible signal using the medical device.
The method 170 may further include broadcasting at 176 another signal using a different second medical device to generate a soundscape for a medical environment. In operation, the audible signal enables an operator to identify a medical message, as well as the medical device that broadcast (e.g., emitted) the audible signal. The audible signal may also indicate a movement of a medical device in some embodiments. The audible signal is configured to audibly convey semantic characteristics indicative of the medical device.
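For illustration, the two steps of the method 170 may be sketched as follows, building on the hypothetical profile structure shown earlier. The mapping from scale values to acoustical parameters is an invented example, not the disclosure's method.

```python
# Hypothetical sketch: define an audible signal from a semantic sound profile
# and "broadcast" it by writing a WAV file a device could play. The rules
# mapping profile values to frequency and attack are illustrative only.
import wave
import numpy as np

def define_audible_signal(profile, duration_s=0.6, sample_rate=44100):
    # Invented rules: more discordant -> higher fundamental frequency;
    # harder attack -> shorter fade-in.
    f0 = 440.0 + 440.0 * max(0.0, -profile["discordance_concordance"])
    attack_s = 0.005 + 0.1 * (profile["hard_attack_soft_attack"] + 1.0) / 2.0
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s),
                    endpoint=False)
    envelope = np.minimum(t / attack_s, 1.0)  # linear fade-in
    return envelope * np.sin(2.0 * np.pi * f0 * t)

def broadcast(signal, path="message.wav", sample_rate=44100):
    pcm = (signal * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())

# Example, using the hypothetical profile_for() mapping sketched earlier:
# broadcast(define_audible_signal(profile_for("iec_high_urgency_alarm")))
```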
In the exemplary embodiment, each sound 184 has multiple properties 186 that may be aligned or correlated with different words in the vocabulary. The descriptive words or attributes may be, for example, loud, large, sharp, good, pleasant, etc. The attributes may also be used to describe the messages. Accordingly, various embodiments disclosed herein provide a means to define a common set of attributes that describe the message 182 and the sounds 184 and then use these attributes to relate the message 182 to the sounds 184 in a language that is understood by the user.
Examples of messages may also include, for example, blood pressure is high, CO2 is high, blood pressure is low, etc. The sound properties 186 include, for example, the auditory frequency of the sound, the timbre, whether the sound is pleasing to the operator, whether the sound is elegant, and musical properties, such as whether the note is flat or the tone is melodic. These sound properties 186 enable the user to distinguish between different sounds 184. Thus, the sounds 184 generated relate a message 182 and have an intrinsic meaning to the users of the medical equipment. Accordingly, various embodiments align the intrinsic meaning of the sound 184 with the message 182. For example, the sound may have an intrinsic meaning that there is a problem in the vasculature.
It should be realized that a single medical message 182 may be correlated with one or more sounds 184 using one or more descriptive words because humans can distinguish multiple sound qualities concurrently. For example, medical message 1 may have a descriptive word that is particularly descriptive of message 1 and that is correlated with a property 1 of sound 1. There may be other descriptive words used to describe message 1 that are not associated with the medical connotation but are instead used to describe other aspects, such as the device emitting the sound.
Thus, various embodiments may be used to generate unique sounds that denote medical messages/conditions and devices. Individual medical messages/conditions and individual devices are mapped to specific sounds via common semantic/verbal descriptors. The mapping leverages the complex nature of sounds having multiple perceptual impressions, connoted by words, as well as multiple physical properties. Certain properties of sounds are aligned with specific medical messages/conditions whereas other properties of sounds are aligned with different devices, and may be communicated concurrently, simultaneously or sequentially.
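One illustrative way to represent this mapping, with all descriptor and property names chosen hypothetically for the example, is a pair of lookup tables joined by the shared semantic descriptors:

```python
# Hypothetical sketch: messages map to semantic descriptors, and descriptors
# map to sound properties, so messages and sounds are linked through a
# common vocabulary. All names and values are invented for illustration.
MESSAGE_DESCRIPTORS = {
    "blood_pressure_high": {"urgent", "disturbing"},
    "co2_high":            {"urgent", "precise"},
    "device_self_test_ok": {"reassuring", "typical"},
}

DESCRIPTOR_TO_PROPERTY = {
    "urgent":     {"attack": "hard", "frequency": "high"},
    "disturbing": {"harmony": "discordant"},
    "precise":    {"attack": "hard"},
    "reassuring": {"harmony": "concordant"},
    "typical":    {"pattern": "iec_standard"},
}

def sound_properties_for(message):
    """Union of the sound properties implied by a message's descriptors."""
    properties = {}
    for descriptor in MESSAGE_DESCRIPTORS[message]:
        properties.update(DESCRIPTOR_TO_PROPERTY[descriptor])
    return properties

# e.g. sound_properties_for("blood_pressure_high")
# -> {"attack": "hard", "frequency": "high", "harmony": "discordant"}
```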
Various embodiments may define sounds that relate a particular medical message to a user. Specifically, descriptive words are used to relate or link medical messages to sounds. Various embodiments also may provide a set or list of sounds that relate the medical message to a sound. Additionally, various embodiments enable a medical device user to differentiate alarm/warning sounds on the basis of acoustical/musical properties of the sounds. Thus, the sounds convey specific semantic characteristics, as well as communicate patient and system status and position through auditory means.
At least one technical effect of various embodiments is increased effectiveness or efficiency with which a user responds to audible indications.
It should be noted that the various embodiments, for example, the modules described herein, may be implemented in hardware, software or a combination thereof. The various embodiments and/or components, for example, the modules, or components and controllers therein, also may be implemented as part of one or more computers or processors. The computer or processor may include a computing device, an input device, a display unit and an interface, for example, for accessing the Internet. The computer or processor may include a microprocessor. The microprocessor may be connected to a communication bus. The computer or processor may also include a memory. The memory may include Random Access Memory (RAM) and Read Only Memory (ROM). The computer or processor further may include a storage device, which may be a hard disk drive or a removable storage drive, optical disk drive, solid state disk drive (e.g., flash drive or flash RAM) and the like. The storage device may also be other similar means for loading computer programs or other instructions into the computer or processor.
As used herein, the term “computer” or “module” may include any processor-based or microprocessor-based system including systems using microcontrollers, reduced instruction set computers (RISC), application specific integrated circuits (ASICs), logic circuits, and any other circuit or processor capable of executing the functions described herein. The above examples are exemplary only, and are thus not intended to limit in any way the definition and/or meaning of the term “computer”.
The computer or processor executes a set of instructions that are stored in one or more storage elements, in order to process input data. The storage elements may also store data or other information as desired or needed. The storage element may be in the form of an information source or a physical memory element within a processing machine.
The set of instructions may include various commands that instruct the computer or processor as a processing machine to perform specific operations such as the methods and processes of the various embodiments. The set of instructions may be in the form of a software program. The software may be in various forms such as system software or application software. Further, the software may be in the form of a collection of separate programs, a program module within a larger program or a portion of a program module or a non-transitory computer readable medium. The software also may include modular programming in the form of object-oriented programming. The processing of input data by the processing machine may be in response to user commands, or in response to results of previous processing, or in response to a request made by another processing machine.
As used herein, the terms “software” and “firmware” are interchangeable, and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory. The above memory types are exemplary only, and are thus not limiting as to the types of memory usable for storage of a computer program.
It is to be understood that the above description is intended to be illustrative, and not restrictive. For example, the above-described embodiments (and/or aspects thereof) may be used in combination with each other. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. While the dimensions and types of materials described herein are intended to define the parameters of the invention, they are by no means limiting and are exemplary embodiments. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of the invention should, therefore, be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled. In the appended claims, the terms “including” and “in which” are used as the plain-English equivalents of the respective terms “comprising” and “wherein.” Moreover, in the following claims, the terms “first,” “second,” and “third,” etc. are used merely as labels, and are not intended to impose numerical requirements on their objects. Further, the limitations of the following claims are not written in means-plus-function format and are not intended to be interpreted based on 35 U.S.C. §112, sixth paragraph, unless and until such claim limitations expressly use the phrase “means for” followed by a statement of function void of further structure.
This written description uses examples to disclose the various embodiments, including the best mode, and also to enable any person skilled in the art to practice the various embodiments, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the various embodiments is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims if the examples have structural elements that do not differ from the literal language of the claims, or if the examples include equivalent structural elements with insubstantial differences from the literal languages of the claims.
This application claims priority to and the benefit of the filing date of U.S. Provisional Application No. 61/505,395, filed Jul. 7, 2011, the subject matter of which is hereby incorporated by reference in its entirety.