System and method for integrating voice with a medical device

Information

  • Patent Grant
  • Patent Number
    8,160,683
  • Date Filed
    Thursday, December 30, 2010
  • Date Issued
    Tuesday, April 17, 2012
Abstract
There is provided a system and method for integrating voice with a medical device. More specifically, in one embodiment, there is provided a medical device comprising a speech recognition system configured to receive a processed voice, compare the processed voice to a speech database, identify a command for the medical device corresponding to the processed voice based on the comparison, and execute the identified medical device command.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The present invention relates generally to medical devices and, more particularly, to integrating voice controls and/or voice alerts into the medical device.


2. Description of the Related Art


This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present invention, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.


In the field of medicine, doctors often desire to monitor certain physiological characteristics of their patients. Accordingly, a wide variety of devices have been developed for monitoring physiological characteristics. Such devices provide caregivers, such as doctors, nurses, and/or other healthcare personnel, with the information they need to provide the best possible healthcare for their patients. As a result, such monitoring devices have become an indispensable part of modern medicine.


For example, one technique for monitoring certain physiological characteristics of a patient is commonly referred to as pulse oximetry, and the devices built based upon pulse oximetry techniques are commonly referred to as pulse oximeters. Pulse oximetry may be used to measure various blood flow characteristics, such as the blood-oxygen saturation of hemoglobin in arterial blood, the volume of individual blood pulsations supplying the tissue, and/or the rate of blood pulsations corresponding to each heartbeat of a patient.


Pulse oximeters and other medical devices are typically mounted on stands that are positioned around a patient's bed or around an operating room table. When a caregiver desires to command the medical device (e.g., program, configure, and so-forth) they manipulate controls or push buttons on the monitoring device itself. The medical device typically provides results or responses to commands on a liquid crystal display (“LCD”) screen mounted in an externally visible position within the monitoring device.


This conventional configuration, however, has several disadvantages. First, as described above, this conventional configuration relies upon physical contact with the monitoring device to input commands (e.g., pushing a button, turning a knob, and the like). Such physical contact, however, raises several concerns. Among these concerns are that in making contact with the medical device, the caregiver may spread illness or disease from room to room. More specifically, a caregiver may accidentally deposit germs (e.g., bacteria, viruses, and so forth) on the medical device while manipulating the device's controls. These germs may then be spread to the patient when a subsequent caregiver touches the medical device and then touches the patient. Moreover, if the medical device is moved from one patient room to another, germs transferred to the medical device via touch may be carried from one patient room to another. Even in operating rooms where medical devices are typically static, germs may be transferred onto a medical device during one surgery and subsequently transferred off the medical device during a later performed surgery.


Second, beyond contamination, monitoring devices that rely on physical contact for command input may clutter the caregiver's workspace. For example, because the medical device must be within an arm's length of the caregiver, the medical device may crowd the caregiver—potentially even restricting free movement of the caregiver. In addition, caregivers may have difficulty manipulating controls with gloved hands. For example, it may be difficult to grasp a knob or press a small button due to the added encumbrance of a latex glove.


Third, current trends in general medical device design focus on miniaturizing overall medical device size. However, as controls which rely on physical contact must be large enough for most, if not all, caregivers to manipulate with their hands, medical devices that employ these types of controls are limited in their possible miniaturization. For example, even if it were possible to produce a conventional oximeter that was the size of a postage stamp, it would be difficult to control this theoretical postage stamp-sized pulse oximeter with currently available techniques.


In addition, conventional techniques for outputting medical data also have several potential drawbacks. For example, as described above, conventional techniques for displaying outputs rely on LCD screens mounted on the medical device itself. Besides constantly consuming power, these LCD screens must be large enough to be visually accessed by a doctor or nurse. As such, the conventional LCD screens employed in typical medical devices also may be a barrier towards miniaturization of the medical device. Further, conventional screen-based output techniques may be impersonal to the patient and may lack configurability by the caregiver.


For at least the reasons set forth above, an improved system or method for interacting with a medical monitoring device would be desirable.





BRIEF DESCRIPTION OF THE DRAWINGS

Advantages of the invention may become apparent upon reading the following detailed description and upon reference to the drawings in which:



FIG. 1 is a diagrammatical representation of a pulse oximeter featuring an integral microphone in accordance with one embodiment of the present invention;



FIG. 2 is a diagrammatical representation of a pulse oximeter featuring an external microphone in accordance with one embodiment of the present invention;



FIG. 3 is a block diagram of a medical device configured for voice control in accordance with one embodiment of the present invention;



FIG. 4 is a flow chart illustrating an exemplary technique for processing a voice command in accordance with one embodiment of the present invention;



FIG. 5A illustrates an exemplary operating room employing a medical device configured for voice control in accordance with one embodiment of the present invention;



FIG. 5B illustrates an enlarged view of a caregiver employing a medical device configured for voice control in accordance with one embodiment of the present invention;



FIG. 6 is a flow chart illustrating an exemplary technique for setting up a patient record in a medical device in accordance with one embodiment of the present invention;



FIG. 7 is a flow chart illustrating an exemplary technique for training a voice system in a medical device in accordance with one embodiment of the present invention;



FIG. 8 is a block diagram of a medical device configured to broadcast voice alerts in accordance with one embodiment of the present invention;



FIG. 9 is a flow chart illustrating an exemplary technique for setting up a voice alert in accordance with one embodiment of the present invention; and



FIG. 10 is a block diagram illustrating an exemplary technique for broadcasting a voice alert in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

One or more specific embodiments of the present invention will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.


Turning initially to FIG. 1, an exemplary pulse oximeter featuring an integral microphone in accordance with one embodiment is illustrated and generally designated by the reference numeral 10. The pulse oximeter 10 may include a main unit 12 that houses hardware and/or software configured to calculate various physiological parameters. As illustrated, the main unit 12 may include a display 14 for displaying the calculated physiological parameters, such as oxygen saturation or pulse rate, to a caregiver or patient. In alternate embodiments, as described in further detail below, the display 14 may be omitted from the main unit 12.


The pulse oximeter 10 may also include a sensor 16 that may be connected to a body part (e.g., finger, forehead, toe, or earlobe) of a patient or a user. The sensor 16 may be configured to emit signals or waves into the patient's or user's tissue and detect these signals or waves after dispersion and/or reflection by the tissue. For example, the sensor 16 may be configured to emit light from two or more light emitting diodes (“LEDs”) into pulsatile tissue (e.g., finger, forehead, toe, or earlobe) and then detect the transmitted light with a light detector (e.g., a photodiode or photo-detector) after the light has passed through the pulsatile tissue.


As those of ordinary skill in the art will appreciate, the amount of transmitted light that passes through the tissue generally varies in accordance with a changing amount of blood constituent in the tissue and the related light absorption. On a beat-by-beat basis, the heart pumps an incremental amount of arterial blood into the pulsatile tissue, which then drains back through the venous system. The amount of light that passes through the blood-perfused tissue varies with the cardiac-induced cycling arterial blood volume. For example, when the cardiac cycle causes more light-absorbing blood to be present in the tissue, less light travels through the tissue to strike the sensor's photo-detector. These pulsatile signals allow the pulse oximeter 10 to measure signal attenuation caused by the tissue's arterial blood, because light absorption from other tissues remains generally unchanged in the relevant time span.


In alternate embodiments, the sensor 16 may take other suitable forms besides the form illustrated in FIG. 1. For example, the sensor 16 may be configured to be clipped onto a finger or earlobe or may be configured to be secured with tape or another static mounting technique. The sensor 16 may be connected to the main unit 12 via a cable 18 and a connector 20.


The pulse oximeter 10 may also include an integral microphone 22. As will be described further below, the integral microphone 22 may be configured to receive voice commands from a caregiver or user that can be processed into commands for the pulse oximeter 10. Although FIG. 1 illustrates the integral microphone 22 as being located on a front façade of the main unit 12, it will be appreciated that in alternate embodiments, the integral microphone 22 may be located at another suitable location on or within the main unit 12.


The pulse oximeter 10 may also include a speaker 23. As will be described further below, the speaker 23 may be configured to broadcast voice alerts or other suitable types of alerts to a caregiver or user. Although FIG. 1 illustrates the speaker 23 as being located on a side façade of the main unit 12, it will be appreciated that in alternate embodiments, the speaker 23 may be located at another suitable location on or within the main unit 12.


Turning next to FIG. 2, another embodiment of the exemplary pulse oximeter 10 featuring an external microphone and speaker in accordance with one embodiment is illustrated. For simplicity, like reference numerals have been used to designate those features previously described in regard to FIG. 1. As illustrated, the pulse oximeter 10 of FIG. 2 also includes the main unit 12, the display 14, the sensor 16, the cable 18, and the connector 20. However, in place of or in addition to the integral microphone 22, the pulse oximeter 10 illustrated in FIG. 2 may also include an audio connector 24 suitable for coupling a headset 26 to the main unit 12.


As illustrated in FIG. 2, the headset 26 may include one or more speakers 28 and an external microphone 30. As will be described further below, the one or more external speakers 28 may be employed by the pulse oximeter 10 to broadcast voice alerts or other suitable alerts to a caregiver or user. In addition, the external microphone 30 may be employed to receive voice commands for the pulse oximeter 10, as described further below.



FIG. 3 is a block diagram of an exemplary medical device 40 configured for voice control in accordance with one embodiment. For simplicity, like reference numerals have been used to designate those features previously described with regard to FIGS. 1 and 2. In one embodiment, the pulse oximeter 10 set forth in FIGS. 1 and/or 2 may comprise the medical device 40. As illustrated in FIG. 3, the medical device 40 may include a plurality of modules (blocks 41-52). These modules may be hardware, software, or some combination of hardware and software. Additionally, it will be appreciated that the modules shown in FIG. 3 are merely one exemplary embodiment and other embodiments can be envisaged wherein the module functions are split up differently or wherein some modules are not included or other modules are included.


As illustrated in FIG. 3, the medical device 40 may include a voice receiver 41. The voice receiver 41 may include any suitable form of microphone or voice recording device, such as the integral microphone 22 (as illustrated in FIG. 1) or the external microphone 30 (as illustrated in FIG. 2). As those of ordinary skill in the art will appreciate, the voice receiver 41 may be configured to receive a voice (i.e., an acoustic wave) and to convert the voice into an electronic analog waveform.


The voice receiver 41 may be configured to transmit the analog waveform to a voice sampling system 42. The voice sampling system 42 may be configured to sample the analog waveform to create digital voice data. For example, in one embodiment, the voice sampling system 42 may be configured to sample the electronic analog waveform 16,000 times per second to create a digital waveform of pulse amplitudes. In alternate embodiments, other suitable sampling techniques may be employed.
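The sampling stage can be illustrated with a short sketch (Python is used here purely for illustration; the function names and the modeling of the analog waveform as a callable are hypothetical, not part of the disclosed embodiment):

```python
import math

SAMPLE_RATE_HZ = 16_000  # sampling rate described in the embodiment above

def sample_waveform(analog_signal, duration_s, rate_hz=SAMPLE_RATE_HZ):
    """Sample a continuous-time signal (modeled here as a Python callable)
    at a fixed rate, producing a digital waveform of pulse amplitudes."""
    n = round(duration_s * rate_hz)               # total number of samples
    return [analog_signal(i / rate_hz) for i in range(n)]

# Example: a 440 Hz tone sampled for 10 ms yields 160 amplitude values.
digital = sample_waveform(lambda t: math.sin(2 * math.pi * 440.0 * t), 0.010)
```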


The voice processing system 44 may be configured to receive the digital waveform from the voice sampling system 42 and to convert the digital waveform into frequencies that can be recognized by a speech recognition system 46. In one embodiment, the voice processing system 44 may be configured to perform a fast Fourier transform on the incoming digital waveform to generate a plurality of frequencies. The voice processing system 44 may then transmit the plurality of frequencies to the speech recognition system 46.
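The frequency-conversion step can be sketched as follows. For clarity, a naive discrete Fourier transform is shown in place of the fast Fourier transform named above, and all identifiers are illustrative:

```python
import cmath
import math

SAMPLE_RATE_HZ = 16_000

def waveform_to_frequencies(samples, rate_hz=SAMPLE_RATE_HZ):
    """Convert a digital waveform into (frequency, magnitude) pairs for the
    non-negative frequency bins. A naive O(n^2) DFT is used for clarity;
    a production system would use a fast Fourier transform."""
    n = len(samples)
    spectrum = []
    for k in range(n // 2 + 1):
        coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))
        spectrum.append((k * rate_hz / n, abs(coeff)))
    return spectrum

# Example: an 800 Hz tone sampled for 10 ms at 16 kHz; the bin with the
# largest magnitude recovers the tone's frequency.
samples = [math.sin(2 * math.pi * 800.0 * i / SAMPLE_RATE_HZ)
           for i in range(160)]
spectrum = waveform_to_frequencies(samples)
dominant_hz = max(spectrum, key=lambda pair: pair[1])[0]
```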


The speech recognition system 46 may be pre-populated or programmed with a plurality of frequency combinations that are associated with commands for the medical device 40. For example, frequency combinations associated with the voice command “turn off alarm” may be associated with a command for the medical device 40 to silence an alarm. As mentioned above, in one embodiment, the particular frequency combinations may be pre-programmed or pre-configured. However, in alternate embodiments, the frequency combinations may be programmed into the speech database via a voice training system 48, which will be described in greater detail below.


In addition, the speech recognition system 46 may also be coupled to a medical language model 50. The medical language model 50 may be programmed with a plurality of command combinations that are prevalently used in controlling the medical device 40. For example, if the medical device 40 were an oximeter, such as the pulse oximeter 10, the medical language model 50 may store command combinations such as “turn oximeter off,” “turn alarm off,” “adjust volume,” “pause alarms,” and so-forth. In this way, the medical language model 50 may assist the speech recognition system 46 in determining the medical command associated with a particular voice command.


More specifically, in one embodiment, the medical language model 50 may assist the speech recognition system 46 in determining the proper medical command when the speech recognition system 46 is able to recognize some portion but not all of a voice command. For example, if the speech recognition system 46 is able to recognize the first and third words of the medical command “turn off alarms,” but is unable to recognize the second word, the speech recognition system 46 may search the medical language model 50 for command combinations matching the recognized terms (i.e., “turn” and “alarms”). Because the medical language model 50 may be programmed with only those commands relevant to the operation of the medical device 40, the medical language model 50 enables the successful recognition of medical commands that would otherwise be unrecognizable by conventional, generic voice recognition systems. The medical language model 50 may be preprogrammed, may be programmed through the voice training system 48, or may be programmed via an external computer (not shown).


Upon recognizing a voice command as a command for the medical device 40, the speech recognition system 46 may be configured to transmit the command to a medical device control system 52. As will be appreciated by those with ordinary skill in the art, the medical device control system 52 may be configured to control the medical device. For example, if the medical device 40 were the pulse oximeter 10, the control system 52 would be configured to control the main unit 12 as well as the sensor 16 to produce physiological monitoring results and/or alarms, which may be transmitted to the display 14 or the speaker 23.


Turning next to FIG. 4, a flow chart illustrating an exemplary technique for processing a voice command in accordance with one embodiment is illustrated and generally designated by a reference numeral 60. In one embodiment, the technique 60 may be employed by the medical device 40 (as illustrated in FIG. 3) or the pulse oximeter 10 (as illustrated in FIGS. 1 and 2). It will be appreciated, however, that the technique 60 may also be employed by any other suitable type of medical device including, but not limited to, other forms of monitors, respirators, or scanners.


As illustrated by block 62 of FIG. 4, the technique 60 may begin by receiving a voice (i.e., a portion of spoken audio). For example, in one embodiment, the pulse oximeter 10 may receive the voice via the integral microphone 22 or the external microphone 30. After receiving the voice, the technique 60 may include processing the received voice, as indicated in block 64. In one embodiment, processing the received voice may include converting the received voice into one or more frequencies that can be recognized by a speech recognition system, such as the speech recognition system 46 illustrated in FIG. 3.


The technique 60 may also include comparing the processed voice with a speech database and/or a medical language model, as indicated by blocks 66 and 68, and as described above with regard to FIG. 3. For example, in one embodiment, blocks 66 and 68 may include comparing the processed voice to a speech database within the speech recognition system 46 and/or the medical language model 50.


After performing one or more of these comparisons, the technique 60 may involve identifying a medical device command associated with the processed voice based upon the one or more of the comparisons, as indicated by block 70. For example, if comparisons to the speech database and/or the medical language model indicate that the processed voice is a command to “turn off alarms,” then technique 60 may involve identifying the medical device command as a command to turn off the medical device's alarms.


Next, after identifying the medical device command, the technique 60 may include prompting a user (e.g., the caregiver) to confirm the command was correctly identified, as indicated by block 72. For example, in one embodiment, the pulse oximeter 10 may display the identified command on the display 14 and prompt the user to confirm the correctness of the identified command. If the user does not confirm the command (block 72), the technique 60 may cycle back to block 62 (see above) and re-prompt the user for the voice command. If, however, the user confirms the command, the technique 60 may execute the command, as indicated by block 74. For example, in one embodiment, the user may confirm the command by speaking the word “yes” or the word “execute” in response to the displayed command.
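The receive, identify, confirm, and execute cycle of technique 60 can be sketched as a loop over pluggable callables (every stub below is hypothetical and stands in for the hardware and recognition stages described above):

```python
def process_voice_command(receive_voice, identify_command, confirm, execute):
    """Loop of technique 60: receive a voice, identify the corresponding
    medical device command, ask for confirmation, and either execute the
    command or cycle back and listen again."""
    while True:
        voice = receive_voice()
        command = identify_command(voice)
        if command is not None and confirm(command):
            execute(command)
            return command

# Example with stub callables: the second utterance is confirmed and executed.
utterances = iter(["<unrecognized audio>", "turn off alarms"])
executed = []
result = process_voice_command(
    receive_voice=lambda: next(utterances),
    identify_command=lambda v: v if v == "turn off alarms" else None,
    confirm=lambda c: True,
    execute=executed.append,
)
```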


As described above, the pulse oximeter 10 and/or the medical device 40 may be employed in a variety of suitable medical procedures and/or environments. For example, FIG. 5A illustrates an exemplary operating room setting 80 employing the pulse oximeter 10 in accordance with one embodiment. As illustrated in FIG. 5A, the operating room 80 may include a first caregiver 82a, a second caregiver 82b, and a patient 84. In addition, the operating room 80 may also include an operating table 86 and the pulse oximeter 10.


As illustrated, the caregiver 82b may employ and/or interact with the pulse oximeter 10 by wearing the headset 26. As highlighted in FIG. 5B, which illustrates an enlarged view of the caregiver 82b, the caregiver 82b may place the speaker 28 over his or her ear and place the external microphone 30 over his or her mouth. In this way, the caregiver 82b may receive alerts and issue commands from and to the main unit 12 via the headset 26. Advantageously, this functionality enables the main unit 12 to be placed at a remote location in the operating room 80 such that the main unit 12 does not crowd the medical procedure taking place in the operating room 80. However, those with ordinary skill in the art will appreciate that the embodiment set forth in FIGS. 5A and 5B is merely exemplary, and, as such, not intended to be exclusive. Accordingly, in alternate embodiments, the pulse oximeter 10 and/or the medical device 40 may be employed in any one of a number of suitable medical environments.


As described above, the pulse oximeter 10 and/or the medical device 40 may be configured to receive voice commands. Additionally, however, the pulse oximeter 10 and/or the medical device 40 may also be configured to enable entry of patient information by voice. For example, FIG. 6 is a flow chart illustrating an exemplary technique 90 for setting up a patient record in a medical device in accordance with one embodiment. In one embodiment, the technique 90 may be executed by the pulse oximeter 10 and/or the medical device 40.


As indicated by block 92 of FIG. 6, the technique 90 may begin by entering a new patient setup mode. Next, the technique 90 may involve prompting a user for new patient information, as indicated by block 94. In one embodiment, prompting the user for new patient information may include displaying a message to the user on the display 14 (see FIGS. 1-3). Alternatively, prompting the user may involve an audio or voice prompt, as described further below, or another suitable form of user notification.


Next, the technique 90 may include receiving audio corresponding to the new patient information, as indicated by block 96. In one embodiment, audio corresponding to the new patient information may be received over the integral microphone 22 and/or the external microphone 30. For example, the external microphone 30 may receive patient information, such as patient name, age, and so-forth from the caregiver 82b wearing the headset 26. After receiving the audio corresponding to the new patient information, the technique 90 may involve determining the new patient information from the received audio, as indicated by block 98. In one embodiment, determining the new patient information may include processing the received audio and comparing the received audio to a speech database and/or medical language model, as described above with regard to FIGS. 3 and 4.


After determining the new patient information from the received audio, the technique 90 may include prompting a user (e.g., the caregiver 82b) to confirm the new patient information was correctly determined, as indicated by block 100. For example, in one embodiment, the pulse oximeter 10 may display the determined patient information on the display 14 and prompt the user to confirm the correctness of the determined patient information with a voice command (e.g., “correct,” “yes,” and so-forth). If the user does not confirm the new patient information (block 102), the technique 90 may cycle back to block 94 (see above) and re-prompt the user for the new patient information.


Alternatively, if the user does confirm the determined new patient information, the technique 90 may include storing the new patient information, as indicated by block 104. For example, in one embodiment, storing the new patient information may include storing the patient's name, age, and so-forth in a memory located within the pulse oximeter 10 and/or the medical device 40.


As described above, one or more embodiments described herein are directed towards a medical device configured to receive voice commands. Accordingly, FIG. 7 illustrates a technique 110 that may be employed to train a voice system in a medical device in accordance with one embodiment. In one embodiment, the technique 110 may be employed by the pulse oximeter 10 and/or the medical device 40. More specifically, in one embodiment, the technique 110 may be executed by the voice training system 48 of FIG. 3. However, it will be appreciated that in alternate embodiments, other suitable medical devices may employ the technique 110.


As illustrated by block 112 of FIG. 7, the technique 110 may begin by entering a training mode. In one embodiment, the medical device 40 may be configured to enter a training mode in response to a depressed button or a sequence of depressed buttons on the medical device 40. Alternatively, in other embodiments, the pulse oximeter 10 and/or the medical device 40 may be configured to enter the training mode in response to a voice command and/or other suitable form of command or instruction.


After entering the training mode, the technique 110 may include prompting a user with a medical device training routine, as indicated by block 114. The medical device training routine may involve displaying one or more medical device specific words, phrases, or commands on the display 14. For example, the pulse oximeter 10 may be configured to display commands such as “turn off alarms,” “turn down volume,” “show pleth,” or any other suitable voice command or instruction.


After prompting the user, as described above, the technique 110 may include recording a response to the training routine, as indicated by block 116. For example, the pulse oximeter 10 and/or the medical device 40 may be configured to record the response to the training routine via the external microphone 30. After recording the response to the training routine, the technique 110 may include storing the response in a speech database, such as the speech database within the speech recognition system 46, as indicated by block 118. After storing the response in the speech database, the technique 110 may cycle back to block 114 and repeat the training routine with additional words, phrases, or commands. In one embodiment, the medical device 40 may be configured to cycle through blocks 114, 116, and 118 for each of a predefined group of words and instructions stored within the voice training system 48.
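The training cycle of blocks 114, 116, and 118 can be sketched as an iteration over a predefined prompt list (the recorder stub and all names are illustrative, not the disclosed implementation):

```python
def run_training_routine(prompts, record_response):
    """Training mode of technique 110: prompt the user with each
    device-specific phrase, record the spoken response, and store it in
    a speech database keyed by the prompted phrase."""
    speech_database = {}
    for phrase in prompts:
        speech_database[phrase] = record_response(phrase)
    return speech_database

# Example with a stub recorder standing in for the external microphone 30.
TRAINING_PHRASES = ["turn off alarms", "turn down volume", "show pleth"]
speech_db = run_training_routine(
    TRAINING_PHRASES,
    record_response=lambda phrase: "<recording of '%s'>" % phrase,
)
```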


Turning next to another embodiment, FIG. 8 is a block diagram of a medical device 130 configured to broadcast voice alerts in accordance with one embodiment. As those of ordinary skill in the art will appreciate, conventional medical devices are configured to use buzzes and/or beeps to indicate medical alerts or alarms (hereafter referred to collectively as “alerts”). In addition to disturbing patients and medical practitioners (and possibly breaking a medical professional's concentration), these buzzes and beeps typically provide no other useful information to a listener other than indicating the presence of an alert condition. Advantageously, the medical device 130 illustrated in FIG. 8 is configured to produce custom voice alerts that can advantageously provide detailed information about the alert conditions while at the same time being less jarring and/or abrasive than traditional medical device alerts.


The medical device 130 may include a voice receiver 132, such as the microphone 22 or the microphone 30 (FIGS. 1-2). As will be appreciated, the voice receiver 132 may be configured to receive audio patterns that may be employed to create voice alerts. The medical device 130 may also include a voice recording system 134 that may be configured to receive audio from the voice receiver 132 and to record the received audio.


The voice recording system 134 may be coupled to a medical device control system 136 that may be configured to receive the recorded audio and to store or play it, when appropriate, to produce voice alerts. For example, the medical device control system 136 may be configured to play an appropriate voice alert over a speaker 140. In addition, the medical device control system 136 may be coupled to a display 142. As will be appreciated, the display 142 may be configured to display instructions to a user during setup of the voice alerts as well as for other suitable user notifications.


Further, the medical device control system 136 may also be coupled to a storage medium 144. In one embodiment, the storage medium 144 is configured to store the recorded audio in an indexed format, such as a look-up table, link list, and so-forth, such that a portion of recorded audio may be associated with one or more alert conditions. As such, in this embodiment, the medical device control system 136, upon detecting an alert condition, may access the stored portion of recorded audio corresponding to the alert condition and then broadcast the portion of audio over the speaker 140.
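The indexed lookup described above can be sketched with a dictionary standing in for the storage medium 144 (the function name and the stub audio strings are illustrative):

```python
def broadcast_alert(condition, alert_audio_index, play):
    """On detecting an alert condition, look up the recorded voice alert
    associated with it and play it; return False when no custom voice
    alert has been recorded for the condition."""
    audio = alert_audio_index.get(condition)
    if audio is None:
        return False
    play(audio)
    return True

# Example: a dictionary stands in for the indexed storage medium 144,
# and a list append stands in for playing audio over the speaker 140.
played = []
alert_index = {
    "signal loss": "<recorded: 'sensor signal lost'>",
    "power loss": "<recorded: 'device running on battery'>",
}
broadcast_alert("signal loss", alert_index, played.append)
```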


As illustrated, the medical device 130 may also include a network interface 146. The network interface 146 may be configured to enable the medical device control system 136 to communicate with other computers or computerized devices over a network. In this capacity, the network interface 146 may allow the medical device control system 136 to download and/or upload portions of audio for use as voice alerts.


As described above, one or more of the embodiments set forth herein may be directed towards a medical device configured to produce voice alerts. Accordingly, FIG. 9 is a flow chart illustrating an exemplary technique 150 for setting up a voice alert in accordance with one embodiment. As such, in one embodiment, the technique 150 may be executed by the medical device 130.


As illustrated by block 152 of FIG. 9, the technique 150 may begin by entering a voice alert setup mode. In various embodiments, entering a voice alert setup mode may be triggered by a voice command to the medical device 130, by physically manipulating one or more buttons on the medical device 130, or by another suitable technique. After entering the voice alert setup mode, the technique 150 may include prompting a user with a name of an alert condition. In one embodiment, the medical device 130 may prompt a user with a name of the alert condition by displaying the name of the alert condition on the display 142.


Next, the technique 150 may include recording a voice alert corresponding to the prompted alert condition. More specifically, in response to the prompt on the display 142, a user would speak the voice alert, which would subsequently be recorded as part of the technique 150. After recording the voice alert, the technique 150 may include storing the voice alert (block 158) and associating the stored voice alert with the alert condition (block 160). For example, in one embodiment, the voice alert may be stored in the storage medium 144 and the medical device control system 136 may be configured to associate the stored voice alert with one or more of its alert conditions.
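The setup flow of technique 150 can be sketched as follows. The `prompt` and `record` callables are hypothetical stand-ins for the display and recording hardware; only the prompt-record-store-associate sequence reflects the description above.

```python
def setup_voice_alerts(conditions, prompt, record, store):
    """For each alert condition: prompt the user with its name, record the
    spoken alert, then store it keyed by the condition (the association)."""
    for condition in conditions:
        prompt(condition)            # e.g. show the name on display 142
        audio = record()             # capture the user's spoken alert
        store[condition] = audio     # store and associate (blocks 158, 160)
    return store


alerts = setup_voice_alerts(
    ["signal_loss", "low_battery"],
    prompt=lambda name: None,          # stand-in for the display prompt
    record=lambda: b"<spoken alert>",  # stand-in for the recorder
    store={},
)
```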


As described above, medical device 130 may be configured to broadcast voice alerts. Accordingly, FIG. 10 is a flow chart illustrating an exemplary technique 170 for broadcasting a voice alert in accordance with one embodiment. As shown, the technique 170 may begin by identifying an alert condition in the medical device 130. For example, in one embodiment, the medical device control system 136 may be configured to identify an alert condition, such as signal or power loss, as indicated by block 172.


Upon identifying the alert condition, the technique 170 may include locating a voice alert associated with the alert condition. For example, in one embodiment, the medical device control system 136 may locate a voice alert stored in the storage medium 144 that is associated with the alert condition. Lastly, the technique 170 may include broadcasting the voice alert, as indicated by block 176. For example, in one embodiment, the medical device control system 136 may be configured to broadcast the voice alert over the speaker 140.
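The broadcast flow of technique 170 can be sketched in the same style. `play` is a hypothetical stand-in for output over the speaker 140; the identify-locate-broadcast sequence follows the description above.

```python
def broadcast_voice_alert(condition, store, play):
    """Locate the voice alert associated with an identified alert condition
    and broadcast it (block 176); report whether a clip was found."""
    clip = store.get(condition)
    if clip is None:
        return False                 # no recorded alert for this condition
    play(clip)                       # stand-in for the speaker 140 output
    return True


played = []
ok = broadcast_voice_alert(
    "signal_loss",
    {"signal_loss": b"<spoken alert>"},
    play=played.append,
)
```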


While the invention may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the invention is not intended to be limited to the particular forms disclosed. Rather, the invention is intended to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the following appended claims. Indeed, the present techniques may not only be applied to pulse oximeters, but also to other suitable medical devices. For example, the embodiments set forth herein may also be employed in respirators, ventilators, EEGs, medical cutting devices, and so forth.

Claims
  • 1. A pulse oximeter comprising: a speech recognition system configured to: receive a processed voice of a person proximate to the speech recognition system; compare the processed voice to a speech database; identify a command for the pulse oximeter corresponding to the processed voice based on the comparison; and execute the identified pulse oximeter command without the person physically touching the pulse oximeter.
  • 2. The pulse oximeter, as set forth in claim 1, comprising a tangible machine readable medium comprising a medical language model, wherein the speech recognition system is configured to identify the command based on the medical language model.
  • 3. The pulse oximeter, as set forth in claim 2, wherein the medical language model comprises a plurality of commands for the pulse oximeter.
  • 4. The pulse oximeter, as set forth in claim 1, comprising a voice training system configured to populate the speech database.
  • 5. The pulse oximeter, as set forth in claim 1, comprising a voice processing system configured to process a received voice of the person proximate to the speech recognition system to create the processed voice.
  • 6. The pulse oximeter, as set forth in claim 5, comprising a headset, wherein the voice processing system is configured to receive the received voice from the headset, wherein the headset is proximate to the pulse oximeter.
  • 7. The pulse oximeter, as set forth in claim 5, comprising an integral microphone, wherein the voice processing system is configured to receive the voice of the person proximate to the speech recognition system from the integral microphone.
  • 8. A method comprising: receiving a processed voice of a person proximate to a pulse oximeter; comparing the processed voice to a speech database disposed in the pulse oximeter; identifying a command for the pulse oximeter corresponding to the processed voice based on the comparison; and executing the identified pulse oximeter command without the person physically touching the pulse oximeter.
  • 9. The method, as set forth in claim 8, comprising comparing the received processed voice to an oximeter language model.
  • 10. A pulse oximeter comprising: a control system configured to: identify an alert condition for the pulse oximeter; locate a voice alert corresponding to the alert condition; and broadcast the voice alert over a speaker to a person proximate to the pulse oximeter.
  • 11. The pulse oximeter, as set forth in claim 10, comprising a voice recording system configured to record the voice alert.
  • 12. The pulse oximeter, as set forth in claim 11, comprising a storage medium, wherein the control system is configured to store the recorded voice alert on the storage medium.
  • 13. The pulse oximeter, as set forth in claim 10, comprising a network interface, wherein the control system is configured to download the voice alert over the network interface.
  • 14. A method of broadcasting voice alerts from a pulse oximeter, the method comprising: prompting a user proximate to the pulse oximeter with a name of a pulse oximeter alert condition; recording a voice alert of the user in the pulse oximeter; associating the recorded voice alert with the pulse oximeter alert condition; and broadcasting the recorded voice alert to a person proximate to the pulse oximeter when the alert condition is detected.
  • 15. The method, as set forth in claim 14, comprising storing the voice alert in a memory disposed in the pulse oximeter.
  • 16. A method for broadcasting a voice alert from a pulse oximeter, the method comprising: identifying an alert condition for the pulse oximeter; locating a voice alert corresponding to the alert condition; and broadcasting the voice alert over a speaker integral to the pulse oximeter.
  • 17. The method, as set forth in claim 16, wherein locating the voice alert comprises locating a recorded voice alert in a storage medium coupled to the pulse oximeter.
  • 18. The method, as set forth in claim 16, wherein locating the voice alert comprises locating a recorded voice alert on a network via a network interface.
  • 19. The method, as set forth in claim 16, wherein identifying the alert condition comprises identifying a loss of signal from a sensor placed on a patient proximate to the pulse oximeter.
  • 20. A method for programming a pulse oximeter with patient information, the method comprising: prompting a user proximate to the pulse oximeter for new patient information; receiving audio corresponding to the new patient information using the pulse oximeter; and determining the new patient information from the audio using the pulse oximeter.
  • 21. The method, as set forth in claim 20, comprising: prompting the user to confirm the determined new patient information; and if the user confirms the new patient information, storing the new patient information in the pulse oximeter.
  • 22. A medical device comprising: one or more controls on the medical device, wherein activation of a respective control by physical touch executes a corresponding medical device command; a speech recognition system configured to: receive a processed voice of a person proximate to the speech recognition system; compare the processed voice to a speech database; identify a command for the medical device corresponding to the processed voice based on the comparison; and execute the identified medical device command without the person physically touching the one or more controls.
  • 23. The medical device, as set forth in claim 22, comprising a tangible machine readable medium comprising a medical language model, wherein the speech recognition system is configured to identify the command based on the medical language model.
  • 24. The medical device, as set forth in claim 23, wherein the medical device comprises a pulse oximeter and the medical language model comprises a plurality of pulse oximeter commands.
  • 25. The medical device, as set forth in claim 23, wherein the medical language model comprises a plurality of commands for the medical device.
  • 26. The medical device, as set forth in claim 22, comprising a voice training system configured to populate the speech database.
CROSS-REFERENCES TO RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 11/540,457 filed on Sep. 29, 2006.

US Referenced Citations (180)
Number Name Date Kind
3638640 Shaw Feb 1972 A
4621643 New, Jr. et al. Nov 1986 A
4653498 New, Jr. et al. Mar 1987 A
4700708 New, Jr. et al. Oct 1987 A
4714341 Hamaguri et al. Dec 1987 A
4770179 New, Jr. et al. Sep 1988 A
4805623 Jobsis Feb 1989 A
4869254 Stone et al. Sep 1989 A
4911167 Corenman et al. Mar 1990 A
4936679 Mersch Jun 1990 A
4972331 Chance Nov 1990 A
5078163 Stone et al. Jan 1992 A
5119815 Chance Jun 1992 A
5122974 Chance Jun 1992 A
5167230 Chance Dec 1992 A
5267174 Kaufman et al. Nov 1993 A
5297548 Pologe Mar 1994 A
5329459 Kaufman et al. Jul 1994 A
5348008 Bornn et al. Sep 1994 A
5351685 Potratz Oct 1994 A
5355880 Thomas et al. Oct 1994 A
5368026 Swedlow et al. Nov 1994 A
5372136 Steuer et al. Dec 1994 A
5385143 Aoyagi Jan 1995 A
5482036 Diab et al. Jan 1996 A
5533507 Potratz Jul 1996 A
5553614 Chance Sep 1996 A
5564417 Chance Oct 1996 A
5572241 Kanayama et al. Nov 1996 A
5575285 Takanashi et al. Nov 1996 A
5594638 Illiff Jan 1997 A
5630413 Thomas et al. May 1997 A
5645059 Fein et al. Jul 1997 A
5645060 Yorkey Jul 1997 A
5662106 Swedlow et al. Sep 1997 A
5692503 Kuenstner Dec 1997 A
5754111 Garcia May 1998 A
5758644 Diab et al. Jun 1998 A
5779631 Chance Jul 1998 A
5806515 Bare et al. Sep 1998 A
5830139 Abreu Nov 1998 A
5842981 Larsen et al. Dec 1998 A
5873821 Chance et al. Feb 1999 A
5995856 Mannheimer et al. Nov 1999 A
6011986 Diab et al. Jan 2000 A
6035223 Baker Mar 2000 A
6064898 Aldrich May 2000 A
6120460 Abreu Sep 2000 A
6134460 Chance Oct 2000 A
6163715 Larsen et al. Dec 2000 A
6181958 Steuer et al. Jan 2001 B1
6230035 Aoyagi et al. May 2001 B1
6266546 Steuer et al. Jul 2001 B1
6312393 Abreu Nov 2001 B1
6397091 Diab et al. May 2002 B2
6415236 Kobayashi et al. Jul 2002 B2
6438399 Kurth Aug 2002 B1
6445597 Boylan et al. Sep 2002 B1
6478800 Fraser et al. Nov 2002 B1
6487439 Skladnev et al. Nov 2002 B1
6501974 Huiku Dec 2002 B2
6501975 Diab et al. Dec 2002 B2
6526301 Larsen et al. Feb 2003 B2
6544193 Abreu Apr 2003 B2
6546267 Sugiura et al. Apr 2003 B1
6549795 Chance Apr 2003 B1
6591122 Schmitt Jul 2003 B2
6594513 Jobsis et al. Jul 2003 B1
6606509 Schmitt Aug 2003 B2
6615064 Aldrich Sep 2003 B1
6622095 Kobayashi et al. Sep 2003 B2
6658277 Wasserman Dec 2003 B2
6662030 Khalil et al. Dec 2003 B2
6671526 Aoyagi et al. Dec 2003 B1
6671528 Steuer et al. Dec 2003 B2
6678543 Diab et al. Jan 2004 B2
6690958 Walker et al. Feb 2004 B1
6693812 Li et al. Feb 2004 B1
6708048 Chance Mar 2004 B1
6711424 Fine et al. Mar 2004 B1
6711425 Reuss Mar 2004 B1
6748254 O'Neil et al. Jun 2004 B2
6785568 Chance Aug 2004 B2
6801797 Mannheimer et al. Oct 2004 B2
6801799 Mendelson Oct 2004 B2
6849045 Illiff Feb 2005 B2
6873865 Steuer et al. Mar 2005 B2
6934571 Wiesmann et al. Aug 2005 B2
6947780 Scharf Sep 2005 B2
6949081 Chance Sep 2005 B1
6961598 Diab Nov 2005 B2
6996427 Ali et al. Feb 2006 B2
7001334 Reed et al. Feb 2006 B2
7024233 Ali et al. Apr 2006 B2
7027849 Al-Ali Apr 2006 B2
7136684 Matsuura et al. Nov 2006 B2
7186966 Al-Ali Mar 2007 B2
7209775 Bae et al. Apr 2007 B2
7306560 Illiff Dec 2007 B2
7539532 Tran May 2009 B2
7539533 Tran May 2009 B2
7698002 Music et al. Apr 2010 B2
7706896 Music et al. Apr 2010 B2
20010005773 Larsen et al. Jun 2001 A1
20010020122 Steuer et al. Sep 2001 A1
20010039376 Steuer et al. Nov 2001 A1
20010044700 Kobayashi et al. Nov 2001 A1
20020026106 Khalil et al. Feb 2002 A1
20020035318 Mannheimer et al. Mar 2002 A1
20020038079 Steuer et al. Mar 2002 A1
20020038081 Fein et al. Mar 2002 A1
20020042558 Mendelson Apr 2002 A1
20020049389 Abreu Apr 2002 A1
20020062071 Diab et al. May 2002 A1
20020111748 Kobayashi et al. Aug 2002 A1
20020133068 Huiku Sep 2002 A1
20020161287 Schmitt Oct 2002 A1
20020161290 Chance Oct 2002 A1
20020165439 Schmitt Nov 2002 A1
20020173721 Grunwald et al. Nov 2002 A1
20020198443 Ting Dec 2002 A1
20030023140 Chance Jan 2003 A1
20030055324 Wasserman Mar 2003 A1
20030060693 Monfre et al. Mar 2003 A1
20030093503 Yamaki et al. May 2003 A1
20030130016 Matsuura et al. Jul 2003 A1
20030135095 Illiff Jul 2003 A1
20030139687 Abreu Jul 2003 A1
20030144584 Mendelson Jul 2003 A1
20030195402 Fein et al. Oct 2003 A1
20030220548 Schmitt Nov 2003 A1
20030220576 Diab Nov 2003 A1
20040006261 Swedlow et al. Jan 2004 A1
20040010188 Wasserman Jan 2004 A1
20040054270 Pewzner et al. Mar 2004 A1
20040087846 Wasserman May 2004 A1
20040107065 Al-Ali Jun 2004 A1
20040127779 Steuer et al. Jul 2004 A1
20040162472 Berson et al. Aug 2004 A1
20040171920 Mannheimer et al. Sep 2004 A1
20040176670 Takamura et al. Sep 2004 A1
20040176671 Fine et al. Sep 2004 A1
20040204635 Scharf et al. Oct 2004 A1
20040215958 Ellis et al. Oct 2004 A1
20040230106 Schmitt et al. Nov 2004 A1
20050033580 Wang et al. Feb 2005 A1
20050080323 Kato Apr 2005 A1
20050101850 Parker May 2005 A1
20050107676 Acosta et al. May 2005 A1
20050113656 Chance May 2005 A1
20050168722 Forstner et al. Aug 2005 A1
20050192488 Bryenton et al. Sep 2005 A1
20050203357 Debreczeny et al. Sep 2005 A1
20050267346 Faber et al. Dec 2005 A1
20050280531 Fadem et al. Dec 2005 A1
20060009688 Lamego et al. Jan 2006 A1
20060015021 Cheng Jan 2006 A1
20060020181 Schmitt Jan 2006 A1
20060025660 Swedlow et al. Feb 2006 A1
20060030762 David et al. Feb 2006 A1
20060030763 Mannheimer et al. Feb 2006 A1
20060030765 Swedlow et al. Feb 2006 A1
20060052680 Diab Mar 2006 A1
20060058683 Chance Mar 2006 A1
20060058691 Kiani Mar 2006 A1
20060142740 Sherman et al. Jun 2006 A1
20060195025 Ali et al. Aug 2006 A1
20060220881 Al-Ali et al. Oct 2006 A1
20060226992 Al-Ali et al. Oct 2006 A1
20060238358 Al-Ali et al. Oct 2006 A1
20070265533 Tran Nov 2007 A1
20070270665 Yang et al. Nov 2007 A1
20080004904 Tran Jan 2008 A1
20080082338 O'Neil et al. Apr 2008 A1
20080082339 Li et al. Apr 2008 A1
20080097175 Boyce et al. Apr 2008 A1
20080097176 Music et al. Apr 2008 A1
20080097177 Music et al. Apr 2008 A1
20080208009 Shklarski Aug 2008 A1
20080242959 Xu et al. Oct 2008 A1
Foreign Referenced Citations (14)
Number Date Country
10213692 Oct 2003 DE
1905356 Apr 2008 EP
1986543 Nov 2008 EP
5212016 Aug 1993 JP
WO9220273 Nov 1992 WO
WO9403102 Feb 1994 WO
WO9749330 Dec 1997 WO
WO0145553 Jun 2001 WO
WO2006006158 Jan 2006 WO
WO2006009830 Jan 2006 WO
WO2006039752 Apr 2006 WO
WO2007017777 Feb 2007 WO
WO2007097754 Aug 2007 WO
WO2007140151 Dec 2007 WO
Related Publications (1)
Number Date Country
20110098544 A1 Apr 2011 US
Continuations (1)
Number Date Country
Parent 11540457 Sep 2006 US
Child 12981974 US