Microphone natural speech capture voice dictation system and method

Information

  • Patent Grant
  • Patent Number
    12,088,985
  • Date Filed
    Friday, October 7, 2022
  • Date Issued
    Tuesday, September 10, 2024
Abstract
A system for voice dictation includes an earpiece. The earpiece may include an earpiece housing sized to fit into an external auditory canal of a user and block the external auditory canal, a first microphone operatively connected to the earpiece housing and positioned to be isolated from ambient sound when the earpiece housing is fitted into the external auditory canal, a second microphone operatively connected to the earpiece housing and positioned to detect sound external from the user, and a processor disposed within the earpiece housing and operatively connected to the first microphone and the second microphone. The system may further include a software application executing on a computing device which provides for receiving a first voice audio stream into a first position of a record and receiving a second voice audio stream into a second position of the record.
Description
FIELD OF THE INVENTION

The present invention relates to wearable devices. More particularly, but not exclusively, the present invention relates to earpieces.


BACKGROUND

The patient medical record is the essential document of the medical profession, one which must accurately and adequately capture the details of each patient encounter. Over the years, the requirements of the document have changed, as electronic medical records have added significant new levels of data required for processing. These new burdens have a significant impact on health care providers, both professionally and personally. On a professional level, they require protracted lengths of time to satisfy the documentation requirements, so health care professionals spend an increasing share of their time documenting the patient visit. This removes them from what they are trained to do: patient care. On a personal level, such increasing demands are a source of frustration, fatigue and growing dissatisfaction. Therefore, what is needed is a new system that effectively captures critical data for the documentation process at the point of service.


SUMMARY

Therefore, it is a primary object, feature, or advantage of the present invention to improve over the state of the art.


It is a further object, feature, or advantage of the present invention to provide for accuracy in the voice capture of a user of a wearable device.


It is a still further object, feature, or advantage of the present invention to markedly improve data capture from a user of a wearable device due to isolation of the bone microphone.


Another object, feature, or advantage is to acquire patient voice signals in real time, using an external facing microphone to detect patient voice inputs.


Yet another object, feature, or advantage is to allow for instantaneous voice to text conversion.


A further object, feature, or advantage is to allow for capture of a voice snippet at a position within a document.


A still further object, feature, or advantage is to allow for editing and correction of incorrect segments of the voice to text conversion.


Another object, feature, or advantage is to allow for standard edits to other non-voice sections of a document.


Yet another object, feature, or advantage is to allow for insertion of voice to text snippets at the direction of the primary user, in this case the health care provider.


A further object, feature, or advantage is to allow for the capture of the patient encounter at the point of service, greatly improving accuracy while simultaneously saving time and money.


A still further object, feature, or advantage is to reduce healthcare administrative costs.


Yet another object, feature, or advantage is to collect contextual sensor data at an earpiece.


A further object, feature, or advantage is to create a record and/or interpret nonverbal information as a part of a transcript of a communication.


One or more of these and/or other objects, features, or advantages of the present invention will become apparent from the specification and claims that follow. No single embodiment need provide every object, feature, or advantage. Different embodiments may have different objects, features, or advantages. Therefore, the present invention is not to be limited to or by any objects, features, or advantages stated herein.


A new and novel way of capturing patient information at the point of service is provided. Such a system may be able to distinguish between a physician's voice and a patient's voice. The system may use a combination of microphones. The first microphone may be placed in the external auditory canal of the healthcare provider and optimized to pick up the "self-voice" of the healthcare provider. This has the distinct advantage of being acoustically isolated in the external canal of the healthcare provider while providing the optimal environment for capturing the "self-voice" of the primary user. The external microphone may be optimized to pick up the vocal sounds from the patient in the room. In doing so, the system would be able to discern the difference between the two voices based upon the microphone inputs, allowing the optimized speech engine to segregate the two voice inputs. Such inputs can then be directly inputted into the patient record, stored at the selected position within the record as a voice file, or both. In this fashion, the system may provide the flexibility to rapidly and accurately capture the conversation between a healthcare worker and a patient and convert it to text while allowing for review or modification as needed. Such editing capability allows the user to edit all aspects of the document before applying their electronic signature.
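As a purely illustrative sketch of the voice-segregation idea described above (and not the optimized speech engine referenced in this disclosure), the snippet below labels frame-synchronized audio from the canal microphone and the external microphone by comparing short-term energy; the function names and the threshold are assumptions introduced only for illustration.

```python
# Minimal sketch: label each frame as the wearer's self-voice or another
# speaker's voice by comparing short-term energy of the canal (isolated)
# microphone against the external microphone. Hypothetical illustration;
# a deployed system would use a trained speech/speaker separation engine.
import numpy as np

def frame_energy(samples: np.ndarray) -> float:
    """Mean squared amplitude of one audio frame."""
    return float(np.mean(samples.astype(np.float64) ** 2))

def label_frames(canal_frames, external_frames, ratio_threshold=2.0):
    """Label each frame pair as 'user', 'other', or 'silence'."""
    labels = []
    for canal, external in zip(canal_frames, external_frames):
        e_in, e_out = frame_energy(canal), frame_energy(external)
        if max(e_in, e_out) < 1e-6:
            labels.append("silence")
        elif e_in > ratio_threshold * e_out:
            labels.append("user")    # self-voice dominates the canal mic
        else:
            labels.append("other")   # external voice dominates
    return labels
```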


According to one aspect, a system for voice dictation is provided. The system includes an earpiece. The earpiece includes an earpiece housing sized to fit into an external auditory canal of a user and block the external auditory canal, a first microphone operatively connected to the earpiece housing and positioned to be isolated from ambient sound when the earpiece housing is fitted into the external auditory canal, a second microphone operatively connected to the earpiece housing and positioned to detect sound external from the user, and a processor disposed within the earpiece housing and operatively connected to the first microphone and the second microphone. The processor is adapted to capture a first voice audio stream using at least the first microphone, the first voice audio stream associated with the user, and a second voice audio stream using at least the second microphone, the second voice audio stream associated with a person other than the user. The system may also include a software application executing on a computing device which provides for receiving the first voice audio stream into a first position of a record and receiving the second voice audio stream into a second position of the record.


According to another aspect, a method for voice dictation is provided. The method includes providing an earpiece, the earpiece having an earpiece housing sized to fit into an external auditory canal of a user and block the external auditory canal, a first microphone operatively connected to the earpiece housing and positioned to be isolated from ambient sound when the earpiece housing is fitted into the external auditory canal, a second microphone operatively connected to the earpiece housing and positioned to detect sound external from the user; and a processor disposed within the earpiece housing and operatively connected to the first microphone and the second microphone. The processor is adapted to capture a first voice audio stream using at least the first microphone, the first voice audio stream associated with the user, and a second voice audio stream using at least the second microphone, the second voice audio stream associated with a person other than the user. The method further includes capturing the first voice audio stream using at least the first microphone, storing the first voice audio stream on a machine readable storage medium, converting the first voice audio stream to text, placing the text within a first form field in a software application, and providing access to the first voice audio stream through the software application.
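The method recited above proceeds from capture to storage, transcription, placement of the text in a form field, and retained access to the underlying audio. A schematic sketch of that pipeline is shown below; the `transcribe()` placeholder stands in for whatever speech-to-text engine is used, and all names are illustrative assumptions rather than elements of this disclosure.

```python
# Schematic sketch of the dictation pipeline: capture -> store audio ->
# convert to text -> place text in a form field while retaining a link
# back to the stored audio. All names are illustrative placeholders.
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

@dataclass
class FormField:
    name: str
    text: str = ""
    audio_path: Optional[Path] = None   # link back to the voice snippet

def transcribe(audio_path: Path) -> str:
    """Placeholder for a speech-to-text engine call."""
    return f"[transcribed text of {audio_path.name}]"

def dictate_into_field(raw_audio: bytes, field: FormField, store_dir: Path) -> FormField:
    store_dir.mkdir(parents=True, exist_ok=True)
    audio_path = store_dir / f"{field.name}.wav"
    audio_path.write_bytes(raw_audio)      # store the captured stream
    field.text = transcribe(audio_path)    # convert the stream to text
    field.audio_path = audio_path          # keep access to the audio
    return field
```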





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates one example of a system.



FIG. 2 illustrates a set of earpieces in greater detail.



FIG. 3 illustrates a block diagram of one of the earpieces.



FIG. 4 illustrates one example of a screen display from a software application.



FIG. 5 illustrates one example of a screen display from a word processor.



FIG. 6 illustrates one example of a screen display from a medical record application.



FIG. 7 illustrates one example of a screen display for a software application where contextual feedback is sensed by the earpiece and received into the software application.





DETAILED DESCRIPTION


FIG. 1 illustrates one example of a system. As shown in FIG. 1 there are one or more earpieces 10 such as a left earpiece 12A and a right earpiece 12B. Although multiple earpieces are shown, only a single earpiece may be used. The earpieces 12A, 12B may be in operative communication with a computing device 2. The computing device 2 may be a computer, a mobile device such as a phone or tablet, or other type of computing device. There may be a display 4 associated with the computing device 2. A server 6 is also shown. The server 6 is in operative communication with a data store 8 such as a database. The server 6 may be a cloud-based server, a physical server, a virtual server executing on a hardware platform, or other type of server.
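Since FIG. 1 describes the overall topology (earpieces, a computing device, a server, and a data store), a minimal structural sketch is given below. It is purely illustrative; the class names, the in-memory dictionary standing in for the data store 8, and the `upload_record` method are assumptions, and transport details (Bluetooth, HTTP, and the like) are omitted.

```python
# Illustrative sketch of the FIG. 1 topology: one or two earpieces are
# associated with a computing device, which forwards completed records
# to a server backed by a data store. All names are hypothetical, and
# the in-memory dict merely stands in for the data store 8.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Earpiece:
    side: str                                   # "left" (12A) or "right" (12B)

@dataclass
class Server:
    data_store: Dict[str, dict] = field(default_factory=dict)

    def save_record(self, record_id: str, record: dict) -> None:
        self.data_store[record_id] = record

@dataclass
class ComputingDevice:
    earpieces: List[Earpiece]
    server: Server

    def upload_record(self, record_id: str, record: dict) -> None:
        self.server.save_record(record_id, record)

# usage: a left/right pair associated with a phone or tablet
device = ComputingDevice([Earpiece("left"), Earpiece("right")], Server())
device.upload_record("visit-001", {"reason_for_visit": "knee pain"})
```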



FIG. 2 illustrates a set of earpieces 10 in greater detail. A left earpiece 12A is housed within an earpiece housing 14A. The left earpiece 12A includes an outward facing microphone 70A. The right earpiece 12B is housed within an earpiece housing 14B. The right earpiece 12B includes an outward facing microphone 70B. The earpieces may be the earpieces which are commercially available from Bragi GmbH such as THE DASH.



FIG. 3 illustrates a block diagram of one of the earpieces 12. The earpiece 12 has an earpiece housing 14. Disposed within the earpiece housing is at least one processor 30. The processor 30 is operatively connected to at least one wireless transceiver 34 which may include a radio transceiver capable of communications using Bluetooth, BLE, Wi-Fi, or another type of radio communication. One or more external microphones 70 and one or more internal microphones 71 are also operatively connected to the processor 30. In addition, a speaker 73 is operatively connected to the processor 30. Note that the external microphone(s) 70 may be positioned to detect or capture voice streams associated with one or more speakers other than the person wearing the earpiece (the user). The one or more internal microphones 71 may be, for example, positioned at or near the external auditory canal or mastoid bone of the user and may provide for picking up bone vibrations, or may otherwise be configured to pick up frequency ranges associated with the person wearing the earpiece. In addition, there may be one or more inertial sensors 74 present in the earpiece 12. The inertial sensor may include a gyroscope, accelerometer, or magnetometer. For example, the inertial sensor 74 may be a 9-axis inertial sensor which includes a 3-axis gyroscope, a 3-axis accelerometer, and a 3-axis magnetometer.
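As an illustrative aside, the block diagram above can be summarized as a simple data model for what the earpiece produces each frame: two microphone channels plus a 9-axis inertial reading. The field names below are assumptions made for illustration, not the device's actual firmware interface.

```python
# Illustrative data model for the FIG. 3 block diagram: an internal
# (canal/bone-conduction) channel, an external channel, and one 9-axis
# inertial sample (3-axis gyroscope, accelerometer, and magnetometer).
from dataclasses import dataclass
from typing import Tuple

Vector3 = Tuple[float, float, float]

@dataclass
class InertialSample:
    gyro_dps: Vector3        # angular rate, degrees per second
    accel_g: Vector3         # linear acceleration, g
    mag_ut: Vector3          # magnetic field, microtesla

@dataclass
class EarpieceFrame:
    internal_mic: bytes      # canal/bone-conduction channel (self-voice)
    external_mic: bytes      # outward-facing channel (other speakers)
    inertial: InertialSample
    timestamp_ms: int
```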



FIG. 4 illustrates one example of a software application which includes a screen display 100. Various form fields 102, 106, 110 are shown. In one embodiment, each time a different speaker (e.g., person) speaks, the software application moves to the next form field. Each form field is populated with text acquired from conversion of voice information to text information. In addition to this representation of the transcribed text, the underlying voice stream or voice recording may be played by selecting the corresponding play button 104, 108, 112. Thus, information from multiple individuals may be collected. It is of further note that, where the earpiece includes separate microphones for external speakers and for the user of the earpieces, separate voice streams may be captured even when the user of the earpieces and another individual are talking at the same time. It is further contemplated that more than one other individual within the environment of the user may be speaking.
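The field-advancement behavior described above can be sketched as follows. This is a hypothetical illustration which assumes the speech engine has already produced speaker-labeled segments; the segment format and function names are not taken from this disclosure.

```python
# Sketch of the FIG. 4 behavior: each time the detected speaker changes,
# transcription moves to the next form field, and each field keeps a
# reference to its underlying voice snippet for playback via the
# corresponding play button. Hypothetical illustration only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FieldEntry:
    speaker: str
    text: str
    audio_ref: str            # e.g. a stored snippet to play back

def fill_fields(segments: List[dict]) -> List[FieldEntry]:
    """segments: [{'speaker': 'user', 'text': '...', 'audio_ref': '...'}]"""
    fields: List[FieldEntry] = []
    last_speaker: Optional[str] = None
    for seg in segments:
        if seg["speaker"] != last_speaker:
            # speaker changed: start a new form field
            fields.append(FieldEntry(seg["speaker"], seg["text"], seg["audio_ref"]))
            last_speaker = seg["speaker"]
        else:
            # same speaker: append to the current field's text
            fields[-1].text += " " + seg["text"]
    return fields
```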


Capturing and storing the voice streams or voice snippets and associating these voice streams or voice snippets with the text may provide additional advantages. There is a complete record so that, if need be, the text information may be corrected at a later date if it does not accurately match the voice snippet.



FIG. 5 illustrates another example of a software application that may be used. As shown in FIG. 5, there is a screen display 120 which may be associated with a word processor document. The word processor may be a word processor such as Microsoft Word, the Microsoft Office Online version of Microsoft Word, WordPerfect, TextMaker, Pages from Apple, Corel Write, Google Docs, or any other word processor. The word processor software may execute on a local machine or on a remote machine such as one available through cloud or web access. Functionality may be built into the word processor or may be provided as an add-in, as a connected application, or otherwise.


As shown in FIG. 5, a transcript may be created which includes text from multiple different speakers. As shown, each speaker may be identified, such as "Speaker 1" or "Speaker 2." Alternatively, each speaker may be given a name. Also, instead of or in addition to identifying speakers in this fashion, text associated with different speakers may be presented in different colors of text, different fonts, or different styles. As shown in FIG. 5, an icon may be shown associated with a mouse or other control device. The mouse or other control device may be used to select a portion of the text. When that portion of the text is selected, the corresponding audio may be played. Thus, if there appears to be a transcription error in the text, a user may confirm whether there was a transcription error or not. Alternatively, a portion of text may be otherwise selected, such as by selecting an icon associated with that portion of the text. Thus, as shown, a first speaker may make a first statement 122, a second speaker may make a second statement 124, and the first speaker may make a third statement 126. A tooltip 130 is shown indicating that a user can choose to select text to listen to the corresponding audio.
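A transcript supporting the select-text-to-replay interaction described above could be modeled along the following lines. This is a sketch only; the `Statement` fields and the `play_span` placeholder are assumptions standing in for the host application's actual audio playback mechanism.

```python
# Sketch of a transcript model for the FIG. 5 interaction: statements
# carry a speaker label and a pointer to the audio span, so selecting a
# portion of text can replay the corresponding recording.
from dataclasses import dataclass
from typing import List

@dataclass
class Statement:
    speaker: str              # e.g. "Speaker 1"
    text: str
    audio_file: str
    start_s: float            # offset of this statement within the audio
    end_s: float

def play_span(audio_file: str, start_s: float, end_s: float) -> None:
    """Placeholder for the host application's audio playback call."""
    print(f"playing {audio_file} from {start_s:.1f}s to {end_s:.1f}s")

def verify_statement(transcript: List[Statement], index: int) -> None:
    """Replay the audio behind a selected statement to check the transcription."""
    s = transcript[index]
    play_span(s.audio_file, s.start_s, s.end_s)
```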



FIG. 6 illustrates another example of a software application. FIG. 6 illustrates a screen display 130 associated with an electronic medical record (EMR), electronic health record (EHR), electronic patient record (EPR), or other type of medical record. In the context of a medical record, it is contemplated that information entered into the record may come from words dictated by a health care provider or from information obtained orally from a patient. The earpiece described herein may be used to collect audio from both the health care provider (such as by using a bone conduction microphone) and from the patient (such as by using an external facing microphone). For example, as shown in FIG. 6, voice information associated with the reason for the visit, as spoken by a patient, may be input as text into form field 132 and a recording of the audio may be associated with this form field. In addition, voice information, as spoken by the health care provider, may be input as text into form field 134 and a recording of the audio may be associated with this form field. Although given as an example in the context of the medical field, this approach may be appropriate in any number of other situations where a transcript of an encounter is desired.
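One way to picture the routing described above is sketched below: text derived from the externally facing microphone is placed in the patient-facing field (such as the reason for the visit, form field 132), while text from the provider's canal microphone goes to the provider's field (form field 134), each keeping its audio recording. The field names and the routing rule are illustrative assumptions, not the implementation of this disclosure.

```python
# Sketch of FIG. 6 routing: segments from the external microphone are
# treated as patient speech and filed under the reason for the visit;
# segments from the internal microphone are treated as provider speech
# and filed under the provider's notes. Illustrative assumptions only.
from typing import Dict, Tuple

def route_to_emr(segment: Dict) -> Tuple[str, Dict]:
    """segment: {'source': 'external'|'internal', 'text': ..., 'audio_ref': ...}"""
    field_name = "reason_for_visit" if segment["source"] == "external" else "provider_notes"
    return field_name, {"text": segment["text"], "audio_ref": segment["audio_ref"]}

record: Dict[str, Dict] = {}
for seg in [
    {"source": "external", "text": "My knee has been hurting for a week.", "audio_ref": "a1.wav"},
    {"source": "internal", "text": "Mild swelling noted on exam.", "audio_ref": "a2.wav"},
]:
    name, entry = route_to_emr(seg)
    record[name] = entry
```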



FIG. 7 illustrates another example of a screen display 140. As shown in FIG. 7, a transcript may be created which includes text from multiple different speakers. As shown, each speaker may be identified, such as "Speaker 1" or "Speaker 2." Alternatively, each speaker may be given a name. In addition to a transcript of text, the transcript may include other information sensed by the earpiece. For example, where the earpiece includes an inertial sensor, information associated with the inertial sensor or a characterization of information associated with the inertial sensor may be included. In this example, "Speaker 2" is wearing the earpiece. The statement 142 made by Speaker 1 may be detected with an externally facing microphone of an earpiece worn by Speaker 2. In response to statement 142, Speaker 2 may nod their head in agreement. This gesture or movement associated with the head nod may be detected with one or more inertial sensors of the earpiece. This head movement, or a record of it, may then be incorporated into the transcript. The record of the head movement 146 may be shown in a manner distinct from the voice transcript, such as using different colors, fonts, or styles, such as underlining, placement in parentheses, or otherwise. In addition, additional information may be obtained by selecting the inserted text indicating that the nod occurred. The additional information may be in the form of raw sensor data or another characterization of the nod or other sensor data. Examples of different characterizations may include the degree of the head nod or a characterization of how pronounced the head nod is. The characterizations may be quantitative or qualitative. A tooltip 148 may be shown indicating that a user may select the contextual feedback to access this additional information. In addition to head nods, other gestures may also be detected. These may include a head shaking movement, such as may be associated with a "NO." Although gestures detected with inertial sensors are one type of movement which may be detected to provide contextual feedback, it is contemplated that other types of contextual feedback may be used, such as feedback detected through physiological monitoring or otherwise. Other types of sensors may also include image sensors. Where image sensors are used, the image sensors may be used to detect information from either the individual wearing the earpiece or other wearable device or from others. Thus, records may be created for nonverbal information as a part of a transcript of a communication or as input into different fields within a document or software application.
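As a rough illustration of the contextual-feedback idea above, the sketch below infers a head nod from the gyroscope's pitch-axis signal and appends a nonverbal entry to the transcript. The threshold, the down-then-up heuristic, and the annotation format are assumptions introduced for illustration, not the detection algorithm of this disclosure.

```python
# Sketch: infer a head nod from pitch-axis angular rate and insert a
# nonverbal entry into the transcript, shown distinctly (in parentheses)
# from spoken text. Threshold and annotation text are illustrative.
from typing import List

def detect_nod(pitch_rate_dps: List[float], threshold_dps: float = 60.0) -> bool:
    """Approximate a nod as a downward pitch swing followed by an upward one."""
    down_index = next((i for i, r in enumerate(pitch_rate_dps) if r < -threshold_dps), None)
    if down_index is None:
        return False
    return any(r > threshold_dps for r in pitch_rate_dps[down_index:])

def annotate_transcript(transcript: List[str], pitch_rate_dps: List[float]) -> List[str]:
    if detect_nod(pitch_rate_dps):
        # hypothetical annotation for the wearer's nonverbal response
        transcript.append("(Speaker 2 nods head: YES)")
    return transcript
```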


In another embodiment, a person is using the earpieces on a phone call and the voice of the person on the other side of the call is captured and transcribed, as opposed to capturing the voice of a person through one or more microphones on the earpiece. In yet another embodiment, a conversation may occur either in person or over a communication network with two or more individuals, with at least two of the individuals wearing earpieces so that contextual information from more than one person may be captured as a part of the conversation.


Therefore, methods and systems for voice dictation using one or more earpieces have been shown and described. Although specific embodiments are shown here, it is contemplated that any number of options, variations, and alternatives may also be used. The present invention is not to be unduly limited to what is specifically shown and described herein.

Claims
  • 1. A system for voice dictation, the system comprising: (1) an earpiece, the earpiece comprising: an earpiece housing; a first microphone operatively connected to the earpiece housing and positioned to detect a voice of a user; a second microphone operatively connected to earpiece housing and positioned to detect a sound external from the user; a processor disposed within the earpiece housing and operatively connected to the first microphone and the second microphone, wherein the processor is adapted to capture a first voice audio stream using at least the first microphone, the first voice audio stream associated with the user, and a second voice audio stream using at least the second microphone, the second voice audio stream associated with a person other than the user; an inertial sensor comprising an accelerometer and a gyroscope, the inertial sensor disposed within the earpiece housing and operatively connected to the processor; and (2) a software application executing on a computing device in wireless communication with the earpiece which provides generating a screen display showing a record having a first field at a first position, a second field at a second position, and a third field at a third position, wherein the software application further providing for inputting the first voice audio stream into the first field at the first position of the record on the screen display, the second voice audio stream into the second field at the second position of the record on the screen display, and contextual data from the inertial sensor into the third field at the third position of the record on the screen display wherein the contextual data is based on head movement from the user.
  • 2. The system of claim 1 wherein the record is a medical record, the user is a health care provider and the person other than the user is a patient.
  • 3. The system of claim 2 wherein the software application provides for converting the first voice audio stream into a first audio file, storing the first audio file, converting the first voice audio stream into first text and placing both the first text and a first link to the first audio file at the first position of the record.
  • 4. The system of claim 3 wherein the software application provides for converting the second voice audio stream into a second audio file, storing the second audio file, converting the second voice audio stream into second text and placing both the second text and a second link to the second audio file at the second position of the record.
  • 5. The system of claim 1, wherein the processor is configured to interpret input from the inertial sensor as head movement.
  • 6. The system of claim 5 wherein the processor is configured to interpret the head movement as indicative of a yes.
  • 7. The system of claim 5 wherein the processor is configured to interpret the head movement as indicative of a no.
  • 8. A method for voice dictation, the method comprising: providing a computing system worn on a head of a user, the computing system comprising: a first microphone positioned to detect a voice of the user; a second microphone positioned to receive a sound external from the user; a processor disposed operatively connected to the first microphone and the second microphone; and an inertial sensor positioned on the user in operative communication with the processor; capturing a first voice audio stream using at least the first microphone, the first voice audio stream associated with the user; capturing inertial sensor data with the inertial sensor and interpreting the inertial sensor data into contextual data; storing the first voice audio stream on a machine readable storage medium; converting the first voice audio stream to first text; executing a software application to display on a screen display a plurality of form fields; placing the first text within a first form field of the plurality of form fields of the screen display; and providing user controls on the screen display to provide access to the first voice audio stream and the contextual data through the software application.
  • 9. The method of claim 8 wherein the first microphone is a bone microphone.
  • 10. The method of claim 8 further comprising: capturing a second voice audio stream using the second microphone, the second voice audio stream associated with a person other than the user; storing the second voice audio stream on a machine readable storage medium; converting the second voice audio stream to second text; placing the second text of the second voice audio stream within a second form field of the plurality of form fields of the screen display; and providing user controls on the screen display to provide access to the second voice audio stream through the software application.
  • 11. The method of claim 10 wherein the software application is a medical records software application.
  • 12. The method of claim 11 wherein the user is a health care provider and wherein the person other than the user is a patient of the health care provider.
  • 13. The method of claim 12 wherein the voice dictation is performed during a patient encounter to document the patient encounter.
  • 14. The method of claim 10 further comprising receiving a correction of the first text from the user and updating the first form field with corrected text.
  • 15. The method of claim 8 further comprising capturing a second voice audio stream at a wireless transceiver operatively connected to the computing system.
  • 16. The method of claim 15 further comprising converting the second voice audio stream to second text.
  • 17. The method of claim 8 further comprising capturing sensor data with the computing system and interpreting the sensor data into text data and placing the text data into a third form field of the plurality of form fields of the screen display within the software application.
  • 18. The system of claim 17, wherein the software application provides for indicating the occurrence of the head movement by the user at the third position of the record.
  • 19. The system of claim 18 wherein the head movement is indicative of a yes.
  • 20. The system of claim 18 wherein the head movement is indicative of a no.
PRIORITY STATEMENT

This application is a continuation of U.S. patent application Ser. No. 17/159,695 filed on Jan. 27, 2021, which is a continuation of U.S. patent application Ser. No. 15/946,100 filed on Apr. 5, 2018, now U.S. Pat. No. 10,904,653, which is a continuation of U.S. patent application Ser. No. 15/383,809 filed on Dec. 19, 2016, now U.S. Pat. No. 9,980,033, which claims priority to U.S. Provisional Patent Application No. 62/270,419 filed on Dec. 21, 2015, all of which are titled Microphone Natural Speech Capture Voice Dictation System and Method, all of which are hereby incorporated by reference in their entireties.

Related Publications (1)
Number Date Country
20230032733 A1 Feb 2023 US
Provisional Applications (1)
Number Date Country
62270419 Dec 2015 US
Continuations (3)
Number Date Country
Parent 17159695 Jan 2021 US
Child 17938822 US
Parent 15946100 Apr 2018 US
Child 17159695 US
Parent 15383809 Dec 2016 US
Child 15946100 US