The present invention is related to the field of computer-based processing, and more particularly, to processing and comparing speech and handwriting.
Society has greatly benefited from the many advances in medical knowledge, pharmaceutical drugs, and patient treatments. Despite these advances, a significant challenge facing the medical field is prescription errors. Prescription errors, due primarily to poor handwriting, are a leading cause of medical errors, which in turn lead to injuries and/or fatalities. Currently, when a patient goes to a pharmacy to receive a prescribed drug, the pharmacist often has to read and rely on an illegible prescription when filling that patient's prescription. This significantly increases the odds that an incorrect drug will be given to the patient. The pharmacist may be able to call the physician directly to verify the prescription; however, the physician is often unavailable to communicate with the pharmacist.
In order to more effectively fulfill the healthcare community's obligations to a patient, it is very important to ensure that a patient actually receives what a physician prescribed for the patient's health condition. When patients receive the wrong medications, the results include health-related complications from taking those medications, decreased trust in the medical system, increased costs, and unnecessary expenditures of healthcare resources. As a result, there is a need for a more effective, efficient, and accurate means of reducing prescription errors through systems and methods for bi-translation of speech and writing that verify the accuracy of prescriptions.
The present invention is directed to systems and methods for processing and translating speech and writing, particularly handwriting pertaining to written prescriptions. The comparison of speech and writing enables improved prescription accuracy by alerting a prescriber or other individual to an inaccuracy between the speech and the corresponding written text.
One embodiment of the invention is a system for bi-translation of speech and writing. The system can comprise one or more electronic data processors contained within one or more computing devices. The system can also include a module configured to execute on the one or more electronic data processors in order to record a spoken segment and a written segment into the one or more computing devices, where the segments can be corroborated by selecting potential medications and processes. The module can also be configured to convert the spoken segment into a stream of text or tokens and the written segment into a stream of text or tokens. Moreover, the module can be configured to compare the converted spoken and written streams of text or tokens to determine whether the spoken segment and the written segment match, and to output the results.
Another embodiment of the invention is a computer-based method for bi-translation of speech and writing. The method can include recording a spoken and a written segment into one or more computing devices, where the segments can be corroborated by selecting potential medications and processes. The method can also include converting the spoken segment into a stream of text or tokens and the written segment into a stream of text or tokens. Furthermore, the method can include comparing the converted spoken and written streams of text or tokens to determine whether the spoken segment and the written segment match and outputting the results.
Yet another embodiment of the invention is a computer-readable storage medium that contains computer-readable code, which when loaded on a computer, causes the computer to perform the following steps: recording a spoken and a written segment into one or more computing devices, where the segments can be corroborated by selecting potential medications and processes; converting the spoken segment into a stream of text or tokens and the written segment into a stream of text or tokens; and, comparing the converted spoken and written streams of text or tokens to determine whether the spoken segment and the written segment match and outputting the results.
There are shown in the drawings, embodiments which are presently preferred. It is expressly noted, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
Referring initially to
The system 100 further includes a module 108, which can be implemented as computer-readable code configured to execute on the one or more electronic data processors 104. Alternatively, the module 108 can be implemented in hardwired, dedicated circuitry for performing the operative functions described herein. In yet another embodiment, however, the module 108 can be implemented in a combination of hardwired circuitry and computer-readable code.
Operatively, the module 108 can be configured to record a spoken and written segment, based on the inputs 102a-b, into the one or more computing devices 106, where the segments can be corroborated by selecting potential medications and processes. The module 108 can also be configured to convert the spoken segment into a stream of text or tokens and the written segment into a stream of text or tokens. Furthermore, the module 108 can be configured to compare the converted spoken and written streams of text or tokens to determine whether the spoken segment and the written segment match and also generate an output 110 detailing the results.
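The compare step described above can be illustrated with a minimal sketch. The function and token-stream names below are hypothetical and assume the speech and handwriting recognizers have already produced plain text; the sketch simply normalizes each converted segment into a token stream, tests for an exact match, and collects position-by-position differences for the output:

```python
def tokenize(text):
    """Normalize a converted segment into a stream of lowercase tokens."""
    return text.lower().split()

def compare_segments(spoken_text, written_text):
    """Compare the token streams derived from the spoken and written
    segments and report whether they match, with any differences."""
    spoken_tokens = tokenize(spoken_text)
    written_tokens = tokenize(written_text)
    # Collect aligned positions where the two streams disagree.
    mismatches = [
        (i, s, w)
        for i, (s, w) in enumerate(zip(spoken_tokens, written_tokens))
        if s != w
    ]
    return {"match": spoken_tokens == written_tokens,
            "mismatches": mismatches}
```

For example, comparing "amoxicillin 500 mg" against "amoxicillin 50 mg" would report a mismatch at the dosage token, which could then be surfaced in the output 110. A production system would likely use an edit-distance alignment rather than positional comparison to tolerate insertions and deletions.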
According to a particular embodiment, the module 108 can be configured to initiate and complete a recording by enabling a user to select one or more icons and characters on the one or more computing devices 106. For example, the computing devices 106 can be an internet tablet, a laptop, a personal digital assistant (PDA), a mobile device, a microphone, a touch-screen-enabled device, or another computing device. As an illustration, if a user is using a touch-screen-enabled device that contains a microphone, the user can select one or more icons and characters on the touch screen to initiate or complete a recording of the speech and handwriting. The module 108 can also be configured to record the spoken and written segments simultaneously.
According to another embodiment, the module 108 can be further configured to record the spoken and written data segments, which serve as the inputs 102a-b, at separate times, where the beginning and end of each segment are detectable. The module 108 can also be configured to rerecord the spoken and written segments if the beginning and end of each segment are not detectable. For example, if the module cannot determine where the beginning and end of each segment representing a particular prescription are, the module can prompt the user to rerecord the segments.
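One way to sketch this rerecord behavior is shown below. The boundary test is an assumption for illustration only: it treats a short quiet run at the start and end of the captured samples as a stand-in for a detectable segment boundary, and the `capture` callable and threshold values are hypothetical:

```python
def segment_boundaries_detectable(samples, threshold=0.01, min_quiet=5):
    """Return True if the recording starts and ends with a quiet run of
    at least min_quiet samples, used here as a proxy for detectable
    segment boundaries."""
    leading = all(abs(s) < threshold for s in samples[:min_quiet])
    trailing = all(abs(s) < threshold for s in samples[-min_quiet:])
    return leading and trailing

def record_with_retry(capture, max_attempts=3):
    """Capture a segment, prompting the user to rerecord until its
    beginning and end are detectable or attempts run out."""
    for _ in range(max_attempts):
        samples = capture()
        if segment_boundaries_detectable(samples):
            return samples
        print("Could not detect the beginning and end of the segment; "
              "please rerecord.")
    return None
```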
According to another embodiment, the module 108 can be configured to display the streams of text or tokens as accurate if the streams match and to enable a prescriber to verify the results. In yet another embodiment, if the streams of text or tokens do not match, the module 108 can be further configured to provide an alert and to give a prescriber the option of key entry, selecting the correct text sequence from alternatives shown on the one or more computing devices 106, or selecting from a list of commonly prescribed medications and processes. As an example, the module 108 can display the converted speech text in black and the converted writing text in red. If the black and red text match, the user can see that they match and verify the results. If the results do not match, however, the user is provided with an alert and given the option of entering the correct prescription, selecting an alternative, or selecting from a list of commonly prescribed medications and processes.
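The match/mismatch handling above can be summarized in a small sketch. The function name, the returned dictionary shape, and the option labels are illustrative assumptions, not the patented interface; rendering the two streams in contrasting colors is left to the display layer:

```python
def present_results(spoken_text, written_text, alternatives):
    """Report matching streams as accurate for prescriber verification,
    or produce an alert with correction options on a mismatch."""
    if spoken_text == written_text:
        # Matching streams: display as accurate (e.g. black and red
        # renderings that visibly agree) for the prescriber to verify.
        return {"status": "accurate", "text": spoken_text}
    # Mismatch: alert the prescriber and offer key entry, the
    # alternative text sequences, or a commonly prescribed list.
    return {
        "status": "alert",
        "options": ["key entry",
                    *alternatives,
                    "select from commonly prescribed list"],
    }
```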
Referring now to
According to another embodiment, the method 200 can further include, at the recording step 204, initiating and completing a recording by selecting one or more icons and characters on the one or more computing devices. Additionally, the recording step 204 can comprise speaking and writing the segments simultaneously into the one or more computing devices. The recording step 204 can further comprise speaking and writing segments at separate times into the one or more computing devices, where the beginning and end of each segment are detectable.
In one embodiment, the method 200 can include rerecording the spoken and written segments into the one or more computing devices if the beginning and end of each segment are not detectable. According to another embodiment, the method 200 can further include displaying the streams as accurate and enabling a prescriber to verify the results if the streams of text or tokens match. If the streams of text or tokens do not match, the method 200 can further include providing an alert and giving a prescriber the option of keying in an entry, selecting the correct text sequence from alternatives shown on the one or more computing devices, or selecting from a list of commonly prescribed medications and processes.
The invention, as already mentioned, can be realized in hardware, software, or a combination of hardware and software. The invention can be realized in a centralized fashion in one computer system, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any type of computer system or other apparatus adapted for carrying out the methods described herein is suitable. A typical combination of hardware and software can be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
The invention, as already mentioned, can be embedded in a computer program product, such as magnetic tape, an optically readable disk, or other computer-readable medium for storing electronic data. The computer program product can comprise computer-readable code, defining a computer program, which when loaded in a computer or computer system causes the computer or computer system to carry out the different methods described herein. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
The preceding description of preferred embodiments of the invention has been presented for purposes of illustration. The description provided is not intended to limit the invention to the particular forms disclosed or described. Modifications and variations will be readily apparent from the preceding description. As a result, it is intended that the scope of the invention not be limited by the detailed description provided herein.
This application claims the benefit of U.S. Provisional Patent Application No. 61/083,029, which was filed Jul. 23, 2008, and which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5772585 | Lavin et al. | Jun 1998 | A |
6167376 | Ditzik | Dec 2000 | A |
6285785 | Bellegarda et al. | Sep 2001 | B1 |
6401067 | Lewis et al. | Jun 2002 | B2 |
6804654 | Kobylevsky et al. | Oct 2004 | B2 |
6889190 | Hegarty | May 2005 | B2 |
7058584 | Kosinski et al. | Jun 2006 | B2 |
7133937 | Leavitt | Nov 2006 | B2 |
7137076 | Iwema et al. | Nov 2006 | B2 |
7149970 | Pratley et al. | Dec 2006 | B1 |
7467089 | Roth et al. | Dec 2008 | B2 |
7702525 | Kosinski et al. | Apr 2010 | B2 |
7848934 | Kobylevsky et al. | Dec 2010 | B2 |
7853446 | Allard et al. | Dec 2010 | B2 |
7881936 | Longe et al. | Feb 2011 | B2 |
7957984 | Vallone | Jun 2011 | B1 |
8060380 | Sullivan et al. | Nov 2011 | B2 |
8150706 | Kobylevsky et al. | Apr 2012 | B2 |
8275613 | Harter et al. | Sep 2012 | B2 |
8423351 | Hughes | Apr 2013 | B2 |
8457959 | Kaiser | Jun 2013 | B2 |
20020035484 | McCormick | Mar 2002 | A1 |
20020099534 | Hegarty | Jul 2002 | A1 |
20020143533 | Lucas et al. | Oct 2002 | A1 |
20030055638 | Burns et al. | Mar 2003 | A1 |
20030182101 | Lambert | Sep 2003 | A1 |
20030233237 | Garside et al. | Dec 2003 | A1 |
20040049388 | Roth et al. | Mar 2004 | A1 |
20040102971 | Lipscher et al. | May 2004 | A1 |
20040267528 | Roth et al. | Dec 2004 | A9 |
20050159948 | Roth et al. | Jul 2005 | A1 |
20050234722 | Robinson et al. | Oct 2005 | A1 |
20050283364 | Longe et al. | Dec 2005 | A1 |
20060041427 | Yegnanarayanan et al. | Feb 2006 | A1 |
20060149587 | Hill et al. | Jul 2006 | A1 |
20060167685 | Thelen et al. | Jul 2006 | A1 |
20070067186 | Brenner et al. | Mar 2007 | A1 |
20080077399 | Yoshida | Mar 2008 | A1 |
20080221893 | Kaiser | Sep 2008 | A1 |
20080281582 | Hsu et al. | Nov 2008 | A1 |
20100023312 | Heath et al. | Jan 2010 | A1 |
20120215557 | Flanagan et al. | Aug 2012 | A1 |
20130304453 | Fritsch et al. | Nov 2013 | A9 |
Number | Date | Country |
---|---|---|
WO 9205517 | Apr 1992 | WO |
Entry |
---|
Stelios E. Lambros. SmartPad: A Mobile Multimodal Prescription Filling System. University of Virginia thesis. Mar. 25, 2003. |
Scott Durling and Jo Lumsden. 2008. Speech recognition use in healthcare applications. In Proceedings of the 6th International Conference on Advances in Mobile Computing and Multimedia (MoMM '08), Gabriele Kotsis, David Taniar, Eric Pardede, and Ismail Khalil (Eds.). ACM, New York, NY, USA, 473-478. |
Kart, F.; Gengxin Miao; Moser, L.E.; Melliar-Smith, P.M., "A Distributed e-Healthcare System Based on the Service Oriented Architecture," Proceedings of the IEEE International Conference on Services Computing (SCC 2007), pp. 652-659, Jul. 9-13, 2007. |
Number | Date | Country | |
---|---|---|---|
20100023312 A1 | Jan 2010 | US |
Number | Date | Country | |
---|---|---|---|
61083029 | Jul 2008 | US |