Wireless communications system and method

Information

  • Patent Grant
  • 11250847
  • Patent Number
    11,250,847
  • Date Filed
    Wednesday, July 17, 2019
  • Date Issued
    Tuesday, February 15, 2022
Abstract
A wireless communication system includes a smart device configured for speech-to-text (STT) transcription for display. The smart device interfaces with a radio communications device, for example, in an aircraft (AC). The system includes a filter for optimizing STT functions. Such functions are further optimized by restricting the databases of information searched, including geographic locations, aircraft identifications and carrier information, thereby streamlining database search functions. Methods for wireless communications using smart devices and STT functionality are disclosed.
Description
BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates generally to the field of wireless communications, and in particular to an avionics communications system utilizing speech-to-text (STT) functionality with an onboard smart device.


2. Description of the Related Art

One of the most common communication practices in aviation and aircraft (AC) navigation is voice communication via radio frequency (RF) transmissions. Examples include communications amongst aircrew, Air Traffic Control (ATC), Automatic Terminal Information Services (ATIS), etc. Aircrew are frequently tasked with managing AC piloting and navigation while listening and responding verbally to ATC and ATIS communications. Various systems and methods have previously been proposed for managing and optimizing communications in aviation operations. However, heretofore there has not been available a system and method with the advantages and features of the present invention.


BRIEF SUMMARY OF THE INVENTION

In the practice of the present invention, a wireless communication system includes an RF communications radio configured for transmitting and receiving analog or digital voice communications. Without limitation on the generality of useful applications of the present invention, an onboard AC application is disclosed. An onboard smart device can be configured for personal use by an aircrew member or members. The smart device interfaces with the communications radio via a hardwired or wireless connection. The smart device includes a speech-to-text (STT) program configured for transcribing analog or digital communications and displaying them in textual format to an aircrew member. The system further includes localizing functions for optimizing operation via GNSS-defined AC locations, and a signal filter adapted to optimize speech recognition functions in environments such as AC cockpits and cabins.





BRIEF DESCRIPTION OF THE DRAWINGS

The drawings constitute a part of this specification and include exemplary embodiments of the present invention illustrating various objects and features thereof.



FIG. 1 shows the architecture of a communication system embodying an aspect or embodiment of the present invention.



FIG. 2 is a fragmentary, schematic diagram thereof, particularly showing a multi-channel system.



FIG. 3 is a fragmentary diagram of the system, particularly showing an aircrew headset, a smart (mobile) device and part of an AC cockpit panel, hardwired together via a Y-splitter.



FIG. 4 is a fragmentary diagram of the system, particularly showing a wireless interface between the smart device, an aircrew headset and the AC cockpit panel.



FIG. 5 is a fragmentary diagram of the system, particularly showing a training system with a machine learning cluster.



FIG. 6 is a high level STT processing flowchart.



FIGS. 7-9 show alternative embodiment STT flowcharts.



FIG. 10 is a map showing an application of the localization function.



FIG. 11 is a Venn diagram showing call sign recognition probability.



FIG. 12 is a flowchart showing the high-level architecture of a training system.



FIG. 13 is a schematic diagram of an audio processing filter subsystem.



FIG. 14 is a block diagram of the system in conjunction with an ADS-B component.



FIG. 15 is a block diagram of a user interface of the system.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

As required, detailed aspects of the present invention are disclosed herein, however, it is to be understood that the disclosed aspects are merely exemplary of the invention, which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art how to variously employ the present invention in virtually any appropriately detailed structure.


Certain terminology will be used in the following description for convenience in reference only and will not be limiting. For example, up, down, front, back, right and left refer to the invention as oriented in the view being referred to. The words "inwardly" and "outwardly" refer to directions toward and away from, respectively, the geometric center of the aspect being described and designated parts thereof. "Forwardly" and "rearwardly" are generally in reference to the direction of travel, if appropriate. Said terminology will include the words specifically mentioned, derivatives thereof and words of similar meaning.


I. Introduction and Environment


Referring to the drawings in more detail, the reference numeral 10 generally designates a wireless communications system embodying an aspect of the present invention. Without limitation on the generality of useful applications of the system 10, an exemplary application is in an AC for facilitating aviation operations. The AC includes a cockpit panel 12 mounting flight controls (e.g., yoke 14) and instrumentation 16, including a radio communications device 18 connected to a headset 20 enabling an aircrew member (e.g., pilot) to effectively engage in RF communications.


As shown in FIG. 1, onboard components of the system 10 include the radio communications device 18 (shown in a radio/audio panel), the pilot headset 20, an ADS-B device 22 (e.g., a Stratus device available from Appareo Systems, LLC of Fargo, N. Dak. (U.S. Pat. No. 9,172,481 for Automatic Multi-Generational Data Caching and Recovery, which is incorporated herein by reference)) and a microprocessor-based smart device 24.


Elements of the system 10 remote from the AC (e.g., ground-based or on another AC) include a server 26, a quality assurance operator 28 and a software application source (e.g., the Apple App Store) 30. The server 26 connects via the cloud (Internet) 32 to a Wi-Fi network 34 for connection to the onboard microprocessor 24. A global navigation satellite system (GNSS) source 33 provides positioning signals to the system 10. FIG. 1 shows communications paths for analog and digital communications among the system 10 components.



FIG. 2 shows a multi-channel or multi-frequency embodiment of the present invention. A single-channel (frequency) embodiment is feasible, but RF communications systems commonly utilize multiple channels for accommodating different types of communications (e.g., AC-AC, AC-ATC, emergency, weather, etc.). The radio/audio panel component 12/18 accommodates the radio communications device 18 and other cockpit panel 12 components. An STT box can be coupled with the microprocessor device 22. Audio input from the AC to the STT box can also be provided via an area microphone 36. FIG. 2 also shows the wired and/or wireless interfaces between the STT box 22 and the mobile/smart device 24.



FIG. 3 shows the hardwired embodiment of the invention with the headset 20, the smart/mobile device 24 and the radio communications device 18 interconnected via a Y-connector 36, which functions as an STT harness. Suitable plug-type, multi-conductor connectors can be utilized for efficiently assembling and disassembling the system 10 components. For example, aircrew often have their own headsets and smart devices, which can thus be transferred among multiple AC. FIG. 4 also shows these components, with a wireless audio input/output configuration with the smart/mobile device 24.



FIG. 5 shows a training system for the system 10, including a machine learning cluster 38 receiving inputs from an untrained model 40 and training data 42. The machine learning cluster 38 provides a trained model 44 as output.
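The training flow of FIG. 5 can be viewed as a function that accepts an untrained model 40 and training data 42 and returns a trained model 44. The following Python sketch is a minimal illustration of that flow under assumed conditions (a single-weight model fit by stochastic gradient descent); it is not the disclosed training cluster.

    import random

    def train(untrained_model, training_data, epochs=50, lr=0.01):
        # Machine learning cluster 38: fit the model parameters to the
        # training data 42 and return the trained model 44 (FIG. 5).
        w, b = untrained_model["w"], untrained_model["b"]
        for _ in range(epochs):
            for x, y in training_data:
                error = (w * x + b) - y
                w -= lr * error * x     # gradient step on the weight
                b -= lr * error         # gradient step on the bias
        return {"w": w, "b": b}

    untrained = {"w": random.random(), "b": 0.0}       # untrained model 40
    data = [(x, 2.0 * x + 1.0) for x in range(10)]     # training data 42
    trained = train(untrained, data)                   # trained model 44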



FIG. 6 shows a flowchart for the STT processing function of the system 10. FIGS. 7-9 show flowcharts for alternative STT processing procedures. Such alternative procedures can be chosen for effectiveness in particular applications. For example, AC applications may benefit from cabin noise filtering. Moreover, STT can be optimized by accounting for regional dialects among speakers, multi-lingual STT software, localization and context-based speech recognition models.
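By way of illustration only, the high-level STT processing chain of FIG. 6 (with stages recited elsewhere herein: cabin filtering, voice activity detection, feature extraction, acoustic-model inference and language-model decoding) can be sketched in Python as follows. The stub implementations, threshold and frame sizes are assumptions for illustration, not the disclosed algorithms.

    import numpy as np

    def cabin_filter(audio):
        # Placeholder cabin/cockpit noise suppression: remove the DC offset.
        return audio - np.mean(audio)

    def voice_activity(audio, threshold=0.01):
        # Crude energy-based voice activity detection (VAD).
        return float(np.mean(audio ** 2)) > threshold

    def extract_features(audio, frame_len=400, hop=160):
        # Windowed magnitude-spectrum features for each frame.
        window = np.hanning(frame_len)
        frames = [audio[i:i + frame_len] * window
                  for i in range(0, len(audio) - frame_len + 1, hop)]
        return np.array([np.abs(np.fft.rfft(f)) for f in frames])

    def transcribe(audio, acoustic_model, language_model):
        # FIG. 6 order: cabin filtering -> VAD -> feature extraction ->
        # acoustic-model inference -> language-model decoding.
        audio = cabin_filter(audio)
        if not voice_activity(audio):
            return ""
        features = extract_features(audio)
        frame_scores = acoustic_model(features)   # caller-supplied model
        return language_model(frame_scores)       # caller-supplied decoder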



FIG. 10 shows a database localization function based on GNSS-defined locations of the AC. For example, on a cross-country flight, communications with ATCs, other AC, etc. in proximity to the AC are of greater interest to the AC's flight crew. Waypoints and fixes are commonly identified with 4-5 characters for decoding. Thus, the current location of the AC can define a circular geographic area of interest with a predetermined radius (e.g., 250 nm). Such filtering and localization can also be accomplished by utilizing names of carriers, e.g., "FedEx" for a database subset corresponding to locations (airports, addresses, other businesses, etc.) serviced by the Federal Express Corporation. Still further, the database can utilize the tail number registrations assigned by the Federal Aviation Administration (FAA) for call sign localization. The system of the present invention utilizes these and other database functions for maximizing the probabilities of accurate identifications. Such probabilities can be modeled and effectively utilized by software located onboard or remotely for access via the cloud 32. FIG. 11 shows a hierarchy with hypothetical probabilities based on databases including: all call signs; localized call signs; and air and ADS-B traffic call signs. Other signals can be filtered and excluded. The operating efficiency of the system 10 can thus be optimized by focusing consideration of communications locally on a relatively small subset of communications nationwide or globally.
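A minimal sketch of this localization and call sign weighting, assuming a haversine great-circle distance, the exemplary 250 nm radius, and illustrative probability weights (the weights and the sample database contents are assumptions, not values from the disclosure):

    from math import radians, sin, cos, asin, sqrt

    EARTH_RADIUS_NM = 3440.1   # mean Earth radius in nautical miles

    def distance_nm(lat1, lon1, lat2, lon2):
        # Great-circle (haversine) distance between two lat/lon points, in nm.
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * EARTH_RADIUS_NM * asin(sqrt(a))

    def localized_subset(ac_lat, ac_lon, identifiers, radius_nm=250.0):
        # Restrict the database to entries inside the circular area of interest.
        return {name: pos for name, pos in identifiers.items()
                if distance_nm(ac_lat, ac_lon, pos[0], pos[1]) <= radius_nm}

    def call_sign_prior(call_sign, localized, adsb_traffic):
        # FIG. 11 hierarchy: air/ADS-B traffic > localized > all call signs.
        if call_sign in adsb_traffic:
            return 0.9      # assumed weight
        if call_sign in localized:
            return 0.5      # assumed weight
        return 0.1          # assumed weight

    # Hypothetical database entries (identifier -> (latitude, longitude)).
    fixes = {"KMCI": (39.2976, -94.7139), "KLAX": (33.9416, -118.4085)}
    nearby = localized_subset(39.1, -94.6, fixes)       # keeps only KMCI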



FIG. 12 shows a flowchart for implementing the present invention using speech corpora (which can be selected among multiple options), data set normalization, data augmentation, and a deep neural network employing bi-directional long short-term memory (LSTM) with Connectionist Temporal Classification (CTC), resulting in a trained language model output.
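A minimal training-step sketch of such a bi-directional LSTM with CTC loss, assuming a PyTorch implementation with illustrative feature and label dimensions and random stand-in data (not the disclosed training system):

    import torch
    import torch.nn as nn

    class BiLstmCtc(nn.Module):
        # Bi-directional LSTM acoustic model with a per-frame label projection.
        def __init__(self, n_feats=80, n_hidden=256, n_labels=29):  # 28 symbols + CTC blank
            super().__init__()
            self.lstm = nn.LSTM(n_feats, n_hidden, num_layers=2,
                                bidirectional=True, batch_first=True)
            self.proj = nn.Linear(2 * n_hidden, n_labels)

        def forward(self, feats):                      # feats: (batch, time, n_feats)
            out, _ = self.lstm(feats)
            return self.proj(out).log_softmax(dim=-1)  # (batch, time, n_labels)

    model = BiLstmCtc()
    ctc_loss = nn.CTCLoss(blank=0)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One illustrative training step on random stand-in data (normalized and
    # augmented features would come from the earlier FIG. 12 stages).
    feats = torch.randn(4, 200, 80)
    targets = torch.randint(1, 29, (4, 30))            # encoded transcripts
    log_probs = model(feats).transpose(0, 1)           # CTCLoss expects (time, batch, labels)
    loss = ctc_loss(log_probs, targets,
                    input_lengths=torch.full((4,), 200),
                    target_lengths=torch.full((4,), 30))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()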



FIG. 13 shows a filter subsystem flowchart 52 configured for use with the present invention. Signals progress from an audio frame 54 to a window 56 and then to a fast Fourier transform (FFT) 58. From a truncated spectral slope and power ratio step 60, the method proceeds to a decision box 62, whereat the signal is analyzed for a rolling-off characteristic. If "YES," the method proceeds to a Zero Past Samples step 64 and then to a Zero Current Samples step 66. Through a queue-zeroed-samples process, the method proceeds to a Sample Queue step 68 and then to a Look Ahead Full decision box 70. If "YES," speech recognition results at 72. If the signal being filtered is not rolling off, the method proceeds to an "Is Cabin" decision box: if "NO," the method proceeds to the Sample Queue step 68; if "YES," the method proceeds to a debounce step 76 and, upon satisfying the debounce, to the Zero Past Samples step 64.
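The flow of FIG. 13 can be sketched as follows, assuming a 16 kHz sample rate, a spectrum truncated to 0-4 kHz, and example slope, power-ratio and debounce thresholds; all of these values are assumptions for illustration and not limitations of the disclosed filter.

    import numpy as np
    from collections import deque

    SAMPLE_RATE = 16000
    sample_queue = deque(maxlen=SAMPLE_RATE * 5)       # look-ahead sample queue 68

    def spectral_stats(frame):
        # Window 56 and FFT 58, then a truncated spectral slope and power ratio 60.
        spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
        keep = freqs < 4000.0                          # truncate the spectrum
        slope = np.polyfit(freqs[keep], 20 * np.log10(spectrum[keep] + 1e-9), 1)[0]
        power_ratio = spectrum[keep].sum() / (spectrum.sum() + 1e-9)
        return slope, power_ratio

    def filter_frame(frame, is_cabin_audio, state):
        # Decision box 62: zero samples that appear to be rolling off (e.g.,
        # squelch tails); otherwise apply the cabin/debounce branch.
        slope, ratio = spectral_stats(frame)
        zero = slope < -0.01 and ratio > 0.95          # assumed thresholds
        if not zero and is_cabin_audio:
            state["debounce"] = state.get("debounce", 0) + 1
            if state["debounce"] >= 3:                 # assumed 3-frame debounce
                zero, state["debounce"] = True, 0
        sample_queue.extend(np.zeros_like(frame) if zero else frame)
        return len(sample_queue) == sample_queue.maxlen  # look-ahead full 70 -> recognize

    state = {}
    frame = 0.01 * np.random.randn(400)                # stand-in audio frame 54
    ready_for_recognition = filter_frame(frame, is_cabin_audio=False, state=state)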



FIG. 14 is a block diagram showing a Stratus software development kit (SDK) 74, a Knox (wireless communications system) library 84 including an STT library (e.g., in the C++ programming language) 86, and a base Stratus ecosystem 88. Other device libraries 90 can provide additional data. FIG. 15 is a block diagram of a user interface of the system.

Claims
  • 1. A speech-to-text (STT) avionics transcription system for transcribing radio frequency (RF) communications transmitted from and received on board an aircraft (AC), the system comprising: a smart device configured for receiving said RF communications; said smart device including a display visible to aircrew; and an STT program installed and running on said smart device, said STT program configured for converting said RF communications to digital text for display on said smart device, said STT program including a localization function configured for optimizing communications by maximizing probabilities of accurate identifications in converting said RF communications to digital text based on AC proximity to respective geographic locations along AC flight paths, the localization function including a call sign hierarchy function assigning probabilities to call signs in sets of I) all call signs, II) localized call signs, and III) air traffic and ADS-B traffic call signs, wherein the localized call signs set is a subset of the all call signs set, and localized call signs in the localized call signs subset have higher probabilities than call signs not included in the localized call signs subset, and wherein the air traffic and ADS-B traffic call signs set is a subset of the localized call signs subset, and air traffic and ADS-B traffic call signs in the air traffic and ADS-B traffic call signs subset have higher probabilities than call signs not included in the air traffic and ADS-B traffic call signs subset.
  • 2. The STT avionics transcription system according to claim 1, which includes: a radio communications device onboard said AC and configured for transmitting and receiving said RF communications; said radio communications device configured for selective connection to said smart device; and an audio subsystem onboard said AC, connected to said radio communications device and configured for output of said communications to aircrew.
  • 3. The STT avionics transcription system according to claim 2, which includes: said radio communications device and said smart device being interconnected via one of a hardwired or wireless connection; and a Y-connector including a first connection to said radio communications device, a second connection to said audio subsystem and a third connection to said smart device.
  • 4. The STT avionics transcription system according to claim 3, which includes: said radio communications device comprising a multiple-channel communications radio; and said multiple-channel communications radio is configured for monitoring: a primary communications channel for communicating with air traffic control (ATC) and other AC; and a secondary channel for weather and emergency communications.
  • 5. The STT avionics transcription system according to claim 2, which includes an automatic dependent surveillance-broadcast (ADS-B) device onboard said AC and connected to said radio communications device.
  • 6. The STT avionics transcription system according to claim 3, which includes a machine learning cluster configured for training said STT program, said machine learning cluster configured for receiving untrained model and training data input and providing trained model output.
  • 7. The STT avionics transcription system according to claim 3, wherein said aircrew can access said communications audibly and/or visually.
  • 8. The STT avionics transcription system according to claim 3, wherein said STT avionics transcription system includes: cabin filtering, voice activity detection (VAD), feature extraction, an inference-neural network (acoustic model), decoder language model and natural language processing.
  • 9. The STT avionics transcription system according to claim 3, wherein said STT avionics transcription system includes: an inference-neural network (acoustic model) configured to receive acoustic model inference processing input; and a decoder (language model) configured to receive language model decoding input.
  • 10. The avionics system according to claim 3, which includes: a Global Navigation Satellite System (GNSS) onboard said AC and configured for generating a GNSS-defined AC location of said AC; and a database localization feature utilizing said GNSS-defined AC location.
  • 11. The speech-to-text avionics system according to claim 1, which includes: a database server located remotely from said aircraft and configured for interfacing with said smart device via a cloud network.
  • 12. The STT avionics transcription system according to claim 4, wherein said multiple-channel communications radio is configured for multi-frequency transmitting and receiving.
  • 13. The STT avionics transcription system according to claim 4, wherein said multiple-channel communications radio includes a primary frequency and multiple secondary frequencies, said multiple-channel communications radio configured for receiving and transmitting on said primary frequency and said multiple secondary frequencies.
  • 14. The STT avionics transcription system according to claim 1, which includes: a global navigation satellite system (GNSS) subsystem configured for locating an aircraft with said avionics system on board; and said STT avionics transcription system configured for optimizing performance by localizing said aircraft with said GNSS-defined position data.
  • 15. A speech-to-text (STT) avionics transcription system for transcribing radio frequency (RF) communications transmitted from and received on board an aircraft (AC), which system includes: a smart device configured for receiving said communications; said smart device including a display visible to aircrew; an STT program installed and running on said smart device, said STT program configured for converting said communications to digital text for display on said smart device, said STT program including a localization function configured for optimizing communications by maximizing probabilities of accurate identifications in converting said RF communications to digital text based on AC proximity to respective geographic locations along AC flight paths, said localization function includes a call sign hierarchy function assigning probabilities to call signs in sets of I) all call signs, II) localized call signs, and III) air traffic and ADS-B traffic call signs, wherein the localized call signs set is a subset of the all call signs set, and localized call signs in the localized call signs subset have higher probabilities than call signs not included in the localized call signs subset, and wherein the air traffic and ADS-B traffic call signs set is a subset of the localized call signs subset, and air traffic and ADS-B traffic call signs in the air traffic and ADS-B traffic call signs subset have higher probabilities than call signs not included in the air traffic and ADS-B traffic call signs subset, said STT program being configured to account for regional speaker dialects, multiple languages, and speech context; a radio communications device onboard said AC and configured for transmitting and receiving said RF communications; said radio communications device configured for selective connection to said smart device; an audio subsystem onboard said AC, connected to said radio communications device and configured for output of said communications to aircrew; said radio communications device and said smart device being interconnected via one of a hardwired or wireless connection; a Y-connector including a first connection to said radio communications device, a second connection to said AC audio subsystem and a third connection to said smart device; said radio communications device comprising a multiple-channel communications radio; and said multiple-channel communications radio is configured for monitoring: a primary communications channel for communicating with air traffic control (ATC) and other AC; and a secondary channel for weather and emergency communications; an automatic dependent surveillance-broadcast (ADS-B) device onboard said AC and connected to said radio communications device; a machine learning cluster configured for training said STT program, said machine learning cluster configured for receiving untrained model and training data input and providing trained model output; said STT avionics transcription system including: cabin filtering, voice activity detection (VAD), feature extraction, inference-neural network (acoustic model), decoder language model and natural language processing; said STT avionics transcription system including: the inference-neural network (acoustic model) receiving acoustic model inference processing input; and a decoder (language model) receiving language model decoding input; a Global Navigation Satellite System (GNSS) onboard said AC and configured for generating a GNSS-defined AC location of said AC; and a database localization feature utilizing said GNSS-defined AC location.
  • 16. A wireless communications method including the steps of: providing a smart device configured for receiving communications; providing said smart device with a display visible to aircrew; and installing on said smart device an STT program and running on said smart device, said STT program configured for converting said communications to digital text for display on said smart device, said STT program including a localization function configured for optimizing communications by maximizing probabilities of accurate identifications in converting said RF communications to digital text based on AC proximity to respective geographic locations along AC flight paths, the localization function including a call sign hierarchy function assigning probabilities to call signs in sets of I) all call signs, II) localized call signs, and III) air traffic and ADS-B traffic call signs, wherein the localized call signs set is a subset of the all call signs set, and localized call signs in the localized call signs subset have higher probabilities than call signs not included in the localized call signs subset, and wherein the air traffic and ADS-B traffic call signs set is a subset of the localized call signs subset, and air traffic and ADS-B traffic call signs in the air traffic and ADS-B traffic call signs subset have higher probabilities than call signs not included in the air traffic and ADS-B traffic call signs subset.
  • 17. The wireless communication method according to claim 16, which includes the additional steps of: providing a radio communications device selectively connected to said smart device; and optimizing performance of said wireless communications method using one or more of aircraft identification in an avionics application and global navigation satellite system (GNSS) positioning information.
  • 18. The wireless communication method according to claim 16, which includes the additional filtering steps of: providing an audio frame for receiving audio signals; defining a signal window; applying a fast Fourier transform (FFT) to said audio signals; and applying a truncated spectral slope and power ratio filter to said audio signals.
  • 19. The speech-to-text (STT) avionics transcription system according to claim 1, wherein the localization function is a database localization function based on a current location of the aircraft defining a circular geographical area of interest with a predetermined radius.
  • 20. The speech-to-text (STT) avionics transcription system according to claim 15, wherein the localization function is a database localization function based on a current location of the aircraft defining a circular geographical area of interest with a predetermined radius.
  • 21. The wireless communication method according to claim 16, wherein the localization function is a database localization function based on a current location of the aircraft defining a circular geographical area of interest with a predetermined radius.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority in U.S. Provisional Patent Application No. 62/699,044, filed Jul. 17, 2018, and also claims priority in U.S. Provisional Patent Application No. 62/715,380, filed Aug. 7, 2018, both of which are incorporated herein by reference.

US Referenced Citations (85)
Number Name Date Kind
3656164 Rempt Apr 1972 A
5296861 Knight Mar 1994 A
5400031 Fitts Mar 1995 A
5689266 Stelling Nov 1997 A
6222480 Kuntman et al. Apr 2001 B1
6422518 Stuff Jul 2002 B1
7068233 Thronburg et al. Jun 2006 B2
8102301 Mosher Jan 2012 B2
8155612 Husted et al. Apr 2012 B1
9886862 Burgess Feb 2018 B1
9893413 Johnson et al. Feb 2018 B2
9917657 Reyes Mar 2018 B1
10027662 Mutagi Jul 2018 B1
10102760 Foltan et al. Oct 2018 B1
10211914 Stayton Feb 2019 B2
10370102 Boykin et al. Aug 2019 B2
10532823 Barber Jan 2020 B1
20030063004 Anthony Apr 2003 A1
20030233192 Bayh et al. Dec 2003 A1
20040224740 Ball et al. Nov 2004 A1
20050114627 Budny et al. May 2005 A1
20050156777 King et al. Jul 2005 A1
20050220055 Nelson et al. Oct 2005 A1
20050246353 Ezer et al. Nov 2005 A1
20060057974 Ziarno et al. Mar 2006 A1
20060176651 Olzak Aug 2006 A1
20060216674 Baranov et al. Sep 2006 A1
20060227995 Spatharis Oct 2006 A1
20070020588 Batcheller et al. Jan 2007 A1
20070100516 Olzak May 2007 A1
20070142980 Ausman et al. Jun 2007 A1
20070159378 Powers et al. Jul 2007 A1
20070241936 Arthur et al. Oct 2007 A1
20080254750 Whitaker Filho Oct 2008 A1
20080261638 Wahab et al. Oct 2008 A1
20090146875 Hovey Jun 2009 A1
20090147758 Kumar Jun 2009 A1
20090248287 Limbaugh et al. Oct 2009 A1
20100092926 Fabling Apr 2010 A1
20100149329 Maguire Jun 2010 A1
20100231706 Maguire Sep 2010 A1
20110125503 Dong et al. May 2011 A1
20110160941 Garrec Jun 2011 A1
20110282521 Vlad Nov 2011 A1
20110282522 Prus Nov 2011 A1
20110298648 Ferro Dec 2011 A1
20110298649 Robin et al. Dec 2011 A1
20120001788 Carlson et al. Jan 2012 A1
20120038501 Schulte et al. Feb 2012 A1
20120098714 Lin Apr 2012 A1
20120215505 Srivastev et al. Aug 2012 A1
20120265534 Coorman Oct 2012 A1
20120299752 Mahmoud et al. Nov 2012 A1
20130093612 Pschierer et al. Apr 2013 A1
20130121219 Stayton May 2013 A1
20130137415 Takikawa May 2013 A1
20130171964 Bhatia et al. Jul 2013 A1
20130201037 Glover et al. Aug 2013 A1
20130265186 Gelli et al. Oct 2013 A1
20140024395 Johnson et al. Jan 2014 A1
20140081483 Weinmann et al. Mar 2014 A1
20140197981 Hartley et al. Jul 2014 A1
20140303813 Ihns Oct 2014 A1
20150083674 Sarno et al. Mar 2015 A1
20150162001 Kar et al. Jun 2015 A1
20150302870 Burke et al. Oct 2015 A1
20150341796 Williams et al. Nov 2015 A1
20150349875 Lauer et al. Dec 2015 A1
20150364044 Kashi et al. Dec 2015 A1
20160170025 Johnson et al. Jun 2016 A1
20160202950 Hawley Jul 2016 A1
20160301439 Brinkley Oct 2016 A1
20160347473 Khatwa et al. Dec 2016 A1
20160349361 Schulte Dec 2016 A1
20160363652 Hamminga et al. Dec 2016 A1
20160379640 Joshi et al. Dec 2016 A1
20170036776 He Feb 2017 A1
20170069312 Sundararajan Mar 2017 A1
20170106997 Bekanich Apr 2017 A1
20170178624 Friedland Jun 2017 A1
20170213552 Gupta Jul 2017 A1
20170299685 Mccullen et al. Oct 2017 A1
20170330465 Kim Nov 2017 A1
20180044034 Newman et al. Feb 2018 A1
20190147556 McCann May 2019 A1
Foreign Referenced Citations (1)
Number Date Country
106452549 Feb 2017 CN
Non-Patent Literature Citations (3)
Entry
Google search list, Feb. 2021 (Year: 2021).
“International Search Report and Written Opinion; PCT/US2019/042296; dated Oct. 11, 2019”.
“International Search Report and Written Opinion; PCT/US18/46301; dated Oct. 22, 2018”.
Related Publications (1)
Number Date Country
20200027457 A1 Jan 2020 US
Provisional Applications (2)
Number Date Country
62699044 Jul 2018 US
62715380 Aug 2018 US