1. Statement of the Technical Field
The inventive arrangements relate to communication systems, and more particularly to systems and methods for providing group calls over a network.
2. Description of the Related Art
There are various communication networks known in the art. Such communication networks include a Land Mobile Radio (LMR) network, a Wideband Code Division Multiple Access (WCDMA) based network, a Code Division Multiple Access (CDMA) based network, a Wireless Local Area Network (WLAN), an Enhanced Data rates for GSM Evolution (EDGE) based network and a Long Term Evolution (LTE) based network. Each of these communication networks comprises a plurality of communication devices and network equipment configured to facilitate communications between the communication devices. Each communication network often provides a group call service to service users. The group call service is a service by which a service user (e.g., first responder) is able to simultaneously talk to other service users (e.g., other first responders) associated with a particular talk group or where a service user (e.g., internet user) is able to simultaneously talk to other service users (e.g., other internet users) associated with a particular social media profile. The group call service can be implemented by a Push-To-Talk (PTT) group call service. The PTT group call service is an instant service by which the PTT service user is able to immediately talk to other PTT service users of a particular talk group or social media profile by pushing a key or button of a communication device.
During operation, the service users may be engaged in a plurality of group calls at the same time. In this scenario, the portable communication devices (e.g., LMR radios and/or cellular telephones) utilized by the service users cannot simultaneously capture speech exchanged between members of the plurality of group calls. For example, if a first portable communication device of a first service user is receiving speech transmitted from a second portable communication device of a second service user of a first talk group or social media profile (or priority talk group), then the first communication device is unable to simultaneously capture speech transmitted from a third communication device of a third service user of a second talk group or social media profile (or non-priority talk group). As such, speech associated with the second talk group or social media profile is undesirably lost.
Also during operation, one or more of the portable communication devices (e.g., LMR radios and/or cellular telephones) may be in their muted state. In the muted state, the audio outputs of the portable communication devices are silenced. In this scenario, the muted portable communication devices (e.g., LMR radios and/or cellular telephones) are unable to transfer speech of the plurality of group calls to their respective loudspeakers. As such, all information communicated during the group calls is undesirably lost.
Further during operation, one or more of the portable communication devices (e.g., LMR radios and/or cellular telephones) may be used in public safety and/or military covert operations. In this scenario, the service users do not want to be detected by a third party (e.g., an enemy or criminal), and therefore cannot rely on audible communications. As such, there is a need for portable communication devices (e.g., LMR radios and/or cellular telephones) which provide the service users with a means to receive messages in a discreet manner.
It should also be noted that a console operator (e.g., a 911 operator) utilizing a communication device of a central or dispatch station is able to simultaneously monitor information exchanges between service users of a plurality of talk groups or social media profiles. In this scenario, the speech of the plurality of talk groups or social media profiles is often summed or mixed together to form combined speech. Thereafter, the combined speech from the talk groups or social media profiles that are under active monitoring is concurrently output from a single loudspeaker or headset to the console operator. Also, the combined speech from the talk groups or social media profiles that are not under active monitoring is concurrently output from another single loudspeaker to the console operator. Consequently, the console operator often has a hard time understanding the speech exchanged between service users of the plurality of talk groups or social media profiles. The console operator may also have difficulty distinguishing which of the service users is speaking at any given time.
Embodiments of the present invention concern implementing systems and methods for avoiding loss of data (e.g., speech streams) in a Land Mobile Radio (LMR) communication system in which individual LMR devices are assigned to more than one talk group. Each of the LMR devices can include, but is not limited to, an LMR console or an LMR handset. A first method generally involves receiving a first transmitted voice communication from a first LMR device for a first talk group to which the first LMR device and a second LMR device have been assigned. The first method also involves receiving a second transmitted voice communication from a third LMR device for a second talk group to which the first LMR device and the third LMR device have been assigned. The second transmitted voice communication occurs at a time at least partially concurrent with the first transmitted voice communication. In response to concurrently receiving the first and second transmitted voice communications, at least one action is performed to preserve speech information content of the second transmitted voice communication. At least one signal can be generated to notify a user that the preserving action has been performed.
According to an aspect of the present invention, the action includes converting the speech information content to text and/or storing the speech information content for later presentation at the second LMR device. The speech-to-text conversion can be performed at the second LMR device and/or at a network server remote from the second LMR device. The action also includes displaying the text at the second LMR device. At least one time stamp can be provided for the text. At least one identifier can be provided for associating the text with the third LMR device. The text can be stored for subsequent use. In this scenario, the text can be converted to speech. The speech is presented as audio at the second LMR device.
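The preserving action described above (speech-to-text conversion, time stamping, sender identification, and storage for later presentation) can be sketched as follows. This is an illustrative sketch only; the names `PreservedMessage`, `preserve_speech`, and the placeholder `speech_to_text` recognizer are hypothetical, not part of the invention as claimed:

```python
import time
from dataclasses import dataclass, field

@dataclass
class PreservedMessage:
    """Text preserved from a voice communication, per the described action."""
    text: str                  # speech information content converted to text
    sender_id: str             # identifier associating the text with the sending device
    timestamp: float = field(default_factory=time.time)  # time stamp for the text

def speech_to_text(voice_packets: bytes) -> str:
    """Placeholder recognizer; a real device would run an HMM- or DTW-based engine."""
    return "<decoded speech>"

def preserve_speech(voice_packets: bytes, sender_id: str, store: list) -> PreservedMessage:
    """Convert the speech content to text, tag it, and store it for later presentation."""
    msg = PreservedMessage(text=speech_to_text(voice_packets), sender_id=sender_id)
    store.append(msg)  # retained for subsequent display or text-to-speech playback
    return msg
```

The stored text could later be converted back to speech and played as audio, as the aspect describes.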
According to another aspect of the present invention, the first and second transmitted voice communications are automatically converted to text if an audio output of the second LMR device is set to a mute condition.
A second method of the present invention involves receiving a first transmitted voice communication from a first LMR device for a first talk group to which the first LMR device and a second LMR device have been assigned. The second method also involves determining if a condition exists which prevents audio from the first transmitted voice communication from being played over a loudspeaker at the second LMR device. If the condition exists, at least one action is performed for automatically preserving a speech information content of the first transmitted voice communication.
According to an aspect of the present invention, the action involves converting the speech information content to text or storing the speech information content for later presentation at the second LMR device. The speech-to-text conversion can be performed at the second LMR device or a network server remote from the second LMR device. The action also involves displaying the text at the second LMR device. At least one time stamp can be provided for the text. At least one identifier can also be provided for associating the text with the first LMR device. The text can be stored for subsequent use. In this scenario, the text is subsequently converted to speech and presented as audio at the second LMR device.
According to another aspect of the present invention, the condition comprises an audio output of the second LMR device set to a mute condition. Alternatively, the condition comprises receiving a second transmitted voice communication from a third LMR device for a second talk group to which the second LMR device and the third LMR device have been assigned. The second transmitted voice communication occurs at a time at least partially concurrent with the first transmitted voice communication.
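The two conditions described for this aspect (a muted audio output, or a partially concurrent communication from another talk group already being played) can be modeled with a small decision function. The `DeviceState` fields below are assumptions made purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceState:
    muted: bool = False                # audio output set to a mute condition
    active_call: Optional[str] = None  # talk group whose audio is already playing

def playback_blocked(state: DeviceState, incoming_group: str) -> bool:
    """True if audio from the incoming communication cannot be played now:
    either the device is muted, or another group's audio is already playing."""
    if state.muted:
        return True
    return state.active_call is not None and state.active_call != incoming_group

# When blocked, the device would preserve the speech information content
# (e.g., convert it to text) instead of discarding it.
```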
A third method of the present invention generally involves receiving a first transmitted voice communication from a first communication device for a first social media profile to which the first communication device and a second communication device have been assigned. The third method also involves receiving a second transmitted voice communication from a third communication device for a second social media profile to which the first communication device and the third communication device have been assigned. The second transmitted voice communication occurs at a time at least partially concurrent with the first transmitted voice communication. In response to concurrently receiving said first and second transmitted voice communications, at least one action is performed to preserve a speech information content of the second transmitted voice communication.
A fourth method of the present invention generally involves receiving a first transmitted voice communication from a first communication device for a first social media profile to which the first communication device and a second communication device have been assigned. The fourth method also involves determining if a condition exists which prevents audio from the first transmitted voice communication from being played over a loudspeaker at the second communication device. If the condition exists, at least one action is performed to automatically preserve a speech information content of the first transmitted voice communication.
Embodiments will be described with reference to the following drawing figures, in which like numerals represent like items throughout the figures, and in which:
The present invention is described with reference to the attached figures. The figures are not drawn to scale and they are provided merely to illustrate the instant invention. Several aspects of the invention are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the invention. One having ordinary skill in the relevant art, however, will readily recognize that the invention can be practiced without one or more of the specific details or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the invention. The present invention is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with the present invention.
Referring now to
The communication system 100 can also employ a single communication protocol or multiple communication protocols. For example, if the communication system 100 is a Land Mobile Radio (LMR) based system, then it can employ one or more of the following communication protocols: a Terrestrial Trunked Radio (TETRA) transport protocol; a P25 transport protocol; an OPENSKY® protocol; an Enhanced Digital Access Communication System (EDACS) protocol; an MPT1327 transport protocol; a Digital Mobile Radio (DMR) transport protocol; and a Digital Private Mobile Radio (DPMR) transport protocol. If the communication system 100 is a cellular network, then it can employ one or more of the following communication protocols: a Wideband Code Division Multiple Access (WCDMA) based protocol; a Code Division Multiple Access (CDMA) based protocol; a Wireless Local Area Network (WLAN) based protocol; an Enhanced Data rates for GSM Evolution (EDGE) network based protocol; and a Long Term Evolution (LTE) network based protocol. Embodiments of the present invention are not limited in this regard.
As shown in
The communication system 100 may include more or fewer components than those shown in
The network 104 allows for communications between the communication devices 102, 106, 108 and/or console/dispatch center 110. As such, the network 104 can include, but is not limited to, servers 114 and other devices to which each of the communication devices 102, 106, 108 and/or console/dispatch center 110 can connect via wired or wireless communication links. Notably, the network 104 can include one or more access points (not shown in
Referring now to
As shown in
The controller 210 also provides information to the transmitter circuitry 206 for encoding and modulating information into RF signals. Accordingly, the controller 210 is coupled to the transmitter circuitry 206 via an electrical connection 238. The transmitter circuitry 206 communicates the RF signals to the antenna 202 for transmission to an external device (e.g., network equipment of network 104 of
An antenna 240 is coupled to Global Positioning System (GPS) receiver circuitry 214 for receiving GPS signals. The GPS receiver circuitry 214 demodulates and decodes the GPS signals to extract GPS location information therefrom. The GPS location information indicates the location of the communication device 200. The GPS receiver circuitry 214 provides the decoded GPS location information to the controller 210. As such, the GPS receiver circuitry 214 is coupled to the controller 210 via an electrical connection 236. The controller 210 uses the decoded GPS location information in accordance with the function(s) of the communication device 200.
The controller 210 stores the decoded RF signal information and the decoded GPS location information in a memory 212 of the communication device 200. Accordingly, the memory 212 is connected to and accessible by the controller 210 through an electrical connection 232. The memory 212 may be a volatile memory and/or a non-volatile memory. For example, the memory 212 can include, but is not limited to, a Random Access Memory (RAM), a Dynamic Random Access Memory (DRAM), a Static Random Access Memory (SRAM), Read-Only Memory (ROM) and flash memory.
As shown in
The controller 210 is also connected to a user interface 230. The user interface 230 comprises input devices 216, output devices 224, and software routines (not shown in
The user interface 230 is operative to facilitate a user-software interaction for launching group call applications (not shown in
The PTT button 218 is given a form factor so that a user can easily access the PTT button 218. For example, the PTT button 218 can be taller than other keys or buttons of the communication device 200. Embodiments of the present invention are not limited in this regard. The PTT button 218 provides a user with a single key/button press to initiate a predetermined PTT application or function of the communication device 200. The PTT application facilitates the provision of a PTT service to a user of the communication device 200. As such, the PTT application is operative to perform PTT communication operations. The PTT communication operations can include, but are not limited to, message generation operations, message communication operations, voice packet recording operations, voice packet queuing operations and voice packet communication operations.
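The PTT communication operations listed above (voice packet recording, queuing, and communication) could be organized as in the following sketch. The class name and the FIFO queue discipline are assumptions made for illustration, not the claimed implementation:

```python
from collections import deque

class PTTSession:
    """Minimal model of a PTT key press: record voice packets while the
    button is held, queue them, then drain the queue for transmission."""

    def __init__(self):
        self.queue = deque()
        self.talking = False

    def press(self):
        """Single key/button press initiates the PTT function."""
        self.talking = True

    def record(self, packet: bytes):
        if self.talking:  # packets are only captured while the key is held
            self.queue.append(packet)

    def release(self):
        self.talking = False

    def transmit(self) -> list:
        """Drain queued voice packets in order for communication to the network."""
        sent = []
        while self.queue:
            sent.append(self.queue.popleft())
        return sent
```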
Referring now to
As shown in
System interface 322 allows the computing device 300 to communicate directly or indirectly with external communication devices (e.g., communication devices 102, 106, 108 of
Hardware entities 314 may include microprocessors, application specific integrated circuits (ASICs) and other hardware. Hardware entities 314 may include a microprocessor programmed for facilitating the provision of group call services to users thereof. In this regard, it should be understood that the microprocessor can access and run group call applications (not shown in
As shown in
As evident from the above discussion, the communication system 100 implements one or more method embodiments of the present invention. The method embodiments of the present invention provide implementing systems with certain advantages over conventional communication devices. For example, the present invention provides a communication device that can simultaneously capture speech exchanged between members of a plurality of talk groups or social media profiles. The present invention also provides a communication device that can have its audio output muted without losing information communicated during a group call. The present invention further provides a communication device with a means to receive messages in a silent manner (e.g., a text form). The present invention provides a console/dispatch center communication device that can simultaneously output speech associated with a first talk group or social media profile and text associated with a second talk group or social media profile. In effect, the console operator can easily understand the speech exchanged between members of the first talk group or social media profile. The console operator can also easily distinguish from which members of the first and second talk groups or social media profiles a particular communication is received. The manner in which the above listed advantages of the present invention are achieved will become more evident as the discussion progresses.
If the speech-to-text conversion function of a communication device 106, 108, 112 is enabled, then the group call communication is displayed as text on a user interface thereof. The text can be displayed in a scrolling text banner, a chat window and/or a history window. A time stamp and/or an identifier of a party to a group call may be displayed along with the text. Also, an audible and/or visible indicator can be output from the communication device 106, 108, 112 if a specific word and/or phrase is contained in the text. Further, a particular event (e.g., data logging or email forwarding) can be triggered if a specific word and/or phrase is contained in the text.
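The word/phrase matching that drives an audible/visible indicator or a triggered event can be sketched as a simple scan over the converted text. The function names and callback-based trigger below are illustrative choices, not required structure:

```python
def scan_text(text: str, watch_phrases: list) -> list:
    """Return the pre-defined/pre-selected words or phrases found in the
    converted text. A hit would drive an audible/visible indicator or an
    event such as data logging or email forwarding."""
    lowered = text.lower()
    return [p for p in watch_phrases if p.lower() in lowered]

def handle_group_call_text(text: str, watch_phrases: list, on_hit) -> None:
    """Display path: invoke the indicator/event callback once per matched phrase."""
    for phrase in scan_text(text, watch_phrases):
        on_hit(phrase)
```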
The speech-to-text conversion can be accomplished at a communication device 106, 108, 112 using speech recognition algorithms. Speech recognition algorithms are well known to those having ordinary skill in the art, and therefore will not be described herein. However, it should be understood that any speech recognition algorithm can be used without limitation. For example, a Hidden Markov Model (HMM) based speech recognition algorithm and/or a Dynamic Time Warping (DTW) based speech recognition algorithm can be employed by the communication device 106, 108, 112. Embodiments of the present invention are not limited in this regard.
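As a concrete illustration of the DTW approach mentioned above, the classic dynamic-programming recurrence over two feature sequences is shown below. Real recognizers compare frames of acoustic feature vectors (e.g., cepstra); scalar features are used here purely for brevity:

```python
def dtw_distance(a: list, b: list) -> float:
    """Dynamic Time Warping distance between two feature sequences.
    A smaller distance means the utterance better matches the template."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # allow insertion, deletion, or match along the warped alignment path
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

A DTW-based recognizer would score incoming audio features against stored word templates and select the template with the smallest distance.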
Referring now to
At the communication device 106, the voice packets 410 are processed to convert speech to text. The text is displayed in an interface window of a display screen (e.g., display screen 228 of
At the communication device 108, the voice packets 410 are processed for outputting voice from a speaker (e.g., speaker 226 of
At the console/dispatch center communication device 112, the voice packets 410 are processed to convert speech to text. The text is displayed on a user interface (e.g., user interface 302 of
Referring now to
A user 504 of a communication device 506 also initiates a group call for a low priority talk group “LTG-2” or low priority social media profile “LSMP-2”. The group call can be initiated by depressing a button of the communication device 506 (e.g., the PTT button 218 of
At the communication device 106, the voice packets 510 are processed for outputting voice associated with a member of the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” from a speaker (e.g., speaker 226 of
At the communication device 108, the voice packets 510 are processed for outputting voice associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” from a speaker (e.g., speaker 226 of
At the communication device 112, the voice packets 510 are processed for outputting voice associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” from a user interface (e.g., user interface 302 of
Also in some embodiments, the text is analyzed at the network 104 to determine if a word and/or a phrase is contained therein. If the word and/or phrase is contained in the text, then the network 104 generates a command message for outputting an audible and/or visible indicator. The network 104 may also generate a command to trigger an event (e.g., data logging or email forwarding) if the word and/or phrase is contained in the text. The command message(s) is(are) communicated from the network 104 to the communication device. In response to the command message(s), an indicator is output and/or an event is triggered by the communication device.
The speech-to-text conversion can be accomplished at the network 104 using speech recognition algorithms. Speech recognition algorithms are well known to those having ordinary skill in the art, and therefore will not be described herein. However, it should be understood that any speech recognition algorithm can be used without limitation. For example, a Hidden Markov Model (HMM) based speech recognition algorithm and/or a Dynamic Time Warping (DTW) based speech recognition algorithm can be employed by the network 104. Embodiments of the present invention are not limited in this regard.
Referring now to
At the network 104, the voice packets 610 are processed to convert speech to text. The network 104 forwards voice packets 610 to communication device 108 which does not have its speech-to-text function enabled. The network 104 communicates the text in text messages or IP packets 612 to the communication devices 106, 112 which have their speech-to-text conversion function enabled at least for the talk group “TG-1” or social media profile “SMP-1”. Notably, the network 104 can also store the voice packets 610 and/or text messages or IP packets 612 for subsequent processing by the network 104 and/or for subsequent retrieval by communication devices 106, 108, 112.
At the communication device 106, the text messages or IP packets 612 are processed for outputting text to a user thereof. As shown in
At the communication device 108, the voice packets 610 are processed for outputting voice from a speaker (e.g., speaker 226 of
At the dispatch center communication device 112, the text messages or IP packets 612 are processed to output text to a user thereof. The text is displayed on a user interface (e.g., user interface 302 of
Referring now to
A user 704 of a communication device 706 also initiates a group call for a low priority talk group “LTG-2” or a low priority social media profile “LSMP-2”. The group call can be initiated by depressing a button of the communication device 706 (e.g., the PTT button 218 of
The network 104 forwards the voice packets 710 associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” to the communication devices 106, 108, 112. However, the network 104 processes the voice packets 712 associated with a low priority talk group “LTG-2” or low priority social media profile “LSMP-2” to convert speech to text. The network 104 communicates the text in text messages or IP packets 714 to the communication devices 106, 112 which have their speech-to-text conversion function enabled at least for the low priority talk group “LTG-2” or low priority social media profile “LSMP-2”. The network 104 can also store the voice packets 710 and/or 712 for subsequent processing by the network 104 for conversion of speech to text, and/or for subsequent retrieval by communication devices 106, 108, 112. The network 104 can also store the text messages or IP packets 714 for subsequent retrieval and processing.
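The network-side handling just described (forward the high priority group's voice, convert the low priority group's voice to text) can be sketched as a single routing function. The packet and message representations, and the numeric priority ranks, are simplified stand-ins for illustration:

```python
def route_group_traffic(packets_by_group: dict, priorities: dict, stt):
    """Forward voice for the highest-priority group; convert every
    lower-priority group's voice to text messages so its content survives.
    `stt` is any speech-to-text callable; `priorities` maps group -> rank
    (lower rank = higher priority)."""
    if not packets_by_group:
        return [], []
    top = min(packets_by_group, key=lambda g: priorities[g])
    forwarded, text_msgs = [], []
    for group, packets in packets_by_group.items():
        if group == top:
            forwarded.extend(packets)                 # e.g., "HTG-1" voice passes through
        else:
            text_msgs.append((group, stt(packets)))   # e.g., "LTG-2" becomes text
    return forwarded, text_msgs
```

Both outputs could additionally be stored at the network for subsequent retrieval, as the description notes.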
At the communication device 106, the voice packets 710 are processed for outputting voice associated with a member of the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” to a user thereof. The voice can be output from a speaker (e.g., speaker 226 of
At the communication device 108, the voice packets 710 are processed for outputting voice associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” to a user thereof. The voice can be output from a speaker (e.g., speaker 226 of
At the communication device 112, the voice packets 710 are processed for outputting voice associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” to a user thereof. The voice can be output from a user interface (e.g., a user interface 302 of
Each set of
Referring now to
Referring now to
Step 838 is performed to determine if a speech-to-text conversion function is enabled for the low priority talk group “LTG-2” or low priority social media profile “LSMP-2”. If the speech-to-text conversion function is not enabled for the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” [838:NO], then step 840 is performed. In step 840, speech associated with the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” is output to a user of the fourth communication device via a user interface (e.g., a speaker) thereof. If the speech-to-text conversion function is enabled for the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” [838:YES], then the method 800 continues with step 842.
Step 842 involves processing the voice packets to convert speech into text. Next, an optional step 844 is performed where the text is scanned to identify one or more pre-defined or pre-selected words and/or phrases. Upon completing the scan of the text, a decision step 846 is performed to determine if a pre-defined or pre-selected word and/or phrase was identified in the text. If the text contains at least one pre-defined or pre-selected word and/or phrase [846:YES], then step 848 is performed where an indicator is output to a user of the fourth communication device. The indicator can include, but is not limited to, an audible indicator and a visible indicator. Step 848 can additionally or alternatively involve triggering other actions (e.g., data logging and email forwarding). Subsequently, step 850 is performed which will be described below.
If the text does not contain one or more pre-defined or pre-selected words and/or phrases [846:NO], then step 850 is performed where the text is stored in a storage device of the fourth communication device. The text can be stored as a text string. Step 850 also involves outputting the text to the user of the fourth communication device via a user interface. Thereafter, step 852 is performed where the method 800 returns to step 802 or subsequent processing is performed.
Referring again to
If the speech-to-text conversion function of the third communication device is enabled [816:YES], then the method 800 continues with step 820. In step 820, the voice packets are processed to convert speech to text. Next, an optional step 822 is performed where the text is scanned to identify one or more pre-defined or pre-selected words and/or phrases. Upon completing the scan of the text, a decision step 824 is performed to determine if the pre-defined or pre-selected word and/or phrase was identified in the text. If the text contains at least one pre-defined or pre-selected word and/or phrase [824:YES], then step 826 is performed where an indicator is output to a user of the third communication device. The indicator can include, but is not limited to, a visible indicator and an audible indicator. Step 826 can additionally or alternatively involve triggering other actions (e.g., data logging and email forwarding). Subsequently, step 828 is performed which will be described below.
If the text does not contain one or more pre-defined or pre-selected words and/or phrases [824:NO], then step 828 is performed where the text is stored in a storage device of the third communication device. The text can be stored as a text string. Step 828 also involves outputting the text to the user of the third communication device via a user interface. Thereafter, step 830 is performed where the method 800 returns to step 802 or subsequent processing is performed.
Referring now to
If the speech-to-text conversion function of the third communication device is enabled [854:YES], then step 860 is performed where speech associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” is output to a user of the third communication device via a user interface thereof (e.g., a speaker). In a next step 862, the voice packets associated with the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” are processed to convert speech to text. Next, an optional step 864 is performed where the text is scanned to identify one or more pre-defined or pre-selected words and/or phrases. Upon completing the scan of the text, a decision step 866 is performed to determine if at least one pre-defined or pre-selected word and/or phrase was identified in the text. If the text contains at least one pre-defined or pre-selected word and/or phrase [866:YES], then step 868 is performed where an indicator is output to a user of the third communication device. The indicator can include, but is not limited to, a visible indicator and an audible indicator. Step 868 can additionally or alternatively involve triggering one or more other events (e.g., data logging and email forwarding). Subsequently, step 870 is performed which will be described below.
If the text does not contain one or more pre-defined or pre-selected words and/or phrases [866:NO], then step 870 is performed where the text is stored in a storage device of the third communication device. The text can be stored as a text string. Step 870 can also involve outputting the text to the user of the third communication device via a user interface. Thereafter, step 872 is performed where the method 800 returns to step 802 or subsequent processing is performed.
Referring now to
After receiving the voice packets at network equipment of the network in step 910, decision steps 912 and 924 are performed. Decision step 912 is performed to determine if a speech-to-text conversion function of the third communication device is enabled. If the speech-to-text conversion function of the third communication device is not enabled [912:NO], then the step 914 is performed where the voice packets are forwarded to the third communication device. Step 914 can also involve storing the voice packets associated with one or more of the talk groups “HTG-1”, “LTG-2” or social media profiles “HSMP-1”, “LSMP-2” in a storage device of the network for subsequent retrieval and processing thereby.
In a next step 916, the voice packets are received at the third communication device. Thereafter, the voice packets are processed in step 918 to output speech associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” to a user of the third communication device. The speech associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” is output to the user via a user interface of the third communication device. If the voice packets associated with the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” are also communicated to the third communication device, then step 920 is performed where these voice packets are discarded or stored in a storage device of the third communication device. Upon completing step 920, step 934 is performed where the method 900 returns to step 902 or subsequent processing is performed.
If the speech-to-text conversion function of the third communication device is enabled [912:YES], then the method 900 continues with step 936 of
Step 938 involves forwarding voice packets associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” to the third communication device. In step 940, the voice packets are received at the third communication device. At the third communication device, the voice packets are processed to output speech associated with the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” to a user of the third communication device. The speech can be output via a user interface (e.g., a speaker). Thereafter, step 962 is performed where the method 900 returns to step 902 or subsequent processing is performed.
Step 944 involves processing the voice packets associated with the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” for converting speech to text. In a next step 946, the text is stored in a storage device of the network for subsequent retrieval and processing thereby. The text can be stored in a log file of the storage device. Thereafter, an optional step 948 is performed where the text is scanned to identify at least one pre-defined or pre-selected word or phrase.
If one or more pre-defined or pre-selected words or phrases are identified [950:YES], then step 952 is performed where the network equipment generates at least one command for outputting an indicator and/or triggering other events (e.g., data logging and email forwarding). The text and command(s) are then communicated from the network to the third communication device in step 954. After receipt of the text and command(s) at the third communication device in step 958, the text and/or an indicator is output to a user thereof in step 960. The indicator can include, but is not limited to, an audible indicator and a visible indicator. Step 960 can also involve taking other actions (e.g., data logging and email forwarding) at the third communication device. Subsequently, step 962 is performed where the method 900 returns to step 902 or subsequent processing is performed.
If no pre-defined or pre-selected word or phrase is identified [950:NO], then step 956 is performed where the text associated with the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” is forwarded from the network to the third communication device. After receipt of the text at the third communication device in step 958, step 960 is performed. In step 960, the text associated with the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” is output to a user of the third communication device via a user interface. Thereafter, step 962 is performed where the method 900 returns to step 902 or subsequent processing is performed.
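As a non-limiting illustration of steps 944 through 956, the network-side conversion, logging, scanning, and command-generation sequence could be sketched as below. The `transcribe` callable stands in for an unspecified speech-to-text engine, and the keyword list, return shape, and command fields are all assumptions of the sketch rather than details of the specification.

```python
# Hypothetical sketch of steps 944-956 at the network equipment: convert low
# priority speech to text, store it in a log, scan for pre-defined words or
# phrases, and attach indicator commands when a match is found.

def process_low_priority_audio(voice_packets, transcribe, log_file, keywords):
    """Return the payload to forward to the third communication device.

    voice_packets -- the received low priority voice packets
    transcribe    -- stand-in speech-to-text function (step 944)
    log_file      -- stand-in for the network storage device log (step 946)
    keywords      -- pre-defined/pre-selected words and phrases (step 948)
    """
    text = transcribe(voice_packets)            # step 944: speech-to-text
    log_file.append(text)                       # step 946: store in log file
    hits = [k for k in keywords if k in text.lower()]  # step 948: scan text
    if hits:  # [950:YES] -> step 952: generate command(s), step 954: send both
        commands = [{"action": "output_indicator", "match": k} for k in hits]
        return {"text": text, "commands": commands}
    return {"text": text, "commands": []}       # [950:NO] -> step 956: text only
```

The same skeleton applies to steps 982 through 994 for the fourth communication device, with only the destination changed.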
Referring again to
If the speech-to-text conversion function of the fourth communication device is enabled [924:YES], then the method 900 continues with steps 964 and 966 of
If the speech-to-text conversion function of the fourth communication device is not enabled for the high priority talk group “HTG-1” or high priority social media profile “HSMP-1” [964:NO], then the method 900 continues with step 968. Step 968 involves identifying voice packets associated with the respective talk group (e.g., high priority talk group “HTG-1”) or social media profile (e.g., high priority social media profile “HSMP-1”). In a next step 970, the identified voice packets associated with the respective talk group or social media profile are forwarded from the network to the fourth communication device. After receiving the voice packets at the fourth communication device in step 972, step 974 is performed where the voice packets are processed to output speech associated with the respective talk group or social media profile to a user of the fourth communication device. In step 976, the speech associated with the respective talk group or social media profile is output via a user interface of the fourth communication device. Thereafter, step 999 is performed where the method 900 returns to step 902 or subsequent processing is performed.
The decision step 966 is performed to determine if a speech-to-text conversion function of the fourth communication device is enabled for the low priority talk group “LTG-2” or low priority social media profile “LSMP-2”. If the speech-to-text conversion function of the fourth communication device is not enabled for the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” [966:NO], then the method continues with steps 968-999 which are described above. If the speech-to-text conversion function of the fourth communication device is enabled for the low priority talk group “LTG-2” or low priority social media profile “LSMP-2” [966:YES], then the method continues with step 980.
Step 980 involves identifying voice packets associated with a respective talk group (e.g., low priority talk group “LTG-2”) or social media profile (e.g., low priority social media profile “LSMP-2”). In a next step 982, the identified packets are processed for converting speech to text. The text can be stored as a log file in a storage device of the network in step 984. As such, the text can be subsequently retrieved and processed by the network equipment and/or other communication devices. After completing step 984, an optional step 986 is performed where the text is scanned to identify at least one pre-defined or pre-selected word or phrase.
If one or more pre-defined or pre-selected words or phrases are identified [988:YES], then step 990 is performed where the network equipment generates at least one command for outputting an indicator and/or triggering one or more other events (e.g., data logging and email forwarding). The text and command(s) are then communicated from the network to the fourth communication device in step 992. After receipt of the text and command(s) at the fourth communication device in step 996, the text and/or at least one indicator is output to a user of the fourth communication device in step 998. The indicator can include, but is not limited to, an audible indicator and a visible indicator. Step 998 can also involve taking other actions (e.g., data logging and email forwarding) at the fourth communication device. Subsequently, step 999 is performed where the method 900 returns to step 902 or subsequent processing is performed.
If no pre-defined or pre-selected word or phrase is identified [988:NO], then step 994 is performed where the text associated with the respective talk group (e.g., the low priority talk group “LTG-2”) or social media profile (e.g., low priority social media profile “LSMP-2”) is forwarded from the network to the fourth communication device. After receipt of the text at the fourth communication device in step 996, step 998 is performed. In step 998, the text associated with the respective talk group (e.g., the low priority talk group “LTG-2”) or social media profile (e.g., low priority social media profile “LSMP-2”) is output to a user of the fourth communication device via a user interface. Thereafter, step 999 is performed where the method 900 returns to step 902 or subsequent processing is performed.
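To illustrate the per-group dispatch of decision steps 964 and 966 (again, a sketch under assumed names, not a disclosed implementation), the network's choice between the voice-forwarding path and the speech-to-text path can be reduced to a membership check against the set of groups for which conversion is enabled on the fourth communication device.

```python
# Hypothetical sketch of decision steps 964/966: per-group speech-to-text
# enablement on the fourth communication device selects the processing path.

def dispatch_for_group(group, stt_enabled_groups):
    """Select the method 900 path for a talk group or social media profile.

    group              -- e.g. "HTG-1", "HSMP-1", "LTG-2", "LSMP-2"
    stt_enabled_groups -- groups with speech-to-text enabled (assumed config)
    """
    if group in stt_enabled_groups:
        return "convert_to_text"   # [964:YES]/[966:YES] -> steps 980-998
    return "forward_voice"         # [964:NO]/[966:NO]   -> steps 968-976
```

This mirrors the text's structure: enabled groups take the conversion/scan path, while the remainder fall through to ordinary voice packet forwarding.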
All of the apparatus, methods and algorithms disclosed and claimed herein can be made and executed without undue experimentation in light of the present disclosure. While the invention has been described in terms of preferred embodiments, it will be apparent to those of skill in the art that variations may be applied to the apparatus, methods and sequence of steps of the method without departing from the concept, spirit and scope of the invention. More specifically, it will be apparent that certain components may be added to, combined with, or substituted for the components described herein while the same or similar results would be achieved. All such similar substitutes and modifications apparent to those skilled in the art are deemed to be within the spirit, scope and concept of the invention as defined.