This invention relates generally to mobile radio communication systems, and more particularly to a system and method for terminating a voice call in any burst within a multi-burst superframe.
Communication systems typically include a plurality of communication devices, such as mobile or portable radio units, dispatch consoles and base stations, which are geographically distributed among various base sites and console sites. The radio units wirelessly communicate with the base stations and each other using radio frequency (RF) communication resources, and are often logically divided into various subgroups or talk-groups. The base stations are hard-wired to a controller that controls communications within the system.
In a time division multiple access (TDMA) system, for example, voice transmission channels are divided into periodically repeated superframes, each of which includes multiple digitized voice bursts. Typically, the first burst in each superframe includes a voice frame synchronization pattern surrounded by encoded voice information. The remaining bursts may include link control information in the center of the encoded voice information instead of the voice frame synchronization pattern.
In such TDMA systems, a typical method for ending a voice call is for the transmitting radio unit to send a stand-alone termination burst following the last burst of the superframe during which the end of call event is detected. The termination burst generally contains a data synchronization pattern that is a symbol complement to the voice frame synchronization pattern, thus minimizing the risk of mistakenly terminating a call.
This method of terminating a voice call, however, has several drawbacks. First, when a dekey event indicates the end of the voice call before the last burst in the superframe, the radio unit must nonetheless keep transmitting the remaining bursts with some predetermined information, as the termination burst can only be transmitted after the last burst in the superframe. As a result, the slot channel remains occupied (i.e., the call is still technically “active”) until the end of the superframe even though the dekey event occurred earlier in the superframe, which prevents other units from using the slot channel during that time.
Additionally, in some call scenarios, such as a takeover in which a console call interrupts a voice call, audio from the interrupting source must be buffered until the current call has been properly terminated at the end of a superframe before the interrupting audio can be sent over the air. Such interruptions may occur multiple times during a single call, and each one may introduce a delay of up to the duration of the superframe under the baseline operation. This delay persists until the call ends.
Accordingly, there is a need for a system and method of terminating a voice call in any burst within a multi-burst superframe in a more efficient manner than the method described above.
Various embodiments of the invention are now described, by way of example only, with reference to the accompanying figures.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and/or relative positioning of some of the elements in the figures may be exaggerated relative to other elements to help improve the understanding of various embodiments of the present invention. Also, common but well-understood elements that are useful or necessary in a commercially feasible embodiment are often not depicted in order to facilitate a less obstructed view of these various embodiments of the present invention. It will further be appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art will understand that such specificity with respect to sequence is not actually required. It will also be understood that the terms and expressions used herein have the ordinary meaning as is accorded to such terms and expressions with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein.
The present invention is an apparatus and method for effectively and reliably terminating a voice call in any burst within a multi-burst superframe. The present invention involves a transmitting unit generating a termination burst upon detecting a dekey event, and is capable of transmitting the termination burst in any burst within the multi-burst superframe after all the buffered voice information has been transmitted and prior to the end of the multi-burst superframe. If, however, the last portion of the buffered voice information transmission requires the last burst of the superframe, the termination burst is transmitted at the beginning of the next superframe as in the prior art. The termination burst includes a data synchronization pattern, a slot type field indicating an end of a call, and an information field surrounding the data synchronization pattern and the slot type field. The information field is encoded from a predetermined voice encoder frame bit pattern engineered and/or reserved for the termination burst. A base station or other receiving unit monitors the incoming signal from the transmitting unit (e.g., the radio). Upon detecting the data synchronization pattern, the receiving unit decodes the slot type field and the information field. The receiving unit determines whether the decoded slot type field is indicative of the end of a call, and whether a specific portion of the decoded information field matches that of the predetermined voice encoder frame bit pattern. If both are true, the receiving unit terminates the call. The present invention is now discussed in greater detail with reference to the figures. For clarity and exemplary purposes only, the following description and examples assume a TDMA system; however, other types of multi-user systems, e.g., dual frequency division multiple access (FDMA)/TDMA systems, may be used.
As shown in
Each base station 102 is comprised of at least one repeater transceiver 120 that communicates wirelessly with the communication units 108. The repeater transceiver 120 is coupled, via Ethernet, to an associated router 122, which is in turn coupled to the core router 104. Each repeater transceiver 120 may also include a memory 124, and a processor 126 capable of decoding and processing the received signals.
For purposes of the following discussion, the term “transmitting unit” is used to mean any communication unit or dispatch console that is transmitting a wireless TDMA signal. The term “receiving unit” is used to mean any base station, communication unit or dispatch console that is receiving the transmitted wireless audio signal from the transmitting unit.
Each voice call may also begin with a header 202. The header 202 may include a link control header burst, which may contain information such as a manufacturer identifier, a talk-group identifier, a source identifier, and a destination identifier. The header 202 may also have an encryption synchronization header burst if the voice transmission is encrypted. The encryption synchronization header burst may include information such as a message indicator, an encryption algorithm identifier, an encryption key identifier, and a data synchronization pattern.
Each superframe 100 begins with burst A regardless of whether the voice transmission includes the link control header burst and/or the encryption synchronization header burst. As shown in
Bursts B through F may similarly include three independent information frames 214, 216, and 218. However, unlike burst A, bursts B through F do not include a voice frame synchronization pattern, but instead substitute either link control information or key identifier information 212 in the middle of the burst. When transmitting voice call information, each information frame in bursts A-F corresponds to 20 ms of voice information that is compressed and error protected into a 72-bit encoded voice code word.
One process for encoding the voice information into a 72-bit voice code word is illustrated in
The 49-bit voice encoder frame 300 is further encoded by the processor 118 using forward error correction. In one embodiment, the twelve most significant bits contained in vector u0 are encoded with a (24,12,8) Golay code 302, resulting in a code word c0. The next twelve most significant bits contained in vector u1 are encoded with a (23,12,7) Golay code 304. The result of the Golay encoding of u1 is exclusive-ored with a 23-bit pseudorandom noise sequence (PN sequence) 306 generated from the 12 bits of u0. The result of the exclusive-or sum is defined as c1. Unlike vectors u0 and u1, vectors u2 and u3, which contain the least significant bits, are not encoded. Thus, code words c2 and c3 in
Of course, while one specific embodiment of a voice signal, an associated superframe structure, and an encoding process is described, those skilled in the art will readily understand that other structures may be used for the voice signal and the superframe, and other processes may be used for performing the forward error correction.
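As a concrete illustration of the encoding process described above, the following Python sketch partitions a 49-bit voice encoder frame into vectors u0 through u3 and assembles the 72-bit code word. Systematic polynomial encoding with a standard Golay generator polynomial is assumed, and because the text does not specify how the 23-bit PN sequence is derived from u0, lfsr23() is a hypothetical placeholder.

```python
# Sketch of the forward-error-correction step: c0 = (24,12,8) extended Golay
# code word for u0, c1 = (23,12,7) Golay code word for u1 XORed with a 23-bit
# PN sequence seeded from u0, and the 25 bits of u2/u3 carried uncoded.

GOLAY23_POLY = 0b101011100011  # x^11+x^9+x^7+x^6+x^5+x+1, a standard generator


def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial division."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend


def golay23_encode(u: int) -> int:
    """Systematic (23,12,7) Golay code word for a 12-bit message."""
    shifted = u << 11                      # message in the 12 high bit positions
    return shifted | poly_mod(shifted, GOLAY23_POLY)


def golay24_encode(u: int) -> int:
    """(24,12,8) extended Golay: (23,12) code word plus an appended parity bit."""
    c = golay23_encode(u)
    return (c << 1) | (bin(c).count("1") & 1)


def lfsr23(seed: int) -> int:
    """Hypothetical 23-bit PN sequence seeded from u0 (generator unspecified)."""
    state, out = (seed & 0xFFF) or 1, 0
    for _ in range(23):
        bit = (state ^ (state >> 1)) & 1
        state = (state >> 1) | (bit << 11)
        out = (out << 1) | bit
    return out


def encode_frame(u0: int, u1: int, u2u3: int) -> int:
    """Build the 72-bit code word: c0 (24b) | c1 (23b) | u2,u3 (25b uncoded)."""
    c0 = golay24_encode(u0)
    c1 = golay23_encode(u1) ^ lfsr23(u0)   # PN scrambling per the text
    return (c0 << 48) | (c1 << 25) | (u2u3 & 0x1FFFFFF)
```

The structural facts mirrored here are that only the 24 most significant bits receive Golay protection, while the 25 bits of u2 and u3 are transmitted uncoded.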
According to the present invention, a termination burst is configured to comply with the protocols of a typical superframe burst such that the termination burst can be transmitted in any burst within a multi-burst superframe. Thus, as shown in
In one embodiment, the data synchronization pattern 402 and the slot type field 404 in the termination burst may be configured similarly to a typical stand-alone burst or data/control burst. For example, in the Motorola ASTRO 6.25e (F2) system, the data synchronization pattern may be 48 bits in length and a symbol complement to a voice frame synchronization pattern generally included in burst A. The slot type field 404 may be 20 bits in length total, with 10 bits positioned on each side of the data synchronization pattern.
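The notion of a symbol complement can be sketched as follows. The dibit-to-deviation mapping used here (01 to +3, 00 to +1, 10 to -1, 11 to -3) is an assumption about the underlying 4-level FSK modulation, under which complementing a symbol negates its deviation, which minimizes the chance of a voice sync pattern being mistaken for a data sync pattern.

```python
# Complementing a symbol negates its deviation: +3 <-> -3 and +1 <-> -1,
# which in the assumed dibit mapping means 01 <-> 11 and 00 <-> 10.
COMPLEMENT = {"01": "11", "00": "10", "10": "00", "11": "01"}


def symbol_complement(bits: str) -> str:
    """Complement a bit string dibit-by-dibit (symbol-by-symbol)."""
    assert len(bits) % 2 == 0
    return "".join(COMPLEMENT[bits[i:i + 2]] for i in range(0, len(bits), 2))


voice_sync = "01" * 24                       # placeholder 48-bit pattern
data_sync = symbol_complement(voice_sync)    # every symbol maximally distant
```

Note that applying the complement twice recovers the original pattern, so either pattern can be derived from the other.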
IF1 406, IF2 408, and IF3 410 of the termination burst 400 may include predetermined code words for a termination burst. In one embodiment, a first predetermined code word for both IF1 406 and IF3 410 may have a unique bit pattern reserved solely for a termination burst, while a second predetermined code word for IF2 408 may have a bit pattern corresponding to a silent voice signal. The unique code word chosen for IF1 406 and IF3 410 is used by a receiving unit to detect the presence of the termination burst, as described in more detail below.
Constructing the unique code word for IF1 406 and IF3 410 in the termination burst may be performed in the following manner. First, a unique voice encoder frame is determined based on the bit definitions for the voice frame generated by the voice encoder 114. In particular, the unique voice encoder frame is chosen to have a bit pattern that would not otherwise be used by the voice encoder 114 when synthesizing a voice signal. For example, in the Motorola ASTRO 6.25e (F2) system, setting each of the bits corresponding to the pitch setting in a voice encoder frame to the same value results in an invalid frame that would not be generated by the voice encoder when synthesizing a voice signal or otherwise used by the system. Accordingly, a unique 49-bit voice encoder frame may be formed by setting all of bits 0-3 and 37-39 to the same value, either all 0s or all 1s.
Additionally, the bits representing the voicing setting and the gain setting may be set to 0. This allows the termination burst 400 to have minimal audible effect and not create undesirable noise in the event the termination burst is not properly detected (as discussed below) but is instead treated like a normal voice burst. The remaining bits (those representing the quantized spectral information of the voice signal) have no significant effect on the termination burst 400 and can therefore be chosen as desired.
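A minimal sketch of constructing such a reserved 49-bit frame follows, assuming the pitch bits occupy positions 0-3 and 37-39 as described above; all other bits (voicing, gain, and spectral information) are simply set to 0 here, which satisfies the condition that the voicing and gain settings be zero.

```python
# Build a 49-bit voice encoder frame the encoder would never emit:
# every pitch bit forced to the same value (all 0s or all 1s).
FRAME_BITS = 49
PITCH_BITS = list(range(0, 4)) + list(range(37, 40))  # bits 0-3 and 37-39


def make_unique_frame(pitch_value: int = 1) -> list[int]:
    """Return a 49-bit frame (as a bit list) reserved for the termination burst."""
    frame = [0] * FRAME_BITS          # voicing, gain, and spectral bits all 0
    for i in PITCH_BITS:
        frame[i] = pitch_value
    return frame


def is_termination_pattern(frame: list[int]) -> bool:
    """A frame qualifies when every pitch bit carries the same value."""
    return len({frame[i] for i in PITCH_BITS}) == 1
```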
Accordingly, one exemplary unique 49-bit voice code frame according to the present invention may be defined as follows:
Unlike IF1 406 and IF3 410, IF2 408 is not used by a receiving unit for detecting the termination burst 400. Accordingly, it may be desirable to choose a voice encoder frame pattern for IF2 408 that minimizes any undesirable audio effects. Thus, in one embodiment, the 49-bit voice encoder frame used to generate IF2 408 may be chosen to correspond to a silent voice signal, i.e., a 49-bit voice encoder frame pattern representative of silence. In one embodiment, this 49-bit silence pattern may be:
Each of the bit patterns for the unique voice encoder frame and the voice encoder silence frame may be stored in the memory of the transmitting unit such that they may be retrieved whenever a termination burst is generated. The unique voice encoder frame may also be stored in the memory of the receiving unit so that a received burst may be compared with the stored pattern to determine whether the received burst is a termination burst.
The unique voice encoder frame and the voice encoder silence frame described above are encoded using the same encoding process described with regards to a typical 49-bit voice encoder frame in
Although one specific 49-bit pattern is shown for generating IF1 406 and IF3 410 in the termination burst 400, it is understood that many other patterns may also be used so long as those patterns are unique and would never be created by the voice encoder 114 when synthesizing a voice signal. Additionally, the 49-bit pattern used to generate IF1 406 may be different from that used for IF3 410. Similarly, patterns other than the one silence pattern described for forming IF2 408 may also be used so long as they are indicative of a silent voice signal. Alternatively, if IF2 408 is intended to be used by a receiving unit for identifying a termination burst, a unique pattern similar to that described for IF1 406 and IF3 410 may also be used for IF2 408.
Once the data synchronization pattern 402, slot type field 404, IF1 406, IF2 408, and IF3 410 are generated, the termination burst is compiled by processor 118. As shown in
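The compilation step might be sketched as follows. The field widths (three 72-bit information fields, a 48-bit sync pattern, and two 10-bit slot type halves) come from the description above, but the exact ordering, in which IF2 is split around the centered sync and slot type fields, is an assumption.

```python
def compile_termination_burst(if1: str, if2: str, if3: str,
                              sync: str, slot_type: str) -> str:
    """Assemble a termination burst as a bit string.

    Assumed layout (widths from the text, ordering illustrative):
      IF1 (72) | IF2a (36) | ST (10) | SYNC (48) | ST (10) | IF2b (36) | IF3 (72)
    """
    assert len(if1) == len(if2) == len(if3) == 72
    assert len(sync) == 48 and len(slot_type) == 20
    st_a, st_b = slot_type[:10], slot_type[10:]   # 10 bits on each side of sync
    if2a, if2b = if2[:36], if2[36:]
    return if1 + if2a + st_a + sync + st_b + if2b + if3
```

Under these widths the assembled burst is 3 x 72 + 48 + 20 = 284 bits, with the data synchronization pattern occupying the center of the burst.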
If a dekey event has occurred, the process continues to step 506. In step 506, the predetermined 49-bit voice encoder frames for a termination burst are obtained. This can be done by either generating the bits for the predetermined voice encoder frame based on stored information or retrieving the predetermined voice encoder frame directly from the memory of the transmitting unit. In step 508, the 49-bit voice encoder frames are encoded to form the 72-bit code words for IF1, IF2, and IF3 using the process shown in
However, if a data synchronization pattern is detected in step 606, the process proceeds to step 608. In step 608, the slot type field is decoded. In step 610, the processor associated with the receiving unit determines whether the decoded information in the slot type field indicates an EOC. In one embodiment, this is performed by determining whether the decoded slot type field information includes a specific pre-defined 4-bit field representative of an EOC signal. If the slot type field does indicate an EOC, the EOC term is set to TRUE (step 612). If the slot type field does not indicate an EOC, the EOC term is set to FALSE (step 614). In either instance, the process proceeds to step 618.
In step 618, IF1 and IF3 are decoded to obtain a voice frame. In step 620, vectors u0 and u1 obtained from both decoded IF1 and decoded IF3 are compared to determine if they match with vectors u0 and u1 of the unique predetermined voice encoder frame pattern previously established and stored in the memory of the receiving unit. In particular, a first comparison is made between u0 of the voice frame decoded from IF1 of the received burst and u0 of the stored pattern; a second comparison is made between u1 of the voice frame decoded from IF1 of the received burst and u1 of the stored pattern; a third comparison is made between u0 of the voice frame decoded from IF3 of the received burst and u0 of the stored pattern; and a fourth comparison is made between u1 of the voice frame decoded from IF3 of the received burst and u1 of the stored pattern. In step 622, a value N is set to the number of times the decoded vectors u0 and u1, from IF1 and IF3, match the predetermined bit pattern.
If even further reliability is required in detecting whether a received burst is a termination burst, optional step 624 may be performed. In optional step 624, vectors u2 and u3 of the voice frames obtained from decoded IF1 and IF3 are compared with vectors u2 and u3 of the unique predetermined voice encoder pattern stored in the memory to determine if there is a match. In one embodiment, a match is found if at least 18 of the 25 bits in vectors u2 and u3 of each information field are identical to those in vectors u2 and u3 of the stored predetermined bit pattern.
In step 626, the processor determines whether the value N is greater than or equal to 2. If N is not greater than or equal to 2, the receiving unit processes the received burst as a normal voice burst (i.e., by also decoding IF2 and processing IF1, IF2 and IF3 as in a normal burst) in step 628, and the process returns to step 602. If N is greater than or equal to 2, the process proceeds to either step 630 (if step 624 was performed) or step 632 (if step 624 was not performed). If step 624 was performed, step 630 determines whether 18 of the 25 bits in vectors u2 and u3 of both IF1 and IF3 match those in the stored predetermined bit pattern. If they match, the process continues to step 632. If they do not match, the receiving unit processes the received burst as a normal voice burst (step 628), and the process returns to step 602. Of course, it is understood that the specific criteria may be changed depending on the reliability requirements of the system. For example, the process may require a threshold for N that is greater than or less than 2. The process may also require a different number of matching bits in vectors u2 and u3, or require that only one of IF1 or IF3 has matching u2 and u3 vectors.
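The receive-side decision of steps 618 through 636 can be sketched as follows, assuming the decoded vectors are available as bit lists; the dictionary layout and function names are illustrative only.

```python
def hamming_match(a: list[int], b: list[int], threshold: int) -> bool:
    """True if at least `threshold` bit positions agree."""
    return sum(x == y for x, y in zip(a, b)) >= threshold


def classify_burst(decoded: dict, stored: dict, eoc: bool,
                   check_u2u3: bool = True) -> str:
    """Classify a received burst as 'terminate', 'mute', or 'voice'.

    decoded: {"IF1": {...}, "IF3": {...}} with 12-bit lists "u0", "u1"
    and a 25-bit list "u2u3" per field; stored: the reserved pattern.
    """
    # Steps 620/622: count u0/u1 matches across IF1 and IF3.
    n = sum(decoded[f][v] == stored[v]
            for f in ("IF1", "IF3") for v in ("u0", "u1"))
    if n < 2:                                        # step 626
        return "voice"
    if check_u2u3:                                   # optional steps 624/630
        if not all(hamming_match(decoded[f]["u2u3"], stored["u2u3"], 18)
                   for f in ("IF1", "IF3")):
            return "voice"
    return "terminate" if eoc else "mute"            # steps 632-636
```

A burst matching the stored pattern thus terminates the call only when the slot type field also indicated an end of call; otherwise the audio is merely muted for the burst.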
In step 632, the EOC term is checked to determine whether it is set to TRUE or FALSE. If the EOC term is set to FALSE, the process proceeds to step 634. In step 634, the audio is muted for the duration of the burst, and the process returns to step 602. If, however, the EOC term is set to TRUE, the call is terminated at step 636.
By means of the present invention, upon detection of a dekey event at a transmitting unit, a termination burst may be effectively transmitted in any burst within a multi-burst superframe after all of the buffered voice information has been transmitted in order to signal a receiving unit to terminate the call. In addition, as discussed below, some example simulations and calculations were performed to illustrate that the above-described system is also reliable, and that falsing and detection performance was acceptable for a multi-user system (e.g., a TDMA system).
First, the performance of the system was simulated to determine the probability of successfully detecting a single transmitted termination burst according to the present invention. The simulations were performed under various channel conditions, specifically with the receiving unit and transmitting unit static with respect to one another, and with the receiving unit and transmitting unit traveling at 5 MPH and at 60 MPH relative to one another. The simulations were also performed assuming both a 2.6% bit error rate and a 5% bit error rate. The resulting data was as follows:
However, if a second termination burst is sent following the first termination burst, the probability of detecting the termination burst is even further increased as illustrated below:
In one embodiment described above, the actual decision of whether to mute or terminate a call is also qualified by verifying that at least 2 out of the 4 encoded vectors (u0 and u1 from IF1 and u0 and u1 from IF3) match the vectors u0 and u1 of the predetermined unique 49-bit pattern defined above. Accordingly, these criteria were used to calculate the probabilities of falsely muting or terminating a call.
The following calculations were performed based on the following assumptions: 1) two subscribers are continuously transmitting in both slots of a two-slot TDMA system respectively for 24 hours a day, and 2) both of the calls are secured or encrypted calls.
The probability of falsely muting a signal is the probability that the bits in at least 2 of the 4 encoded vectors (u0 and u1 from IF1 and u0 and u1 from IF3) match the unique predetermined voice encoder frame pattern after the vectors have been decoded by the receiving unit. For this to occur, at least 24 bits (i.e., 12 bits of one vector u0 or u1 and 12 bits of another vector u0 or u1 of IF1 or IF3) need to match. Assuming that 0s and 1s for each bit are equally probable, that there are four vectors from IF1 and IF3 (u0 and u1 from each), and that at least two of the four vectors must match, the probability can be computed as follows:
p_enc = (4C2)×(0.5)^24 + (4C3)×(0.5)^36 + (4C4)×(0.5)^48 = 3.5769×10^-7
If the time for one slot is 30 ms, the average time before the occurrence of a false mute is calculated as follows:
T(false_mute) = (1.0/p_enc) × 30×10^-3 s ≈ 8.4×10^4 s ≈ 23 hours
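This calculation can be reproduced directly; as in the text, the vanishingly small (1 - p) factors of the exact binomial expression are omitted.

```python
from math import comb

# Probability that a 12-bit vector matches by chance is (0.5)**12; each
# additional matching vector contributes another factor of (0.5)**12.
p_enc = (comb(4, 2) * 0.5 ** 24      # exactly 2 of 4 vectors match
         + comb(4, 3) * 0.5 ** 36    # exactly 3 of 4
         + comb(4, 4) * 0.5 ** 48)   # all 4
print(f"p_enc = {p_enc:.4e}")        # ≈ 3.5769e-07

slot_s = 30e-3                       # one 30 ms slot per burst
t_false_mute_h = (1.0 / p_enc) * slot_s / 3600.0
print(f"T(false_mute) ≈ {t_false_mute_h:.1f} hours")  # ≈ 23 hours
```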
Additionally, if the bits in vectors u2 and u3 are also verified against the unique predetermined pattern, the time before the occurrence is even further increased. Assuming, as discussed in one embodiment above, that at least 18 of the 25 bits in vectors u2 and u3 of both IF1 and IF3 must match the unique predetermined bit pattern, the probability of this happening for one of IF1 and IF3 is:
p_u2u3 = Σ(k=18 to 25) (25Ck)×(0.5)^25 ≈ 2.1642×10^-2
The probability of matching 18 of the 25 bits in u2 and u3 of both IF1 and IF3 is:
p_u2u3^2 = 4.6840×10^-4
Accordingly, the probability of false muting using both the 2 out of 4 test for vectors u0 and u1 and the 18 out of 25 matching test for vectors u2 and u3 is:
p_false_mute = p_u2u3^2 × p_enc = 1.6754×10^-10
As a result, when using both these tests, the average time before a false mute is:
T(false_mute) = (1.0/p_false_mute) × 30×10^-3 s ≈ 1.8×10^8 s ≈ 4.9×10^4 hours
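The combined false-mute figures can likewise be verified numerically:

```python
from math import comb

# P(one 25-bit u2/u3 pair matches in >= 18 positions), bits i.i.d. fair:
p_u2u3 = sum(comb(25, k) for k in range(18, 26)) * 0.5 ** 25
p_u2u3_sq = p_u2u3 ** 2                       # both IF1 and IF3 must pass
print(f"p_u2u3^2     = {p_u2u3_sq:.3e}")      # ≈ 4.684e-04

# p_enc as computed above (2-of-4 matching test on u0/u1).
p_enc = comb(4, 2) * 0.5 ** 24 + comb(4, 3) * 0.5 ** 36 + comb(4, 4) * 0.5 ** 48
p_false_mute = p_u2u3_sq * p_enc
print(f"p_false_mute = {p_false_mute:.4e}")   # ≈ 1.6754e-10

t_h = (1.0 / p_false_mute) * 30e-3 / 3600.0   # 30 ms slots, converted to hours
print(f"T(false_mute) ≈ {t_h:.2e} hours")     # ≈ 4.9e4 hours
```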
The probability of falsely terminating a call was calculated by multiplying p_false_mute by the probability that a false data synchronization pattern is detected and by the probability that the slot type field matches that of a voice terminator burst. The probability of a false data synchronization pattern detection is calculated as:
p_sync = Σ(i=0 to k) (48Ci)×(0.5)^48
where k is the maximum number of bits allowed in error for the data synchronization pattern. Assuming that the information in a slot type field after decoding consists of 4 bits, the probability of the slot type field looking like that of a voice terminator burst is 1 in 16. Accordingly, the probability of a false termination is:
p_term = p_false_mute × p_sync × p_slot_type = 7.9696×10^-17
Therefore, assuming again that each burst in the superframe is 30 ms in duration, the average time before false termination is:
T(false_term) = (1.0/p_term) × 30×10^-3 s ≈ 3.764×10^14 s, or approximately 1.05×10^11 hours
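The false-termination chain can be checked numerically as well. The value of k is not stated in the text; k = 9 is an assumption that reproduces the published p_term figure.

```python
from math import comb

# p_sync: probability that a random 48-bit field falls within k bit errors
# of the data synchronization pattern.  k = 9 is an assumed value chosen
# because it reproduces the p_term figure quoted above.
k = 9
p_sync = sum(comb(48, i) for i in range(k + 1)) * 0.5 ** 48
p_slot_type = 1.0 / 16.0                 # 4-bit decoded slot type field

p_false_mute = 1.6754e-10                # from the preceding calculation
p_term = p_false_mute * p_sync * p_slot_type
print(f"p_sync = {p_sync:.4e}")
print(f"p_term = {p_term:.4e}")          # ≈ 7.9696e-17

t_term_s = (1.0 / p_term) * 30e-3        # seconds per false termination
print(f"T(false_term) ≈ {t_term_s:.3e} s ≈ {t_term_s / 3600:.2e} hours")
```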
Further advantages and modifications of the above described system and method will readily occur to those skilled in the art. The invention, in its broader aspects, is therefore not limited to the specific details, representative system and methods, and illustrative examples shown and described above. Various modifications and variations can be made to the above specification without departing from the scope or spirit of the present invention, and it is intended that the present invention cover all such modifications and variations provided they come within the scope of the following claims and their equivalents.
Number | Date | Country
---|---|---
20080049711 A1 | Feb 2008 | US