1. Field of the Invention
The present invention relates to a method of decoding an error correction code used in W-CDMA (wideband code division multiple access) communication.
2. Description of Related Art
Viterbi decoding is a decoding method for a convolutional code and has been known as one of the most general error correcting methods. Viterbi decoding is a maximum likelihood decoding method in which the transition of the likelihood state is traced back to obtain a decoding result. Whether the decoding result is correct or not is determined by using an error detecting method such as CRC (cyclic redundancy check), and when the decoding result is in error, retransmission of the data is requested.
JP 07-288478 A discloses a conventional art in which the error correction performance of the above Viterbi decoding is improved. As shown in
Subsequently, in the ACS circuit 103, the metrics of the plural paths that transit to the respective states at the respective times are compared with each other. Then, not only is the path that is highest in the metric selected and its path select signal stored in the path select memory circuit 104, but the metric differences from the other paths are also stored there at the same time. Also, the path metric is stored and updated in the path metric memory circuit 105 as in the normal Viterbi decoding. Then, the trace back is conducted by the multiple trace back circuit 106 and the stack memory 107 on the basis of the information in the path select memory circuit 104. In this case, the number of survivor paths is not limited to one; depending on the given permissible metric difference, plural paths remain through plural times of trace back.
The operation of the decoding device 101 will be described.
Then, after the state one time earlier is calculated (Step S116) and the decoded data is calculated and stored (Step S117), the time is changed (Step S118), and it is determined whether the time is to be further traced back or not (Step S119). In the case where it is determined that one trace back has been completed, it is determined whether information has been stored in the stack memory or not (Step S120). In the case where no information has been stored in the stack memory, the processing is terminated (Step S129). Also, in the case where information has been stored in the stack memory, the branch flag of the most recently stored information item (the address that has been stored last) is determined as shown in
Then, the cumulative value of the number of branches to be traced back (TB) is calculated (Step S124). In the case where the cumulative value is equal to or lower than a limit value, the TB counter counts up (Step S126), the decoded data up to the branch point is copied from the previous decoded data (Step S127), and the path select signal in the state of the branch point is reversed (Step S128). Thereafter, the operation returns to Step S116, and the trace back is conducted. On the other hand, in the case where the cumulative value exceeds the limit value, it is determined that the processing time is insufficient, and the decoding process is terminated (Step S129).
That is, in the Viterbi decoding device 101, when the survivor paths in the respective states are selected in the ACS calculation, not only is the one path that is highest in the likelihood selected and its path select signal stored, but the likelihood differences between the path that is highest in the likelihood and the other paths are also stored together with it. Then, when the decoded data is obtained by the trace back, a multiple trace back is conducted in which not only the decoding path having the highest likelihood but also the paths whose likelihood differences from the highest likelihood path in the trace back section are equal to or lower than the permissible metric difference, which is a predetermined threshold value, are traced back, to obtain plural decoding path candidates.
Then, when the multiple trace back is conducted, the plural candidates are searched by a recursive method, and in each of the states that are traced back in this situation, the permissible metric difference and the likelihood difference that has been stored in that state are compared with each other. In the case where the likelihood difference is equal to or lower than the permissible metric difference, plural branches can be selected. A branch flag indicating the time of that state, the state, and which branch is selected for the trace back, together with a value obtained by subtracting the likelihood difference from the permissible metric difference, are stored in the stack memory.
When decoding is conducted by the recursive method, in the subsequent trace back the decoding results of the partial paths that are common to the paths traced in the previous trace back are reused as a copy, the differing partial paths are newly analyzed from the time and the state that have been stored in the stack memory, and only the path select signal at the branch point is reversed to conduct the trace back. As a result, since the decoded data of the remaining partial paths is obtained, the processing time can be reduced when plural paths are set as the survivor paths in the Viterbi decoding to obtain plural decoded data candidates. In this way, the plural survivor paths that are high in the likelihood in each of the states are saved, and the trace back is conducted plural times, to thereby increase the possibility that decoding is conducted correctly. Also, JP 06-284018 A is cited as a technique related to JP 07-288478 A.
However, we have now discovered that in the method disclosed in JP 07-288478 A, the plural paths having a likelihood difference that is equal to or lower than a certain threshold value are saved, and all of the decoding results of those plural paths must be saved. In addition, because the time point (time), the state, and the likelihood difference of each such path must also be saved, there arises a problem that a large quantity of memory for saving the decoded data is required as compared with a normal Viterbi decoding device.
According to one aspect of the present invention, a decoding method includes obtaining a first decoded result from a first decoding path on a trellis diagram; determining whether the first decoded result is incorrect or not; creating a second decoding path when the first decoded result is incorrect; and obtaining a second decoded result from the second decoding path.
According to the present invention, there can be provided the Viterbi decoding method that improves the error correction performance without increasing the memory.
These and other objects and many of the attendant advantages of the invention will be readily appreciated as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings.
The above and other objects, advantages and features of the present invention will be more apparent from the following description of certain preferred embodiments taken in conjunction with the accompanying drawings, in which:
The invention will now be described herein with reference to illustrative embodiments. Those skilled in the art will recognize that many alternative embodiments can be accomplished using the teachings of the present invention and that the invention is not limited to the embodiments illustrated for explanatory purposes. Now, a description will be given in more detail of specific embodiments of the present invention with reference to the accompanying drawings. In general, the Viterbi decoding method is a maximum likelihood decoding method, which traces back the transition of the likelihood state to obtain the decoding results. Whether the decoding results are correct or not is determined by using an error detecting method such as a CRC, and when an error is detected, the retransmission of data is requested. On the contrary, in this embodiment, in the case where an error is determined in the Viterbi decoding, the transition is changed and the trace back is conducted again, thereby obtaining the decoding results again. This operation is repeated until the error determination passes or until the number of repetitions reaches a specific number of times. The above operation makes it possible to increase the possibility that the decoding is conducted correctly. Further, in this embodiment, no specific memory is required, and the increase in logic can be suppressed to a minor amount.
First, a description will be given in brief of the Viterbi algorithm in order to facilitate the understanding of the present invention. The convolutional coding will be first described. Since the Viterbi algorithm is a decoding algorithm, it is necessary that the input data be coded. As the coding method, convolutional coding is used.
The convolutional coder 20 shown in
The constraint length refers to the number of past input bits required to obtain the output. When the constraint length is longer, the error correction performance is increased, but the configuration of the Viterbi decoding device becomes more complicated. The coding rate refers to the ratio of the input bits to the output bits of the coder. In the case where the coding rate is smaller, that is, where the number of output bits with respect to the input is larger, the transmission rate becomes lower, but the error correction performance is increased.
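As an illustration, the following minimal sketch in Python models a constraint length 3, coding rate 1/2 convolutional coder with two shift registers. The tap connections (the widely used generator polynomials 7 and 5 in octal) are an assumption made here for illustration only; the actual connections of the coder 20 are those given in the figure.

    # Minimal sketch of a constraint length 3, coding rate 1/2 convolutional coder.
    # The tap connections (generators 7 and 5 in octal) are assumed for illustration.
    def convolutional_encode(bits):
        d0, d1 = 0, 0                      # shift registers, initial state (D0, D1) = (0, 0)
        outputs = []
        for u in bits:
            out0 = u ^ d0 ^ d1             # generator 7 (octal): 1 + D + D^2
            out1 = u ^ d1                  # generator 5 (octal): 1 + D^2
            outputs.append((out0, out1))
            d0, d1 = u, d0                 # shift: new state (D0, D1) = (input, old D0)
        return outputs

    # With this assumed coder, inputting "1" from the state (0, 0) yields the outputs
    # (1, 1) and the next state (1, 0), while inputting "0" yields (0, 0) and keeps the
    # state (0, 0), which matches the coder behavior described below.
    print(convolutional_encode([1, 0, 0, 1, 1]))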
Now, a description will be given of a case in which data indicated in the following Table 1 is inputted. The input data to the convolutional coder 20 shown in
In this example, the initial values of the registers 21 and 22 at the start are set to “0”, and therefore the initial state is (D0, D1)=(0, 0). When, for example, “1” is inputted from the input terminal in this state, the outputs become “output0, output1”=“1, 1”, and at the next time, the state of the registers 21 and 22 becomes (D0, D1)=(1, 0). On the other hand, when “0” is inputted from the input terminal, the outputs become “output0, output1”=“0, 0”, and at the next time, the state of the registers 21 and 22 becomes (D0, D1)=(0, 0). In
For example, in the case where “0” of one bit is inputted to the convolutional coder 20 shown in
A course of the state transition which extends from a certain state at the initial time point on the trellis diagram to a certain state at a time point after the initial time point is called “path”. One decoding result is obtained by determining one path extending from the initial time point to a final time point on the trellis diagram. This is because when the path that extends from the initial time point to the final time point on the trellis diagram is determined, the branches at the respective time points which constitute the path are determined, and the code words at the respective time points including the initial time point to the final time point can be determined by the respective branches that constitute the path. When the code words at the respective time points including the initial time point to the final time point can be determined, the input data at the respective time points to the coder can be determined from the state transition of the coder, thereby obtaining the decoding result. When the determined path is correct with respect to the receive word, the decoding result is also correct whereas when the determined path is incorrect with respect to the receive word, the decoding result is also incorrect. Accordingly, it is necessary to determine a correct path with respect to the receive word in the Viterbi decoding. In order to determine the correct path with respect to the receive word in the Viterbi decoding, the likelihoods of the respective paths that extend from the state at the initial time point to the states at the time points after the initial time point on the trellis diagram are evaluated. In order to evaluate the likelihoods of the paths, the likelihoods of the respective state transitions with respect to the receive words at the respective time points are first required. This is because the path is the connection of the branches.
The likelihood of the transition from one state at one time point to another state at the subsequent time point is called “branch metric”. The branch metric is calculated on the basis of the code words of the respective branches on the trellis diagram and the receive words corresponding to the respective branches. How the branch metric is obtained differs between hard decision and soft decision. For example, in the hard decision, the Hamming distances between the code words corresponding to the respective branches on the trellis diagram and the receive words corresponding to the respective branches are used. The Hamming distance refers to the number of bits in which two bit strings differ. Accordingly, in the case where the Hamming distance is used as the branch metric, the likelihood of the transition becomes higher as the value of the branch metric is smaller.
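As a small illustration of this hard decision branch metric, the Hamming distance between a branch code word and the corresponding receive word can be computed in Python as follows; the code words and receive words used here are generic examples, not the ones of the trellis diagram.

    # Hamming distance between a branch code word and the receive word (hard decision).
    # A smaller value indicates a more likely state transition.
    def branch_metric(code_word, receive_word):
        return sum(c != r for c, r in zip(code_word, receive_word))

    # For example, if a branch carries the code word (0, 0) and the receive word is (1, 1),
    # the branch metric is 2; if the receive word is (0, 0), the branch metric is 0.
    print(branch_metric((0, 0), (1, 1)))   # -> 2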
As an example, let us consider the branch metric with respect to the state transition from the state (D0, D1)=(0,0) at the time t=0 to the state (D0, D1)=(1,0) at the time t=1 in
On the other hand, the branch metric with respect to the state transition from the state (D0, D1)=(0,0) at t=0 to the state (D0, D1)=(0,0) at t=1 is calculated in the same manner, and becomes 0. The branch metric represents the likelihood of the state transition, and the Hamming distance is used for the branch metric in
As described above, in order to determine the correct path with respect to the receive word in the Viterbi decoding, the likelihoods of the respective paths, which extend from the state at the initial time point to the states at the time points after the initial time point on the trellis diagram, are evaluated to determine the maximum likelihood path that is the highest in the likelihood. In the Viterbi decoding, the likelihood of a path that comes to a given state at a given time point on the trellis diagram is evaluated by a value called “path metric”. The path metric is calculated as the sum of the branch metrics of the branches that constitute the path that comes to that state on the trellis diagram. However, if the path metrics of all the paths that come to the respective states at the respective time points were to be obtained, the calculation amount would become enormous. Under the circumstances, the following method is used in the Viterbi decoding. That is, only the paths that are determined to be highest in the likelihood among the paths that come to the respective states at the respective time points on the basis of the path metric are adopted as the survivor paths, and the other paths are discarded to reduce the calculation amount.
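A minimal sketch in Python of this survivor path selection follows. The branch code words assume the illustrative tap connections of the coder sketch above, the initial path metrics follow the setting described later in this description (0 for the known initial state, a sufficiently large value for the other states), and the function and variable names are illustrative.

    # Minimal sketch of the forward recursion: at each time point, for each state, only
    # the most likely incoming path (smallest path metric) is kept as the survivor path,
    # together with a 1-bit selection result used later for the trace back.
    def viterbi_forward(receive_words):
        INF = 10 ** 9
        states = [(0, 0), (1, 0), (0, 1), (1, 1)]
        pm = {s: (0 if s == (0, 0) else INF) for s in states}      # initial path metrics
        sel = []                                                   # selection bits per time point
        for rw in receive_words:
            new_pm, sel_t = {}, {}
            for (d0, d1) in states:
                candidates = []
                for bit in (0, 1):                                 # D1 of the candidate predecessor
                    prev = (d1, bit)                               # predecessor state
                    u = d0                                         # input bit of this transition
                    cw = (u ^ prev[0] ^ prev[1], u ^ prev[1])      # branch code word (assumed taps)
                    bm = sum(c != r for c, r in zip(cw, rw))       # Hamming distance branch metric
                    candidates.append((pm[prev] + bm, bit))
                new_pm[(d0, d1)], sel_t[(d0, d1)] = min(candidates)
            pm = new_pm
            sel.append(sel_t)
        return pm, sel                                             # final path metrics, ACS results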
As shown in
For example, attention is paid to the time t=2 and the state (D0, D1)=(0,1) on the trellis diagram shown in
Through the above-mentioned method, the maximum likelihood path from the initial time point to the final time point on the trellis diagram with respect to the receive words at the respective time points is determined on the basis of the path metrics in the respective states at the respective time points, thereby obtaining the decoding results on the basis of that path. In obtaining the decoding results in the Viterbi decoding, a manner that is called “trace back” is used. The trace back refers to a method in which information on which of the two branches that come to each state at each time point on the trellis constitutes the survivor path is stored in advance during the process of determining the maximum likelihood path on the basis of the path metric, and the path is traced back toward the initial time point from the state that is highest in the likelihood at the final time point on the trellis diagram according to that information. In
The convolutional coder used in the actual W-CDMA communication is formed of a convolutional coder (refer to
Subsequently, a description will be given of the Viterbi decoding device according to this embodiment.
The received data storage section 11 receives and stores the received data as data used for the decoding of the received data. The ACS calculation section 12 is a portion that is the core of the Viterbi decoding calculation. Suppose that a state A and a state B exist at one time point on the trellis diagram, and that each of the states A and B has a branch that extends from the one time point to a state C at the time point after the one time point. In this situation, the ACS calculation section 12 compares the result of adding the branch metric of the branch that extends from the state A to the state C to the path metric of the survivor path that reaches the state A with the result of adding the branch metric of the branch that extends from the state B to the state C to the path metric of the survivor path that reaches the state B. Then, the path that is higher in the likelihood among the paths which reach the state C is adopted as the survivor path on the basis of the comparison result. Then, a process of selecting the one branch that constitutes the survivor path from the branches that extend from the states A and B to the state C is conducted.
The likelihood information storage section 13 is a portion that stores the likelihood information that has been added in the ACS calculation, and reuses the stored likelihood information at the time of the subsequent likelihood calculation. The likelihood information is repetitively used, thereby making it possible to enhance the error correction performance.
The ACS result storage section 14 is a portion that stores the selection results of the ACS calculation, and obtains the final decoding results from that information.
The ACS result conversion section 15 is a portion that forcedly converts the results of the ACS, and the transition is changed by the converting operation to enable the trace back. More specifically, this is a process of bit-inverting the selection result of the ACS calculation that has been read. The details of this operation will be described later.
The decoding calculation section 16 conducts trace back to obtain the decoding result related to the decoding path of a first candidate (first decoding path) that is highest in the likelihood. The CRC calculation section 17 conducts the CRC calculation to conduct the error determination of the decoding result in the case where the CRC is included in the obtained data. In this situation, in the case where an error is found in the decoding result due to the CRC, in this embodiment, the decoding path that is a second candidate is traced back by using the results of the ACS result conversion section 15 without conducting the retransmission of the data to again conduct the decoding. The decoding result storage section 18 stores the decoded data. The decoding control section 19 controls the Viterbi decoding device. That is, the decoding control section 19 controls the operation of the respective blocks.
In the Viterbi decoding device, upon receiving the received data, the decoding calculation section 16 obtains the first decoding path on the basis of the results of the ACS result storage section 14. Then, the decoding calculation section 16 obtains the decoded data with the use of the first decoding path. The CRC calculation section 17 detects an error in the decoded data. In the case where an error is in the first decoded data as a result of the error detection, the ACS result conversion section 15 converts the results of the ACS.
With the above operation, the decoding calculation section 16 changes the branch that has been selected as the branch which comes to a first state in the first decoding path on the trellis diagram to another branch, conducts the trace back, and obtains the second decoding path. Then, the decoding calculation section 16 again obtains the decoded data with the use of the second decoding path. The CRC calculation section 17 again conducts the CRC determination; in this way, even if an error is detected in the first decoding path that is the first candidate, the decoding results are obtained with the use of the second decoding path that is the second candidate without conducting the retransmission of the data. As a result, the decoding calculation section 16 improves the probability that the error is corrected. In this example, in the case where an error is detected in the decoded data of the second decoding path, the ACS result conversion section 15 again converts the results of the ACS, likewise. With that conversion, the decoding calculation section 16 changes the branch that has been selected as the branch which comes to a second state, which has been traced back from the first state in the first decoding path, to another branch, conducts the trace back, and obtains a third decoding path. Then, the decoding calculation section 16 again obtains the decoded data with the use of the third decoding path. In this way, for example, the decoding calculation section 16 changes the decoding path until the CRC determination result is acceptable or until a given number of times is reached, to obtain the decoding results. As a result, it is possible to improve the error correction performance.
Also, as will be described later, the decoding path that is selected as the second candidate is not limited to the decoding path that traces back the branch which is different from that branch that reaches the final state of the first decoding path. Alternatively, in the final state on the trellis diagram, in the case where the branch is traced back from a state in which the path metric is smallest to select the first decoding path, the decoding path that is traced back from a state in which the path metric is second smallest in the final state can be set as the second candidate.
Subsequently, a description will be given in more detail of the Viterbi decoding device according to this embodiment. First, the ACS calculation section 12 will be described. FIG. 7 shows the ACS at the time of transiting from a certain time point t to t+1. As described above, there exist two paths that reach the state C at the time point t+1. It is assumed that the original states of those paths are the states A and B. When the path metric from the time point 0 to t is denoted by PMt, the PM of the respective states is represented as PM(A)t and PM(B)t. The ACS calculation section 12 compares the results of adding the branch metrics BM (Hamming distance/Euclidean distance) (BM(A→C)t, BM(B→C)t), which are obtained from the input data at the time point t, to the PM (PM(A)t, PM(B)t) of the respective states with each other, and selects the smaller value, that is, the path whose likelihood is higher. The following expression is the selection expression:
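PM(C)t+1 = min(PM(A)t + BM(A→C)t, PM(B)t + BM(B→C)t)
(SEL = 0 when the state A side gives the smaller value, and SEL = 1 when the state B side gives the smaller value)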
In the above expression, the likelihood is larger as PM and BM are smaller, which is attributable to the fact that this embodiment uses the Hamming distance between the receive word and the code word for the branch metric. Accordingly, it is also possible to define different branch metrics such that the likelihood is larger as the branch metric and the path metric are larger. In the case where the likelihood is larger as the branch metric and the path metric are smaller, the initial values can be set so that the PM of the initial state is 0 and the PM of the other states is sufficiently large. For example, when the initial state is 0, PM(0)0 is 0, and PM(X)0 (X≠0) is a sufficiently large value. In the case where the likelihood is larger as PM and BM are larger, the reverse initial setting is conducted.
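A brief sketch in Python of this initial setting follows; the function name and the use of a fixed large constant are illustrative only.

    # Initial path metrics when a smaller metric means a higher likelihood: the known
    # initial state (state zero) starts at 0, and all other states start at a
    # sufficiently large value so that paths starting from them cannot win early.
    LARGE = 10 ** 9
    def initial_path_metrics(num_states, smaller_is_more_likely=True):
        if smaller_is_more_likely:
            return [0 if s == 0 else LARGE for s in range(num_states)]
        # Reverse setting when a larger metric means a higher likelihood.
        return [LARGE if s == 0 else 0 for s in range(num_states)]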
SEL is information on the selected path (survivor path). Since there exist only two paths that reach the state C, which path is selected can be found from 1-bit information (0, 1). This example shows a case in which “0” is stored in the memory when the state A is selected (for example, a state in which the branch passes through the upper arrow in
Subsequently, the decoding process of the decoding calculation section 16 will be described.
In the W-CDMA communication, 8-bit data of normally “0”, which is called “tail bits”, is inserted at the end of the convolutional coding, to thereby set the state in which the likelihood is highest at the final time point of the ACS calculation to 0. That is, in this example, “0” of two bits is inputted as the tail bits, to thereby set the final state to (00). In the present specification, this state is called “state zero”.
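A brief sketch in Python of this termination, using the two-register coder of the earlier sketch (two tail bits instead of the eight used in the W-CDMA coder; the data values are illustrative):

    # Appending constraint length - 1 tail bits of "0" drives the shift registers back
    # to zero, so the trace back can always start from state zero.
    data = [1, 0, 0, 1, 1]
    tail_bits = [0, 0]                 # two tail bits for the two-register example coder
    d0, d1 = 0, 0
    for u in data + tail_bits:
        d0, d1 = u, d0                 # only the state transition matters here
    assert (d0, d1) == (0, 0)          # the trellis is terminated in state zero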
When “0” is inputted twice after all of the data has been input to the coder 20, (D0, D1)=(0, 0) is always satisfied. That is, the trellis is terminated with the state zero, and the input/output data is represented by the above Table 2. As shown in
The path obtained by tracing back the survivor path in the above manner from the state in which the likelihood is highest is the first decoding path. As described above, the state can be decoded to “0” or “1” according to whether the state is even or odd. The decoding results are inputted to the CRC calculation section 17, and the decoding results are inspected by the CRC.
Subsequently, the details of the decoding calculation section 16 will be described. The ACS results are stored with binary (1 bit) information of 0 and 1 as shown in
The decoding calculation section 16 reads data indicated by circles in
D0′(31) and D1′(32) denote flip-flops. First, the state in which the trace back starts is set in the flip-flops 31 and 32 (D0′, D1′). In this example, since the state (11) is the state at which the trace back starts, D0′=1 and D1′=1 are set. Then, the value “0” (fifth row and fourth column) which is read from the ACS result storing RAM is inputted to the flip-flop 32 (D1′). As a result, the data of the flip-flop 32 (D1′) is outputted to the flip-flop 31 (D0′), and the data of the flip-flop 31 (D0′) is outputted as the decoding result (decoded data). Since the data that has been stored in the flip-flop 31 (D0′) is “1”, the first decoding result is “1”.
Subsequently, the ACS result at the time point 4 (t=4) for the state held in the flip-flops 31 and 32 (D0′, D1′) is read. Since the ACS result “0” at t=5 has been inputted to the flip-flop 32 (D1′), (D0′, D1′)=(1, 0) is now satisfied, and the ACS result of the state (1, 0) at t=4 is read. In this example, “0” is obtained. When this “0” is inputted to the flip-flop 32 (D1′), “1” is outputted as the decoding result. When the above process is executed until t=1, the decoding result “11001” is obtained. Since the results are obtained from behind, the order is reversed, and “10011” becomes the final decoding result.
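A minimal sketch in Python of this trace back follows. It assumes that the selection bits (ACS results) are held in a table indexed by time point and state, as produced by the forward recursion sketched earlier, and it mirrors the shift operation of the flip-flops 31 and 32; the names are illustrative.

    # Trace back using the stored 1-bit ACS results. Starting from the final state, the
    # decoded bit at each time point is D0' of the current state, and the state is then
    # shifted one time point back: (D0', D1') <- (D1', SEL).
    def trace_back(sel, start_state):
        d0, d1 = start_state
        decoded = []
        for sel_t in reversed(sel):        # from the final time point back to the first
            decoded.append(d0)             # decoded bit for this time point
            d0, d1 = d1, sel_t[(d0, d1)]   # follow the selected branch one step back
        decoded.reverse()                  # the bits are obtained from behind
        return decoded

    # In the worked example above, starting from the state (1, 1) and reading the ACS
    # results backwards gives "11001", which reversed yields the final result "10011".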
Then, the decoding results are determined by CRC. In this situation, when an error exists in the decoding results, the ACS result conversion section 15 converts the ACS result, and conducts the following process for selecting the second decoding candidate.
Subsequently, the processing of the ACS result conversion section 15 will be described. As shown in
The processing of the ACS result conversion section 15 will be described in more detail.
In the case where the decoding result of the second decoding path that is the second candidate is detected to be in error, the ACS result conversion section 15 reverses the ACS result so as to select the third decoding path that is the third candidate. As shown in
The contents in the ACS result memory are rewritten in
Now, a description will be given of a case in which the trellis is terminated as shown in
That is, as described above, in this embodiment, the decoding path of the first candidate can be set as the decoding path that is traced back from the state whose likelihood is highest among the final states. Also, the decoding path of the second candidate can be set as the decoding path that is traced back from the state whose likelihood is second highest among the final states. The details will be described later. However, in this example, there is a rule that the final state becomes 0 in the case where the tail bits are inserted; that is, since states other than the state (0,0) cannot be taken, the manner in which the trace back is conducted in the likelihood order of the final states cannot be used. Hence, in this case, the second candidate becomes the decoding path that is obtained by tracing back, from the final time point (state zero) of the first decoding path which is the first candidate, the branch different from that of the first decoding path. For that reason, the decoding path of the second candidate becomes the path of the thin solid line that starts from state zero (the state (0,0)). In this example, the PM of the final state (the state (0,0)) of the decoding path of the second candidate is 5.
Subsequently, a description will be given of the Viterbi decoding method according to this embodiment.
Upon completion of inputting all of the data, the decoding calculation section 16 obtains the final decoding result by the trace back (Step S6). For example, in the W-CDMA communication, because the CRC is added not to each item of the input data but to a data block of a certain unit, the decoding calculation section 16 acquires the decoding result of the unit data block. Then, the decoding calculation section 16 detects an error bit by the CRC (Step S7), and sequentially restores the second candidate and the third candidate if there is an error bit. First, it is determined whether the number of redecodings has reached a given number of times or not (Step S8), and when the number of redecodings is equal to or lower than the given number of times, the ACS result conversion section 15 reads the ACS result that is stored in the ACS result storage section 14 and inverts the bit (Step S9). Then, the decoding calculation section 16 traces back the decoding path of the second candidate with reference to the inverted ACS result to acquire the decoding result (Step S6). In this way, the decoding is repeated until no error is detected or until the given number of times is reached. In this example, the decoding path of the subsequent candidate is selected until the number of decodings reaches the given number of times in Step S8 as described above.
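A sketch in Python of this redecoding flow (Steps S6 to S9) follows, building on the trace_back sketch given earlier. The helpers crc_is_ok (a stand-in for the CRC calculation section 17) and visited_states are illustrative, and the choice to restore each inverted bit before moving on to the next branch point is an assumption of this sketch rather than something stated for the device.

    # States visited by the first decoding path, from the final time point backwards.
    def visited_states(sel, start_state):
        d0, d1 = start_state
        states = []
        for sel_t in reversed(sel):
            states.append((d0, d1))
            d0, d1 = d1, sel_t[(d0, d1)]
        return states                      # states[n] is n steps back from the final state

    # Decode, check the CRC, and on error invert one stored ACS bit so that the next
    # trace back branches off the first decoding path at that point (the role of the
    # ACS result conversion section 15), yielding the next candidate decoding path.
    def decode_with_redecoding(sel, start_state, crc_is_ok, max_retries):
        decoded = trace_back(sel, start_state)         # first candidate (Step S6)
        if crc_is_ok(decoded):                         # CRC determination (Step S7)
            return decoded
        first_path = visited_states(sel, start_state)
        last = len(sel) - 1
        for n in range(min(max_retries, len(first_path))):   # Step S8: limit the retries
            sel[last - n][first_path[n]] ^= 1          # Step S9: invert the ACS bit
            decoded = trace_back(sel, start_state)     # next candidate (Step S6)
            if crc_is_ok(decoded):
                return decoded
            sel[last - n][first_path[n]] ^= 1          # restore before the next branch point
        return None                                    # give up; retransmission is requested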
It is determined whether the decoding result of the first candidate is correct or not, and when the determination is affirmative, the decoding is terminated. On the other hand, when the determination is negative, the second candidate is decoded. The second candidate is the path at the final point which has not been selected, and the path that is higher in the likelihood is traced back from it to obtain the decoding result. The decoding result of the second candidate is again subjected to the error determination, and when there is no error, the decoding is terminated. Also, when there is an error, decoding of the third candidate starts. For the third candidate, the path which is in the state next to the final one and which has not been selected is chosen, and the path that is higher in the likelihood is traced back from it. The fourth candidate and the fifth candidate select the paths that are second and third before the final one and have not been selected, and thereafter the path that is higher in the likelihood is traced back.
As described above, because the ACS result storage section 14 stores the information (ACS result) on the maximum likelihood path that reaches each of the states, even when a path such as the second candidate or the third candidate is selected, the subsequent maximum likelihood path can be traced back from that point and the decoding result can be obtained.
This example can also be applied to a case in which the trellis is not terminated. In this case, the first candidate traces back the maximum likelihood path from the state that is highest in the likelihood (at the present time point), and the second candidate traces back from the path that reaches the state that is highest in the likelihood at the present time point but is not the maximum likelihood path. Alternatively, the second candidate can trace back the maximum likelihood path from the state that is second highest in the likelihood at the present time point.
In this embodiment, when the decoding result of the first candidate is in error, the decoding path that is the second candidate is selected to conduct the redecoding. When the decoding result of the second candidate is in error, the decoding path that is the third candidate is selected to conduct the redecoding. That is, in the case where an error is detected in the decoded data of an N-th (N≧2) decoding path, the decoded data of an (N+1)-th decoding path, which is traced back from the time point t=(N−1) counted from the final state (state zero) in the first decoding path, is used. As a result, it is possible to improve the error correction performance as compared with the conventional method that decodes nothing other than the first decoding path. Also, in this embodiment, there is always only one kind of survivor path that is selected as the second or third candidate, and the redecoding can be conducted by using the ACS result which is normally held. For that reason, no newly added hardware is required, and the error correction performance can be improved by an extremely simple method.
Subsequently, the reason that the error correction performance is improved in this embodiment will be described. Because the data that has been transmitted from the transmitting side is added with noises on a transmission line, ideal data is not received at the receiving side. The Viterbi decoding estimates the original data on the basis of those data, and issues a reply.
The normal Viterbi decoder (decoding device) issues the reply that has the maximum likelihood under the given conditions. Since it is not known at that time point whether the maximum likelihood reply is correct or not, the error determination is conducted by using an error detection code such as the CRC. In this case, it is necessary to transmit the CRC together with the data at the time of transmission. In the case where the determination indicates an error, the receiving side gives the transmitting side a retransmission request.
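As an illustration only, such a CRC check can be sketched in Python as follows; the polynomial shown is a generic example and is not the CRC actually specified for the W-CDMA transport channels.

    # Generic CRC error check (illustrative polynomial, not the W-CDMA CRC). The
    # transmitter appends the CRC bits; the receiver recomputes the CRC over the decoded
    # data and compares it with the received CRC bits.
    def crc_remainder(bits, poly=(1, 0, 0, 0, 0, 0, 1, 1, 1)):   # x^8 + x^2 + x + 1
        bits = list(bits) + [0] * (len(poly) - 1)
        for i in range(len(bits) - len(poly) + 1):
            if bits[i]:
                for j, p in enumerate(poly):
                    bits[i + j] ^= p
        return bits[-(len(poly) - 1):]

    def crc_check(decoded_bits, received_crc):
        return crc_remainder(decoded_bits) == list(received_crc)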
In the general Viterbi decoding device, in the case where the maximum likelihood reply is in error, the decoding is given up at that time point. However, in this embodiment, even in the case where the maximum likelihood reply is in error, it is possible to provide a second maximum likelihood reply by the second candidate. In the case where the second maximum likelihood reply is in error, a third maximum likelihood reply is provided by the third candidate. As described above, the provision of plural replies makes it possible to increase the possibility of correct answers (error correction performance).
Subsequently, a reason that the upgrading of the error correction performance is larger as the data size is smaller will be described. The following two reasons are proposed.
First, the number of error data bits is smaller as the data size is smaller. That the data size is smaller means that the number of incorrect bits is also smaller. For example, in the case of a pattern in which 1 bit is incorrect per 10 bits, 1 bit is incorrect when the data size is 10 bits, and 10 bits are incorrect when the data size is 100 bits, although the fact is not as simple as this example. When even 1 bit is incorrect, the decoding fails; however, in the former case, the decoding is successful when that 1 bit is corrected. Because the probability that 1 bit is corrected is higher than the probability that 10 bits are corrected, the probability that the decoding succeeds by applying the method according to this embodiment is higher as the data size is smaller.
Also, in the case where the data size is larger, even if a portion that is made incorrect by the first decoding is corrected by second or subsequent decoding, the possibility that other portions (originally correct portions) are newly made incorrect becomes high.
Second, as the data size is smaller, the error correction performance is lower. As described above, the likelihood information (input history) on the data that has been input before is also used in obtaining some data in the Viterbi decoding. The accumulated likelihood information is larger as the data volume is larger, and even when the error data is inputted at a certain time point, the possibility that the error data is corrected from the input history becomes high. In the case where the data volume is small, the input history is small with the result that the possibility that the error data is not corrected becomes higher. Accordingly, the error rate is smaller as the data size is larger to some degree. That is, since the correction performance is lower as the data size is smaller, the error is readily corrected by conducting the decoding by plural times.
The present invention is not limited only to the above-mentioned embodiments. It will be obvious to those skilled in the art that various changes may be made without departing from the scope of the invention. For example, in the above embodiment, the second candidate uses the decoding path that is traced back along a branch different from that of the first candidate when only one branch is traced back from the state at the time of terminating the trellis. In the W-CDMA, because the tail bits are inserted, the state zero at the time of terminating the trellis is highest in the likelihood, and the state zero is selected as the final state. On the contrary, in the case where no tail bit is inserted, the state that is highest in the likelihood in the final state on the trellis diagram is not limited to the state zero. Accordingly, in this case, as shown in
That is, in the case where an error is detected in the decoded data of the N-th decoding path, the (N+1)-th decoding path, which is traced back from the state that is (N+1)-th highest in the likelihood among the final states, is obtained. Then, redecoding is conducted with the (N+1)-th decoding path, thereby enabling the error correction performance to be improved.
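One possible sketch in Python of this selection, reusing the trace_back sketch given earlier, ranks the final states by path metric and traces back from the N-th best one; the names are illustrative.

    # When the trellis is not terminated, the candidates can be taken in the likelihood
    # order of the final states: the N-th candidate is traced back from the state whose
    # final path metric is N-th smallest. Reuses trace_back from the earlier sketch.
    def nth_candidate(final_pm, sel, n):
        ranked = sorted(final_pm, key=lambda state: final_pm[state])   # most likely first
        return trace_back(sel, ranked[n - 1])

    # first = nth_candidate(final_pm, sel, 1)    # maximum likelihood decoding path
    # second = nth_candidate(final_pm, sel, 2)   # candidate used when the CRC fails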
Also, in the above embodiment, the configuration of hardware has been described. However, the present invention is not limited to this configuration, and arbitrary processing can also be realized by allowing a CPU (central processing unit) to execute a computer program. In this case, the computer program can be recorded on a recording medium and supplied. Also, the computer program can be supplied by transmitting the computer program through the Internet or another transmission medium.
It is apparent that the present invention is not limited to the above embodiments, but may be modified and changed without departing from the scope and spirit of the invention.
Number: 298621/2006 | Date: Nov 2006 | Country: JP | Kind: national