Jointly optimizing signal equalization and bit detection in a read channel

Information

  • Patent Grant
  • 9768988
  • Patent Number
    9,768,988
  • Date Filed
    Thursday, December 20, 2012
  • Date Issued
    Tuesday, September 19, 2017
  • CPC
  • Field of Search
    • US
    • 375/231.000
    • 375/232.000
    • 375/262.000
    • CPC
    • G11B20/10009
    • G11B20/10046
    • G11B20/10055
    • G11B20/10296
    • G11B20/10481
    • G11B2220/2516
  • International Classifications
    • H03H7/40
    • H03H7/30
    • H03K5/159
    • H04L25/03
    • Term Extension
      739
Abstract
An apparatus and associated methodology providing read channel circuitry having a signal equalizer that sends an equalized signal to a bit detector. The read channel circuitry is capable of sampling values of the equalized signal to identify a bit transition from among a predefined plurality of different bit transitions. The apparatus may have channel optimization (CO) logic that, based on the input signal and the sampling of the equalized signal, defines first values for a programmable parameter of the bit detector that substantially maximize vector separations among vectors of waveform target samples corresponding to the predefined plurality of different bit transitions, while the CO logic also defines second values for a programmable parameter of the equalizer that substantially minimize the mean squared separation of the equalized signal segment for each bit transition from the waveform target corresponding to that bit transition.
Description
SUMMARY

Some embodiments of the described technology contemplate an apparatus for processing an input signal. The apparatus includes read channel circuitry having a signal equalizer that sends an equalized signal to a bit detector. The read channel circuitry is capable of sampling values of the equalized signal to identify a bit transition from among a predefined plurality of different bit transitions. In an embodiment the apparatus may include channel optimization (CO) logic that, based on the input signal and the sampling of the equalized signal, defines first values for a programmable parameter of the bit detector that substantially maximize vector separations among vectors of waveform target samples corresponding to the predefined plurality of different bit transitions, while also defining second values for a programmable parameter of the equalizer that substantially minimize the mean squared separation of the equalized signal segment for each bit transition from the waveform target corresponding to that bit transition.


Some embodiments of the described technology contemplate a method including operations such as: setting tap weights in a finite impulse response (FIR) filter to predetermined nominal values; defining targets for a Viterbi bit detector based on results of digitized (ADC) data samples of a known signal, the targets substantially maximizing a target vector separation between target vectors corresponding to different predefined bit transitions; and after defining the targets, defining optimized values for the tap weights that substantially minimize the mean square difference between the FIR filter output and the targets.


Some embodiments of the described technology contemplate a read channel circuit that may have channel optimization (CO) logic defining targets for a Viterbi detector by jointly substantially maximizing target vector separations among a plurality of predefined different bit transitions while minimizing a finite impulse response (FIR) filter variance in terms of FIR filter output with respect to the defined target vectors.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts a portion of read channel circuitry.



FIG. 2 depicts an input signal waveform sampled at four samples per dibit cycle.



FIG. 3 depicts a binary branch metric trellis for binary value shift possibilities from a first bit value to a second bit value.



FIG. 4 illustrates example operations for optimizing a read channel.





DETAILED DESCRIPTION

Initially, it is to be appreciated that this disclosure is by way of example only, not by limitation. The read channel concepts herein are not limited to use or application with any specific system or method that reads a data signal. Thus, although the instrumentalities described herein are for the convenience of explanation, shown and described with respect to exemplary embodiments, it will be appreciated that the principles herein may be applied equally in other types of systems and methods employing a read channel.



FIG. 1 depicts a portion 100 of read channel circuitry (or read channel 100 herein) in a disc drive data storage device, for purposes of this illustrative example. The disc drive includes a plurality of magnetic recording disks that are mounted to a rotatable hub of a spindle motor and rotated at a high speed. An array of read/write heads is selectively moved close to surfaces of the discs in a data transfer relationship therebetween. The heads are radially positioned by a rotary actuator and a servo control system.


The servo control system operates in two primary modes: seeking and track following. During a seek, a selected head is moved from an initial track to a target track on the corresponding disc surface. Upon reaching the target track, the servo control system switches to the track following mode to maintain the head in the data transfer relationship with the target track. During track following, prerecorded servo burst fields are sensed by the head and demodulated to extract a position error signal (PES) 101, which provides an indication of the position error of the head away from a desired location along the track (e.g., the track center). The PES is then converted into an actuator control signal, which is fed back to a head actuator that positions the head.


As the areal storage density of magnetic disc drives increases, so does the need for more precise position control when track following, especially in the presence of vibrations which can cause non-repeatable runout (NRRO) of the position error.


The read channel 100 receives an analog input signal 102, and is capable of mapping a sample sequence (such as the four samples per symbol, or dibit, discussed below) to one of a plurality of predefined different transition waveforms corresponding to bit transitions.


The read channel 100 as depicted includes a variable gain amplifier (VGA) 104, a continuous time filter (CTF) 105, and an adaptive gain control (AGC) 106 to normalize the input signal 102 to a selected magnitude range suitable for signal processing. A sampling switch 108 sends selectively timed sequences of the processed input signal to an analog-to-digital converter (ADC) 110 for digital sampling.


An n-order equalizer 112 performs time-domain equalization to filter the digitized signal to a desired response waveform in an equalized signal 113. For purposes of this description the n-order equalizer 112 may be a finite impulse response (FIR) filter, although the contemplated embodiments of the described technology are not so limited. While a variety of digital signal processing techniques may be utilized, the FIR filter advantageously utilizes a series of internal delay blocks and tap weight coefficient addition blocks to filter the samples to a selected class of ideal equalized target waveforms, such as PR4 or EPR4. The FIR filter updates the waveform via outputs that cover multiple clock cycles.
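For illustration, the following is a minimal NumPy sketch (not part of the patent) of how an M-tap FIR equalizer forms each output sample from its tap weights and the delay line of digitized samples; the tap values and sample data are hypothetical.

```python
import numpy as np

def fir_output(w, x):
    """Equalized output y(n) = sum_m w[m] * x(n - m) for an M-tap FIR filter.

    w : (M,) tap weight vector
    x : (N,) digitized ADC samples
    Returns the equalized sequence, starting where the delay line is full.
    """
    M = len(w)
    # Time-embed the input: row n holds [x(n), x(n-1), ..., x(n-M+1)]
    X = np.stack([x[M - 1 - m : len(x) - m] for m in range(M)], axis=1)
    return X @ w  # one equalized sample per time index

# hypothetical 5-tap weights and a short sample run
w = np.array([0.1, 0.25, 0.5, 0.25, 0.1])
x = np.random.randn(64)
y = fir_output(w, x)
```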


The output of the ADC 110 (or alternatively the FIR filter output) is provided to a timing recovery (TR) block 114 that applies timing recovery functions including the periodic timewise shifting of the sampling by control of the sampling switch 108.


A channel detector 116 generally maps the waveform samples to user bits. For purposes of this description the detector 116 may be a Viterbi detector although the contemplated embodiments of the described technology are not so limited. The Viterbi detector calculates branch metrics via a statistical derivation of the likelihood that an observed FIR filter output (equalized signal 113 waveform transition) matches one of a predefined number of different pattern-dependent targets.


A channel optimization (CO) block 118 executes computerized instructions performing CO logic that reduces bit errors when reading the input signal 102. The CO logic, based on the input signal 102 and the sampling of the equalized signal 113, may jointly maximize vector separations among vectors of waveform target samples corresponding to the predefined plurality of different bit transitions while minimizing the mean squared distance of a FIR filter output signal segment vector for each bit transition from the waveform target corresponding to that bit transition. By “based on,” it is meant that the CO logic has information about the input signal 102 and the sampling of the equalized signal 113 and bases its action upon that information.



FIG. 2 graphically depicts an illustrative waveform for defining wide bi-phase encoded symbols. FIG. 3 depicts a trellis in which each branch metric is a calculated likelihood that a particular predefined waveform transition was observed from a detected bit combination. That is, in these illustrative embodiments the trellis depicts all the binary value shift possibilities from a first bit value 300 to a second bit value 302. The four possible bit transitions are (1→1), (1→0), (0→1), and (0→0), and for purposes of the equations herein the bit transitions are denoted by respective subscripts 1, 2, 3, 4 on variables as follows:

Variable1: (1→1)  (1)
Variable2: (1→0)  (2)
Variable3: (0→1)  (3)
Variable4: (0→0)  (4)


The corresponding branch metrics are:

(1→1): $\sum_{i=0}^{3}[y_{n+i} - T_{1i}]^2$  (5)
(1→0): $\sum_{i=0}^{3}[y_{n+i} - T_{2i}]^2$  (6)
(0→1): $\sum_{i=0}^{3}[y_{n+i} - T_{3i}]^2$  (7)
(0→0): $\sum_{i=0}^{3}[y_{n+i} - T_{4i}]^2$  (8)


where $T_k$ represents the target vector of expected values for transition type k (listed above as 1, 2, 3, 4) against which the FIR output samples $y_n$ are compared.
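The branch metric computation in equations (5)-(8) can be sketched as follows, assuming NumPy and hypothetical sample and target values; each metric is the squared Euclidean distance between the four equalized samples of a symbol and the target vector for a candidate transition.

```python
import numpy as np

def branch_metrics(y_seg, targets):
    """Squared-distance branch metrics for one symbol.

    y_seg   : (4,) FIR output samples y(n)..y(n+3) for the symbol
    targets : (4, 4) rows T1..T4, one per transition (1->1, 1->0, 0->1, 0->0)
    Returns one metric per candidate transition; smaller means more likely.
    """
    return np.sum((targets - y_seg) ** 2, axis=1)

# hypothetical equalized samples and targets
y_seg = np.array([0.9, 0.1, -0.1, -0.9])
targets = np.array([[1, 1, 1, 1],
                    [1, 1, -1, -1],
                    [-1, -1, 1, 1],
                    [-1, -1, -1, -1]], dtype=float)
metrics = branch_metrics(y_seg, targets)
best = np.argmin(metrics)   # index of the most likely transition
```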


Assuming an M-tap FIR filter w and digitized signal samples x(n), the FIR filter output is:

$y(n) = w^T X(n)$  (9)

where:










$$X(n) = \begin{bmatrix} x(n) \\ x(n-1) \\ \vdots \\ x(n-M+1) \end{bmatrix}$$  (10)







X(n) need not contain consecutive x(n) vectors. For example, without limitation, the sampling could be conducted so that the FIR filter output is downsampled. A downsampling of two, for example, would contain only the odd or even indexed x(n) vectors. Generally speaking for purposes of this description, the ith vector element of X(n) is denoted as x(n)i, where i=1, 2, 3 . . . L. Thus, the FIR filter output samples passed on to the Viterbi detector for purposes of determining the branch metric may be defined in terms of:

$y(n) = [\,w^T x(n)_1 \;\; w^T x(n)_2 \;\; \cdots \;\; w^T x(n)_L\,]$  (11)


Each of the FIR filter outputs y(n) corresponds to one of the waveform transitions in equations (1)-(4) and thus will be labeled y1(n), y2(n), y3(n), and y4(n) respectively corresponding to the transitions 1, 2, 3, 4 as stated in equations (1)-(4).


transition 1 (1→1), with corresponding y1(n)


transition 2 (1→0), with corresponding y2(n)


transition 3 (0→1), with corresponding y3(n)


transition 4 (0→0), with corresponding y4(n)


For each transition class k, there is a target vector Tk against which y(n) is compared by the Viterbi detector to determine the branch metric. The present embodiments optimize that determination, thereby reducing bit error rate, by jointly optimizing two conditions: (1) maximizing the separation between the target vector Tj of one transition class “j” and a different target vector Tk of another transition class “k”; and (2) minimizing the mean square distance (variance) between the FIR filter output yk (n) and the target vector Tk for the respective transition class (or type, k=1, 2, 3, or 4, as listed in (1)-(4)).


The spacing is determined between each component Tji of vector Tj and the corresponding component Tki of vector Tk, where Tk is the mean of all yk(n) (the vector y at time index n corresponding to transition type, or class, k) for waveform transition k, such that:

$T_k = \langle y_k(n) \rangle_n$  (12)

where $\langle \cdot \rangle_n$ represents the average over n.


Formulating the analysis first for just the ith component, for simplicity's sake Tki, yk(n)i, and the like will be denoted simply as Tk, yk(n), and the like, with the understanding that they are the ith components of the respective vectors. The vector spacing (squared difference) between Tj and Tk is:










$$D_{kj} = [T_j - T_k]^2 = \big[\langle y_j(n)\rangle_n - \langle y_k(n)\rangle_n\big]^2$$  (13)
$$= \big[w^T\big(\langle x_j(n)\rangle_n - \langle x_k(n)\rangle_n\big)\big]^2$$  (14)
$$= w^T (m_j - m_k)(m_j - m_k)^T w$$  (15)
$$= w^T S_{kj} w$$  (16)









where $m_k = \langle x_k(n) \rangle_n$ and $S_{kj}$ is the between-class scatter matrix of transition type (or class) k with respect to transition type (or class) j.
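A sketch of equations (13)-(16) for one pair of transition classes, assuming the classified time-embedded sample vectors for classes j and k are available as NumPy arrays; the variable names are illustrative.

```python
import numpy as np

def between_class_separation(w, xj, xk):
    """D_kj = w^T S_kj w for a candidate tap-weight vector w.

    xj, xk : (Nj, M) and (Nk, M) time-embedded samples for classes j and k
    """
    m_j = xj.mean(axis=0)                    # m_j = <x_j(n)>_n
    m_k = xk.mean(axis=0)                    # m_k = <x_k(n)>_n
    S_kj = np.outer(m_j - m_k, m_j - m_k)    # between-class scatter, eqs. (15)-(16)
    return w @ S_kj @ w
```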


As for the second optimization, the variance of yk(n) is:










$$V_k = \big\langle \big(y_k(n) - \langle y_k(n)\rangle_n\big)^2 \big\rangle_n$$  (17)
$$= \big\langle [w^T(x_k(n) - m_k)]^2 \big\rangle_n$$  (18)
$$= w^T \big\langle (x_k(n) - m_k)(x_k(n) - m_k)^T \big\rangle_n\, w$$  (19)
$$= w^T S_k w$$  (20)









where $S_k$ is the within-class scatter matrix of transition k. The combined variance for transitions j and k, assuming they are not correlated, is:

$V_{kj} = V_k + V_j = w^T (S_k + S_j) w = w^T S'_{kj} w$  (21)

where $S'_{kj} = S_k + S_j$ is the within-class scatter matrix for transitions k and j.
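A corresponding sketch of equations (17)-(21): the within-class scatter of one class, and the combined variance for a pair of classes under the stated assumption that they are uncorrelated. Names are illustrative.

```python
import numpy as np

def within_class_scatter(xk):
    """S_k = <(x_k(n) - m_k)(x_k(n) - m_k)^T>_n for one transition class."""
    d = xk - xk.mean(axis=0)          # deviations from the class mean m_k
    return (d.T @ d) / len(xk)        # average of the outer products, eq. (19)

def combined_variance(w, xj, xk):
    """V_kj = w^T (S_k + S_j) w, per eq. (21)."""
    S_prime = within_class_scatter(xk) + within_class_scatter(xj)
    return w @ S_prime @ w
```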


Thus, the optimal FIR filter w that jointly maximizes the separation between transitions k and j while minimizing the FIR filter variance is that which maximizes the cost function:











$$J_{kj}(w) = \frac{D_{kj}}{V_{kj}} = \frac{w^T S_{kj} w}{w^T S'_{kj} w}$$  (22)







An equivalent but computationally more favorable definition for the within-class scatter matrix for transitions k and j is given by:

$S^a_{kj} = (m_k - m)(m_k - m)^T + (m_j - m)(m_j - m)^T$  (23)

This is a computationally more tractable formulation that can be used instead of $S'_{kj}$ in equation (22). Here, m is the overall mean of the data, $m = \langle x(n) \rangle_n$.
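A sketch of the cost function of equation (22), with the alternative scatter formulation of equation (23) shown as a drop-in for the denominator term; names are illustrative.

```python
import numpy as np

def cost_ratio(w, S_between, S_within):
    """J_kj(w) = (w^T S_kj w) / (w^T S'_kj w), the ratio maximized in eq. (22)."""
    return (w @ S_between @ w) / (w @ S_within @ w)

def alt_within_scatter(m_k, m_j, m):
    """S^a_kj of eq. (23): scatter of the two class means about the overall mean m."""
    return np.outer(m_k - m, m_k - m) + np.outer(m_j - m, m_j - m)
```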


Reiterating, each of the y(n) can be classified and represented as yk(n) coming from one of the four transitions (1→1, 1→0, 0→1, 0→0) indexed as k=1, 2, 3, 4. Correspondingly, X(n) can be represented as Xk(n) and x(n)i can be represented as xk(n)i belonging to the transition class k. It is advantageous to make the yk(n) for each transition class k as far from all the other transition classes as possible and also to keep the variance of the estimates low. Preferably, the linear discriminant problem is formulated for each of the components of the yk(n), solving for all four of the bit transition possibilities in the illustrative embodiments described herein. Thus, considering only the ith component of y(n), resulting from xk(n)i (the ith component for the transition class/type k obtained from equation (11), $y_k(n)_i = w^T x_k(n)_i$), the between-class scatter matrix for the ith component can be written for all bit transition possibilities:

$S_{Bi} = \sum_{k=1}^{4} (\mu_{ki} - \mu_i)(\mu_{ki} - \mu_i)^T$  (24)

where $\mu_{ki} = \langle x_k(n)_i \rangle_n$ and $\mu_i = \langle x_k(n)_i \rangle_{n,k}$. That is, $\langle x_k(n)_i \rangle_{n,k}$ represents the average of $x_k(n)_i$ over indices n and k, and $\langle x_k(n)_i \rangle_n$ represents the average over n.


The CO logic also derives the within-class scatter matrix for the ith component of y(n) by likewise including all of the terms corresponding to the four waveform transitions:

$S_{Wi} = \sum_{k=1}^{4} \big\langle (x_k(n)_i - \mu_{ki})(x_k(n)_i - \mu_{ki})^T \big\rangle_n$  (25)


The described technology contemplates the overall between-class and within-class scatter matrices as being the sum of the corresponding scatter matrices with the individual L components defined as above:

$S_B = \sum_{k=1}^{4}\sum_{i=1}^{L} (\mu_{ki} - \mu_i)(\mu_{ki} - \mu_i)^T$  (26)
$S_W = \sum_{k=1}^{4}\sum_{i=1}^{L} \big\langle (x_k(n)_i - \mu_{ki})(x_k(n)_i - \mu_{ki})^T \big\rangle_n$  (27)
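A sketch of equations (26) and (27), accumulating the overall between-class and within-class scatter matrices over all four transition classes and all L components. The data layout (a list of per-class, per-component arrays of time-embedded vectors) is an assumption made for illustration.

```python
import numpy as np

def overall_scatter(samples):
    """Build S_B and S_W per equations (26) and (27).

    samples[k][i] : (N_k, M) array of time-embedded vectors x_k(n)_i for
                    transition class k = 0..3 and component i = 0..L-1
    """
    L = len(samples[0])
    M = samples[0][0].shape[1]
    # overall mean mu_i of each component, taken across all classes
    mu = [np.mean(np.vstack([samples[k][i] for k in range(4)]), axis=0)
          for i in range(L)]
    S_B = np.zeros((M, M))
    S_W = np.zeros((M, M))
    for k in range(4):
        for i in range(L):
            xki = samples[k][i]
            mu_ki = xki.mean(axis=0)
            S_B += np.outer(mu_ki - mu[i], mu_ki - mu[i])   # eq. (26)
            d = xki - mu_ki
            S_W += (d.T @ d) / len(xki)                     # eq. (27)
    return S_B, S_W
```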


The optimal FIR filter w* is that which maximizes the objective function:










$$J(w) = \frac{w^T S_B w}{w^T S_W w}$$  (28)







Thus, the optimal FIR filter w*, the stationary point of equation (28), satisfies the following relationship:

$S_B w^* = J(w^*)\, S_W w^*$  (29)


This relationship is solved with J(w*) being the largest eigenvalue and w* the corresponding eigenvector. The solution w* can be scaled as desired for advantageous reasons such as targeted output levels, channel requirements, and the like. The corresponding target for use by the Viterbi bit detector for the waveform transition class k is:

$v_k = w^{*T}[\mu_{k1}\ \mu_{k2}\ \cdots\ \mu_{kL}]$  (30)
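A sketch of solving the generalized eigenvalue relationship of equation (29) and forming the Viterbi targets of equation (30). Plain NumPy is used here (solving with the inverse of S_W and taking the principal eigenvector); any generalized eigendecomposition routine could be substituted, and the scaling step is only illustrative.

```python
import numpy as np

def optimal_taps_and_targets(S_B, S_W, class_means, scale=1.0):
    """Return w* maximizing eq. (28) and the targets v_k of eq. (30).

    class_means[k] : (M, L) matrix whose columns are mu_k1 ... mu_kL
    """
    # principal generalized eigenvector of (S_B, S_W): S_B w = J S_W w
    vals, vecs = np.linalg.eig(np.linalg.solve(S_W, S_B))
    w_star = np.real(vecs[:, np.argmax(np.real(vals))])
    w_star *= scale / np.max(np.abs(w_star))    # illustrative scaling to channel levels
    targets = [w_star @ class_means[k] for k in range(4)]   # v_k, eq. (30)
    return w_star, targets
```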


FIG. 4 illustrates example operations in a method 400 for CHANNEL OPTIMIZATION. The method 400 begins in block 402, which samples a signal such as the equalized signal 113 in FIG. 1. As discussed previously, this signal sampling may include appropriate gain adjustment and digital processing, as well as providing predetermined nominal values for the FIR tap weights.


On the basis of using the predetermined nominal FIR tap weights, block 404 calculates the between-class and within-class scatter matrices, SB and SW respectively, such as by processing the data with equations (26) and (27) discussed in the illustrative embodiments above. In block 406 the optimized FIR filter vector w* is derived by finding the principal generalized eigenvector of SB and SW, such as by processing the signal data to obtain the two scatter matrices in equation (29) and then using any of the eigendecomposition methods known in the area of numerical linear algebra, as discussed in the illustrative embodiments above. In block 408 the optimized Viterbi target vector vk is derived in relation to the optimized FIR filter vector w*, such as by processing the signal data with equation (30) discussed in the illustrative examples above. The optimized values w* and vk can be scaled in block 410, as discussed above, as might be appropriate for targeted output values or other channel requirements.


In block 412 the read channel is parameterized by replacing the nominal tap weight values in the FIR filter and replacing any existing target values in the Viterbi detector with the optimized values w* and vk, respectively, that are derived in accordance with embodiments of the described technology. In some embodiments the method 400 may be performed during an initialization sequence, such as during a power on or power level change event. In other embodiments the method 400 may be routinely performed even after an initialization, such as during an operational sequence of the read channel. The latter advantageously provides for a continually iterative parameterization of the read channel to compensate for transient variations such as might be experienced by changes in input signal quality or external influences such as environmental changes. In any event, block 414 makes a determination as to whether the optimized parameters need to be recalculated. If the determination of block 414 is “no,” then processing continues in block 412 according to the previously derived values w* and vk. However, if the determination of block 414 is “yes,” then control returns to block 402 where a new set of optimized values w* and vk begins to be derived by once again sampling a known input signal.
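The flow of method 400 can be sketched as a short calibration routine composed from the helper functions sketched above (overall_scatter and optimal_taps_and_targets); the function and parameter names are assumptions, not part of the patent.

```python
def calibrate_read_channel(samples, class_means, scale=1.0):
    """Blocks 402-412: derive optimized taps w* and Viterbi targets v_k.

    samples     : classified, time-embedded ADC data gathered while the
                  nominal tap weights are installed (block 402)
    class_means : per-class mean matrices [mu_k1 ... mu_kL], each M x L
    """
    S_B, S_W = overall_scatter(samples)                  # block 404, eqs. (26)-(27)
    w_star, targets = optimal_taps_and_targets(
        S_B, S_W, class_means, scale)                    # blocks 406-410, eqs. (28)-(30)
    return w_star, targets                               # block 412: program FIR and Viterbi
```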


A linear phase constraint may be maintained if the optimized FIR filter is constrained to be symmetric. Generally, for a FIR filter of length M as in equation (10), the following linear transforms reduce the dimensionality based on symmetry:

$x'(n) = A x(n)$,  (31)
$w' = B w$,  (32)

where the size of A is $\lceil M/2 \rceil \times M$ and

$$A = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 & 0 & 1 \\ 0 & 1 & 0 & \cdots & 0 & 1 & 0 \\ \vdots & & & & & & \vdots \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}$$  (33)








and B is the same size as A and









$$B = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & 0 & 0 & \cdots & 0 \\ \vdots & & & & & & \vdots \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}.$$  (34)







Once the problem is solved in the lower dimensional space, obtaining the optimal w′, the corresponding optimal w can be obtained simply by tiling the first $\lfloor M/2 \rfloor$ elements of w′ at the end in reverse order, as described in the expression below:

$w = A^T w'$  (35)
Thus:
$y(n) = w^T x(n) = (A^T w')^T x(n) = w'^T A x(n) = w'^T x'(n)$  (36)
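A sketch of constructing the symmetry transforms A and B of equations (33) and (34) and recovering the full-length symmetric filter from w′ per equation (35); NumPy is assumed and the numeric values are hypothetical.

```python
import numpy as np

def symmetry_transforms(M):
    """Build the folding transform A (ceil(M/2) x M, eq. (33)) and the
    selection transform B (same size, eq. (34)) for a symmetric M-tap FIR."""
    half = (M + 1) // 2
    A = np.zeros((half, M))
    B = np.zeros((half, M))
    for r in range(half):
        A[r, r] = 1.0
        A[r, M - 1 - r] = 1.0   # fold the mirrored sample onto the same row
        B[r, r] = 1.0           # keep only the first ceil(M/2) taps
    return A, B

# recover the full symmetric filter from the reduced one, per eq. (35)
A, B = symmetry_transforms(5)
w_reduced = np.array([0.1, 0.25, 0.5])   # hypothetical w' = [a, b, c]
w_full = A.T @ w_reduced                 # -> [a, b, c, b, a]
```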


This can be demonstrated with an example. Using a 5-tap FIR filter, the corresponding time-embedded vectors are five-dimensional. With the FIR filter being symmetric, it can be represented as:









$$w = \begin{bmatrix} a \\ b \\ c \\ b \\ a \end{bmatrix}$$  (37)







And the input data vector at a given time n (embedded with the samples corresponding to each FIR tap) can be represented as:









$$x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix}$$  (38)







The output of the filter would be $y(n) = w^T x = a(x_1 + x_5) + b(x_2 + x_4) + c\,x_3$. Thus, a linearly transformed three-dimensional version of x can be created as:










$$x' = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} x_1 + x_5 \\ x_2 + x_4 \\ x_3 \end{bmatrix}$$  (39)








With the corresponding filter being










$$w' = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$$  (40)
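A quick numerical check of the 5-tap example in equations (37)-(40), confirming that the full symmetric filter applied to x gives the same output as the reduced filter applied to x′; the values are hypothetical.

```python
import numpy as np

a, b, c = 0.1, 0.25, 0.5                      # hypothetical symmetric taps
w = np.array([a, b, c, b, a])                 # eq. (37)
x = np.array([1.0, -0.5, 0.25, 0.75, -1.0])   # hypothetical samples, eq. (38)

A = np.array([[1, 0, 0, 0, 1],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0]], dtype=float)  # folding transform of eq. (39)
x_reduced = A @ x                             # [x1+x5, x2+x4, x3]
w_reduced = np.array([a, b, c])               # eq. (40)

assert np.isclose(w @ x, w_reduced @ x_reduced)   # y(n) is unchanged, per eq. (36)
```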







It is to be understood that even though numerous characteristics and advantages of various embodiments of the described technology have been set forth in the foregoing description, together with the details of the structure and function of various embodiments of the invention, this disclosure is illustrative only, and changes may be made in detail, especially in matters of structure and arrangement of parts within the principles of the described technology to the full extent indicated by the broad general meaning of the terms in which the appended claims are expressed. For example, other read channel signal processing components and various arrangements thereof than the FIR filter and Viterbi detector described are contemplated while still maintaining substantially the same functionality without departing from the scope and spirit of the claimed invention. Further, although the illustrative embodiments described herein are directed to data storage drives, and related technology, it will be appreciated by those skilled in the art that the claimed invention can be applied to other devices employing a read channel as well without departing from the spirit and scope of the described technology.

Claims
  • 1. An apparatus for processing an input signal, comprising: read channel circuitry including a signal equalizer that sends an equalized signal to a bit detector, the read channel circuitry capable of sampling values of the equalized signal to identify a bit transition from among a predefined plurality of different bit transitions; and channel optimization (CO) logic that, in response to the input signal and the sampling value of the equalized signal, defines first values for a programmable parameter of the bit detector, the first values substantially maximizing vector separations among vectors of waveform target samples corresponding to the predefined plurality of different bit transitions while also defining second values for a programmable parameter of the signal equalizer that substantially minimizes the mean squared separation of the equalized signal segment for each bit transition from the waveform target corresponding to that bit transition.
  • 2. The apparatus of claim 1 wherein the CO logic derives a between-class scatter matrix for one of the first values of one bit transition with respect to another one of the first values of another bit transition.
  • 3. The apparatus of claim 2 wherein the CO logic derives a within-class scatter matrix of the average variance within each of the bit transitions.
  • 4. The apparatus of claim 1 wherein the predefined plurality of different bit transitions includes all binary value shift possibilities from a first bit value to a second bit value.
  • 5. The apparatus of claim 1 wherein the bit detector comprises a Viterbi detector and the first values are detector target values.
  • 6. The apparatus of claim 5 wherein the signal equalizer comprises a finite impulse response (FIR) filter and the second values are tap weight values.
  • 7. The apparatus of claim 6 wherein the CO logic parameterizes the read channel circuitry with the target values and the tap weight values during an initialization sequence of processing activities by the read channel circuitry.
  • 8. The apparatus of claim 7 wherein the CO logic redefines the target values and the tap weight values after the initialization sequence is completed by parameterizing the read channel circuitry with second target values and second tap weight values during an operational sequence of processing activities by the read channel circuitry.
  • 9. The apparatus of claim 6 wherein the tap weight values are constrained to be symmetric to maintain a linear phase filter.
  • 10. A method comprising: setting tap weights in a finite impulse response (FIR) filter to predetermined nominal values; defining targets for a Viterbi bit detector in response to results of sampling a known signal, the targets substantially maximizing a target vector separation between target vectors corresponding to different predefined bit transitions; and after defining the targets, defining optimized values for the tap weights that substantially minimize the mean square difference between the FIR filter output and the targets.
  • 11. The method of claim 10 wherein the defining targets comprises deriving a between-class scatter matrix of the sampling values classified to one of a plurality of bit transitions with respect to the sampling values classified to another of the plurality of bit transitions.
  • 12. The method of claim 11 wherein the FIR filter output includes equalized values and defining the optimized values for the tap weights comprises substantially minimizing the difference between the equalized values and the targets.
  • 13. The method of claim 10 wherein the defining targets is characterized by substantially maximizing vector separations among a plurality of different predefined bit transitions encompassing all binary value shift possibilities from a first bit value to a second bit value.
  • 14. The method of claim 10 comprising parameterizing the FIR filter by changing the tap weights from the nominal values to the optimized values and parameterizing the Viterbi bit detector by setting the targets to the Viterbi bit detector.
  • 15. The method of claim 14 wherein the parameterization occurs during an initialization sequence of processing activities.
  • 16. The method of claim 15 comprising further sampling a known signal after the parameterization.
  • 17. The method of claim 16 comprising, from results of the further sampling and after the parameterization, defining second targets for the Viterbi detector that substantially maximize a vector separation between target vectors corresponding to different predefined bit transitions.
  • 18. The method of claim 17 comprising defining second optimized values for the tap weights in relation to minimizing FIR filter variance from the second targets.
  • 19. The method of claim 18 further comprising reparameterizing the Viterbi bit detector with the second targets and reparameterizing the FIR filter with the second optimized values for the tap weights after the initialization sequence during an operational sequence of processing activities.
  • 20. A system comprising: a Viterbi detector; and a read channel circuit comprising channel optimization (CO) logic defining targets for the Viterbi detector by jointly substantially maximizing target vector separations among a plurality of predefined different bit transitions while minimizing filter variance of a finite impulse response (FIR) filter in terms of FIR filter output with respect to the defined target vectors.
US Referenced Citations (12)
Number Name Date Kind
5961658 Reed et al. Oct 1999 A
6594103 Despain et al. Jul 2003 B1
7194674 Okumura et al. Mar 2007 B2
7440208 McEwen et al. Oct 2008 B1
20030011918 Heydari et al. Jan 2003 A1
20030026016 Heydari et al. Feb 2003 A1
20050180298 Horibe et al. Aug 2005 A1
20050264922 Erden et al. Dec 2005 A1
20060087947 Minemura et al. Apr 2006 A1
20060262687 Minemura Nov 2006 A1
20070234188 Shiraishi Oct 2007 A1
20080175422 Kates Jul 2008 A1