SELECTIVE RESOLUTION SPEECH PROCESSING

Abstract
A hearing prosthesis including: receiver means for receiving a signal representative of a sound signal over a frequency range; a first filter bank, having a relatively higher resolution, adapted to process said received signal and produce a first set of channel outputs relating to a selected region or regions of said frequency range; a second filter bank, having a relatively lower resolution, adapted to process said received signal and produce a second set of channel outputs relating to at least the rest of said frequency range; combination means to combine the first and second sets of channel outputs; and processing means operative upon the combined outputs so as to produce a set of stimulation signals for said hearing prosthesis.
Description
BACKGROUND

1. Field of the Invention


The present invention relates generally to signal and speech processing for coding strategies in medical devices, and more particularly, to hearing prostheses such as cochlear implants.


2. Related Art


There are several electrical stimulation devices that use an electrical signal to stimulate nerve, tissue or muscle fibers in a user. Cochlear implants and similar hearing devices apply a stimulating signal to the cochlea of the ear to stimulate a percept of hearing. More particularly, these systems include a microphone that receives ambient sounds, a signal processor that converts selected sounds according to a speech coding strategy into corresponding stimulating signals, and an implanted electrode array for delivering stimuli to the recipient. The recipient (also referred to as a patient herein) receives a perception of hearing based on the nerve stimulation.


Although hearing implants have been widely used, there is an ongoing need to improve the fidelity of the speech and sound percepts experienced by users.


SUMMARY

According to a first aspect of the present invention, there is provided a method for processing sound signals for use in a hearing prosthesis, the method comprising: receiving a signal representative of a sound signal over a frequency range; applying a first filter bank, having a relatively higher spectral resolution, to a first selected region or regions of said frequency range to produce a first set of a plurality of substantially equally spaced channel outputs; applying a second filter bank, having a relatively lower spectral resolution, to a second selected region or regions of said frequency range to produce a second set of a plurality of substantially equally spaced channel outputs; and combining the first and second sets of channel outputs, and processing the combined outputs so as to produce a set of stimulation signals for said hearing prosthesis; wherein the spacing of the second set of channel outputs is greater than the spacing of the first set of channel outputs.


According to another aspect of the present invention, there is provided a hearing prosthesis comprising: a receiver configured to receive a signal representative of a sound signal over a frequency range; a first filter bank, having a relatively higher resolution, adapted to process said received signal and produce a first set of a plurality of substantially equally spaced channel outputs relating to a first selected region or regions of said frequency range; a second filter bank, having a relatively lower resolution, adapted to process said received signal and produce a second set of a plurality of substantially equally spaced channel outputs relating to at least a second region or regions of said frequency range; a combination unit configured to combine the first and second sets of channel outputs; and a processor configured to produce a set of stimulation signals for said hearing prosthesis using the combined outputs; wherein the spacing of the second set of channel outputs is greater than the spacing of the first set of channel outputs.


According to yet another aspect of the present invention, there is provided a system for processing sound signals, the system comprising: means for receiving a signal representative of a sound signal over a frequency range; first means for filtering a first selected region or regions of said frequency range to produce a first set of a plurality of substantially equally spaced channel outputs, wherein the means for filtering has a relatively higher spectral resolution; second means for filtering a second selected region or regions of said frequency range to produce a second set of a plurality of substantially equally spaced channel outputs, wherein the second means for filtering has a relatively lower spectral resolution; means for combining the first and second sets of channel outputs; and means for processing the combined outputs so as to produce a set of stimulation signals for said hearing prosthesis; wherein the spacing of the second set of channel outputs is greater than the spacing of the first set of channel outputs.





BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be described in conjunction with the accompanying drawings, in which:



FIG. 1 shows a block diagram of a conventional speech processor;



FIG. 2 shows a block diagram of a dual channel filter bank according to an embodiment of the present invention;



FIG. 3 shows a spectrum graph of the vowel “a” having a fundamental frequency (F0) of 130 Hz, first formant (F1) of 780 Hz and second formant (F2) of 1040 Hz;



FIG. 4 shows a spectrum graph of the vowel “a” having a fundamental frequency (F0) of 180 Hz, first formant (F1) of 720 Hz and second formant (F2) of 1080 Hz;



FIG. 5 shows a stimulation graph of 22 electrodes for the processed vowels of FIGS. 3 and 4 using a conventional speech processor;



FIG. 6 shows a stimulation graph of 22 electrodes for the processed vowels of FIGS. 3 and 4 using a 128 pt filter bank;



FIG. 7 shows a stimulation graph of 22 electrodes for the processed vowels of FIGS. 3 and 4 using a 256 pt filter bank;



FIG. 8 shows a stimulation graph of 22 electrodes for the processed vowels of FIGS. 3 and 4 using a 512 pt filter bank;



FIG. 9 shows a graph comparing channels of an embodiment of the present invention with the channels of a conventional speech processor;



FIG. 10 shows a block diagram of a dual filter bank according to one example of an embodiment of the present invention;



FIGS. 11A and 11B show charts comparing the high resolution output with the low resolution output of the dual filter bank shown in FIG. 10;



FIG. 12A shows a chart comparing the channel intensity of an embodiment of the present invention with a conventional speech processor;



FIG. 12B shows a chart comparing the channel intensity of an embodiment of the present invention with a SPrint™ Frequency Allocation Table (FAT); and



FIGS. 13A-13J depict a bin allocation table for the use of two FFT outputs in accordance with one embodiment of the present invention.





DETAILED DESCRIPTION

Embodiments of the present invention recognize that certain areas of the hearing frequency range are of more significance than others to speech perception. Accordingly, instead of employing a conventional approach of generally equally-spaced analysis channels, aspects of the present invention provide more closely spaced analysis channels in one or more regions of the hearing frequency range, thereby providing higher spectral resolution in those selected regions.


One exemplary embodiment of the present invention provides a new filter bank specification that may be implemented with speech coding strategies and that may emphasize, with high spectral resolution, the speech fundamental or speech harmonics over a specific region or regions. One advantage of such an embodiment may be to increase spectral cues in one or more parts of the processed audio spectrum. In addition, such a filter bank may specify the region or regions in which increased spectral harmonics can be resolved from speech signals, allowing a prosthetic hearing implant patient to better distinguish different harmonic structures in speech by providing cues to voice-pitch perception, and thus aiding tasks such as identification of a male or female talker, perception of tonal languages, and appreciation of music.


Although an exemplary embodiment will be described in use with prosthetic hearing devices, the present invention may also be used in other stimulating applications that require emphasis of particular spectral regions. For example, embodiments may also be applied to other neural stimulation applications, so that higher spectral resolution is provided in selected regions of interest than across the broader frequency range of interest.


Examples of prosthetic hearing device systems are shown in U.S. Pat. Nos. 6,537,200, 6,575,894, and 6,697,674, and PCT Published Application No. WO 02/17679, the entire contents and disclosures of which are hereby incorporated by reference herein. In typical prosthetic hearing implant devices, there may be as many as 22-24 electrodes. Depending on the strategy used, a portion of these 22-24 electrodes may carry a transmitted stimulating signal to the nerves in a cochlea.


Embodiments of the present invention may be used in combination with any speech strategy now or later developed, including but not limited to, Continuous Interleaved Sampling (CIS), Spectral PEAK Extraction (SPEAK), and Advanced Combination Encoders (ACE™). An example of such speech strategies is described in U.S. Pat. No. 5,271,397, the entire contents and disclosures of which are hereby incorporated by reference herein. Embodiments of the present invention may also be used with other speech coding strategies. Preferably, the present invention may be used with Cochlear Limited's Nucleus™ implant system, which uses a range of coding strategy alternatives, including SPEAK, ACE™, and CIS. Among other things, these strategies offer a trade-off between temporal and spectral resolution of the coded audio signal by changing the number of frequency channels chosen in the signal path. A typical ACE™ signal path is shown in FIG. 1.



FIG. 1 shows a block diagram of a signal path 100 that is processed by a signal processor 102 that comprises a signal processing module 104 and a series of further signal processing modules 106. Once a signal is processed, signal processor 102 sends the signal to a stimulator controller 108 to activate the electrodes or electrode array (not shown) using stimulating unit 110.


Specifically, a signal is received by a microphone (not shown) and is multiplied by a smoothing window and passed through a filter bank process 112 using a Fast Fourier Transform (FFT) to produce 64 signals for channel combination unit 114 to process. In conventional systems, channel combination unit 114 may be limited by the number of electrodes available in the system, e.g. 22 electrodes. Once channel combination unit 114 combines the number of channels to match the number of electrodes, the processed signal is sent to an equalizer 116 and a maxima extractor unit 118. Maxima extractor unit 118 may extract the largest amplitude channels for stimulating the electrodes according to the speech strategy employed. Once the electrodes are chosen, a mapping unit 120 arranges the signals for stimulating the corresponding electrodes.
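
For illustration only, the following Python sketch mirrors this conventional path: it windows a 128-sample frame, takes a 128 pt FFT to obtain 64 bins, combines the bins into 22 channels and selects the largest-amplitude channels as maxima. The window choice, the contiguous bin grouping and the number of maxima are assumptions for the sketch, not the SPrint™ implementation.

```python
import numpy as np

FS = 16000          # sample rate (Hz), assumed
N_FFT = 128         # 128 pt FFT -> 64 usable bins, as in FIG. 1
N_CHANNELS = 22     # matched to a 22-electrode array
N_MAXIMA = 8        # number of maxima to select; illustrative value

def analyse_frame(frame):
    """Window a frame, FFT it, combine bins into channels, pick maxima."""
    assert len(frame) == N_FFT
    windowed = frame * np.hanning(N_FFT)           # smoothing window (assumed type)
    spectrum = np.abs(np.fft.rfft(windowed))       # 65 bins (DC..Nyquist)
    bins = spectrum[1:65]                          # 64 signals for channel combination

    # Combine 64 bins into 22 channels (simple contiguous grouping here;
    # real devices use a clinically defined frequency allocation table).
    edges = np.linspace(0, len(bins), N_CHANNELS + 1).astype(int)
    channels = np.array([bins[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

    # Maxima extraction: keep the N_MAXIMA largest-amplitude channels.
    selected = np.argsort(channels)[-N_MAXIMA:]
    return channels, sorted(selected.tolist())

# Example: a 1 kHz tone frame
t = np.arange(N_FFT) / FS
channels, maxima = analyse_frame(np.sin(2 * np.pi * 1000 * t))
print("selected channels:", maxima)
```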


For example, with ACE™ on the commercially available SPrint™ speech processor from Cochlear Limited, the number of analysis filter channels may be varied between 6 and 22, depending on the number of electrodes available and the overall requirements for the filter bank. If the frequency range over which these channels are formed remains constant, e.g., 80 Hz-8000 Hz, then a setting of 6 will consist of a set of 6 wide filters, while a setting of 22 will consist of a set of 22 considerably narrower filters. In some cases, overlapping filters may also be desirable, such that more filters do not necessarily mean narrower filters, but rather filters that are more overlapped with their neighbors. It is known that prosthetic hearing implant patients may be able to make use of both spectral and temporal cues in the stimuli presented to their cochlea, and thus the use of wider filters may provide more temporal information.


Certain embodiments of the present invention provide a filter bank that may increase the number of channels to enhance any region of the spectrum where finer spectral detail might be required, via many narrow filters. Currently, filters with approximately logarithmically spaced center frequencies are typically used in prosthetic hearing implants. An embodiment of the present invention may include a region of high spectral resolution filters within an otherwise logarithmically spaced filter bank. An advantage of the present invention may be to provide more channels in the filter bank path, so that more channels become available for selection in the following stages of processing, such as maxima extraction. Channel combination unit 114 may thereby increase the number of channels available for selection by post processing modules 106.


The number of channels used in embodiments of the present invention may be more than the number of electrodes present in the system. An additional channel may be placed between each existing electrode channel to emphasize certain regions. For example, an electrode array with 10 electrodes may use 19 channels in processing the audio signal. An increase in the number of channels may allow such embodiments of the present invention to easily accommodate prosthetic hearing implants that have increased numbers of electrodes without any major modifications to the implants.


Alternatively, embodiments of the present invention may use any number of filters and are not limited to the number of electrodes in the system, since any number of intermediate stimulation sites may be created via mechanisms such as those described in U.S. Pat. No. 5,649,970, the entire contents and disclosures of which are hereby incorporated by reference.


A filter bank of the present invention may be designed to select a particular harmonic region of the speech spectrum. Any portion of the sound range captured by a prosthetic hearing implant, i.e., approximately 0 Hz to 16000 Hz, may be selected by embodiments of the present invention. The selected portion of the speech spectrum may be divided according to formants, i.e., large concentrations of energy in speech which together determine the characteristic quality of a vowel sound. Examples of regions to select are the F1 region of speech, approximately 300 Hz to 1000 Hz, or a subset of this region, e.g., 400 Hz to 800 Hz. Another region to select may be the F2 region of speech, approximately 850 Hz to 2500 Hz. Additionally, embodiments of the present invention may be extended to the fundamental frequency range, targeting the F0 region of speech, approximately 80 Hz to 400 Hz. In addition, multiple portions or non-consecutive ranges, e.g., 400 Hz to 700 Hz and 1000 Hz to 1500 Hz, may be selected.
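
The frequency regions listed above can be written down as a small table and used to pick out the FFT bins that fall inside a chosen region or set of non-consecutive regions. The sketch below is illustrative only; the region boundaries are the approximate figures from the description and the helper name is hypothetical.

```python
import numpy as np

# Approximate formant regions from the description (Hz)
REGIONS = {
    "F0": (80.0, 400.0),
    "F1": (300.0, 1000.0),
    "F1_subset": (400.0, 800.0),
    "F2": (850.0, 2500.0),
}

def bins_in_regions(fs, n_fft, region_names):
    """Return FFT bin indices whose centre frequency lies in any named region."""
    centres = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    mask = np.zeros_like(centres, dtype=bool)
    for name in region_names:
        lo, hi = REGIONS[name]
        mask |= (centres >= lo) & (centres <= hi)
    return np.nonzero(mask)[0]

# e.g. bins of a 512 pt FFT at 16 kHz inside the F1 subset (400-800 Hz),
# or a non-consecutive selection of F1 subset plus F2
print(bins_in_regions(16000, 512, ["F1_subset"]))
print(bins_in_regions(16000, 512, ["F1_subset", "F2"]))
```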


Any type of filter bank construction now or later developed may be used, such as FIR, IIR or FFT based filter banks if implemented in a Digital Signal Processor (DSP). With increasing numbers of channels, it often becomes more efficient to use an FFT. In addition, a dual FFT structure may be used, in which a high resolution FFT covers the 400 Hz-800 Hz frequency region and a low resolution FFT covers the remaining spectrum.


A filter bank of the present invention may be based on a dual FFT filter bank. The first FFT, low resolution, may use wider filters (128 pt) and operates over the full audio input bandwidth, which is 0-8 kHz. The second FFT, high resolution, may use narrower filters (256 pt) and operates over the 0-4 kHz band. The second FFT provides a four-fold increase in resolution for low frequencies compared to standard ACE™ based on a single 128 pt FFT, assuming a 16 kHz sample rate.
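
The four-fold figure follows from the bin spacing being the sample rate divided by the FFT length, as this short check (using the values quoted above) shows.

```python
# Bin spacing = Fs / N for each FFT path (values from the text)
low_res = 16000 / 128    # 128 pt FFT on the full-rate signal -> 125.0 Hz bins
high_res = 8000 / 256    # 256 pt FFT after decimation to 8 kHz -> 31.25 Hz bins
print(low_res, high_res, low_res / high_res)   # 125.0 31.25 4.0
```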



FIG. 2 shows a modification of speech processing module 104 in accordance with an embodiment of the present invention. FIG. 2 shows a block diagram of a dual FFT filter bank 202. Both low resolution FFT 204 and high resolution FFT 206 operate on an input buffer of ADC samples from a signal 208. Low resolution FFT 204 requires audio samples at Fs (16 kHz). Low resolution FFT 204 uses a window function w1(n) 210 and a 128 pt FFT 212. A delay 211 or buffer may be used in low resolution FFT 204. A channel combination unit 214 uses bins 216, which contain mostly high frequency bins, while bins 218 that are not used are discarded. High resolution FFT 206 requires audio samples at Fs/2 (8 kHz). A low pass filter (LPF) 220 is used to obtain the audio samples for high resolution FFT 206, and the filtered signal is then down sampled, or decimated, by 2 in process 221. High resolution FFT 206 uses a window function w2(n) 222 and a 256 pt FFT 224. Channel combination unit 214 uses bins 226, which contain mostly low frequency bins.
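
A rough Python analogue of this dual-path analysis is sketched below. The Hanning windows, the 45-tap low pass filter design and the 800 Hz split used for channel combination are assumptions for illustration; only the FFT lengths and sample rates come from the description.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 16000

def dual_fft_frame(samples_16k):
    """Analyse one block of 16 kHz ADC samples with a dual FFT structure like FIG. 2."""
    # Low resolution path: window w1(n) and a 128 pt FFT at Fs = 16 kHz.
    frame_lo = samples_16k[-128:] * np.hanning(128)
    bins_lo = np.abs(np.fft.rfft(frame_lo))          # 125 Hz bin spacing

    # High resolution path: low pass filter, decimate by 2 to Fs/2 = 8 kHz,
    # then window w2(n) and a 256 pt FFT (31.25 Hz bin spacing).
    lpf = firwin(numtaps=45, cutoff=3800, fs=FS)     # anti-alias LPF; taps and cutoff assumed
    filtered = lfilter(lpf, 1.0, samples_16k)
    samples_8k = filtered[::2]
    frame_hi = samples_8k[-256:] * np.hanning(256)
    bins_hi = np.abs(np.fft.rfft(frame_hi))

    # Channel combination: keep mostly low frequency bins from the high
    # resolution FFT and mostly high frequency bins from the low resolution
    # FFT; the 800 Hz split used here is an illustrative choice.
    hi_keep = bins_hi[np.fft.rfftfreq(256, d=2.0 / FS) <= 800.0]
    lo_keep = bins_lo[np.fft.rfftfreq(128, d=1.0 / FS) > 800.0]
    return np.concatenate([hi_keep, lo_keep])

print(dual_fft_frame(np.random.randn(1024)).shape)
```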


Because the high resolution 256 pt FFT 206 filter bank requires twice as many samples at half the Fs sample rate, there may be a processing latency of four times that of the low resolution 128 pt FFT 204 filter bank. To time align the low resolution 204 and high resolution 206 FFTs, a FIFO delay 211 (or other similar buffering operation) may be used before the low resolution 128 pt FFT 204 window function, since the high resolution 256 pt FFT 206 will be approximately 12 ms behind. The 12 ms delay results from the processing delay through the high resolution path, which in this example is 16 ms, less the processing delay through the low resolution path, which is 4 ms. The exact length of the FIFO is dependent on the implementation, including the delay through the down sampling low pass filter (LPF) 220. This filter could be an IIR or FIR filter.
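
The FIFO length implied by these figures can be computed directly; the sketch below uses the 16 ms and 4 ms path delays given above, and any additional group delay through the low pass filter would be added on top.

```python
FS = 16000                       # input sample rate (Hz)
high_path_delay_ms = 16.0        # processing delay through the high resolution path (from text)
low_path_delay_ms = 4.0          # processing delay through the low resolution path (from text)

fifo_delay_ms = high_path_delay_ms - low_path_delay_ms      # 12 ms
fifo_delay_samples = int(round(fifo_delay_ms * FS / 1000))  # 192 samples at 16 kHz, before LPF delay
print(fifo_delay_ms, fifo_delay_samples)
```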


It is illustrative to look at the spectrums of two synthetic vowels that are identical except for fundamental frequency. The two vowels are both "a", the first having a fundamental frequency of 130 Hz and the second a fundamental frequency of 180 Hz, both typical speech fundamentals; their spectra are shown in FIG. 3 and FIG. 4, respectively. Since vowels are periodic signals, the spacing of the harmonics in each spectrum is given by the fundamental (in this case, 130 or 180 Hz).
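
Synthetic vowels of this kind can be generated as a sum of harmonics of the fundamental shaped by a formant envelope. The sketch below is one simple way to do this; the Lorentzian formant weighting and the 120 Hz bandwidth are assumptions, while the fundamentals and formant frequencies are those of FIGS. 3 and 4.

```python
import numpy as np

def synth_vowel(f0, formants=(780.0, 1040.0), fs=16000, duration=1.0, bw=120.0):
    """Sum harmonics of f0 up to Nyquist, weighted by simple formant resonances."""
    t = np.arange(int(fs * duration)) / fs
    signal = np.zeros_like(t)
    for k in range(1, int((fs / 2) // f0) + 1):
        fk = k * f0
        # crude formant envelope: each resonance contributes a Lorentzian weight
        amp = sum(1.0 / (1.0 + ((fk - fm) / bw) ** 2) for fm in formants)
        signal += amp * np.sin(2 * np.pi * fk * t)
    return signal / np.max(np.abs(signal))

vowel_130 = synth_vowel(130.0, formants=(780.0, 1040.0))   # as in FIG. 3
vowel_180 = synth_vowel(180.0, formants=(720.0, 1080.0))   # as in FIG. 4
```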


In conventional speech strategy processing, such as ACE™ on the SPrint™ speech processor available from Cochlear Limited, filters are typically of the order of 180 Hz wide, spaced for example at center frequencies of 250 Hz, 375 Hz, 500 Hz, etc. The 180 Hz spacing and the overlap between filters mean that changing the vowel fundamental by 50 Hz, and hence the harmonic spacing, produces little change in the energy at the output of each ACE™ filter, and therefore little change in the resulting stimulation. This is shown in FIG. 5, which shows the two vowel spectrums of FIGS. 3 and 4 processed through ACE™ via the Nucleus™ Matlab™ Toolbox (NMT) to produce an Electrodogram. Electrodograms plot stimulus intensity per channel as a function of time. Time is shown along the abscissa and electrode number along the ordinate. For each stimulus pulse generated by the device, a vertical bar is shown in the Electrodogram at the time and electrode position of the stimulus. The height of the vertical bar represents the stimulus level (log current in clinical units), where minimum amplitude corresponds to threshold and maximum amplitude corresponds to comfortable level.


In FIG. 5, display 502 is the “a” vowel of FIG. 3 with a fundamental frequency of 130 Hz, and a display 504 is the “a” vowel of FIG. 4 with a fundamental of 180 Hz. FIG. 5 shows the intensity of the stimulation over each electrode versus time. The height of each segment in each horizontal channel gives the stimulus level. As shown in FIG. 5, there is a small difference between the height of the stimuli of the two vowels in each display. In this case, spectral resolution, which determines which channel is being stimulated, is low. Therefore, in order to resolve different harmonics, the patient must use temporal fluctuations per channel in order to differentiate each vowel sound in conventional speech processing systems.


Embodiments of the present invention may provide improved spectral resolution by providing many narrow filters in regions of high harmonic energy. In general, for segments of voiced speech, one or more filters in this region will have a relatively large amount of energy in them, while one or more other nearby filters will have relatively little energy in them. Using more filters in regions with relatively large amounts of energy allows the present invention to give an emphasized cue of the spectral content of a particular region of the speech spectrum.


Using the Nucleus™ Matlab™ Toolbox (NMT), it is possible to examine what happens when spectral resolution is increased with a common prosthetic implant processing strategy, such as ACE™.


The same two vowels "a" with different fundamentals, as shown in FIGS. 3 and 4, are processed with a modified ACE™ strategy, using FFT filter banks with single bin spacings of 125 Hz, 62.5 Hz and 31.25 Hz, corresponding to 128 pt, 256 pt and 512 pt FFTs at a 16000 Hz sampling rate, and shown in FIGS. 6, 7 and 8, respectively. In FIGS. 6, 7 and 8, the first display, from approximately 0 ms to 1000 ms, represents the output from sampling the sound in FIG. 3, while the second display, from approximately 1000 ms to 2000 ms, represents the output from sampling the sound in FIG. 4. Outputs for a selection of 22 FFT bins starting at the same bin centre frequency are shown for comparison. A Hamming window was used before the FFT. The plots show the actual output stimulation current levels that would be applied to the cochlea with these filter bank spacings. The lowest channel of each plot is set to the same frequency, in this case approximately 250 Hz. Note the frequency differences on the Y axis, due to the different bin spacing.
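
The three bin spacings follow from Fs/N, and a comparable selection of 22 bins starting near the same centre frequency can be made as in the sketch below. The Hamming window matches the text; the test signal and the 250 Hz starting point are illustrative.

```python
import numpy as np

FS = 16000
START_HZ = 250.0      # lowest channel centre frequency, approximately, as in the text

def bin_outputs(signal, n_fft, n_channels=22):
    """Hamming-window the last n_fft samples, FFT, return 22 bins starting near 250 Hz."""
    spacing = FS / n_fft                               # 125, 62.5 or 31.25 Hz
    frame = signal[-n_fft:] * np.hamming(n_fft)
    mags = np.abs(np.fft.rfft(frame))
    start = int(round(START_HZ / spacing))
    return spacing, mags[start:start + n_channels]

sig = np.sin(2 * np.pi * 780 * np.arange(4096) / FS)   # e.g. energy near a 780 Hz F1
for n in (128, 256, 512):
    spacing, bins = bin_outputs(sig, n)
    print(f"{n:4d} pt FFT: {spacing:6.2f} Hz bins, peak bin index {np.argmax(bins)}")
```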


The greatest spectral discrimination between the fundamental frequencies of the two vowels is given by the last filter bank, as shown in FIG. 8, which has a filter spacing and bandwidth narrow enough to provide one or more filters with a relatively large amount of energy, while one or more other nearby filters have relatively little energy in them. The 256 pt FFT bin spacing in FIG. 7 at a 16000 Hz sampling rate shows relatively little spectral difference over the 128 pt FFT bin spacing in FIG. 6 at the same sampling rate.


Embodiments of the present invention enhance the spectral cues, such as those shown in FIGS. 6, 7 and 8, by defining the region and filters necessary to increase harmonic resolution.


Example 1

One example of an implementation of a filter bank for a prosthetic implant speech processor may define a region where analysis of spectral harmonics with channel spacing equal to or finer than that of the 512 pt FFT, as shown in FIG. 8, is desired.


A specific implementation of the concept is defined for use in a cochlear implant system using a region of 400 Hz to 800 Hz as the target for the increased resolution. This region carries considerable F1 (first) formant energy for typical voiced speech. The total number of filters used, including the high resolution ones, is 43, i.e., one additional channel between each existing electrode channel in the Nucleus® 24 system (22 channels plus 21 in between = 43). Since the higher frequency resolution is needed only in a particular region of the spectrum (400 Hz to 800 Hz), wider filters can be used above and below this region, for example in the logarithmically spaced fashion normal with ACE™. Two wider filters are chosen to cover the F0 region below 400 Hz, and approximately log spaced filters following a shifted version of the natural characteristic cochlear filters are chosen above 800 Hz.


A center frequency plot of an embodiment of the present invention, namely a Harmonic Emphasis Filter bank (HEF), is compared to a SPrint™ ACE™ filter bank in FIG. 9. FIG. 9 shows the channel selection, or number of filters, of the present example in comparison with the channel allocation of a conventional SPrint™ FAT 6. Plot 902 shows the present example and has more channels 904 within an F1 subset region 906 than plot 908, which represents the conventional SPrint™ FAT 6. Two channels 904 on plot 902 are shown below F1 subset region 906, fourteen channels 904 are shown within F1 subset region 906 and twenty-seven channels 904 are shown above F1 subset region 906. In comparison, two channels 910 on plot 908 are shown below F1 subset region 906, four channels 910 are shown within F1 subset region 906 and sixteen channels 910 are shown above F1 subset region 906.
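
The channel counts read off FIG. 9 (two channels below, fourteen within and twenty-seven above the F1 subset region, 43 in total) can be reproduced approximately as in the sketch below. The exact centre frequencies of the two low channels and the logarithmic law used above 800 Hz are assumptions for illustration.

```python
import numpy as np

def hef_centre_frequencies():
    """Approximate Harmonic Emphasis Filter bank centre frequencies (43 channels)."""
    low = [187.5, 312.5]                        # 2 wide channels covering F0 below 400 Hz (assumed centres)
    # 14 high resolution channels spaced 31.25 Hz apart, covering roughly 400-800 Hz
    high_res = list(400.0 + 31.25 * np.arange(14))
    # 27 approximately log-spaced channels from just above 800 Hz up to 8 kHz (assumed law)
    upper = list(np.geomspace(850.0, 7900.0, 27))
    return np.array(low + high_res + upper)

cf = hef_centre_frequencies()
print(len(cf), cf[2], cf[15])   # 43 channels; high resolution block spans approx. 400-806 Hz
```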


As shown in FIG. 9, in the F1 subset region there are filters equivalent to single bins of a 512 pt FFT; each of the high resolution channels is spaced 31.25 Hz apart. A Hanning (cosine squared) window may be used in the high frequency resolution processing path to provide an equivalent filter bandwidth of 45 Hz in each single bin filter. Embodiments of the present invention are thereby able to obtain approximately four times the resolution of standard ACE™, whose filters are of the order of 180 Hz wide.
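
The quoted 45 Hz single-bin bandwidth is consistent with the noise-equivalent bandwidth of a Hanning window, roughly 1.5 bins; treating that factor as an assumption, a quick check gives:

```python
bin_spacing = 8000 / 256          # 31.25 Hz (256 pt FFT at the decimated 8 kHz rate)
hanning_enbw_bins = 1.5           # approximate noise-equivalent bandwidth of a Hanning window, in bins
print(bin_spacing * hanning_enbw_bins)   # ~46.9 Hz, close to the 45 Hz quoted
```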


Computer simulation software, such as Simulink™, was used to represent an example of a dual FFT 1002 constructed in accordance with the present invention. As shown in FIG. 10, either a discrete impulse 1004 with gain 1006 or a sine wave 1008 may be fed to dual FFT 1002 via a switch 1010. Low resolution FFT 1012 processes the signal using a delay line 1014 and buffer 1016 before a window function 1018 and a 128 pt FFT 1020. The absolute value of the processed signal is taken to extract the FFT bin magnitudes and is sent to a multiport selector 1022 that selects any desired row for bin information. High resolution FFT 1030 processes the signal using a low pass filter 1032 and down sampling by 2. Next, a buffer 1034 is used before a window function 1036 and a 256 pt FFT 1038. The absolute value of the processed signal is sent to a multiport selector 1040 that selects the rows for bin information. The simulation uses a matrix concatenation 1050 and outputs the sampled values. The Simulink™ model can be used to demonstrate a method to time align the low and high resolution paths, by aligning the impulse and/or step responses of the two paths. This model required, for example, a delay line of 224 samples to time align the impulse and step responses, which are shown in FIGS. 11A and 11B, respectively. The high resolution response is shown by plot 1102 and the low resolution response by plot 1104.
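
The same alignment can be estimated outside Simulink™ by passing an impulse through rough models of both paths and comparing where the responses peak. The sketch below is only an analogue of the model described above: the filter designs and the window-as-FIR stand-ins for the framing delay are assumptions, so the resulting figure approximates rather than reproduces the 224-sample delay line.

```python
import numpy as np
from scipy.signal import firwin, lfilter

FS = 16000

def path_peak_delay(path):
    """Feed an impulse through a path and return the sample index of the peak response."""
    impulse = np.zeros(2048)
    impulse[0] = 1.0
    return int(np.argmax(np.abs(path(impulse))))

def low_res_path(x):
    # stand-in for window w1(n) plus 128 pt framing delay
    return lfilter(np.hanning(128) / np.sum(np.hanning(128)), 1.0, x)

def high_res_path(x):
    lpf = firwin(45, 3800, fs=FS)                    # assumed anti-alias filter
    y = lfilter(lpf, 1.0, x)[::2]                    # decimate to 8 kHz
    y = lfilter(np.hanning(256) / np.sum(np.hanning(256)), 1.0, y)
    return np.repeat(y, 2)                           # back to a 16 kHz time base for comparison

delay = path_peak_delay(high_res_path) - path_peak_delay(low_res_path)
print("FIFO delay to insert in the low resolution path (samples at 16 kHz):", delay)
```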


The magnitude output from the low and high resolution FFTs may be made available as a dual buffer of values representing the energy in each bin of each FFT. A Frequency Allocation Table (FAT) may be arbitrarily constructed to make use of any bins (either single or combined) for the required filter bank. FIGS. 12A and 12B compare a FAT of the present invention with a FAT of a Contour™ Electrode Chart in FIG. 12A, and SPrint™ FAT 6 in FIG. 12B. Lines 1202 represent the outputs of the present invention, while 1204 represents the outputs of Contour™ and 1206 the outputs of SPrint™ FAT 6.


The following example used a bin allocation table for the use of the two FFT outputs as shown in the table illustrated in FIGS. 13A-13J. The bins used in the FIG. 13A-J table may be optimized with different embodiments and examples of the present invention. In particular, the availability of the upper frequency bins from the high resolution FFT may depend on the LPF cutoff and filter shape, and so should be adjusted accordingly. The first and second columns are the low (128 pt) and high (256 pt) resolution FFTs bin centre frequencies, respectively. The fourth column shows which bins are grouped into a channel, using the FFT (1=low or 2=high resolution) in the third column. For example, channel 1 consists of 5 bins from the high resolution 256 pt FFT from the 93.75 Hz bin to the 218.75 Hz bin inclusive, and so on for all other channels.
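
Applying such a bin allocation table is mechanical: each channel sums the magnitudes of its allocated bins from the indicated FFT. In the sketch below, channel 1 matches the example spelled out above (five high resolution bins from 93.75 Hz to 218.75 Hz, i.e., bins 3 through 7 at 31.25 Hz spacing); the remaining rows are made up for illustration.

```python
import numpy as np

# Each entry: (channel, fft_id, bin_index). fft_id 1 = low resolution (125 Hz bins),
# fft_id 2 = high resolution (31.25 Hz bins). Channel 1 matches the example in the
# text (93.75 Hz .. 218.75 Hz = high resolution bins 3..7); later rows are illustrative.
BIN_ALLOCATION = [
    (1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 2, 6), (1, 2, 7),
    (2, 2, 8), (2, 2, 9), (2, 2, 10),            # illustrative only
    (3, 1, 7), (3, 1, 8),                        # illustrative only
]

def combine_channels(bins_low, bins_high, allocation=BIN_ALLOCATION):
    """Sum FFT bin magnitudes into channels according to the allocation table."""
    n_channels = max(ch for ch, _, _ in allocation)
    out = np.zeros(n_channels)
    for channel, fft_id, bin_index in allocation:
        source = bins_low if fft_id == 1 else bins_high
        out[channel - 1] += source[bin_index]
    return out

channels = combine_channels(np.ones(65), np.ones(129))
print(channels)    # channel 1 sums 5 high resolution bins, and so on
```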


Although the present invention has been fully described in conjunction with certain embodiments thereof with reference to the accompanying drawings, it is to be understood that various changes and modifications may be apparent to those skilled in the art. For example, embodiments of the present invention have been described in connection with a prosthetic hearing device. As noted, the present invention may be implemented in any electrical stimulating device now or later developed.

Claims
  • 1. A method for processing sound signals for use in a hearing prosthesis, the method comprising: receiving a signal representative of a sound signal over a frequency range; applying a first filter bank, having a relatively higher spectral resolution, to a first selected region or regions of said frequency range to produce a first set of a plurality of substantially equally spaced channel outputs; applying a second filter bank, having a relatively lower spectral resolution, to a second selected region or regions of said frequency range to produce a second set of a plurality of substantially equally spaced channel outputs; combining the first and second sets of channel outputs; and processing the combined outputs so as to produce a set of stimulation signals for said hearing prosthesis; wherein the spacing of the second set of channel outputs is greater than the spacing of the first set of channel outputs.
  • 2. The method of claim 1, further including the step of receiving the stimulation signal at a stimulator unit, and delivering corresponding stimuli to a user.
  • 3. The method of claim 1, further including the step of delaying the second set of channel outputs for a predetermined period before combining the second set of channel outputs with the first set of channel outputs.
  • 4. The method of claim 1, wherein the first filter bank includes a relatively larger number of filter channels.
  • 5. The method of claim 1, wherein the selected region or regions correspond to parts of the frequency spectrum important to speech perception.
  • 6. The method of claim 5, wherein the selected regions correspond to formants.
  • 7. The method of claim 1, wherein the selected region or regions is selected from one or more of the following frequency ranges: 80-400 Hz, 300-1000 Hz, and 850-2500 Hz.
  • 8. The method of claim 7, wherein the selected region or regions are a subset of one of the frequency ranges.
  • 9. The method of claim 1, wherein first and second filter banks are part of the same filter bank.
  • 10. A hearing prosthesis comprising: a receiver configured to receive a signal representative of a sound signal over a frequency range; a first filter bank, having a relatively higher resolution, adapted to process said received signal and produce a first set of a plurality of substantially equally spaced channel outputs relating to a first selected region or regions of said frequency range; a second filter bank, having a relatively lower resolution, adapted to process said received signal and produce a second set of a plurality of substantially equally spaced channel outputs relating to at least a second region or regions of said frequency range; a combination unit configured to combine the first and second sets of channel outputs; and a processor configured to produce a set of stimulation signals for said hearing prosthesis using the combined outputs; wherein the spacing of the second set of channel outputs is greater than the spacing of the first set of channel outputs.
  • 11. The prosthesis of claim 10, further comprising: a delay configured to delay the second set of channel outputs for a predetermined period before delivering it to said combination unit.
  • 12. The prosthesis of claim 11, wherein the first filter bank includes a relatively larger number of filter channels.
  • 13. The prosthesis of claim 10, wherein the selected region or regions correspond to parts of the frequency spectrum important to speech perception.
  • 14. The prosthesis of claim 13, wherein the selected regions correspond to formants.
  • 15. The prosthesis of claim 10, wherein the selected region or regions is selected from one or more of the following frequency ranges: 80-400 Hz, 300-1000 Hz, and 850-2500 Hz.
  • 16. The prosthesis of claim 15, wherein the selected one or more regions is a subset of one of the frequency ranges.
  • 17. A system for processing sound signals, the system comprising: means for receiving a signal representative of a sound signal over a frequency range; first means for filtering a first selected region or regions of said frequency range to produce a first set of a plurality of substantially equally spaced channel outputs, wherein the means for filtering has a relatively higher spectral resolution; second means for filtering a second selected region or regions of said frequency range to produce a second set of a plurality of substantially equally spaced channel outputs, wherein the second means for filtering has a relatively lower spectral resolution; and means for combining the first and second sets of channel outputs; and means for processing the combined outputs so as to produce a set of stimulation signals for said hearing prosthesis; wherein the spacing of the second set of channel outputs is greater than the spacing of the first set of channel outputs.
  • 18. The system of claim 17, further comprising: means for delivering stimuli corresponding to the stimulation signals to a user.
  • 19. The system of claim 17, further comprising: means for delaying the second set of channel outputs for a predetermined period before combining the second set of channel outputs with the first set of channel outputs.
  • 20. The system of claim 17, wherein the selected region or regions is selected from one or more of the following frequency ranges: 80-400 Hz, 300-1000 Hz, and 850-2500 Hz.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. Utility patent application Ser. No. 11/167,283, filed Jun. 28, 2005 entitled “Selective Resolution Speech Processing” and makes reference to and claims the priority of U.S. Provisional Patent Application No. 60/583,013, entitled, “Harmonic Emphasis Filter bank,” filed Jun. 28, 2004. The entire disclosure and contents of the above applications are hereby incorporated by reference.

Provisional Applications (1)
  Number: 60/583,013    Date: Jun. 2004    Country: US
Continuations (1)
  Parent: 11/167,283, filed Jun. 2005 (US)
  Child: 12/772,823 (US)