ACOUSTIC SIGNAL PROCESSING APPARATUS, ACOUSTIC SIGNAL PROCESSING METHOD, AND COMPUTER READABLE STORAGE MEDIUM

Information

  • Patent Application
  • Publication Number
    20130274632
  • Date Filed
    June 10, 2013
  • Date Published
    October 17, 2013
Abstract
An acoustic signal processing apparatus includes a processor; and a memory. The processor executes: detecting an acoustic signal that is contained in a sound input signal and that is generated by a human body; calculating an effective volume indicating an effective volume of the acoustic signal detected at the detecting the acoustic signal, based on the input signal; calculating an accumulated effective volume by accumulating effective volumes calculated at the calculating the effective volume; and determining whether the accumulated effective volume calculated at the calculating the accumulated effective volume has reached a predetermined threshold.
Description
FIELD

The embodiments discussed herein are directed to an acoustic signal processing apparatus, an acoustic signal processing method, and an acoustic signal processing program.


BACKGROUND

Conventionally, there is a known technology for improving user's sleep by detecting an acoustic signal near a sleeping user. For example, there is an acoustic signal processing apparatus that detects user's snoring and causes the user to stop snoring. For example, when the volume of an acoustic signal detected near a user is equal to or greater than a threshold, the acoustic signal processing apparatus detects the corresponding acoustic signal as snoring. Then, when the number of detections of snoring reaches a predetermined number, the acoustic signal processing apparatus wakes the user up by sound or vibration to cause the user to stop snoring.


With reference to FIG. 18, a case will be explained below in which a conventional acoustic signal processing apparatus detects snoring. FIG. 18 is a diagram for explaining the conventional technology. In FIG. 18, the vertical axis represents the volume of an acoustic signal and the horizontal axis represents time. Values on the horizontal axis represent the number of detections of the acoustic signal. In the example illustrated in FIG. 18, it is assumed that a threshold for the volume is denoted by Pw and the threshold for the number of detections is three. As illustrated in FIG. 18, for example, the acoustic signal processing apparatus compares the volume of an acoustic signal with the threshold Pw at every detection of the acoustic signal, and detects the first, the third, and the sixth acoustic signals as snoring. When the number of detections of snoring reaches three, that is, when the sixth acoustic signal is detected, the acoustic signal processing apparatus wakes the user up by using sound or vibration.
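For illustration only, the conventional scheme described above amounts to comparing each detected acoustic signal with the volume threshold Pw and counting the detections. The Python sketch below is a hypothetical rendering of that description; the numeric values and variable names are assumptions, not taken from this publication.

```python
# Hypothetical illustration of the conventional threshold-and-count detection.
VOLUME_THRESHOLD = 60.0   # Pw in FIG. 18, e.g. in dBA (assumed value)
COUNT_THRESHOLD = 3       # number of snoring detections that triggers the wake-up action

def conventional_detector(volumes):
    """Return the index of the detection that triggers the wake-up, or None."""
    snore_count = 0
    for i, volume in enumerate(volumes, start=1):
        if volume >= VOLUME_THRESHOLD:        # each acoustic signal is compared with Pw
            snore_count += 1
            if snore_count >= COUNT_THRESHOLD:
                return i                      # wake the user up by sound or vibration
    return None

# Example: the 2nd, 4th and 5th signals fall below the threshold, so only the
# 1st, 3rd and 6th signals count, and the trigger fires at the 6th detection.
print(conventional_detector([65.0, 55.0, 62.0, 50.0, 58.0, 70.0]))   # -> 6
```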


SUMMARY

However, the conventional technology as described above has a problem in that it is difficult to accurately determine annoying snoring.


For example, in the conventional technology, a “detection failure” occurs, in which an acoustic signal whose volume is lower than the threshold is not detected as snoring even when other people subjectively perceive the acoustic signal as snoring. In the example illustrated in FIG. 18, even when other people subjectively perceive the second acoustic signal as snoring, the conventional acoustic signal processing apparatus does not detect the second acoustic signal as snoring. Even if the threshold for the volume is lowered, the same problem occurs with respect to the lowered threshold.


Furthermore, the volume of an acoustic signal generated by the user is not always constant, but becomes higher or lower over time. Therefore, in the conventional technology, a “detection delay” occurs, in which the user's snoring is not detected as annoying until the number of detections of snoring reaches the predetermined number, even when other people already subjectively perceive the snoring as annoying. In the example illustrated in FIG. 18, even if other people subjectively perceive the user's snoring as annoying when the third acoustic signal is detected, the conventional acoustic signal processing apparatus does not make the determination until the sixth acoustic signal is detected. Even if the threshold for the number of detections is lowered, the same problem occurs with respect to the lowered threshold. The above problem occurs not only when snoring is detected, but also when an acoustic signal generated by a human body, such as teeth grinding or sleep talking, is detected in the same manner.


The disclosed technology has been made in view of the above circumstances, and an object is to provide an acoustic signal processing apparatus, an acoustic signal processing method, and an acoustic signal processing program capable of accurately determining an acoustic signal.


In order to solve the above problems and to achieve the object, in the disclosed apparatus, a detecting unit detects an acoustic signal that is contained in a sound input signal and that is generated by a human body. Then, a calculating unit calculates an effective volume indicating an effective volume of the acoustic signal detected by the detecting unit, based on the input signal. Then, an accumulating unit accumulates the effective volumes calculated by the calculating unit to calculate an accumulated effective volume. Then, a determining unit determines whether the accumulated effective volume calculated by the accumulating unit has reached a predetermined threshold.


The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.


It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram illustrating an example of a configuration of an acoustic signal processing apparatus according to a first embodiment.



FIG. 2 is a diagram for explaining a degree of annoyance.



FIG. 3 is a diagram for explaining the degree of annoyance.



FIG. 4 is a diagram for explaining a relationship between the volume of snoring and a subjective evaluation value.



FIG. 5 is a diagram for explaining a relationship between the number of times of snoring and the subjective evaluation value.



FIG. 6 is a block diagram illustrating an example of a configuration of a detecting unit according to the first embodiment.



FIG. 7 is a diagram for explaining characteristics of snoring.



FIG. 8 is a diagram for explaining characteristics of snoring.



FIG. 9 is a diagram for explaining a process performed by the duration calculating unit.



FIG. 10 is a diagram illustrating an example of a detection result obtained by the detecting unit.



FIG. 11 is a diagram for explaining a method for calculating the degree of annoyance.



FIG. 12 is a diagram illustrating an example of a calculation result obtained by a calculating unit.



FIG. 13 is a diagram illustrating an example of an accumulation result obtained by an accumulating unit.



FIG. 14 is a flowchart of the flow of a process performed by the acoustic signal processing apparatus according to the first embodiment.



FIG. 15 is a block diagram illustrating an example of a configuration of a mobile terminal according to a second embodiment.



FIG. 16 is a diagram for explaining a method for calculating the degree of annoyance.



FIG. 17 is a diagram illustrating an example of a computer that executes an acoustic signal processing program.



FIG. 18 is a diagram for explaining a conventional technology.





DESCRIPTION OF EMBODIMENTS

Preferred embodiments of an acoustic signal processing apparatus, an acoustic signal processing method, and an acoustic signal processing program of the present application will be explained in detail below with reference to accompanying drawings. The present application is not limited to the embodiments. The embodiments can be arbitrarily combined within a scope that does not contradict the processing contents.


First Embodiment

An example of a configuration of an acoustic signal processing apparatus according to a first embodiment will be explained. FIG. 1 is a block diagram illustrating an example of the configuration of the acoustic signal processing apparatus according to the first embodiment. As illustrated in FIG. 1, an acoustic signal processing apparatus 100 includes an input memory 110, a detecting unit 120, a calculating unit 130, an accumulation memory 140, an accumulating unit 150, and a determining unit 160. The acoustic signal processing apparatus 100 accumulates degrees of annoyance that are calculated based on the volume of snoring, and determines annoying snoring. In the following, the degree of annoyance is explained.



FIG. 2 and FIG. 3 are diagrams for explaining the degree of annoyance. In FIG. 2 and FIG. 3, the vertical axis represents an accumulated degree of annoyance and the horizontal axis represents time. Values on the horizontal axis represent the number of detections of snoring. In the examples illustrated in FIG. 2 and FIG. 3, a threshold is denoted by Pw. FIG. 2 illustrates an example in which the degree of annoyance calculated per snoring is large because the volume of snoring is high, and FIG. 3 illustrates an example in which the degree of annoyance calculated per snoring is small because the volume of snoring is low. As illustrated in FIG. 2, when the volume of snoring is high, the number of times of snoring until the accumulated degree of annoyance reaches the threshold Pw is, for example, three. In this case, the acoustic signal processing apparatus 100 determines that the user's snoring is annoying upon detection of the third snoring. In contrast, as illustrated in FIG. 3, when the volume of snoring is low, the number of times of snoring until the accumulated degree of annoyance reaches the threshold Pw is, for example, seven. In this case, the acoustic signal processing apparatus 100 determines that the user's snoring is annoying upon detection of the seventh snoring.


Meanwhile, the degree of annoyance is used for the determination because the inventors have learned that the volume of snoring and the number of times of snoring influence whether people other than the user perceive the snoring as annoying. FIG. 4 is a diagram for explaining a relationship between the volume of snoring and a subjective evaluation value. The subjective evaluation value is a score representing the subjective evaluation made by other people when they hear the snoring. In FIG. 4, evaluation of “not annoyed” is represented by “2”, evaluation of “rather not annoyed” is represented by “1”, evaluation of “no opinion” is represented by “0”, evaluation of “annoyed” is represented by “−1”, and evaluation of “very annoyed” is represented by “−2”. In FIG. 4, the vertical axis represents the subjective evaluation value and the horizontal axis represents the volume of snoring. FIG. 4 illustrates a plot of experimental data representing the average of the subjective evaluation values provided by test subjects when the number of times of snoring is fixed at five and the volume of snoring is changed. As illustrated in FIG. 4, there is a relationship between the volume of snoring and the subjective evaluation value such that the subjective evaluation value decreases as the volume of snoring increases. Namely, the experimental data illustrated in FIG. 4 indicates that other people feel more annoyed as the volume of snoring increases.



FIG. 5 is a diagram for explaining a relationship between the number of times of snoring and the subjective evaluation value. In FIG. 5, the vertical axis represents the subjective evaluation value and the horizontal axis represents the number of times of snoring. FIG. 5 illustrates a plot of experimental data representing the average of the subjective evaluation values provided by the test subjects when the volume of snoring is fixed at 70 dBA and the number of times of snoring is changed. As illustrated in FIG. 5, there is a relationship between the number of times of snoring and the subjective evaluation value such that the subjective evaluation value decreases as the number of times of snoring increases. Namely, the experimental data illustrated in FIG. 5 indicates that other people feel more annoyed as the number of times of snoring increases. The acoustic signal processing apparatus 100 illustrated in FIG. 1 accumulates the degrees of annoyance calculated based on the volume of snoring, to thereby determine annoying snoring.


Referring back to the explanation of FIG. 1, the input memory 110 accumulates a sound input signal. For example, the input memory 110 accumulates input signal data that is divided into a plurality of frames with a predetermined sample size, in association with frame identification numbers. The sound input signal accumulated by the input memory 110 is, for example, an input signal of sound collected near a sleeping user by a sound collecting device, such as a microphone.


The detecting unit 120 detects an acoustic signal that is generated by a human body and that is contained in the sound input signal. For example, when an input signal similar to a past input signal continues for a predetermined time, the detecting unit 120 detects the corresponding input signal as snoring. FIG. 6 is a block diagram illustrating an example of a configuration of the detecting unit according to the first embodiment. As illustrated in FIG. 6, the detecting unit 120 includes an FFT (Fast Fourier Transform) unit 121, a power spectrum calculating unit 122, a similarity calculating unit 123, a duration calculating unit 124, and a snoring determining unit 125. Snoring is one example of the acoustic signal generated by a human body.


The detecting unit 120 uses the characteristics of snoring in order to detect snoring. FIG. 7 and FIG. 8 are diagrams for explaining the characteristics of snoring. In FIG. 7, the vertical axis represents the frequency of the input signal and the horizontal axis represents time. In the example illustrated in FIG. 7, signals present in respective time periods 7a, 7b, 7c, 7d, and 7e correspond to snoring. As illustrated in FIG. 7, snoring is repeated at a cycle T1. For example, T1 is about 3 to 5 seconds. Furthermore, each snoring continues for a time T2. For example, T2 is about 0.4 to 2 seconds. In FIG. 8, the vertical axis represents the power of the input signal and the horizontal axis represents the frequency. In the example illustrated in FIG. 8, a waveform 8a indicated by a solid line corresponds to the frequency characteristic of one frame in the time period 7a, and a waveform 8b indicated by a dashed line corresponds to the frequency characteristic of one frame in the time period 7b. As illustrated in FIG. 8, the waveform 8a and the waveform 8b are similar to each other. Namely, the frequency characteristics at respective times of snoring are similar to one another. In the following, a process performed by the detecting unit 120 to detect snoring by using these characteristics will be explained.


Referring back to the explanation of FIG. 6, the FFT unit 121 transforms the sound input signal into a signal in the frequency domain by Fourier transform. For example, the FFT unit 121 acquires an input signal that is divided into a plurality of frames with a predetermined sample size, and performs a Fourier transform on the input signal of each acquired frame, to thereby generate a signal in the frequency domain for each of the frames. The FFT unit 121 outputs the signal in the frequency domain generated for each frame to the power spectrum calculating unit 122. The sound input signal input to the FFT unit 121 is, for example, an input signal of sound collected near a sleeping user by a sound collecting device, such as a microphone.


The power spectrum calculating unit 122 calculates a power spectrum from the signal in the frequency domain transformed by the FFT unit 121. For example, the power spectrum calculating unit 122 calculates the sum of squares of the real part and the imaginary part of each band of the signal in the frequency domain of the current frame received from the FFT unit 121, to thereby calculate the power spectrum of each band. Then, the power spectrum calculating unit 122 outputs the calculated power spectrum of each band to the similarity calculating unit 123.
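As a rough illustration of the FFT unit 121 and the power spectrum calculating unit 122, the following Python sketch computes a per-band power spectrum for a single frame as the sum of squares of the real and imaginary parts. The frame length of 512 samples and the use of NumPy's rfft are assumptions of this example, not details given in the description.

```python
import numpy as np

def frame_power_spectrum(frame):
    """Transform one frame to the frequency domain and return the power of each band.

    The power of a band is the sum of squares of the real and imaginary parts,
    as described for the power spectrum calculating unit 122.
    """
    spectrum = np.fft.rfft(frame)                 # Fourier transform of the frame
    return spectrum.real ** 2 + spectrum.imag ** 2

# Example with an assumed frame length of 512 samples.
rng = np.random.default_rng(0)
frame = rng.standard_normal(512)
power = frame_power_spectrum(frame)
print(power.shape)   # one power value per frequency band -> (257,)
```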


The similarity calculating unit 123 calculates the similarity between a current power spectrum and a past power spectrum. The similarity is a value indicating the resemblance between the current power spectrum and the past power spectrum; for example, the value increases as the resemblance between the waveform 8a and the waveform 8b illustrated in FIG. 8 increases. For example, when receiving the power spectrum at a current time “t” from the power spectrum calculating unit 122, the similarity calculating unit 123 selects, as the past power spectrum, the power spectrum at a time “t−x” that precedes the time “t” by a time “x”. The similarity calculating unit 123 calculates the difference between the selected past power spectrum and the current power spectrum for each band, and determines whether the calculated difference is equal to or smaller than a threshold. When determining that the calculated difference is equal to or smaller than the threshold, the similarity calculating unit 123 adds “1” to the similarity; when determining that the calculated difference is greater than the threshold, it adds “0”. The similarity calculating unit 123 performs this process on all of the bands and adds up the results, to thereby calculate the similarity between the power spectrum at the time “t” and the power spectrum at the time “t−x”. The threshold is, for example, 3 dB. The threshold is not limited to this example, but may be set to an arbitrary value by a user using the acoustic signal processing apparatus 100.


Furthermore, the similarity calculating unit 123 calculates the degree of similarity by changing the time “x” from “x1” to “x2”. Namely, the similarity calculating unit 123 calculates the similarity between the power spectrum at each time ranging from a time “t−x1” to a time “t−x2” and the power spectrum at the time “t”. For example, the time “x” corresponds to the cycle T1 illustrated in FIG. 7, x1=3 seconds, and x2=5 seconds. The similarity calculating unit 123 outputs the calculated similarity to the duration calculating unit 124 as the similarity of the frame at the time “t”. The similarity calculating unit 123 stores therein the past power spectrum.
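The band-wise comparison performed by the similarity calculating unit 123 might be sketched as follows. The 3 dB per-band threshold and the search over lags from x1 to x2 follow the description above; comparing the spectra on a decibel scale, expressing the lags in frames, and the helper names are assumptions of this sketch.

```python
import numpy as np

BAND_THRESHOLD_DB = 3.0   # per-band threshold ("for example, 3 dB")

def similarity(current_power, past_power):
    """Count the bands whose level difference is within the threshold (adds 1 per band)."""
    cur_db = 10.0 * np.log10(np.maximum(current_power, 1e-12))
    past_db = 10.0 * np.log10(np.maximum(past_power, 1e-12))
    return int(np.sum(np.abs(cur_db - past_db) <= BAND_THRESHOLD_DB))

def similarities_over_lags(history, x1, x2):
    """Compare the newest power spectrum with each spectrum from x1 to x2 frames earlier.

    `history` holds past power spectra, newest last; x1 and x2 are lags in frames
    (corresponding to roughly 3 to 5 seconds in the description).
    """
    current = history[-1]
    return {x: similarity(current, history[-1 - x])
            for x in range(x1, x2 + 1) if x < len(history)}
```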


The duration calculating unit 124 calculates a duration of the similarity. FIG. 9 is a diagram for explaining a process performed by the duration calculating unit. FIG. 9 illustrates a table containing the similarities, where the horizontal axis represents the current time “t” and the vertical axis represents the time “x” backward from the current time. A shaded area in FIG. 9 indicates that an identifier F is stored. The identifier F indicates that the similarity of the frame at the time “t” is equal to or greater than a threshold. As illustrated in FIG. 9, for example, when receiving the similarity of the frame at the current time “t” from the similarity calculating unit 123, the duration calculating unit 124 stores the received similarity in the table. For example, when the similarity between the power spectrum at the current time “t” and the power spectrum at the time “t−x1” is “2”, the duration calculating unit 124 stores “2” in the lower left field of the table. Similarly, the duration calculating unit 124 stores the similarities for the other times “x”. The duration calculating unit 124 then determines whether each stored similarity is equal to or greater than the threshold. When determining that a stored similarity is equal to or greater than the threshold, the duration calculating unit 124 stores the identifier F in the corresponding field of the table. The threshold is, for example, “10”. The threshold is not limited to this example, but may be set to an arbitrary value by a user using the acoustic signal processing apparatus 100.


Furthermore, when receiving the similarity of the frame at a next time “t+1” from the similarity calculating unit 123, the duration calculating unit 124 stores the received similarity in the table and determines whether the stored similarity is equal to or greater than the threshold in the same manner as described above. When the identifier F continues from the time “t” to the time “t+1” with respect to the same time “x”, the duration calculating unit 124 repeats the above process. The duration calculating unit 124 repeats the above process until the continuation of the identifier F stops. In the example illustrated in FIG. 9, the process is repeated up to the frame at a time “t+7”, at which point the continuation of the identifier F stops.


When the continuation of the identifier F is stopped, the duration calculating unit 124 counts the number of the continued identifiers F. The duration calculating unit 124 converts the counted number of the continued identifiers F into a duration, and outputs the duration to the snoring determining unit 125. In the example illustrated in FIG. 9, the identifier F is continued seven times in seven frames from the time “t” to the time “t+6”. The duration calculating unit 124 outputs, as a duration, a time corresponding to the consecutive frames to the snoring determining unit 125. If the identifier F is continued in a plurality of times “x”, the duration calculating unit 124 performs the above process with respect to the greatest number of the continued identifiers F.


Referring back to the explanation of FIG. 6, the snoring determining unit 125 determines presence or absence of snoring. For example, the snoring determining unit 125 determines whether the duration received from the duration calculating unit 124 is within a time range “y”. The time range “y” corresponds to, for example, the time T2 illustrated in FIG. 7, and is about 0.4 to 2 seconds. When determining that the duration is within the time range “y”, the snoring determining unit 125 determines that the sound input signal corresponding to a subject duration contains snoring. Then, the snoring determining unit 125 determines, as a snoring interval, an interval corresponding to the input signal that is determined to contain snoring.
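A compact way to mimic the duration calculating unit 124 and the snoring determining unit 125 is to track, for each lag, how many consecutive frames have a similarity at or above the threshold (the identifier F) and then test the resulting duration against the time range y. The similarity threshold of 10 and the 0.4 to 2 second range come from the description; the frame period and the class interface are assumptions of this sketch.

```python
SIMILARITY_THRESHOLD = 10          # "The threshold is, for example, '10'"
FRAME_PERIOD_S = 0.032             # assumed frame hop in seconds (not specified above)
MIN_DURATION_S, MAX_DURATION_S = 0.4, 2.0   # time range "y" (about T2 in FIG. 7)

class DurationTracker:
    """Track consecutive frames whose similarity is at or above the threshold (identifier F)."""

    def __init__(self):
        self.run_lengths = {}          # lag x -> current number of consecutive F identifiers

    def update(self, similarities_by_lag):
        """Feed one frame's similarities; return a finished duration in seconds, or None."""
        finished = 0
        for x, value in similarities_by_lag.items():
            if value >= SIMILARITY_THRESHOLD:
                self.run_lengths[x] = self.run_lengths.get(x, 0) + 1
            else:
                # the continuation stopped at this lag; remember the longest finished run
                finished = max(finished, self.run_lengths.pop(x, 0))
        return finished * FRAME_PERIOD_S if finished else None

def is_snoring(duration_s):
    """Snoring determining unit 125: the duration must fall within the time range y."""
    return duration_s is not None and MIN_DURATION_S <= duration_s <= MAX_DURATION_S
```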


In this way, the detecting unit 120 detects snoring from the sound input signal by using the characteristics of snoring. FIG. 10 is a diagram illustrating an example of a detection result obtained by the detecting unit. In FIG. 10, the vertical axis represents volume and the horizontal axis represents time. Values on the horizontal axis represent the number of detections of snoring. As illustrated in FIG. 10, for example, the detecting unit 120 detects a snoring interval corresponding to one snoring. The snoring interval is information on the frame in which the snoring starts and the frame in which the snoring stops. The detecting unit 120 outputs the detected snoring interval to the calculating unit 130 every time it detects a snoring interval.


Referring back to the explanation of FIG. 1, the calculating unit 130 calculates a degree of annoyance with the snoring detected by the detecting unit 120, based on the sound input signal. For example, the calculating unit 130 acquires input signal data corresponding to the snoring interval detected by the detecting unit 120 from the input memory 110. The calculating unit 130 calculates the volume of snoring by using the acquired input signal data. As the volume of snoring, for example, an average volume calculated as a mean square value of the power over the snoring interval is used. For example, the calculating unit 130 calculates the volume of snoring by using Equation (1) below.










fpow(n) = Σ_{k=0}^{N−1} sample(k)² / N  (1)







In Equation (1), n represents a frame number. fpow(n) represents a volume of an n-th frame. sample(k) represents a k-th sample. N represents a frame length. For example, the calculating unit 130 calculates an average volume of each of the frames contained in the snoring interval by using Equation (1), and adds up the calculated average volumes of the respective frames, to thereby calculate the volume of snoring.
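A direct transcription of Equation (1), together with the per-frame summation over a snoring interval described above, might look like the following; the function names are assumptions of this sketch.

```python
import numpy as np

def frame_volume(frame):
    """Equation (1): fpow(n) = sum_{k=0}^{N-1} sample(k)^2 / N for one frame."""
    frame = np.asarray(frame, dtype=float)
    return float(np.sum(frame ** 2) / len(frame))

def snoring_volume(frames_in_interval):
    """Volume of one snore: the per-frame volumes of the snoring interval added up,
    as described for the calculating unit 130."""
    return sum(frame_volume(frame) for frame in frames_in_interval)
```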


Furthermore, the calculating unit 130 calculates the degree of annoyance with snoring based on the volume of snoring. For example, the calculating unit 130 calculates the degree of annoyance with snoring by using Equation (2) below.










Annoy(n) = Amin  (fpow(n) < TH1)
Annoy(n) = AMax × (fpow(n) − TH1) / (TH2 − TH1)  (TH1 ≤ fpow(n) ≤ TH2)
Annoy(n) = AMax  (fpow(n) > TH2)  (2)







In Equation (2), n represents the number of times of snoring. Annoy(n) represents the degree of annoyance with the n-th snoring. fpow(n) represents the volume of the n-th snoring. AMax represents a predetermined maximum value of the degree of annoyance, and is set to “100”, for example. Amin represents a predetermined minimum value of the degree of annoyance, and is set to “0”, for example. TH1 and TH2 represent thresholds; for example, TH1 is set to “60 dBA” and TH2 is set to “80 dBA”. For example, AMax corresponds to the degree of annoyance assigned to a single snoring whose volume reaches the minimum level at which another person feels annoyed, and is set to the same value as the threshold used by the determining unit 160 described later. However, the method of setting AMax is not limited to this example. For example, to prevent the determining unit 160 from erroneously determining that the accumulated degree of annoyance has reached the threshold when a loud sound other than snoring is detected, AMax may be set to a value smaller than the threshold used by the determining unit 160. Furthermore, AMax, Amin, TH1, and TH2 are not limited to the examples described herein, but may be set to arbitrary values by a user using the acoustic signal processing apparatus 100.


The method for calculating the degree of annoyance using Equation (2) by the calculating unit 130 will be explained below. FIG. 11 is a diagram for explaining the method for calculating the degree of annoyance. In FIG. 11, the vertical axis represents the degree of annoyance and the horizontal axis represents the volume (dBA). As illustrated in FIG. 11, for example, when the volume fpow(n) is equal to or greater than the threshold TH1 and smaller than the threshold TH2, the calculating unit 130 calculates, as the degree of annoyance Annoy(n), a value proportional to the volume fpow(n). Furthermore, when the volume fpow(n) is smaller than the threshold TH1, the calculating unit 130 calculates Amin as the degree of annoyance Annoy(n). Moreover, when the volume fpow(n) is equal to or greater than the threshold TH2, the calculating unit 130 calculates AMax as the degree of annoyance Annoy(n).
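Equation (2) is the piecewise mapping plotted in FIG. 11. A minimal sketch, using the example constants given above (AMax = 100, Amin = 0, TH1 = 60 dBA, TH2 = 80 dBA):

```python
A_MAX, A_MIN = 100.0, 0.0   # example values of AMax and Amin from the text
TH1, TH2 = 60.0, 80.0       # example thresholds in dBA

def annoyance(fpow_n):
    """Equation (2): degree of annoyance for one snore of volume fpow(n)."""
    if fpow_n < TH1:
        return A_MIN
    if fpow_n > TH2:
        return A_MAX
    return A_MAX * (fpow_n - TH1) / (TH2 - TH1)   # proportional between TH1 and TH2

print(annoyance(55.0), annoyance(70.0), annoyance(90.0))   # 0.0 50.0 100.0
```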



FIG. 12 is a diagram illustrating an example of a calculation result obtained by the calculating unit. In FIG. 12, the vertical axis represents the degree of annoyance and the horizontal axis represents time. Values on the horizontal axis represent the number of detections of snoring. As illustrated in FIG. 12, for example, the calculating unit 130 calculates the degree of annoyance with the n-th snoring every time it receives the n-th snoring interval from the detecting unit 120. The calculating unit 130 outputs the calculated degree of annoyance to the accumulating unit 150 every time it calculates the degree of annoyance with the n-th snoring. The degree of annoyance is an example of an effective volume.


Referring back to the explanation of FIG. 1, the accumulation memory 140 stores the accumulated degree of annoyance calculated by the accumulating unit 150 to be described later. For example, the accumulation memory 140 stores an accumulated degree of annoyance in which the degrees of annoyance with the first to the n−1-th snoring are accumulated.


The accumulating unit 150 accumulates the degrees of annoyance calculated by the calculating unit 130, and calculates the accumulated degree of annoyance. For example, when receiving the degree of annoyance with the n-th snoring from the calculating unit 130, the accumulating unit 150 acquires the accumulated degree of annoyance with the first to the n−1-th snoring from the accumulation memory 140. Then, the accumulating unit 150 calculates the accumulated degree of annoyance with the first to the n-th snoring by using, for example, Equation (3) below.





Annoy_total(n)=Annoy_total(n−1)+Annoy(n)  (3)


In Equation (3), n represents the number of times of snoring. Annoy_total(n) represents the accumulated degree of annoyance with the first to the n-th snoring. Annoy_total(n−1) represents the accumulated degree of annoyance with the first to the n−1-th snoring. Annoy(n) represents the degree of annoyance with the n-th snoring. Namely, the accumulating unit 150 adds the degree of annoyance with the n-th snoring and the accumulated degree of annoyance with the first to the n−1-th snoring, to thereby calculate the accumulated degree of annoyance with the first to the n-th snoring.



FIG. 13 is a diagram illustrating an example of an accumulation result obtained by the accumulating unit. In FIG. 13, the vertical axis represents the accumulated degree of annoyance and the horizontal axis represents time. Values on the horizontal axis represent the number of times of snoring. As illustrated in FIG. 13, the accumulating unit 150 adds the degree of annoyance with the n-th snoring and the accumulated degree of annoyance with the first to the n−1-th snoring every time the accumulating unit 150 receives the degree of annoyance with the n-th snoring from the calculating unit 130. Then, the accumulating unit 150 outputs the accumulated degree of annoyance with the first to the n-th snoring to the determining unit 160. Furthermore, the accumulating unit 150 stores the accumulated degree of annoyance with the first to the n-th snoring in the accumulation memory 140. The accumulated degree of annoyance is an example of an accumulated effective volume.


The determining unit 160 determines whether the accumulated degree of annoyance calculated by the accumulating unit 150 has reached a threshold. For example, every time the determining unit 160 receives the accumulated degree of annoyance with the first to the n-th snoring from the accumulating unit 150, it determines whether the received accumulated degree of annoyance has reached the threshold. When determining that the accumulated degree of annoyance has reached the threshold, the determining unit 160 outputs information indicating that the accumulated degree of annoyance has reached the threshold to a speaker or a monitor. On the other hand, when determining that the accumulated degree of annoyance has not reached the threshold, the determining unit 160 waits to receive the next accumulated degree of annoyance from the accumulating unit 150.


The flow of a process performed by the acoustic signal processing apparatus 100 according to the first embodiment will be explained below. FIG. 14 is a flowchart of the flow of the process performed by the acoustic signal processing apparatus according to the first embodiment. The process illustrated in FIG. 14 is executed upon detection of snoring by the detecting unit 120 for example.


As illustrated in FIG. 14, when the detecting unit 120 detects snoring (YES at Step S101), the calculating unit 130 calculates the degree of annoyance with the snoring detected by the detecting unit 120 based on the sound input signal (Step S102). The accumulating unit 150 accumulates the degrees of annoyance calculated by the calculating unit 130 and calculates an accumulated degree of annoyance (Step S103). The determining unit 160 determines whether the accumulated degree of annoyance accumulated by the accumulating unit 150 has reached the threshold (Step S104). When determining that the accumulated degree of annoyance has reached the threshold (YES at Step S104), the determining unit 160 outputs information indicating that the accumulated degree of annoyance has reached the threshold to a speaker or a monitor (Step S105). On the other hand, when determining that the accumulated degree of annoyance has not reached the threshold (NO at Step S104), the determining unit 160 returns the process to Step S101.
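Putting the steps of FIG. 14 together, the accumulate-and-compare loop might be sketched as follows. Only the structure (Equation (3) plus the threshold test of the determining unit 160) follows the description; the detector output, the annoyance function, and the output action are placeholders.

```python
def run(snore_volumes, threshold, annoyance):
    """Accumulate Annoy(n) per Equation (3) and report when the threshold is reached.

    `snore_volumes` stands in for the detecting unit's output (one volume per detected
    snore); `annoyance` maps a volume to Annoy(n), e.g. the Equation (2) sketch above.
    """
    annoy_total = 0.0
    for n, volume in enumerate(snore_volumes, start=1):    # Step S101: a snore is detected
        annoy_total += annoyance(volume)                   # Steps S102-S103: calculate and accumulate
        if annoy_total >= threshold:                       # Step S104: compare with the threshold
            print(f"accumulated degree of annoyance reached at snore {n}")   # Step S105
            return n
    return None
```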


Advantageous effects of the acoustic signal processing apparatus 100 according to the first embodiment will be explained below. As described above, the acoustic signal processing apparatus 100 detects user's snoring contained in the sound input signal, and calculates the degree of annoyance with the detected snoring based on the sound input signal. Then, the acoustic signal processing apparatus 100 calculates the accumulated degree of annoyance by accumulating the calculated degrees of annoyance, and determines whether the calculated accumulated degree of annoyance has reached the threshold. Therefore, the acoustic signal processing apparatus 100 can accurately determine an acoustic signal generated by a human body. For example, even when the volume of snoring fluctuates, the acoustic signal processing apparatus 100 can prevent a detection failure or a detection delay when detecting annoying snoring, and can accurately determine the annoying snoring.


Furthermore, when the volume of snoring is equal to or greater than the threshold TH1 and smaller than the threshold TH2, the acoustic signal processing apparatus 100 calculates, as the degree of annoyance, a value proportional to the volume. Moreover, when the volume of snoring is smaller than the threshold TH1, the acoustic signal processing apparatus 100 calculates a predetermined minimum value as the degree of annoyance. Furthermore, when the volume of snoring is equal to or greater than the threshold TH2, the acoustic signal processing apparatus 100 calculates a predetermined maximum value as the degree of annoyance. Therefore, the acoustic signal processing apparatus 100 can accurately determine annoying snoring in consideration of the degree to which other people feel annoyed with the user's snoring.


Meanwhile, the configuration of the acoustic signal processing apparatus 100 illustrated in FIG. 1 is one example. Therefore, the acoustic signal processing apparatus 100 does not necessarily have to include all of the processing units illustrated in FIG. 1. For example, it is sufficient that the acoustic signal processing apparatus 100 includes the detecting unit, the calculating unit, the accumulating unit, and the determining unit.


Specifically, the detecting unit detects an acoustic signal that is generated by a human body and that is contained in the sound input signal. The calculating unit calculates an effective volume indicating the effective volume of the acoustic signal detected by the detecting unit based on the input signal. The accumulating unit calculates the accumulated effective volume by accumulating the effective volumes calculated by the calculating unit. The determining unit determines whether the accumulated effective volume calculated by the accumulating unit has reached a predetermined threshold.


Second Embodiment

A mobile terminal according to a second embodiment will be explained. In the second embodiment, a case in which the mobile terminal determines annoying snoring will be explained. FIG. 15 is a block diagram illustrating an example of a configuration of the mobile terminal according to the second embodiment. As illustrated in FIG. 15, a mobile terminal 200 includes a sound collecting unit 210, an acoustic signal processing circuit 220, and an operation executing unit 230. The mobile terminal 200 is one example of the acoustic signal processing apparatus.


The sound collecting unit 210 collects a sound input signal. For example, the sound collecting unit 210 collects a sound input signal of sound generated near a sleeping user. The sound collecting unit 210 divides the collected sound input signal into a plurality of frames with a predetermined sample size and generates input signal data. The sound collecting unit 210 outputs the input signal data to the acoustic signal processing circuit 220. The sound collecting unit 210 corresponds to a sound collecting device, such as a microphone.


The acoustic signal processing circuit 220 detects snoring of the user from the sound input signal, and calculates the degree of annoyance according to the volume of the detected snoring. The acoustic signal processing circuit 220 accumulates the degree of annoyance at every detection of the user's snoring, and determines whether the accumulated degree of annoyance has reached a threshold. For example, the acoustic signal processing circuit 220 receives the input signal data from the sound collecting unit 210, and detects the user's snoring from the input signal data. The acoustic signal processing circuit 220 calculates the degree of annoyance according to the volume of the detected snoring, and accumulates the calculated degrees of annoyance. The acoustic signal processing circuit 220 determines that the user's snoring is annoying when the accumulated degree of annoyance reaches the threshold.


As illustrated in FIG. 15, the acoustic signal processing circuit 220 includes an input memory 221, a detecting unit 222, a calculating unit 223, an accumulation memory 224, an accumulating unit 225, and a determining unit 226. The explanation of the input memory 221, the detecting unit 222, the calculating unit 223, the accumulation memory 224, the accumulating unit 225, and the determining unit 226 illustrated in FIG. 15 is the same as the explanation of the input memory 110, the detecting unit 120, the calculating unit 130, the accumulation memory 140, the accumulating unit 150, and the determining unit 160 illustrated in FIG. 1.


The operation executing unit 230 executes a predetermined operation when the determining unit 226 determines that the accumulated degree of annoyance has reached the threshold. The operation executing unit 230 corresponds to, for example, a vibrating device or a speaker. For example, the operation executing unit 230 vibrates in a predetermined pattern or outputs sound with a predetermined volume when the user's snoring is determined as annoying.


As described above, the mobile terminal 200 according to the second embodiment can wake the user up by executing a predetermined operation to cause the user to stop snoring when the mobile terminal 200 determines that the accumulated degree of annoyance has reached the threshold.


Third Embodiment

An acoustic signal processing apparatus according to a third embodiment will be explained. The acoustic signal processing apparatus according to the third embodiment includes, similarly to the acoustic signal processing apparatus 100 illustrated in FIG. 1, the input memory 110, the detecting unit 120, the calculating unit 130, the accumulation memory 140, the accumulating unit 150, and the determining unit 160. Of these units, the explanations of the input memory 110, the detecting unit 120, the accumulation memory 140, the accumulating unit 150, and the determining unit 160 are the same as the explanation of the input memory 110, the detecting unit 120, the accumulation memory 140, the accumulating unit 150, and the determining unit 160 illustrated in FIG. 1.


The calculating unit 130 has the same functions as those of the calculating unit 130 illustrated in FIG. 1. While the method using the above Equation (2) has been explained as the method for calculating the degree of annoyance, the method is not limited to this example. For example, the calculating unit 130 may calculate the degree of annoyance with snoring by using Equation (4) below.





Annoy(n)=fpow(n)×COEFF  (4)


In Equation (4), n represents the number of times of snoring. Annoy(n) represents the degree of annoyance with the n-th snoring. fpow(n) represents the volume of n-th snoring. COEFF represents an arbitrary constant.


The method for calculating the degree of annoyance using Equation (4) by the calculating unit 130 will be explained below. FIG. 16 is a diagram for explaining the method for calculating the degree of annoyance. In FIG. 16, the vertical axis represents the degree of annoyance and the horizontal axis represents a volume (dBA). As illustrated in FIG. 16, for example, the calculating unit 130 calculates, as the degree of annoyance Annoy(n), a value proportional to the volume fpow(n). Then, the calculating unit 130 outputs the calculated degree of annoyance to the accumulating unit 150 every time the calculating unit 130 calculates the degree of annoyance with the n-th snoring.
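Equation (4) replaces the piecewise mapping of Equation (2) with a single multiplication, which is why the processing load is lower. A minimal sketch, with an assumed value for the constant:

```python
COEFF = 1.0   # arbitrary constant; the value is not specified in the text

def annoyance_proportional(fpow_n):
    """Equation (4): Annoy(n) = fpow(n) x COEFF."""
    return fpow_n * COEFF
```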


As described above, the acoustic signal processing apparatus 100 according to the third embodiment calculates a value proportional to the volume of snoring as the degree of annoyance with the snoring. Therefore, the acoustic signal processing apparatus 100 can accurately determine annoying snoring with low processing load.


Fourth Embodiment

An acoustic signal processing apparatus according to a fourth embodiment will be explained. The acoustic signal processing apparatus according to the fourth embodiment reduces the accumulated degree of annoyance when a predetermined minimum value is continuously calculated as the degree of annoyance for a predetermined time. This reflects the fact that, even when degrees of annoyance have been accumulated, snoring becomes less annoying to other people when low-volume snoring continues for a predetermined time. The acoustic signal processing apparatus according to the fourth embodiment includes the input memory 110, the detecting unit 120, the calculating unit 130, the accumulation memory 140, the accumulating unit 150, and the determining unit 160, similarly to the acoustic signal processing apparatus 100 illustrated in FIG. 1. Of these units, the explanation of the input memory 110, the detecting unit 120, the calculating unit 130, the accumulation memory 140, and the determining unit 160 is the same as the explanation of the input memory 110, the detecting unit 120, the calculating unit 130, the accumulation memory 140, and the determining unit 160 illustrated in FIG. 1.


The accumulating unit 150 has the same functions as those of the accumulating unit 150 illustrated in FIG. 1. The accumulating unit 150 reduces the accumulated degree of annoyance when the calculating unit 130 continuously calculates a predetermined minimum value as the degree of annoyance for a predetermined time. For example, the accumulating unit 150 calculates the accumulated degree of annoyance by using Equation (5) below.










Annoy_total(n) = 0  (Annoy(n) = Annoy(n−1) = … = Annoy(n−N) = 0)
Annoy_total(n) = Annoy_total(n−1) + Annoy(n)  (otherwise)  (5)







In Equation (5), n represents the number of times of snoring. Annoy_total(n) represents the accumulated degree of annoyance with the first to the n-th snoring. Annoy_total(n−1) represents the accumulated degree of annoyance with the first to the n−1-th snoring. Annoy(n) represents the degree of annoyance with the n-th snoring. N represents the number of times of snoring contained in the predetermined time that is set in advance for reducing the accumulated degree of annoyance. Namely, when all the degrees of annoyance with snoring received from the (n−N)-th time to the n-th time are “0”, the accumulating unit 150 reduces the accumulated degree of annoyance with the first to the n-th snoring to “0”. Otherwise, the accumulating unit 150 adds the degree of annoyance with the n-th snoring and the accumulated degree of annoyance with the first to the n−1-th snoring to calculate the accumulated degree of annoyance with the first to the n-th snoring. Then, the accumulating unit 150 outputs the accumulated degree of annoyance with the first to the n-th snoring to the determining unit 160. Furthermore, the accumulating unit 150 stores the accumulated degree of annoyance with the first to the n-th snoring in the accumulation memory 140. While a case in which the accumulated degree of annoyance is reduced to “0” has been explained, the reduction is not limited thereto. For example, the accumulating unit 150 may reduce the accumulated degree of annoyance to half.
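The reset behaviour of Equation (5) might be sketched as follows; the function name and the representation of the last N+1 degrees of annoyance as a list are assumptions of this example.

```python
def accumulate_with_reset(prev_total, recent_annoyances, big_n):
    """Equation (5): reset the accumulated annoyance when the last N+1 values are all zero.

    `recent_annoyances` holds Annoy(n-N) .. Annoy(n), newest last; `big_n` plays the
    role of N in the equation.
    """
    window = recent_annoyances[-(big_n + 1):]
    if len(window) == big_n + 1 and all(value == 0 for value in window):
        return 0.0                                   # low-volume snoring continued; reset
    return prev_total + recent_annoyances[-1]        # otherwise, Equation (3)
```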


As described above, the acoustic signal processing apparatus 100 according to the fourth embodiment reduces the accumulated degree of annoyance when the calculating unit 130 continuously calculates the predetermined minimum value as the degree of annoyance for a predetermined time. Therefore, the acoustic signal processing apparatus 100 can accurately determine annoying snoring in consideration of the fact that snoring becomes less annoying to other people when low-volume snoring continues for a predetermined time.


Fifth Embodiment

While the embodiments of the present invention have been described above, the present invention may be embodied in various forms other than the embodiments described above. Therefore, other embodiments will be described below.


For example, in the embodiments described above, it is explained that the acoustic signal processing apparatus 100 and the mobile terminal 200 detect an acoustic signal generated by a human body as snoring; however, the present invention is not limited thereto. For example, the acoustic signal processing apparatus 100 and the mobile terminal 200 may detect other acoustic signals, such as teeth grinding or sleep talking, as the acoustic signal generated by a human body. For another example, the acoustic signal processing apparatus 100 and the mobile terminal 200 may detect an acoustic signal in a frequency band that is annoying to other people. As a technology for detecting the acoustic signal generated by a human body, a well-known technology, such as the technology disclosed in Japanese Laid-open Patent Publication No. 7-184948 or the technology disclosed in Japanese Laid-open Patent Publication No. 2004-187961, may arbitrarily be selected and used.


Moreover, the components of the acoustic signal processing apparatus 100 and the mobile terminal 200 illustrated in FIGS. 1 and 15 are functionally conceptual and do not necessarily have to be physically configured in the manner illustrated in the drawings. Namely, specific forms of distribution and integration of the acoustic signal processing apparatus 100 and the mobile terminal 200 are not limited to those illustrated in the drawings, and all or part of the acoustic signal processing apparatus 100 and the mobile terminal 200 can be functionally or physically distributed or integrated in arbitrary units according to various loads and the state of use. For example, it may be possible to integrate the calculating unit 130 and the accumulating unit 150 illustrated in FIG. 1.


Furthermore, the processing functions implemented by the detecting units 120 and 222, the calculating units 130 and 223, the accumulating units 150 and 225, and the determining units 160 and 226 are realized as described below. Specifically, all or an arbitrary part of the processing functions may be realized by a CPU (Central Processing Unit) and a program analyzed and executed by the CPU, or may be realized by hardware using wired logic.


Moreover, the input memories 110 and 221 and the accumulation memories 140 and 224 correspond to a semiconductor memory device, such as a RAM (Random Access Memory), a ROM (Read Only Memory), or a flash memory, or to a storage device, such as a hard disk or an optical disk.


The acoustic signal processing apparatus 100 and the mobile terminal 200 may be realized by mounting the functions of the acoustic signal processing apparatus 100 and the mobile terminal 200 onto a known information processing apparatus. The known information processing apparatus corresponds to a device, such as a personal computer, a workstation, a mobile phone, a PHS (Personal Handy-phone System) terminal, a mobile communication terminal, or a PDA (Personal Digital Assistant).



FIG. 17 is a diagram illustrating an example of a computer that executes an acoustic signal processing program. As illustrated in FIG. 17, a computer 300 includes a CPU 301 that executes various types of arithmetic processing, an input device 302 that receives input of data from a user, and a monitor 303. The computer 300 also includes a medium reading device 304 that reads a program or the like from a storage medium, and an interface device 305 that transmits and receives data to and from other devices. The computer 300 also includes a RAM (Random Access Memory) 306 for temporarily storing various types of data, and a hard disk device 307. The devices 301 to 307 are connected to a bus 308.


The hard disk device 307 stores therein an acoustic signal processing program 307a that has the same functions as those of the processing units of the detecting units 120 and 222, the calculating units 130 and 223, the accumulating units 150 and 225, and the determining units 160 and 226 illustrated in FIGS. 1 and 15. The hard disk device 307 also stores therein various types of data 307b for realizing the acoustic signal processing program 307a.


The CPU 301 reads the acoustic signal processing program 307a from the hard disk device 307 and loads and executes it on the RAM 306, so that the acoustic signal processing program 307a functions as an acoustic signal processing process 306a. Namely, the acoustic signal processing program 307a functions as a process that is the same as the processing units of the detecting units 120 and 222, the calculating units 130 and 223, the accumulating units 150 and 225, and the determining units 160 and 226.


The acoustic signal processing program 307a described above does not necessarily have to be stored in the hard disk device 307. For example, the computer 300 may read the program stored in a computer-readable recording medium and execute the read program. The computer-readable recording medium corresponds to, for example, a portable recording medium, such as a CD-ROM, a DVD disk, or a USB memory; a semiconductor memory, such as a flash memory; or a hard disk drive. It may also be possible to store the program in a device connected to a public line, the Internet, a LAN (Local Area Network), or a WAN (Wide Area Network), and cause the computer 300 to read the program from the device and execute the read program.


According to one embodiment of the technology disclosed herein, it is possible to accurately determine an acoustic signal.


All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that the various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.

Claims
  • 1. An acoustic signal processing apparatus comprising: a processor; and a memory, wherein the processor executes a process comprising: detecting an acoustic signal that is contained in a sound input signal and that is generated by a human body; calculating an effective volume indicating an effective volume of the acoustic signal detected at the detecting the acoustic signal, based on the input signal; calculating an accumulated effective volume by accumulating effective volumes calculated at the calculating the effective volume; and determining whether the accumulated effective volume calculated at the calculating the accumulated effective volume has reached a predetermined threshold.
  • 2. The acoustic signal processing apparatus according to claim 1, wherein the calculating the effective volume includes: calculating, as the effective volume, a value proportional to the effective volume when a level of the input signal is equal to or greater than a first threshold and smaller than a second threshold; calculating, as the effective volume, a predetermined minimum value when the level is smaller than the first threshold; and calculating, as the effective volume, a predetermined maximum value when the level is equal to or greater than the second threshold.
  • 3. The acoustic signal processing apparatus according to claim 1, wherein the calculating the accumulated effective volume includes reducing the accumulated effective volume when a predetermined minimum value is continuously calculated as the effective volume for a predetermined time at the calculating the effective volume.
  • 4. The acoustic signal processing apparatus according to claim 1, wherein the process further comprises executing a predetermined operation when it is determined that the accumulated effective volume has reached the threshold at the determining whether the accumulated effective volume has reached the threshold.
  • 5. An acoustic signal processing method executed by a computer, the method comprising: detecting, using a processor, an acoustic signal that is contained in a sound input signal and that is generated by a human body; calculating, using the processor, an effective volume indicating an effective volume of the acoustic signal detected at the detecting the acoustic signal, based on the input signal; calculating, using the processor, an accumulated effective volume by accumulating effective volumes calculated at the calculating the effective volume; and determining, using the processor, whether the accumulated effective volume calculated at the calculating the accumulated effective volume has reached a predetermined threshold.
  • 6. The acoustic signal processing method according to claim 5, wherein the calculating the effective volume includes: calculating, as the effective volume, a value proportional to the effective volume when a level of the input signal is equal to or greater than a first threshold and smaller than a second threshold; calculating, as the effective volume, a predetermined minimum value when the level is smaller than the first threshold; and calculating, as the effective volume, a predetermined maximum value when the level is equal to or greater than the second threshold.
  • 7. The acoustic signal processing method according to claim 5, wherein the calculating the accumulated effective volume includes reducing the accumulated effective volume when a predetermined minimum value is continuously calculated as the effective volume for a predetermined time at the calculating the effective volume.
  • 8. The acoustic signal processing method according to claim 5, further comprising executing, using a processor, a predetermined operation when it is determined that the accumulated effective volume has reached the threshold at the determining whether the accumulated effective volume has reached the threshold.
  • 9. A computer readable storage medium having stored therein an acoustic signal processing program causing a computer to execute a process comprising: detecting an acoustic signal that is contained in a sound input signal and that is generated by a human body; calculating an effective volume indicating an effective volume of the acoustic signal detected at the detecting the acoustic signal, based on the input signal; calculating an accumulated effective volume by accumulating effective volumes calculated at the calculating the effective volume; and determining whether the accumulated effective volume calculated at the calculating the accumulated effective volume has reached a predetermined threshold.
  • 10. The computer readable storage medium according to claim 9, wherein the calculating the effective volume includes: calculating, as the effective volume, a value proportional to the effective volume when a level of the input signal is equal to or greater than a first threshold and smaller than a second threshold; calculating, as the effective volume, a predetermined minimum value when the level is smaller than the first threshold; and calculating, as the effective volume, a predetermined maximum value when the level is equal to or greater than the second threshold.
  • 11. The computer readable storage medium according to claim 9, wherein the calculating the accumulated effective volume includes reducing the accumulated effective volume when a predetermined minimum value is continuously calculated as the effective volume for a predetermined time at the calculating the effective volume.
  • 12. The computer readable storage medium according to claim 9, the process further comprising executing a predetermined operation when it is determined that the accumulated effective volume has reached the threshold at the determining whether the accumulated effective volume has reached the threshold.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of International Application No. PCT/JP2010/072290, filed on Dec. 10, 2010, the entire contents of which are incorporated herein by reference.

Continuations (1)
Parent: PCT/JP2010/072290, Dec 2010, US
Child: 13913956, US