Systems and methods for suppressing sound leakage

Information

  • Patent Grant
  • Patent Number
    11,832,060
  • Date Filed
    Thursday, April 1, 2021
  • Date Issued
    Tuesday, November 28, 2023
Abstract
A speaker comprises a housing, a transducer residing inside the housing, and at least one sound guiding hole located on the housing. The transducer generates vibrations. The vibrations produce a sound wave inside the housing and cause a leaked sound wave spreading outside the housing from a portion of the housing. The at least one sound guiding hole guides the sound wave inside the housing through the at least one sound guiding hole to an outside of the housing. The guided sound wave interferes with the leaked sound wave in a target region. The interference at a specific frequency relates to a distance between the at least one sound guiding hole and the portion of the housing.
Description
FIELD OF THE INVENTION

This application relates to a bone conduction device, and more specifically, relates to methods and systems for reducing sound leakage by a bone conduction device.


BACKGROUND

A bone conduction speaker, which may also be called a vibration speaker, may push human tissues and bones to stimulate the auditory nerve in the cochlea and enable people to hear sound. The bone conduction speaker is also called a bone conduction headphone.


An exemplary structure of a bone conduction speaker based on this principle is shown in FIGS. 1A and 1B. The bone conduction speaker may include an open housing 110, a vibration board 121, a transducer 122, and a linking component 123. The transducer 122 may transduce electrical signals to mechanical vibrations. The vibration board 121 may be connected to the transducer 122 and vibrate synchronously with the transducer 122. The vibration board 121 may stretch out from the opening of the housing 110 and contact human skin to pass vibrations to auditory nerves through human tissues and bones, which in turn enables people to hear sound. The linking component 123 may reside between the transducer 122 and the housing 110, configured to fix the vibrating transducer 122 inside the housing 110. To minimize its effect on the vibrations generated by the transducer 122, the linking component 123 may be made of an elastic material.


However, the mechanical vibrations generated by the transducer 122 may not only cause the vibration board 121 to vibrate, but may also cause the housing 110 to vibrate through the linking component 123. Accordingly, the mechanical vibrations generated by the bone conduction speaker may push human tissues through the vibration board 121, while, at the same time, the portions of the vibration board 121 and the housing 110 that are not in contact with human tissues may nevertheless push air. Air sound may thus be generated by the air pushed by these portions of the vibration board 121 and the housing 110. The air sound may be called “sound leakage.” In some cases, sound leakage is harmless. However, sound leakage should be avoided as much as possible if people intend to protect privacy when using the bone conduction speaker or try not to disturb others when listening to music.


Attempting to solve the problem of sound leakage, Korean patent KR10-2009-0082999 discloses a bone conduction speaker with a dual magnetic structure and a double frame. As shown in FIG. 2, the speaker disclosed in the patent includes a first frame 210 with an open upper portion and a second frame 220 that surrounds the outside of the first frame 210 and is spaced apart from it. The first frame 210 includes a moving coil 230 carrying electric signals, an inner magnetic component 240, an outer magnetic component 250, and a magnetic field formed between the inner magnetic component 240 and the outer magnetic component 250. The inner magnetic component 240 and the outer magnetic component 250 may vibrate by the attraction and repulsion force of the coil 230 placed in the magnetic field. A vibration board 260 connected to the moving coil 230 may receive the vibration of the moving coil 230. A vibration unit 270 connected to the vibration board 260 may pass the vibration to a user by contacting the skin. As described in the patent, the second frame 220 surrounds the first frame 210 so that the second frame 220 prevents the vibration of the first frame 210 from dissipating to the outside, and thus may reduce sound leakage to some extent.


However, in this design, since the second frame 220 is fixed to the first frame 210, vibrations of the second frame 220 are inevitable. As a result, sealing by the second frame 220 is unsatisfactory. Furthermore, the second frame 220 increases the whole volume and weight of the speaker, which in turn increases the cost, complicates the assembly process, and reduces the speaker's reliability and consistency.


SUMMARY

The embodiments of the present application disclose methods and systems for reducing sound leakage of a bone conduction speaker.


In one aspect, the embodiments of the present application disclose a method of reducing sound leakage of a bone conduction speaker, including: providing a bone conduction speaker including a vibration board fitting human skin and passing vibrations, a transducer, and a housing, wherein at least one sound guiding hole is located in at least one portion of the housing; the transducer drives the vibration board to vibrate; the housing vibrates, along with the vibrations of the transducer, and pushes air, forming a leaked sound wave transmitted in the air; the air inside the housing is pushed out of the housing through the at least one sound guiding hole, interferes with the leaked sound wave, and reduces an amplitude of the leaked sound wave.


In some embodiments, one or more sound guiding holes may be located in an upper portion, a central portion, and/or a lower portion of a sidewall and/or the bottom of the housing.


In some embodiments, a damping layer may be applied in the at least one sound guiding hole in order to adjust the phase and amplitude of the guided sound wave through the at least one sound guiding hole.


In some embodiments, sound guiding holes may be configured to generate guided sound waves having a same phase that reduce the leaked sound wave having a same wavelength; sound guiding holes may be configured to generate guided sound waves having different phases that reduce the leaked sound waves having different wavelengths.


In some embodiments, different portions of a same sound guiding hole may be configured to generate guided sound waves having a same phase that reduce the leaked sound wave having same wavelength. In some embodiments, different portions of a same sound guiding hole may be configured to generate guided sound waves having different phases that reduce leaked sound waves having different wavelengths.


In another aspect, the embodiments of the present application disclose a bone conduction speaker, including a housing, a vibration board, and a transducer, wherein: the transducer is configured to generate vibrations and is located inside the housing; the vibration board is configured to be in contact with skin and pass vibrations; at least one sound guiding hole may be located in at least one portion of the housing, and preferably, the at least one sound guiding hole may be configured to guide a sound wave inside the housing, resulting from vibrations of the air inside the housing, to the outside of the housing, the guided sound wave interfering with the leaked sound wave and reducing the amplitude thereof.


In some embodiments, the at least one sound guiding hole may be located in the sidewall and/or bottom of the housing.


In some embodiments, preferably, the at least one sound guiding hole may be located in the upper portion and/or lower portion of the sidewall of the housing.


In some embodiments, preferably, the sidewall of the housing is cylindrical and there are at least two sound guiding holes located in the sidewall of the housing, which are arranged evenly or unevenly in one or more circles. Alternatively, the housing may have a different shape.


In some embodiments, preferably, the sound guiding holes have different heights along the axial direction of the cylindrical sidewall.


In some embodiments, preferably, there are at least two sound guiding holes located in the bottom of the housing. In some embodiments, the sound guiding holes are distributed evenly or unevenly in one or more circles around the center of the bottom. Alternatively or additionally, one sound guiding hole is located at the center of the bottom of the housing.


In some embodiments, preferably, the sound guiding hole is a perforative hole. In some embodiments, there may be a damping layer at the opening of the sound guiding hole.


In some embodiments, preferably, the guided sound waves through different sound guiding holes and/or different portions of a same sound guiding hole have different phases or a same phase.


In some embodiments, preferably, the damping layer is a tuning paper, a tuning cotton, a nonwoven fabric, a silk, a cotton, a sponge, or a rubber.


In some embodiments, preferably, the shape of a sound guiding hole is a circle, an ellipse, a quadrangle, a rectangle, or a linear shape. In some embodiments, the sound guiding holes may have a same shape or different shapes.


In some embodiments, preferably, the transducer includes a magnetic component and a voice coil. Alternatively, the transducer includes piezoelectric ceramic.


The design disclosed in this application utilizes the principles of sound interference by placing sound guiding holes in the housing to guide the sound wave inside the housing to the outside of the housing, where the guided sound wave interferes with the leaked sound wave that is formed when the housing's vibrations push the air outside the housing. The guided sound wave reduces the amplitude of the leaked sound wave and thus reduces the sound leakage. The design not only reduces sound leakage, but is also easy to implement, does not increase the volume or weight of the bone conduction speaker, and barely increases the cost of the product.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A and 1B are schematic structures illustrating a bone conduction speaker of prior art;



FIG. 2 is a schematic structure illustrating another bone conduction speaker of prior art;



FIG. 3 illustrates the principle of sound interference according to some embodiments of the present disclosure;



FIGS. 4A and 4B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIG. 4C is a schematic structure of the bone conduction speaker according to some embodiments of the present disclosure;



FIG. 4D is a diagram illustrating reduced sound leakage of the bone conduction speaker according to some embodiments of the present disclosure;



FIG. 4E is a schematic diagram illustrating exemplary two-point sound sources according to some embodiments of the present disclosure;



FIG. 5 is a diagram illustrating the equal-loudness contour curves according to some embodiments of the present disclosure;



FIG. 6 is a flow chart of an exemplary method of reducing sound leakage of a bone conduction speaker according to some embodiments of the present disclosure;



FIGS. 7A and 7B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIG. 7C is a diagram illustrating reduced sound leakage of a bone conduction speaker according to some embodiments of the present disclosure;



FIGS. 8A and 8B are schematic structure of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIG. 8C is a diagram illustrating reduced sound leakage of a bone conduction speaker according to some embodiments of the present disclosure;



FIGS. 9A and 9B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIG. 9C is a diagram illustrating reduced sound leakage of a bone conduction speaker according to some embodiments of the present disclosure;



FIGS. 10A and 10B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIG. 10C is a diagram illustrating reduced sound leakage of a bone conduction speaker according to some embodiments of the present disclosure;



FIG. 10D is a schematic diagram illustrating an acoustic route according to some embodiments of the present disclosure;



FIG. 10E is a schematic diagram illustrating another acoustic route according to some embodiments of the present disclosure;



FIG. 10F is a schematic diagram illustrating a further acoustic route according to some embodiments of the present disclosure;



FIGS. 11A and 11B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIG. 11C is a diagram illustrating reduced sound leakage of a bone conduction speaker according to some embodiments of the present disclosure; and



FIGS. 12A and 12B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIGS. 13A and 13B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure;



FIG. 14 is a schematic diagram illustrating an exemplary speaker customized for augmented reality according to some embodiments of the present disclosure;



FIG. 15 is a flowchart illustrating an exemplary process for replaying an audio message according to some embodiments of the present disclosure;



FIG. 16 is a schematic diagram illustrating an exemplary speaker focusing on sounds in a certain direction according to some embodiments of the present disclosure; and



FIG. 17 is a schematic diagram illustrating an exemplary user interface of a speaker according to some embodiments of the present disclosure.





The meanings of the reference numerals in the figures are as follows:



110, open housing; 121, vibration board; 122, transducer; 123, linking component; 210, first frame; 220, second frame; 230, moving coil; 240, inner magnetic component; 250, outer magnetic component; 260; vibration board; 270, vibration unit; 10, housing; 11, sidewall; 12, bottom; 21, vibration board; 22, transducer; 23, linking component; 24, elastic component; 30, sound guiding hole.


DETAILED DESCRIPTION

The following is a further detailed description of this disclosure. The examples below are for illustrative purposes only and should not be interpreted as limitations of the claimed invention. There are a variety of alternative techniques and procedures available to those of ordinary skill in the art which would similarly permit one to successfully perform the intended invention. In addition, the figures only show the structures relevant to this disclosure, not the whole structure.


To explain the scheme of the embodiments of this disclosure, the design principles of this disclosure will be introduced here. FIG. 3 illustrates the principles of sound interference according to some embodiments of the present disclosure. Two or more sound waves may interfere in space based on, for example, the frequency and/or amplitude of the waves. Specifically, the amplitudes of sound waves with the same frequency may be superposed to generate a strengthened wave or a weakened wave. As shown in FIG. 3, sound source 1 and sound source 2 have the same frequency and are located at different locations in space. The sound waves generated by these two sound sources may meet at an arbitrary point A. If the phases of sound wave 1 and sound wave 2 are the same at point A, the amplitudes of the two sound waves may be added, generating a strengthened sound wave signal at point A; on the other hand, if the phases of the two sound waves are opposite at point A, their amplitudes may be offset, generating a weakened sound wave signal at point A.
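As an informal numerical illustration of this principle, the following sketch (written in Python with NumPy; the 1 kHz frequency and unit amplitudes are arbitrary assumptions) superposes two sinusoids of the same frequency at a point, once in phase and once in opposite phase:

```python
import numpy as np

# Two sound waves of the same frequency arriving at point A.
# Illustrative values only: a 1 kHz tone, unit amplitudes.
f = 1000.0                       # frequency in Hz
t = np.linspace(0, 2e-3, 1000)   # 2 ms of time samples

wave1 = np.sin(2 * np.pi * f * t)              # from sound source 1
wave2_same = np.sin(2 * np.pi * f * t)         # same phase as wave1 at point A
wave2_opp = np.sin(2 * np.pi * f * t + np.pi)  # opposite phase at point A

strengthened = wave1 + wave2_same   # amplitudes add -> strengthened signal
weakened = wave1 + wave2_opp        # amplitudes offset -> weakened signal

print("peak of strengthened wave:", np.max(np.abs(strengthened)))  # ~2.0
print("peak of weakened wave:", np.max(np.abs(weakened)))          # ~0.0
```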


This disclosure applies the above-noted principles of sound wave interference to a bone conduction speaker and discloses a bone conduction speaker that can reduce sound leakage.


Embodiment One


FIGS. 4A and 4B are schematic structures of an exemplary bone conduction speaker. The bone conduction speaker may include a housing 10, a vibration board 21, and a transducer 22. The transducer 22 may be inside the housing 10 and configured to generate vibrations. The housing 10 may have one or more sound guiding holes 30. The sound guiding hole(s) 30 may be configured to guide sound waves inside the housing 10 to the outside of the housing 10. In some embodiments, the guided sound waves may form interference with leaked sound waves generated by the vibrations of the housing 10, so as to reduce the amplitude of the leaked sound. The transducer 22 may be configured to convert an electrical signal to mechanical vibrations. For example, an audio electrical signal may be transmitted into a voice coil that is placed in a magnet, and the electromagnetic interaction may cause the voice coil to vibrate based on the audio electrical signal. As another example, the transducer 22 may include piezoelectric ceramics, shape changes of which may cause vibrations in accordance with electrical signals received.


Furthermore, the vibration board 21 may be connected to the transducer 22 and configured to vibrate along with the transducer 22. The vibration board 21 may stretch out from the opening of the housing 10, touch the skin of the user, and pass vibrations to auditory nerves through human tissues and bones, which in turn enables the user to hear sound. The linking component 23 may reside between the transducer 22 and the housing 10, configured to fix the vibrating transducer 22 inside the housing 10. The linking component 23 may include one or more separate components, or may be integrated with the transducer 22 or the housing 10. In some embodiments, the linking component 23 is made of an elastic material.


The transducer 22 may drive the vibration board 21 to vibrate. The transducer 22, which resides inside the housing 10, may vibrate. The vibrations of the transducer 22 may drive the air inside the housing 10 to vibrate, producing a sound wave inside the housing 10, which can be referred to as the “sound wave inside the housing.” Since the vibration board 21 and the transducer 22 are fixed to the housing 10 via the linking component 23, the vibrations may pass to the housing 10, causing the housing 10 to vibrate synchronously. The vibrations of the housing 10 may generate a leaked sound wave, which spreads outwards as sound leakage.


The sound wave inside the housing and the leaked sound wave are like the two sound sources in FIG. 3. In some embodiments, the sidewall 11 of the housing 10 may have one or more sound guiding holes 30 configured to guide the sound wave inside the housing 10 to the outside. The guided sound wave through the sound guiding hole(s) 30 may interfere with the leaked sound wave generated by the vibrations of the housing 10, and the amplitude of the leaked sound wave may be reduced due to the interference, resulting in reduced sound leakage. Therefore, the design of this embodiment can alleviate the sound leakage problem to some extent simply by setting a sound guiding hole on the housing, without increasing the volume or weight of the bone conduction speaker.


In some embodiments, one sound guiding hole 30 is set on the upper portion of the sidewall 11. As used herein, the upper portion of the sidewall 11 refers to the portion of the sidewall 11 starting from the top of the sidewall (contacting with the vibration board 21) to about the ⅓ height of the sidewall.



FIG. 4C is a schematic structure of the bone conduction speaker illustrated in FIGS. 4A-4B, further illustrated with mechanical elements. As shown in FIG. 4C, the linking component 23 between the sidewall 11 of the housing 10 and the vibration board 21 may be represented by an elastic element 23 and a damping element connected in parallel. The linking relationship between the vibration board 21 and the transducer 22 may be represented by an elastic element 24.


Outside the housing 10, the sound leakage reduction is proportional to

(∫∫SholePds−∫∫ShousingPdds),  (1)

wherein Shole is the area of the opening of the sound guiding hole 30, Shousing is the area of the housing 10 (e.g., the sidewall 11 and the bottom 12) that is not in contact with human face.


The pressure inside the housing may be expressed as

P=Pa+Pb+Pc+Pe,  (2)

wherein Pa, Pb, Pc and Pe are the sound pressures of an arbitrary point inside the housing 10 generated by side a, side b, side c and side e (as illustrated in FIG. 4C), respectively. As used herein, side a refers to the upper surface of the transducer 22 that is close to the vibration board 21, side b refers to the lower surface of the vibration board 21 that is close to the transducer 22, side c refers to the inner upper surface of the bottom 12 that is close to the transducer 22, and side e refers to the lower surface of the transducer 22 that is close to the bottom 12.


The center of side b, the O point, is set as the origin of the space coordinates, and side b can be set as the z=0 plane, so Pa, Pb, Pc and Pe may be expressed as follows:












Pa(x,y,z)=−jωρ0∫∫SaWa(xa′,ya′)·e^(jkR(xa′,ya′))/(4πR(xa′,ya′))dxa′dya′−PaR,  (3)

Pb(x,y,z)=−jωρ0∫∫SbWb(x′,y′)·e^(jkR(x′,y′))/(4πR(x′,y′))dx′dy′−PbR,  (4)

Pc(x,y,z)=−jωρ0∫∫ScWc(xc′,yc′)·e^(jkR(xc′,yc′))/(4πR(xc′,yc′))dxc′dyc′−PcR,  (5)

Pe(x,y,z)=−jωρ0∫∫SeWe(xe′,ye′)·e^(jkR(xe′,ye′))/(4πR(xe′,ye′))dxe′dye′−PeR,  (6)

wherein R(x′,y′)=√((x−x′)²+(y−y′)²+z²) is the distance between an observation point (x, y, z) and a point on side b (x′, y′, 0); Sa, Sb, Sc and Se are the areas of side a, side b, side c and side e, respectively;

    • R(xa′,ya′)=√((x−xa′)²+(y−ya′)²+(z−za)²) is the distance between the observation point (x, y, z) and a point on side a (xa′, ya′, za);
    • R(xc′,yc′)=√((x−xc′)²+(y−yc′)²+(z−zc)²) is the distance between the observation point (x, y, z) and a point on side c (xc′, yc′, zc);
    • R(xe′,ye′)=√((x−xe′)²+(y−ye′)²+(z−ze)²) is the distance between the observation point (x, y, z) and a point on side e (xe′, ye′, ze);
    • k=ω/u (u is the velocity of sound) is the wave number, ρ0 is the air density, and ω is the angular frequency of the vibration.
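To give a sense of how a surface integral of the form in formula (3) behaves, the sketch below (a numerical illustration only, written in Python with NumPy; the side dimensions, uniform source power Wa, observation point, and frequency are assumed values, and the acoustic resistance term PaR is omitted) discretizes a flat square side into a grid and sums the integrand:

```python
import numpy as np

# Numerical sketch of a term like formula (3):
# P_a ≈ -jωρ0 · Σ W_a(x', y') · exp(jkR) / (4πR) · Δx·Δy
# Hypothetical setup: a 20 mm x 20 mm square side at height z_a,
# uniform source power per unit area, observation point 0.5 m away.
rho0 = 1.21            # air density, kg/m^3
c = 343.0              # speed of sound, m/s
f = 2000.0             # frequency, Hz
omega = 2 * np.pi * f
k = omega / c          # wave number

z_a = 0.002                        # height of side a relative to the z = 0 plane (assumed)
W_a = 1.0                          # uniform source power per unit area (assumed)
obs = np.array([0.0, 0.0, 0.5])    # observation point (x, y, z)

n = 200                            # grid resolution
xs = np.linspace(-0.01, 0.01, n)
ys = np.linspace(-0.01, 0.01, n)
dx, dy = xs[1] - xs[0], ys[1] - ys[0]
X, Y = np.meshgrid(xs, ys)

R = np.sqrt((obs[0] - X) ** 2 + (obs[1] - Y) ** 2 + (obs[2] - z_a) ** 2)
integrand = W_a * np.exp(1j * k * R) / (4 * np.pi * R)
P_a = -1j * omega * rho0 * np.sum(integrand) * dx * dy

print("approximate P_a at the observation point:", P_a)
```

Refining the grid (a larger n) makes the sum converge toward the integral; the terms for sides b, c and e follow the same pattern.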


PaR, PbR, PcR and PeR are acoustic resistances of air, which respectively are:











PaR=A·(za·r+jω·za·r′)/(φ+δ),  (7)

PbR=A·(zb·r+jω·zb·r′)/(φ+δ),  (8)

PcR=A·(zc·r+jω·zc·r′)/(φ+δ),  (9)

PeR=A·(ze·r+jω·ze·r′)/(φ+δ),  (10)

wherein r is the acoustic resistance per unit length, r′ is the sound quality per unit length, za is the distance between the observation point and side a, zb is the distance between the observation point and side b, zc is the distance between the observation point and side c, ze is the distance between the observation point and side e.


Wa(x, y), Wb(x, y), Wc(x, y), We(x, y) and Wd(x, y) are the sound source powers per unit area of side a, side b, side c, side e and side d, respectively, which can be derived from the following formulas (11):

Fe=Fa=F−k1 cos ωt−∫∫SaWa(x,y)dxdy−∫∫SeWe(x,y)dxdy−f
Fb=−F+k1 cos ωt+∫∫SbWb(x,y)dxdy−∫∫SeWe(x,y)dxdy−L
Fc=Fb−k2 cos ωt−∫∫ScWc(x,y)dxdy−f−γ
Fd=Fb−k2 cos ωt−∫∫SdWd(x,y)dxdy  (11)

wherein F is the driving force generated by the transducer 22, Fa, Fb, Fc, Fd, and Fe are the driving forces of side a, side b, side c, side d and side e, respectively. As used herein, side d is the outside surface of the bottom 12. Sd is the region of side d, f is the viscous resistance formed in the small gap of the sidewalls, and f=ηΔs(dv/dy).


L is the equivalent load on human face when the vibration board acts on the human face, γ is the energy dissipated on elastic element 24, k1 and k2 are the elastic coefficients of elastic element 23 and elastic element 24 respectively, η is the fluid viscosity coefficient, dv/dy is the velocity gradient of fluid, Δs is the cross-section area of a subject (board), A is the amplitude, φ is the region of the sound field, and δ is a high order minimum (which is generated by the incompletely symmetrical shape of the housing).


The sound pressure of an arbitrary point outside the housing, generated by the vibration of the housing 10 is expressed as:











Pd=−jωρ0∫∫SdWd(xd′,yd′)·e^(jkR(xd′,yd′))/(4πR(xd′,yd′))dxd′dyd′,  (12)

wherein R(xd′,yd′)=√((x−xd′)²+(y−yd′)²+(z−zd)²) is the distance between the observation point (x, y, z) and a point on side d (xd′, yd′, zd).


Pa, Pb, Pc and Pe are functions of position. When a hole is set at an arbitrary position on the housing, if the area of the hole is Shole, the sound pressure at the hole is ∫∫SholePds.


Meanwhile, because the vibration board 21 fits human tissues tightly, the power it gives out is entirely absorbed by human tissues, so the only side that can push the air outside the housing to vibrate is side d, thus forming sound leakage. As described elsewhere, the sound leakage results from the vibrations of the housing 10. For illustrative purposes, the sound pressure generated by the housing 10 may be expressed as ∫∫ShousingPdds.


The interference between the leaked sound wave and the guided sound wave may result in a weakened sound wave, i.e., ∫∫SholePds and ∫∫ShousingPdds may be made to have the same value but opposite directions, so that the sound leakage is reduced. In some embodiments, ∫∫SholePds may be adjusted to reduce the sound leakage. ∫∫SholePds corresponds to the phases and amplitudes of the sound at one or more holes, which further relate to the dimensions of the housing of the bone conduction speaker, the vibration frequency of the transducer, the position, shape, quantity and/or size of the sound guiding holes, and whether there is damping inside the holes. Thus, the position, shape, and quantity of the sound guiding holes, and/or the damping materials, may be adjusted to reduce sound leakage.
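The adjustment described above can be pictured by treating the two surface integrals as complex phasors at a single frequency. In the sketch below (illustrative values only, written in Python with NumPy), the leaked contribution is held fixed while the phase of the guided contribution is swept; the residual is smallest when the guided wave is close to the leaked wave in magnitude and about 180 degrees out of phase:

```python
import numpy as np

# Represent the two surface integrals as complex phasors at one frequency.
# The leaked contribution (∫∫Shousing Pd ds) is fixed by the housing; the
# guided contribution (∫∫Shole P ds) can be tuned via the position, size,
# and damping of the sound guiding holes. Values are illustrative only.
leaked = 1.0 * np.exp(1j * 0.0)           # leaked sound wave phasor

for phase in np.linspace(0, 2 * np.pi, 7):
    guided = 0.9 * np.exp(1j * phase)     # guided sound wave phasor
    residual = abs(leaked + guided)       # what remains in the target region
    print(f"guided phase {np.degrees(phase):6.1f} deg -> residual {residual:.3f}")

# The residual is smallest near 180 degrees, i.e., when the two contributions
# have nearly the same value but opposite directions, as described in the text.
```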


According to the formulas above, a person having ordinary skill in the art would understand that the effectiveness of reducing sound leakage is related to the dimensions of the housing of the bone conduction speaker, the vibration frequency of the transducer, the position, shape, quantity and size of the sound guiding hole(s) and whether there is damping inside the sound guiding hole(s). Accordingly, various configurations, depending on specific needs, may be obtained by choosing specific position where the sound guiding hole(s) is located, the shape and/or quantity of the sound guiding hole(s) as well as the damping material.



FIG. 5 is a diagram illustrating the equal-loudness contour curves according to some embodiments of the present disclosure. The horizontal coordinate is frequency, while the vertical coordinate is sound pressure level (SPL). As used herein, the SPL refers to the change of atmospheric pressure after being disturbed, i.e., a surplus pressure over the atmospheric pressure caused by the disturbance. As a result, the sound pressure may reflect the amplitude of a sound wave. In FIG. 5, on each curve, the sound pressure levels corresponding to different frequencies are different, while the loudness levels felt by human ears are the same, and each curve is labeled with a number representing its loudness level. According to the loudness level curves, when the volume (sound pressure amplitude) is lower, human ears are less sensitive to sounds of high or low frequencies; when the volume is higher, human ears are more sensitive to sounds of high or low frequencies. Bone conduction speakers may generate sound in different frequency ranges, such as 1000 Hz˜4000 Hz, 1000 Hz˜3500 Hz, 1000 Hz˜3000 Hz, or 1500 Hz˜3000 Hz. The sound leakage within the above-mentioned frequency ranges may be the sound leakage to be reduced with priority.



FIG. 4D is a diagram illustrating the effect of reduced sound leakage according to some embodiments of the present disclosure, wherein the test results and calculation results are close in the above range. The bone conduction speaker being tested includes a cylindrical housing, which includes a sidewall and a bottom, as described in FIGS. 4A and 4B. The housing has a radius of 22 mm and a sidewall height of 14 mm, with a plurality of sound guiding holes set on the upper portion of the sidewall of the housing. The openings of the sound guiding holes are rectangular, and the sound guiding holes are arranged evenly on the sidewall. The target region where the sound leakage is to be reduced is 50 cm away from the outside of the bottom of the housing. The path of the leaked sound wave to the target region and the path of the sound wave spreading from the surface of the transducer 22 through the sound guiding holes 30 to the target region differ such that the two waves have a phase difference of about 180 degrees at the target region. As shown, the leaked sound wave in the target region is dramatically reduced or even eliminated.


According to the embodiments in this disclosure, the effectiveness of reducing sound leakage after setting sound guiding holes is significant. As shown in FIG. 4D, the bone conduction speaker having sound guiding holes greatly reduces the sound leakage compared to the bone conduction speaker without sound guiding holes.


In the tested frequency range, after setting sound guiding holes, the sound leakage is reduced by about 10 dB on average. Specifically, in the frequency range of 1500 Hz˜3000 Hz, the sound leakage is reduced by over 10 dB. In the frequency range of 2000 Hz˜2500 Hz, the sound leakage is reduced by over 20 dB compared to the scheme without sound guiding holes.
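For reference, these decibel figures can be translated into sound pressure ratios with the standard conversion (a generic calculation, not data from the measurements above):

```python
# Convert a reduction of the sound pressure level in dB to a pressure ratio.
def pressure_ratio(delta_db: float) -> float:
    # An SPL reduction of delta_db means p_after / p_before = 10 ** (-delta_db / 20).
    return 10 ** (-delta_db / 20.0)

for reduction in (5, 10, 20):
    print(f"{reduction} dB reduction -> remaining pressure ratio "
          f"{pressure_ratio(reduction):.2f}")
# 5 dB -> 0.56, 10 dB -> 0.32, 20 dB -> 0.10
```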


A person having ordinary skill in the art can understand from the above-mentioned formulas that when the dimensions of the bone conduction speaker, the target region for reducing sound leakage, and the frequencies of the sound waves differ, the position, shape, and quantity of the sound guiding holes also need to be adjusted accordingly.


For example, in a cylindrical housing, according to different needs, a plurality of sound guiding holes may be set on the sidewall and/or the bottom of the housing. Preferably, the sound guiding holes may be set on the upper portion and/or lower portion of the sidewall of the housing. The quantity of the sound guiding holes set on the sidewall of the housing is no less than two. Preferably, the sound guiding holes may be arranged evenly or unevenly in one or more circles with respect to the center of the bottom. In some embodiments, the sound guiding holes may be arranged in at least one circle. In some embodiments, one sound guiding hole may be set on the bottom of the housing. In some embodiments, the sound guiding hole may be set at the center of the bottom of the housing.


The quantity of the sound guiding holes can be one or more. Preferably, multiple sound guiding holes may be set symmetrically on the housing. In some embodiments, there are 6-8 circularly arranged sound guiding holes.


The openings (and cross sections) of the sound guiding holes may be circles, ellipses, rectangles, or slits. A slit generally means a slit along a straight line, a curved line, or an arc. Different sound guiding holes in one bone conduction speaker may have the same shape or different shapes.


A person having ordinary skill in the art can understand that the sidewall of the housing may not be cylindrical, and the sound guiding holes can be arranged asymmetrically as needed. Various configurations may be obtained by setting different combinations of the shape, quantity, and position of the sound guiding holes. Some other embodiments are described below along with the figures.


In some embodiments, the leaked sound wave may be generated by a portion of the housing 10. The portion of the housing may be the sidewall 11 of the housing 10 and/or the bottom 12 of the housing 10. Merely by way of example, the leaked sound wave may be generated by the bottom 12 of the housing 10. The guided sound wave output through the sound guiding hole(s) 30 may interfere with the leaked sound wave generated by the portion of the housing 10. The interference may enhance or reduce a sound pressure level of the guided sound wave and/or leaked sound wave in the target region.


In some embodiments, the portion of the housing 10 that generates the leaked sound wave may be regarded as a first sound source (e.g., the sound source 1 illustrated in FIG. 3), and the sound guiding hole(s) 30 or a part thereof may be regarded as a second sound source (e.g., the sound source 2 illustrated in FIG. 3). Merely for illustration purposes, if the size of the sound guiding hole on the housing 10 is small, the sound guiding hole may be approximately regarded as a point sound source. In some embodiments, any number or count of sound guiding holes provided on the housing 10 for outputting sound may be approximated as a single point sound source. Similarly, for simplicity, the portion of the housing 10 that generates the leaked sound wave may also be approximately regarded as a point sound source. In some embodiments, both the first sound source and the second sound source may approximately be regarded as point sound sources (also referred to as two-point sound sources).



FIG. 4E is a schematic diagram illustrating exemplary two-point sound sources according to some embodiments of the present disclosure. The sound field pressure p generated by a single point sound source may satisfy Equation (13):










p=jωρ0Q0/(4πr)·e^(j(ωt−kr)),  (13)

where ω denotes the angular frequency, ρ0 denotes the air density, r denotes the distance between a target point and the sound source, Q0 denotes the volume velocity of the sound source, and k denotes the wave number. It may be concluded that the magnitude of the sound field pressure of a point sound source is inversely proportional to the distance to the point sound source.
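A short numerical sketch of Equation (13) (written in Python with NumPy; the frequency and volume velocity below are assumed values) confirms this inverse proportionality:

```python
import numpy as np

# Sound field pressure of a single point source, Equation (13):
# p = jωρ0 Q0 / (4πr) · exp(j(ωt − kr))
rho0 = 1.21              # air density, kg/m^3
c = 343.0                # speed of sound, m/s
f = 1000.0               # frequency, Hz (assumed)
omega = 2 * np.pi * f
k = omega / c            # wave number
Q0 = 1e-5                # volume velocity of the source, m^3/s (assumed)
t = 0.0

def point_source_pressure(r: float) -> complex:
    return 1j * omega * rho0 * Q0 / (4 * np.pi * r) * np.exp(1j * (omega * t - k * r))

for r in (0.01, 0.1, 0.5, 1.0):
    print(f"r = {r:4.2f} m -> |p| = {abs(point_source_pressure(r)):.4f} Pa")
# Doubling r halves |p|: the pressure magnitude is inversely proportional
# to the distance from the point sound source.
```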


It should be noted that, the sound guiding hole(s) for outputting sound as a point sound source may only serve as an explanation of the principle and effect of the present disclosure, and the shape and/or size of the sound guiding hole(s) may not be limited in practical applications. In some embodiments, if the area of the sound guiding hole is large, the sound guiding hole may also be equivalent to a planar sound source. Similarly, if an area of the portion of the housing 10 that generates the leaked sound wave is large (e.g., the portion of the housing 10 is a vibration surface or a sound radiation surface), the portion of the housing 10 may also be equivalent to a planar sound source. For those skilled in the art, without creative activities, it may be known that sounds generated by structures such as sound guiding holes, vibration surfaces, and sound radiation surfaces may be equivalent to point sound sources at the spatial scale discussed in the present disclosure, and may have consistent sound propagation characteristics and the same mathematical description method. Further, for those skilled in the art, without creative activities, it may be known that the acoustic effect achieved by the two-point sound sources may also be implemented by alternative acoustic structures. According to actual situations, the alternative acoustic structures may be modified and/or combined discretionarily, and the same acoustic output effect may be achieved.


The two-point sound sources may be formed such that the guided sound wave output from the sound guiding hole(s) may interfere with the leaked sound wave generated by the portion of the housing 10. The interference may reduce a sound pressure level of the leaked sound wave in the surrounding environment (e.g., the target region). For convenience, the sound waves output from an acoustic output device (e.g., the bone conduction speaker) to the surrounding environment may be referred to as far-field leakage since it may be heard by others in the environment. The sound waves output from the acoustic output device to the ears of the user may also be referred to as near-field sound since a distance between the bone conduction speaker and the user may be relatively short. In some embodiments, the sound waves output from the two-point sound sources may have a same frequency or frequency range (e.g., 800 Hz, 1000 Hz, 1500 Hz, 3000 Hz, etc.). In some embodiments, the sound waves output from the two-point sound sources may have a certain phase difference. In some embodiments, the sound guiding hole includes a damping layer. The damping layer may be, for example, a tuning paper, a tuning cotton, a nonwoven fabric, a silk, a cotton, a sponge, or a rubber. The damping layer may be configured to adjust the phase of the guided sound wave in the target region. The acoustic output device described herein may include a bone conduction speaker or an air conduction speaker. For example, a portion of the housing (e.g., the bottom of the housing) of the bone conduction speaker may be treated as one of the two-point sound sources, and at least one sound guiding holes of the bone conduction speaker may be treated as the other one of the two-point sound sources. As another example, one sound guiding hole of an air conduction speaker may be treated as one of the two-point sound sources, and another sound guiding hole of the air conduction speaker may be treated as the other one of the two-point sound sources. It should be noted that, although the construction of two-point sound sources may be different in bone conduction speaker and air conduction speaker, the principles of the interference between the various constructed two-point sound sources are the same. Thus, the equivalence of the two-point sound sources in a bone conduction speaker disclosed elsewhere in the present disclosure is also applicable for an air conduction speaker.


In some embodiments, when the position and phase difference of the two-point sound sources meet certain conditions, the acoustic output device may output different sound effects in the near field (for example, the position of the user's ear) and the far field. For example, if the phases of the point sound sources corresponding to the portion of the housing 10 and the sound guiding hole(s) are opposite, that is, an absolute value of the phase difference between the two-point sound sources is 180 degrees, the far-field leakage may be reduced according to the principle of reversed phase cancellation.
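The different near-field and far-field behavior of such two-point sound sources can be sketched by summing two point-source fields of the form of Equation (13) with opposite phases. The source spacing, source strength, and observation points below are assumptions chosen for illustration; the near point is taken close to one source (roughly where the user's ear would be) and the far point roughly equidistant from both sources:

```python
import numpy as np

# Two point sources of equal strength and opposite phase, separated by d.
rho0, c = 1.21, 343.0
f = 1000.0
omega = 2 * np.pi * f
k = omega / c
Q0 = 1e-5
d = 0.01                                 # spacing between the two sources, m (assumed)

def p(r: float, phase: float = 0.0) -> complex:
    # Point-source field as in Equation (13), at t = 0, with an extra source phase.
    return 1j * omega * rho0 * Q0 / (4 * np.pi * r) * np.exp(1j * (-k * r + phase))

src1 = np.array([0.0, 0.0])              # e.g. the portion of the housing (leaked wave)
src2 = np.array([d, 0.0])                # e.g. the sound guiding hole (guided wave)
near = np.array([-0.001, 0.0])           # near-field point, very close to src1
far = np.array([d / 2, 0.5])             # far-field point, roughly equidistant from both

for name, pt in (("near field", near), ("far field", far)):
    r1 = float(np.linalg.norm(pt - src1))
    r2 = float(np.linalg.norm(pt - src2))
    total = p(r1) + p(r2, phase=np.pi)   # the two sources are 180 degrees apart
    print(f"{name}: |p1 + p2| / |p1| = {abs(total) / abs(p(r1)):.2f}")

# The ratio stays close to 1 in the near field (the useful sound is largely kept)
# and drops toward 0 in the far field (the leakage is suppressed).
```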


In some embodiments, the interference between the guided sound wave and the leaked sound wave at a specific frequency may relate to a distance between the sound guiding hole(s) and the portion of the housing 10. For example, if the sound guiding hole(s) are set at the upper portion of the sidewall of the housing 10 (as illustrated in FIG. 4A), the distance between the sound guiding hole(s) and the portion of the housing 10 may be large. Correspondingly, the frequencies of sound waves generated by such two-point sound sources may be in a mid-low frequency range (e.g., 1500-2000 Hz, 1500-2500 Hz, etc.). Referring to FIG. 4D, the interference may reduce the sound pressure level of the leaked sound wave in the mid-low frequency range (i.e., the sound leakage is low).


Merely by way of example, the low frequency range may refer to frequencies in a range below a first frequency threshold. The high frequency range may refer to frequencies in a range exceeding a second frequency threshold. The first frequency threshold may be lower than the second frequency threshold. The mid-low frequency range may refer to frequencies in a range between the first frequency threshold and the second frequency threshold. For example, the first frequency threshold may be 1000 Hz, and the second frequency threshold may be 3000 Hz. The low frequency range may refer to frequencies in a range below 1000 Hz, the high frequency range may refer to frequencies in a range above 3000 Hz, and the mid-low frequency range may refer to frequencies in a range of 1000-2000 Hz, 1500-2500 Hz, etc. In some embodiments, a middle frequency range and a mid-high frequency range may also be determined between the first frequency threshold and the second frequency threshold. In some embodiments, the mid-low frequency range and the low frequency range may partially overlap, and the mid-high frequency range and the high frequency range may partially overlap. For example, the high frequency range may refer to frequencies in a range above 3000 Hz, and the mid-high frequency range may refer to frequencies in a range of 2800-3500 Hz. It should be noted that the low frequency range, the mid-low frequency range, the middle frequency range, the mid-high frequency range, and/or the high frequency range may be set flexibly according to different situations, and are not limited herein.
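Using the example thresholds quoted above (1000 Hz and 3000 Hz), one way to read these band definitions is sketched below; the intermediate band edges are placeholders, since the disclosure allows the ranges to be set flexibly and even to overlap:

```python
# Illustrative frequency band classification using the example thresholds
# from the text: first threshold 1000 Hz, second threshold 3000 Hz.
FIRST_THRESHOLD = 1000.0   # Hz
SECOND_THRESHOLD = 3000.0  # Hz

def classify(frequency_hz: float) -> str:
    if frequency_hz < FIRST_THRESHOLD:
        return "low"
    if frequency_hz > SECOND_THRESHOLD:
        return "high"
    # Between the two thresholds: split into mid-low / middle / mid-high here
    # purely for illustration; the edges are not fixed by the disclosure.
    if frequency_hz <= 2000.0:
        return "mid-low"
    if frequency_hz <= 2500.0:
        return "middle"
    return "mid-high"

for f in (800, 1500, 2200, 2800, 3500):
    print(f, "Hz ->", classify(f))
```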


In some embodiments, the frequencies of the guided sound wave and the leaked sound wave may be set in a low frequency range (e.g., below 800 Hz, below 1200 Hz, etc.). In some embodiments, the amplitudes of the sound waves generated by the two-point sound sources may be set to be different in the low frequency range. For example, the amplitude of the guided sound wave may be smaller than the amplitude of the leaked sound wave. In this case, the interference may not reduce sound pressure of the near-field sound in the low-frequency range. The sound pressure of the near-field sound may be improved in the low-frequency range. The volume of the sound heard by the user may be improved.


In some embodiments, the amplitude of the guided sound wave may be adjusted by setting an acoustic resistance structure in the sound guiding hole(s) 30. The material of the acoustic resistance structure disposed in the sound guiding hole 30 may include, but not limited to, plastics (e.g., high-molecular polyethylene, blown nylon, engineering plastics, etc.), cotton, nylon, fiber (e.g., glass fiber, carbon fiber, boron fiber, graphite fiber, graphene fiber, silicon carbide fiber, or aramid fiber), other single or composite materials, other organic and/or inorganic materials, etc. The thickness of the acoustic resistance structure may be 0.005 mm, 0.01 mm, 0.02 mm, 0.5 mm, 1 mm, 2 mm, etc. The structure of the acoustic resistance structure may be in a shape adapted to the shape of the sound guiding hole. For example, the acoustic resistance structure may have a shape of a cylinder, a sphere, a cubic, etc. In some embodiments, the materials, thickness, and structures of the acoustic resistance structure may be modified and/or combined to obtain a desirable acoustic resistance structure. In some embodiments, the acoustic resistance structure may be implemented by the damping layer.


In some embodiments, the amplitude of the guided sound wave output from the sound guiding hole may be relatively low (e.g., zero or almost zero). The difference between the guided sound wave and the leaked sound wave may be maximized, thus achieving a relatively large sound pressure in the near field. In this case, the sound leakage of the acoustic output device having sound guiding holes may be almost the same as the sound leakage of the acoustic output device without sound guiding holes in the low frequency range (e.g., as shown in FIG. 4D).


Embodiment Two


FIG. 6 is a flowchart of an exemplary method of reducing sound leakage of a bone conduction speaker according to some embodiments of the present disclosure. At 601, a bone conduction speaker including a vibration board 21 touching human skin and passing vibrations, a transducer 22, and a housing 10 is provided, wherein at least one sound guiding hole 30 is arranged on the housing 10. At 602, the transducer 22 drives the vibration board 21 to vibrate. At 603, a leaked sound wave is formed due to the vibrations of the housing 10 and is transmitted in the air. At 604, a guided sound wave passes through the at least one sound guiding hole 30 from the inside to the outside of the housing 10. The guided sound wave interferes with the leaked sound wave, reducing the sound leakage of the bone conduction speaker.


The sound guiding holes 30 are preferably set at different positions of the housing 10.


The effectiveness of reducing sound leakage may be determined by the formulas and method as described above, based on which the positions of sound guiding holes may be determined.


A damping layer is preferably set in a sound guiding hole 30 to adjust the phase and amplitude of the sound wave transmitted through the sound guiding hole 30.


In some embodiments, different sound guiding holes may generate different sound waves having a same phase to reduce the leaked sound wave having the same wavelength. In some embodiments, different sound guiding holes may generate different sound waves having different phases to reduce the leaked sound waves having different wavelengths.


In some embodiments, different portions of a sound guiding hole 30 may be configured to generate sound waves having a same phase to reduce the leaked sound waves with the same wavelength. In some embodiments, different portions of a sound guiding hole 30 may be configured to generate sound waves having different phases to reduce the leaked sound waves with different wavelengths.


Additionally, the sound wave inside the housing may be processed to basically have the same value but opposite phases with the leaked sound wave, so that the sound leakage may be further reduced.


Embodiment Three


FIGS. 7A and 7B are schematic structures illustrating an exemplary bone conduction speaker according to some embodiments of the present disclosure. The bone conduction speaker may include an open housing 10, a vibration board 21, and a transducer 22. The housing 10 may be cylindrical and have a sidewall and a bottom. A plurality of sound guiding holes 30 may be arranged on the lower portion of the sidewall (i.e., from about the ⅔ height of the sidewall to the bottom). The quantity of the sound guiding holes 30 may be 8, and the openings of the sound guiding holes 30 may be rectangular. The sound guiding holes 30 may be arranged evenly or unevenly in one or more circles on the sidewall of the housing 10.


In this embodiment, the transducer 22 is preferably implemented based on the principle of electromagnetic transduction. The transducer may include components such as a magnetizer, a voice coil, etc., which may be located inside the housing and may generate synchronous vibrations with a same frequency.



FIG. 7C is a diagram illustrating reduced sound leakage according to some embodiments of the present disclosure. In the frequency range of 1400 Hz˜4000 Hz, the sound leakage is reduced by more than 5 dB, and in the frequency range of 2250 Hz˜2500 Hz, the sound leakage is reduced by more than 20 dB.


In some embodiments, the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 may also be approximately regarded as a point sound source. In some embodiments, the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 and the portion of the housing 10 that generates the leaked sound wave may constitute two-point sound sources. The two-point sound sources may be formed such that the guided sound wave output from the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 may interfere with the leaked sound wave generated by the portion of the housing 10. The interference may reduce a sound pressure level of the leaked sound wave in the surrounding environment (e.g., the target region) at a specific frequency or frequency range.


In some embodiments, the sound waves output from the two-point sound sources may have a same frequency or frequency range (e.g., 1000 Hz, 2500 Hz, 3000 Hz, etc.). In some embodiments, the sound waves output from the first two-point sound sources may have a certain phase difference. In this case, the interference between the sound waves generated by the first two-point sound sources may reduce a sound pressure level of the leaked sound wave in the target region. When the position and phase difference of the first two-point sound sources meet certain conditions, the acoustic output device may output different sound effects in the near field (for example, the position of the user's ear) and the far field. For example, if the phases of the first two-point sound sources are opposite, that is, an absolute value of the phase difference between the first two-point sound sources is 180 degrees, the far-field leakage may be reduced.


In some embodiments, the interference between the guided sound wave and the leaked sound wave may relate to frequencies of the guided sound wave and the leaked sound wave and/or a distance between the sound guiding hole(s) and the portion of the housing 10. For example, if the sound guiding hole(s) are set at the lower portion of the sidewall of the housing 10 (as illustrated in FIG. 7A), the distance between the sound guiding hole(s) and the portion of the housing 10 may be small. Correspondingly, the frequencies of sound waves generated by such two-point sound sources may be in a high frequency range (e.g., above 3000 Hz, above 3500 Hz, etc.). Referring to FIG. 7C, the interference may reduce the sound pressure level of the leaked sound wave in the high frequency range.


Embodiment Four


FIGS. 8A and 8B are schematic structures illustrating an exemplary bone conduction speaker according to some embodiments of the present disclosure. The bone conduction speaker may include an open housing 10, a vibration board 21, and a transducer 22. The housing 10 is cylindrical and has a sidewall and a bottom. The sound guiding holes 30 may be arranged on the central portion of the sidewall of the housing (i.e., from about the ⅓ height of the sidewall to the ⅔ height of the sidewall). The quantity of the sound guiding holes 30 may be 8, and the openings (and cross sections) of the sound guiding holes 30 may be rectangular. The sound guiding holes 30 may be arranged evenly or unevenly in one or more circles on the sidewall of the housing 10.


In this embodiment, the transducer 22 may preferably be implemented based on the principle of electromagnetic transduction. The transducer 22 may include components such as a magnetizer, a voice coil, etc., which may be placed inside the housing and may generate synchronous vibrations with the same frequency.



FIG. 8C is a diagram illustrating reduced sound leakage. In the frequency range of 1000 Hz˜4000 Hz, the effectiveness of reducing sound leakage is great. For example, in the frequency range of 1400 Hz˜2900 Hz, the sound leakage is reduced by more than 10 dB; in the frequency range of 2200 Hz˜2500 Hz, the sound leakage is reduced by more than 20 dB.


This illustrates that the effectiveness of reducing sound leakage can be adjusted by changing the positions of the sound guiding holes, while keeping other parameters relating to the sound guiding holes unchanged.


Embodiment Five


FIGS. 9A and 9B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure. The bone conduction speaker may include an open housing 10, a vibration board 21, and a transducer 22. The housing 10 is cylindrical, with a sidewall and a bottom. One or more perforative sound guiding holes 30 may be set along the circumference of the bottom. In some embodiments, there may be 8 sound guiding holes 30 arranged evenly or unevenly in one or more circles on the bottom of the housing 10. In some embodiments, the shape of one or more of the sound guiding holes 30 may be rectangular.


In this embodiment, the transducer 22 may preferably be implemented based on the principle of electromagnetic transduction. The transducer 22 may include components such as a magnetizer, a voice coil, etc., which may be placed inside the housing and may generate synchronous vibrations with the same frequency.



FIG. 9C is a diagram illustrating the effect of reduced sound leakage. In the frequency range of 1000 Hz˜3000 Hz, the effectiveness of reducing sound leakage is outstanding. For example, in the frequency range of 1700 Hz˜2700 Hz, the sound leakage is reduced by more than 10 dB; in the frequency range of 2200 Hz˜2400 Hz, the sound leakage is reduced by more than 20 dB.


Embodiment Six


FIGS. 10A and 10B are schematic structures of an exemplary bone conduction speaker according to some embodiments of the present disclosure. The bone conduction speaker may include an open housing 10, a vibration board 21, and a transducer 22. One or more perforative sound guiding holes 30 may be arranged on both the upper and lower portions of the sidewall of the housing 10. The sound guiding holes 30 may be arranged evenly or unevenly in one or more circles on the upper and lower portions of the sidewall of the housing 10. In some embodiments, the quantity of sound guiding holes 30 in every circle may be 8, and the upper portion sound guiding holes and the lower portion sound guiding holes may be symmetrical about the central cross section of the housing 10. In some embodiments, the shape of the sound guiding holes 30 may be circular.


The shape of the sound guiding holes on the upper portion and the shape of the sound guiding holes on the lower portion may be different. One or more damping layers may be arranged in the sound guiding holes to reduce leaked sound waves of the same wavelength (or frequency), or to reduce leaked sound waves of different wavelengths.



FIG. 10C is a diagram illustrating the effect of reducing sound leakage according to some embodiments of the present disclosure. In the frequency range of 1000 Hz˜4000 Hz, the effectiveness of reducing sound leakage is outstanding. For example, in the frequency range of 1600 Hz˜2700 Hz, the sound leakage is reduced by more than 15 dB; in the frequency range of 2000 Hz˜2500 Hz, where the effectiveness of reducing sound leakage is most outstanding, the sound leakage is reduced by more than 20 dB. Compared to embodiment three, this scheme has a relatively balanced sound leakage reduction effect over various frequency ranges, and this effect is better than that of schemes where the height of the holes is fixed, such as the schemes of embodiment three, embodiment four, embodiment five, and so on.


In some embodiments, the sound guiding hole(s) at the upper portion of the sidewall of the housing 10 (also referred to as first hole(s)) may be approximately regarded as a point sound source. In some embodiments, the first hole(s) and the portion of the housing 10 that generates the leaked sound wave may constitute two-point sound sources (also referred to as first two-point sound sources). As for the first two-point sound sources, the guided sound wave generated by the first hole(s) (also referred to as first guided sound wave) may interfere with the leaked sound wave or a portion thereof generated by the portion of the housing 10 in a first region. In some embodiments, the sound waves output from the first two-point sound sources may have a same frequency (e.g., a first frequency). In some embodiments, the sound waves output from the first two-point sound sources may have a certain phase difference. In this case, the interference between the sound waves generated by the first two-point sound sources may reduce a sound pressure level of the leaked sound wave in the target region. When the position and phase difference of the first two-point sound sources meet certain conditions, the acoustic output device may output different sound effects in the near field (for example, the position of the user's ear) and the far field. For example, if the phases of the first two-point sound sources are opposite, that is, an absolute value of the phase difference between the first two-point sound sources is 180 degrees, the far-field leakage may be reduced according to the principle of reversed phase cancellation.


In some embodiments, the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 (also referred to as second hole(s)) may also be approximately regarded as another point sound source. Similarly, the second hole(s) and the portion of the housing 10 that generates the leaked sound wave may also constitute two-point sound sources (also referred to as second two-point sound sources). As for the second two-point sound sources, the guided sound wave generated by the second hole(s) (also referred to as second guided sound wave) may interfere with the leaked sound wave or a portion thereof generated by the portion of the housing 10 in a second region. The second region may be the same as or different from the first region. In some embodiments, the sound waves output from the second two-point sound sources may have a same frequency (e.g., a second frequency).


In some embodiments, the first frequency and the second frequency may be in certain frequency ranges. In some embodiments, the frequency of the guided sound wave output from the sound guiding hole(s) may be adjustable. In some embodiments, the frequency of the first guided sound wave and/or the second guided sound wave may be adjusted by one or more acoustic routes. The acoustic routes may be coupled to the first hole(s) and/or the second hole(s). The first guided sound wave and/or the second guided sound wave may be propagated along an acoustic route having a specific frequency selection characteristic. That is, the first guided sound wave and the second guided sound wave may be transmitted to their corresponding sound guiding holes via different acoustic routes. For example, the first guided sound wave and/or the second guided sound wave may be propagated along an acoustic route with a low-pass characteristic to a corresponding sound guiding hole to output a guided sound wave of a low frequency. In this process, the high frequency component of the sound wave may be absorbed or attenuated by the acoustic route with the low-pass characteristic. Similarly, the first guided sound wave and/or the second guided sound wave may be propagated along an acoustic route with a high-pass characteristic to the corresponding sound guiding hole to output a guided sound wave of a high frequency. In this process, the low frequency component of the sound wave may be absorbed or attenuated by the acoustic route with the high-pass characteristic.
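As a rough illustration of the frequency selection described above, the sketch below stands in for the physical acoustic routes with simple digital low-pass and high-pass filters; the sample rate, cutoff, and test signal are assumptions, and the disclosure itself realizes the filtering acoustically rather than digitally.

```python
import numpy as np
from scipy.signal import butter, lfilter

# Stand-in for the acoustic routes: a low-pass route feeds the first hole(s)
# and a high-pass route feeds the second hole(s). Values are assumptions.

fs = 16000                                   # sample rate, Hz
t = np.arange(0, 0.05, 1 / fs)
wave_in_housing = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 3000 * t)

def acoustic_route(signal, kind, cutoff_hz):
    """Apply a low-pass ('low') or high-pass ('high') frequency selection."""
    b, a = butter(2, cutoff_hz / (fs / 2), btype=kind)
    return lfilter(b, a, signal)

first_guided_wave = acoustic_route(wave_in_housing, "low", 1000)    # low-frequency output
second_guided_wave = acoustic_route(wave_in_housing, "high", 1000)  # high-frequency output
```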



FIG. 10D is a schematic diagram illustrating an acoustic route according to some embodiments of the present disclosure. FIG. 10E is a schematic diagram illustrating another acoustic route according to some embodiments of the present disclosure. FIG. 10F is a schematic diagram illustrating a further acoustic route according to some embodiments of the present disclosure. In some embodiments, structures such as a sound tube, a sound cavity, a sound resistance, etc., may be set in the acoustic route for adjusting frequencies for the sound waves (e.g., by filtering certain frequencies). It should be noted that FIGS. 10D-10F may be provided as examples of the acoustic routes, and are not intended to be limiting.


As shown in FIG. 10D, the acoustic route may include one or more lumen structures. The one or more lumen structures may be connected in series. An acoustic resistance material may be provided in each of at least one of the one or more lumen structures to adjust acoustic impedance of the entire structure to achieve a desirable sound filtering effect. For example, the acoustic impedance may be in a range of 5 MKS rayls to 500 MKS rayls. In some embodiments, a high-pass sound filtering, a low-pass sound filtering, and/or a band-pass filtering effect of the acoustic route may be achieved by adjusting a size of each of at least one of the one or more lumen structures and/or a type of acoustic resistance material in each of at least one of the one or more lumen structures. The acoustic resistance materials may include, but are not limited to, plastic, textile, metal, permeable material, woven material, screen material or mesh material, porous material, particulate material, polymer material, or the like, or any combination thereof. By setting the acoustic routes of different acoustic impedances, the acoustic output from the sound guiding holes may be acoustically filtered. In this case, the guided sound waves may have different frequency components.


As shown in FIG. 10E, the acoustic route may include one or more resonance cavities. The one or more resonance cavities may be, for example, Helmholtz cavities. In some embodiments, a high-pass sound filtering, a low-pass sound filtering, and/or a band-pass filtering effect of the acoustic route may be achieved by adjusting a size of each of at least one of the one or more resonance cavities and/or a type of acoustic resistance material in each of at least one of the one or more resonance cavities.
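For reference, the resonance frequency of an ideal Helmholtz cavity follows the classical relation f0 = (c / 2π)·sqrt(S / (V·L)); the sketch below evaluates it for assumed neck and cavity dimensions (the disclosure does not specify particular dimensions, and a practical design would also include an end correction on the neck length).

```python
import math

def helmholtz_resonance(c=343.0, neck_area=2e-5, cavity_volume=1e-6, neck_length=2e-3):
    """f0 = (c / (2*pi)) * sqrt(S / (V * L)) for an ideal Helmholtz resonator.
    The dimensions above are illustrative assumptions, not disclosed values."""
    return c / (2 * math.pi) * math.sqrt(neck_area / (cavity_volume * neck_length))

print(f"assumed cavity resonates near {helmholtz_resonance():.0f} Hz")
```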


As shown in FIG. 10F, the acoustic route may include a combination of one or more lumen structures and one or more resonance cavities. In some embodiments, a high-pass sound filtering, a low-pass sound filtering, and/or a band-pass filtering effect of the acoustic route may be achieved by adjusting a size of each of at least one of the one or more lumen structures and one or more resonance cavities and/or a type of acoustic resistance material in each of at least one of the one or more lumen structures and one or more resonance cavities. It should be noted that the structures exemplified above are for illustration purposes; various other acoustic structures, such as a tuning net, tuning cotton, etc., may also be provided.


In some embodiments, the interference between the leaked sound wave and the guided sound wave may relate to frequencies of the guided sound wave and the leaked sound wave and/or a distance between the sound guiding hole(s) and the portion of the housing 10. In some embodiments, the portion of the housing that generates the leaked sound wave may be the bottom of the housing 10. The first hole(s) may have a larger distance to the portion of the housing 10 than the second hole(s). In some embodiments, the frequency of the first guided sound wave output from the first hole(s) (e.g., the first frequency) and the frequency of the second guided sound wave output from the second hole(s) (e.g., the second frequency) may be different.


In some embodiments, the first frequency and the second frequency may be associated with the distance between the at least one sound guiding hole and the portion of the housing 10 that generates the leaked sound wave. In some embodiments, the first frequency may be set in a low frequency range. The second frequency may be set in a high frequency range. The low frequency range and the high frequency range may or may not overlap.


In some embodiments, the frequency of the leaked sound wave generated by the portion of the housing 10 may be in a wide frequency range. The wide frequency range may include, for example, the low frequency range and the high frequency range or a portion of the low frequency range and the high frequency range. For example, the leaked sound wave may include a first frequency in the low frequency range and a second frequency in the high frequency range. In some embodiments, the leaked sound wave of the first frequency and the leaked sound wave of the second frequency may be generated by different portions of the housing 10. For example, the leaked sound wave of the first frequency may be generated by the sidewall of the housing 10, while the leaked sound wave of the second frequency may be generated by the bottom of the housing 10. As another example, the leaked sound wave of the first frequency may be generated by the bottom of the housing 10, while the leaked sound wave of the second frequency may be generated by the sidewall of the housing 10. In some embodiments, the frequency of the leaked sound wave generated by the portion of the housing 10 may relate to parameters including the mass, the damping, the stiffness, etc., of the different portions of the housing 10, the frequency of the transducer 22, etc.
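As a simplified illustration of how the mass, damping, and stiffness of a housing portion set the frequency of the sound it radiates, the sketch below treats that portion as a single-degree-of-freedom oscillator; this lumped model and its numeric values are assumptions, not a model given in the disclosure.

```python
import math

def housing_portion_resonance(stiffness, mass, damping):
    """Undamped and damped resonance frequencies (Hz) of a housing portion
    modeled as a single-degree-of-freedom mass-spring-damper (assumed model)."""
    f_n = math.sqrt(stiffness / mass) / (2 * math.pi)      # undamped natural frequency
    zeta = damping / (2 * math.sqrt(stiffness * mass))     # damping ratio
    f_d = f_n * math.sqrt(max(0.0, 1 - zeta ** 2))         # damped resonance frequency
    return f_n, f_d

# Assumed values: a 2 g housing bottom with 300 kN/m effective stiffness.
print(housing_portion_resonance(stiffness=3e5, mass=2e-3, damping=0.5))
```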


In some embodiments, the characteristics (amplitude, frequency, and phase) of the first two-point sound sources and the second two-point sound sources may be adjusted via various parameters of the acoustic output device (e.g., electrical parameters of the transducer 22, the mass, stiffness, size, structure, material, etc., of the portion of the housing 10, the position, shape, structure, and/or number (or count) of the sound guiding hole(s)) so as to form a sound field with a particular spatial distribution. In some embodiments, a frequency of the first guided sound wave is smaller than a frequency of the second guided sound wave.


A combination of the first two-point sound sources and the second two-point sound sources may improve sound effects both in the near field and the far field.


Referring to FIGS. 4D, 7C, and 10C, by designing different two-point sound sources with different distances, the sound leakage in both the low frequency range and the high frequency range may be properly suppressed. In some embodiments, the closer distance between the second two-point sound sources may be more suitable for suppressing the sound leakage in the far field, and the relatively longer distance between the first two-point sound sources may be more suitable for reducing the sound leakage in the near field. In some embodiments, the amplitudes of the sound waves generated by the first two-point sound sources may be set to be different in the low frequency range. For example, the amplitude of the guided sound wave may be smaller than the amplitude of the leaked sound wave. In this case, the sound pressure level of the near-field sound may be improved. The volume of the sound heard by the user may be increased.
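The near-field/far-field trade-off described above can be explored numerically. The sketch below (an assumption-laden illustration, not disclosed data) evaluates the residual pressure of an anti-phase source pair at a near-field point and a far-field point for two candidate spacings.

```python
import numpy as np

c, f = 343.0, 2000.0          # assumed speed of sound and frequency
k = 2 * np.pi * f / c

def residual_pressure(spacing, observation_distance):
    """|p| of two equal-amplitude, opposite-phase monopoles on the observation axis."""
    p_near_source = np.exp(-1j * k * observation_distance) / observation_distance
    p_far_source = -np.exp(-1j * k * (observation_distance + spacing)) / (observation_distance + spacing)
    return abs(p_near_source + p_far_source)

for d in (0.005, 0.02):                              # two candidate spacings, m
    near = residual_pressure(d, 0.02)                # roughly the ear position
    far = residual_pressure(d, 1.0)                  # a far-field listener
    print(f"spacing {d * 1000:.0f} mm: near-field {near:.2f}, far-field {far:.4f}")
```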


Embodiment Seven


FIGS. 11A and 11B are schematic structures illustrating a bone conduction speaker according to some embodiments of the present disclosure. The bone conduction speaker may include an open housing 10, a vibration board 21 and a transducer 22. One or more perforative sound guiding holes 30 may be set on upper and lower portions of the sidewall of the housing 10 and on the bottom of the housing 10. The sound guiding holes 30 on the sidewall are arranged evenly or unevenly in one or more circles on the upper and lower portions of the sidewall of the housing 10. In some embodiments, the quantity of sound guiding holes 30 in every circle may be 8, and the upper portion sound guiding holes and the lower portion sound guiding holes may be symmetrical about the central cross section of the housing 10. In some embodiments, the shape of the sound guiding hole 30 may be rectangular. There may be four sound guiding holes 30 on the bottom of the housing 10. The four sound guiding holes 30 may be linear-shaped along arcs, and may be arranged evenly or unevenly in one or more circles with respect to the center of the bottom. Furthermore, the sound guiding holes 30 may include a circular perforative hole at the center of the bottom.



FIG. 11C is a diagram illustrating the effect of reducing sound leakage of the embodiment. In the frequency range of 1000 Hz˜4000 Hz, the effectiveness of reducing sound leakage is outstanding. For example, in the frequency range of 1300 Hz˜3000 Hz, the sound leakage is reduced by more than 10 dB; in the frequency range of 2000 Hz˜2700 Hz, the sound leakage is reduced by more than 20 dB. Compared to embodiment three, this scheme has a relatively balanced effect of reducing sound leakage within various frequency ranges, and this effect is better than that of schemes where the heights of the holes are fixed, such as the schemes of embodiment three, embodiment four, embodiment five, etc. Compared to embodiment six, this scheme has a better effect of reducing sound leakage in the frequency ranges of 1000 Hz˜1700 Hz and 2500 Hz˜4000 Hz.


Embodiment Eight


FIGS. 12A and 12B are schematic structures illustrating a bone conduction speaker according to some embodiments of the present disclosure. The bone conduction speaker may include an open housing 10, a vibration board 21 and a transducer 22. A perforative sound guiding hole 30 may be set on the upper portion of the sidewall of the housing 10. One or more sound guiding holes may be arranged evenly or unevenly in one or more circles on the upper portion of the sidewall of the housing 10. There may be 8 sound guiding holes 30, and the shape of the sound guiding holes 30 may be circular.


After comparing calculation results and test results, the effectiveness of this embodiment is basically the same as that of embodiment one, and this embodiment can effectively reduce sound leakage.


Embodiment Nine


FIGS. 13A and 13B are schematic structures illustrating a bone conduction speaker according to some embodiments of the present disclosure. The bone conduction speaker may include an open housing 10, a vibration board 21 and a transducer 22.


The difference between this embodiment and the above-described embodiment three is that, to reduce sound leakage to a greater extent, the sound guiding holes 30 may be arranged on the upper, central, and lower portions of the sidewall 11. The sound guiding holes 30 are arranged evenly or unevenly in one or more circles. Different circles are formed by the sound guiding holes 30, one of which is set along the circumference of the bottom 12 of the housing 10. The sizes of the sound guiding holes 30 are the same.


This scheme may provide a relatively balanced effect of reducing sound leakage in various frequency ranges compared to the schemes where the positions of the holes are fixed. The effect of this design on reducing sound leakage is relatively better than that of other designs where the heights of the holes are fixed, such as embodiment three, embodiment four, embodiment five, etc.


Embodiment Ten

The sound guiding holes 30 in the above embodiments may be perforative holes without shields.


In order to adjust the effect of the sound waves guided from the sound guiding holes, a damping layer (not shown in the figures) may be located at the opening of a sound guiding hole 30 to adjust the phase and/or the amplitude of the sound wave.


There are multiple variations of materials and positions of the damping layer. For example, the damping layer may be made of materials which can damp sound waves, such as tuning paper, tuning cotton, nonwoven fabric, silk, cotton, sponge, or rubber. The damping layer may be attached to the inner wall of the sound guiding hole 30, or may shield the sound guiding hole 30 from the outside.


More preferably, the damping layers corresponding to different sound guiding holes 30 may be arranged to adjust the sound waves from different sound guiding holes to generate a same phase. The adjusted sound waves may be used to reduce leaked sound wave having the same wavelength. Alternatively, different sound guiding holes 30 may be arranged to generate different phases to reduce leaked sound wave having different wavelengths (i.e., leaked sound waves with specific wavelengths).


In some embodiments, different portions of a same sound guiding hole can be configured to generate a same phase to reduce leaked sound waves on the same wavelength (e.g., using a pre-set damping layer with the shape of stairs or steps). In some embodiments, different portions of a same sound guiding hole can be configured to generate different phases to reduce leaked sound waves on different wavelengths.
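One way to reason about the phase adjustment a damping layer would need to provide is sketched below: given the path-length difference between a guided wave and the leaked wave it targets, the extra phase required to reach anti-phase at a chosen wavelength can be computed. The model and the numeric values are assumptions for illustration only.

```python
import math

def required_damping_phase(path_difference_m, wavelength_m):
    """Extra phase (radians, in [0, 2*pi)) a damping layer would need to add so
    the guided wave arrives in anti-phase with a leaked wave of this wavelength."""
    travel_phase = 2 * math.pi * path_difference_m / wavelength_m
    return (math.pi - travel_phase) % (2 * math.pi)

# Example: 5 mm path difference, 2 kHz tone in air (wavelength ~171 mm, assumed).
print(required_damping_phase(0.005, 343.0 / 2000.0))
```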


The above-described embodiments are preferable embodiments with various configurations of the sound guiding hole(s) on the housing of a bone conduction speaker, but a person having ordinary skill in the art can understand that the embodiments do not limit the configurations of the sound guiding hole(s) to those described in this application.


In previous bone conduction speakers, the housing of the bone conduction speaker is closed, so the sound source inside the housing is sealed inside the housing. In the embodiments of the present disclosure, there can be holes in proper positions of the housing, making the sound waves inside the housing and the leaked sound waves have substantially the same amplitude and substantially opposite phases in space, so that the sound waves can interfere with each other and the sound leakage of the bone conduction speaker is reduced. Meanwhile, the volume and weight of the speaker do not increase, the reliability of the product is not compromised, and the cost is barely increased. The designs disclosed herein are easy to implement, reliable, and effective in reducing sound leakage.


In practical applications, the speaker as described elsewhere (e.g., the speaker in FIG. 4A through FIG. 13B) may take different application forms such as bracelets, glasses, helmets, watches, clothing, backpacks, smart headsets, etc. In some embodiments, an augmented reality technology and/or a virtual reality technology may be applied in the speaker so as to enhance a user's audio experience. For illustration purposes, a pair of glasses (e.g., a pair of glasses worn by a user shown in FIG. 16 or 17) with a sound output function may be provided as an example. Exemplary glasses may be or include augmented reality (AR) glasses, virtual reality (VR) glasses, etc.



FIG. 14 is a schematic diagram illustrating an exemplary speaker customized for augmented reality according to some embodiments of the present disclosure. As shown in FIG. 14, the speaker 1400 may include a sensor module 1410 and a processing engine 1420. In some embodiments, a power source assembly (not shown in FIG. 14) may also provide electrical power to the sensor module 1410 and/or the processing engine 1420.


The sensor module 1410 may include a plurality of sensors of various types. The plurality of sensors may detect status information of a user (e.g., a wearer) of the speaker. The status information may include, for example, a location of the user, a gesture of the user, a direction that the user faces, an acceleration of the user, a speech of the user, etc. A controller (e.g., the processing engine 1420) may process the detected status information, and cause one or more components of the speaker 1400 to implement various functions or methods described in the present disclosure. For example, the controller may cause at least one acoustic driver to output sound based on the detected status information. The output sound may originate from audio data from an audio source (e.g., a terminal device of the user, a virtual audio marker associated with a geographic location, etc.). The plurality of sensors may include a locating sensor 1411, an orientation sensor 1412, an inertial sensor 1413, an audio sensor 1414, and a wireless transceiver 1415. Merely for illustration, only one sensor of each type is illustrated in FIG. 14. Multiple sensors of each type may also be contemplated. For example, two or more audio sensors may be used to detect sounds from different directions.
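The data flow described above might be organized as in the following sketch, in which the detected status information is collected into a single structure and a controller step decides what the acoustic driver should output; every class, method, and field name here is an assumption for illustration, not an API of the speaker 1400.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StatusInfo:
    location: Tuple[float, float]               # (latitude, longitude) from the locating sensor
    facing_deg: float                           # heading from the orientation sensor
    acceleration: Tuple[float, float, float]    # from the inertial sensor
    speech: Optional[str]                       # transcript from the audio sensor, if any

def controller_step(status: StatusInfo, audio_source) -> Optional[bytes]:
    """Decide what, if anything, the acoustic driver should output.
    `audio_source` is a hypothetical interface to audio data (a terminal device,
    virtual audio markers, etc.)."""
    if status.speech and status.speech.lower().startswith("play"):
        return audio_source.fetch_by_command(status.speech)      # voice-triggered playback
    if audio_source.has_marker_near(status.location):
        return audio_source.fetch_nearby(status.location)        # location-based audio
    return None                                                  # nothing to output
```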


The locating sensor 1411 may determine a geographic location of the speaker 1400. The locating sensor 1411 may determine the location of the speaker 1400 based on one or more location-based detection systems such as a global positioning system (GPS), a Wi-Fi location system, an infra-red (IR) location system, a Bluetooth beacon system, etc. The locating sensor 1411 may detect changes in the geographic location of the speaker 1400 and/or a user (e.g., the user may wear the speaker 1400, or may be separated from the speaker 1400) and generate sensor data indicating the changes in the geographic location of the speaker 1400 and/or the user.


The orientation sensor 1412 may track an orientation of the user and/or the speaker 1400. The orientation sensor 1412 may include a head-tracking device and/or a torso-tracking device for detecting a direction in which the user is facing, as well as the movement of the user and/or the speaker 1400. Exemplary head-tracking devices or torso-tracking devices may include an optical-based tracking device (e.g., an optical camera), an accelerometer, a magnetometer, a gyroscope, a radar, etc. In some embodiments, the orientation sensor 1412 may detect a change in the user's orientation, such as a turning of the torso or an about-face movement, and generate sensor data indicating the change in the orientation of the body of the user.


The inertial sensor 1413 may sense gestures of the user or a body part (e.g., head, torso, limbs) of the user. The inertial sensor 1413 may include an accelerometer, a gyroscope, a magnetometer, or the like, or any combination thereof. In some embodiments, the accelerometer, the gyroscope, and/or the magnetometer may be independent components. In some embodiments, the accelerometer, the gyroscope, and/or the magnetometer may be integrated or collectively housed in a single sensor component. In some embodiments, the inertial sensor 1413 may detect an acceleration, a deceleration, a tilt level, a relative position in the three-dimensional (3D) space, etc. of the user or a body part (e.g., an arm, a finger, a leg, etc.) of the user, and generate sensor data regarding the gestures of the user accordingly.


The audio sensor 1414 may detect sound from the user, a smart device 1440, and/or ambient environment. In some embodiments, the audio sensor 1414 may include one or more microphones, or a microphone array. The one or more microphones or the microphone array may be housed within the speaker 1400 or in another device connected to the speaker 1400. In some embodiments, the one or more microphones or the microphone array may be generic microphones. In some embodiments, the one or more microphones or the microphone array may be customized for VR and/or AR.


In some embodiments, the audio sensor 1414 may be positioned so as to receive audio signals proximate to the speaker 1400, e.g., speech/voice input by the user to enable a voice control functionality. For example, the audio sensor 1414 may detect sounds of the user wearing the speaker 1400 and/or other users proximate to or interacting with the user. The audio sensor 1414 may further generate sensor data based on the received audio signals.


The wireless transceiver 1415 may communicate with other transceiver devices in distinct locations. The wireless transceiver 1415 may include a transmitter and a receiver. Exemplary wireless transceivers may include, for example, a Local Area Network (LAN) transceiver, a Wide Area Network (WAN) transceiver, a ZigBee transceiver, a Near Field Communication (NFC) transceiver, a Bluetooth (BT) transceiver, a Bluetooth Low Energy (BTLE) transceiver, or the like, or any combination thereof. In some embodiments, the wireless transceiver 1415 may be configured to detect an audio message (e.g., an audio cache or pin) proximate to the speaker 1400, e.g., in a local network at a geographic location or in a cloud storage system connected with the geographic location. For example, another user, a business establishment, a government entity, a tour group, etc. may leave an audio message at a particular geographic or virtual location, and the wireless transceiver 1415 may detect the audio message, and prompt the user to initiate a playback of the audio message.


In some embodiments, the sensor module 1410 (e.g., the locating sensor 1411, the orientation sensor 1412, and the inertial sensor 1413) may detect that the user moves toward or looks in a direction of a point of interest (POI). The POI may be an entity corresponding to a geographic or virtual location. The entity may include a building (e.g., a school, a skyscraper, a bus station, a subway station, etc.), a landscape (e.g., a park, a mountain, etc.), or the like. In some embodiments, the entity may be an object specified by a user. For example, the entity may be a favorite coffee shop of the user. In some embodiments, the POI may be associated with a virtual audio marker. One or more localized audio messages may be attached to the audio marker. The one or more localized audio messages may include, for example, a song, a pre-recorded message, an audio signature, an advertisement, a notification, or the like, or any combination thereof.


The processing engine 1420 may include a sensor data processing module 1421 and a retrieve module 1422. The sensor data processing module 1421 may process sensor data obtained from the sensor module 1410 (e.g., the locating sensor 1411, the orientation sensor 1412, the inertial sensor 1413, the audio sensor 1414, and/or the wireless transceiver 1415), and generate processed information and/or data. The information and/or data generated by the sensor data processing module 1421 may include a signal, a representation, an instruction, or the like, or any combination thereof. For example, the sensor data processing module 1421 may receive sensor data indicating the location of the speaker 1400, and determine whether the user is proximate to a POI or whether the user is facing towards a POI. In response to a determination that the user is proximate to the POI or the user is facing towards the POI, the sensor data processing module 1421 may generate a signal and/or an instruction used for causing the retrieve module 1422 to obtain an audio message (i.e., a localized audio message associated with the POI). The audio message may be further provided to the user via the speaker 1400 for playback.


Optionally or additionally, during the playback of the audio message, an active noise reduction (ANR) technique may be performed so as to reduce noise. As used herein, the ANR may refer to a method for reducing undesirable sound by generating additional sound specifically designed to cancel the noise in the audio message according to the reversed phase cancellation principle. The additional sound may have a reversed phase, a same amplitude, and a same frequency as the noise. Merely by way of example, the speaker 1400 may include an ANR component (not shown) configured to reduce the noise. The ANR component may receive sensor data generated by the audio sensor 1414, signals generated by the processing engine 1420 based on the sensor data, or the audio messages received via the wireless transceiver 1415, etc. The received data, signals, audio messages, etc. may include sound from a plurality of directions, which may include desired sound received from a certain direction and undesired sound (i.e., noise) received from other directions. The ANR component may analyze the noise, and perform an ANR operation to suppress or eliminate the noise.


In some embodiments, the ANR component may provide a signal to a transducer (e.g., the transducer 22, or any other transducer) disposed in the speaker to generate an anti-noise acoustic signal. The anti-noise acoustic signal may reduce or substantially prevent the noise from being heard by the user. In some embodiments, the anti-noise acoustic signal may be generated according to the noise detected by the speaker. The speaker may receive sound from the ambient environment. In some embodiments, the noise may include background noise in the ambient environment around the user, i.e., sound that is not intended to be collected when a user wears the audio device, for example, traffic noise, wind noise, etc. For example, the noise may be detected by a noise detection component (e.g., the audio sensor 1414) of the speaker. As the audio sensor is close to the ear of the user, the detected noise may also be referred to as the noise heard by the user. In some embodiments, the anti-noise acoustic signal may have a same amplitude, a same frequency, and a reverse phase as the detected noise.
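In the idealized case, reversed-phase cancellation reduces to negating the sampled noise waveform, as in the sketch below; a real ANR implementation would also have to compensate for sensing latency and the acoustic path between the transducer and the ear, which this illustration ignores.

```python
import numpy as np

fs = 16000                                            # assumed sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)
detected_noise = 0.2 * np.sin(2 * np.pi * 200 * t)    # assumed low-frequency noise tone

anti_noise = -detected_noise                          # same amplitude/frequency, reversed phase
residual = detected_noise + anti_noise                # ideally cancels completely
print(np.max(np.abs(residual)))                       # -> 0.0 in this idealized case
```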


The processing engine 1420 may be coupled (e.g., via wireless and/or wired connections) to a memory 1430. The memory 1430 may be implemented by any storage device capable of storing data. In some embodiments, the memory 1430 may be located in a local server or a cloud-based server, etc. In some embodiments, the memory 1430 may include a plurality of audio files 1431 for playback by the speaker 1400 and/or user data 1432 of one or more users. The audio files 1431 may include audio messages (e.g., audio pins or caches created by the user or other users), audio information provided by automated agents, or other audio files available from network sources coupled with a network interface, such as a network-attached storage (NAS) device, a DLNA server, etc. The audio files 1431 may be accessible by the speaker 1400 over a local area network such as a wireless (e.g., Wi-Fi) or wired (e.g., Ethernet) network. For example, the audio files 1431 may include localized audio messages attached to virtual audio markers associated with a POI, which may be accessed when a user is proximate to or facing towards a POI.


The user data 1432 may be user-specific, community-specific, device-specific, location-specific, etc. In some embodiments, the user data 1432 may include audio information related to one or more users. Merely by way of example, the user data 1432 may include user-defined playlists of digital music files, audio messages stored by the user or other users, information about frequently played audio files associated with the user or other similar users (e.g., those with common audio file listening histories, demographic traits, or Internet browsing histories), "liked" or otherwise favored audio files associated with the user or other users, a frequency at which the audio files 1431 are updated by the user or other users, or the like, or any combination thereof. In some embodiments, the user data 1432 may further include basic information of the one or more users. Exemplary basic information may include names, ages, careers, habits, preferences, etc.


The processing engine 1420 may also be coupled with a smart device 1440 that has access to user data (e.g., the user data 1432) or biometric information about the user. The smart device 1440 may include one or more personal computing devices (e.g., a desktop or laptop computer), wearable smart devices (e.g., a smart watch, smart glasses), a smart phone, a remote control device, a smart beacon device (e.g., a smart Bluetooth beacon system), a stationary speaker system, or the like, or any combination thereof. In some embodiments, the smart device 1440 may include a conventional user interface for permitting interaction with the user and one or more network interfaces for interacting with the processing engine 1420 and other components in the speaker 1400. In some embodiments, the smart device 1440 may be utilized to connect the speaker 1400 to a Wi-Fi network, create a system account for the user, set up music and/or location-based audio services, browse content for playback, set assignments of the speaker 1400 or other audio playback devices, transport control (e.g., play/pause, fast forward/rewind, etc.) of the speaker 1400, select one or more speakers for content playback (e.g., a single-room playback or a synchronized multi-room playback), etc. In some embodiments, the smart device 1440 may further include sensors for measuring biometric information about the user. Exemplary biometric information may include travel, sleep, or exercise patterns, body temperature, heart rates, paces of gait (e.g., via accelerometers), or the like, or any combination thereof.


The retrieve module 1422 may be configured to retrieve data from the memory 1430 and/or the smart device 1440 based on the information and/or data generated by the sensor data processing module 1421, and determine an audio message for playback. For example, the sensor data processing module 1421 may analyze one or more voice commands from the user (obtained from the audio sensor 1414), and determine an instruction based on the one or more voice commands. The retrieve module 1422 may obtain and/or modify a localized audio message based on the instruction. As another example, the sensor data processing module 1421 may generate signals indicating that a user is proximate to a POI and/or the user is facing towards the POI. Accordingly, the retrieve module 1422 may obtain a localized audio message associated with the POI based on the signals. As a further example, the sensor data processing module 1421 may generate a representation indicating a characteristic of a location as a combination of factors from the sensor data, the user data 1432 and/or information from the smart device 1440. The retrieve module 1422 may obtain the audio message based on the representation.



FIG. 15 is a flowchart illustrating an exemplary process for replaying an audio message according to some embodiments of the present disclosure.


In 1510, a point of interest (POI) may be detected. In some embodiments, the POI may be detected by the sensor module 1410 of the speaker 1400.


As used herein, the POI may be an entity corresponding to a geographic or virtual location. The entity may include a building (e.g., a school, a skyscraper, a bus station, a subway station, etc.), a landscape (e.g., a park, a mountain, etc.), or the like, or any combination thereof. In some embodiments, the entity may be an object specified by the user. For example, the entity may be a favorite coffee shop of the user. In some embodiments, the POI may be associated with a virtual audio marker. One or more localized audio messages may be attached to the audio marker. The one or more localized audio messages may include, for example, a song, a pre-recorded message, an audio signature, an advertisement, a notification, or the like, or any combination thereof.


In some embodiments, the sensor module 1410 (e.g., the locating sensor 1411, the orientation sensor 1412, and the inertial sensor 1413) may detect that a user wearing the speaker 1400 moves toward or looks in the direction of the POI. Specifically, the sensor module 1410 (e.g., the locating sensor 1411) may detect changes in a geographic location of the user, and generate sensor data indicating the changes in the geographic location of the user. The sensor module 1410 (e.g., the orientation sensor 1412) may detect changes in an orientation of the user (e.g., the head of the user), and generate sensor data indicating the changes in the orientation of the user. The sensor module 1410 (e.g., the inertial sensor 1413) may also detect gestures (e.g., via an acceleration, a deceleration, a tilt level, a relative position in the three-dimensional (3D) space, etc. of the user or a body part (e.g., an arm, a finger, a leg, etc.)) of the user, and generate sensor data indicating the gestures of the user. The sensor data may be transmitted, for example, to the processing engine 1420 for further processing. For example, the processing engine 1420 (e.g., the sensor data processing module 1421) may process the sensor data, and determine whether the user moves toward or looks in the direction of the POI.


In some embodiments, other information may also be detected. For example, the sensor module 1410 (e.g., the audio sensor 1414) may detect sound from the user, a smart device (e.g., the smart device 1440), and/or ambient environment. Specifically, one or more microphones or a microphone array may be housed within the speaker 1400 or in another device connected to the speaker 1400. The sensor module 1410 may detect sound using the one or more microphones or the microphone array. In some embodiments, the sensor module 1410 (e.g., the wireless transceiver 1415) may communicate with transceiver devices in distinct locations, and detect an audio message (e.g., an audio cache or pin) when the speaker 1400 is proximate to the transceiver devices. In some embodiments, other information may also be transmitted as part of the sensor data to the processing engine 1420 for processing.


In 1520, an audio message related to the POI may be determined. In some embodiments, the audio message related to the POI may be determined by the processing engine 1420.


In some embodiments, the processing engine 1420 (e.g., the sensor data processing module 1421) may generate information and/or data based at least in part on the sensor data. The information and/or data include a signal, a representation, an instruction, or the like, or any combination thereof. Merely by way of example, the sensor data processing module 1421 may receive sensor data indicating a location of a user, and determine whether the user is proximate to or facing towards the POI. In response to a determination that the user is proximate to the POI or facing towards the POI, the sensor data processing module 1421 may generate a signal and/or an instruction causing the retrieve module 1422 to obtain an audio message (i.e., a localized audio message attached to an audio marker associated with the POI). As another example, the sensor data processing module 1421 may analyze sensor data related to a voice command detected from a user (e.g., by performing a natural language processing), and generate a signal and/or an instruction related to the voice command. As a further example, the sensor data processing module 1421 may generate a representation by weighting the sensor data, user data (e.g., the user data 1432), and other available data (e.g., a demographic profile of a plurality of users with at least one common attribute with the user, a categorical popularity of an audio file, etc.). The representation may indicate a general characteristic of a location as a combination of factors from the sensor data, the user data and/or information from a smart device.


Further, the processing engine 1420 (e.g., the retrieve module 1422) may determine an audio message related to the POI based on the generated information and/or the data. For example, the processing engine 1420 may retrieve an audio message from the audio files 1431 in the memory 1430 based on a signal and/or an instruction related to a voice command. As another example, the processing engine 1420 may retrieve an audio message based on a representation and relationships between the representation and the audio files 1431. The relationships may be predetermined and stored in a storage device. As a further example, the processing engine 1420 may retrieve a localized audio message related to a POI when a user is proximate to or facing towards the POI. In some embodiments, the processing engine 1420 may determine two or more audio messages related to the POI based on the information and/or the data. For example, when a user is proximate to or facing towards the POI, the processing engine 1420 may determine audio messages including “liked” music files, audio files accessed by other users at the POI, or the like, or any combination thereof.
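A minimal sketch of the proximity-triggered retrieval described above is given below; the distance threshold, the POI record layout, and the `audio_files` lookup are illustrative assumptions rather than elements of the disclosure.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in meters."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def retrieve_localized_message(user_location, poi, audio_files, threshold_m=50.0):
    """Return the localized audio message attached to the POI's audio marker when
    the user is within `threshold_m` of the POI; otherwise return None."""
    if distance_m(*user_location, *poi["location"]) <= threshold_m:
        return audio_files.get(poi["audio_marker"])
    return None
```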


Taking a speaker customized for VR as an example, the speaker may determine an audio message related to a POI based at least in part on sensor data obtained by sensors disposed in the speaker. For example, the POI may be a historical site associated with a virtual audio marker having one or more localized audio messages. When the user wearing the speaker is proximate to or facing towards the historical site, the localized audio messages may be recommended to the user via a virtual interface. The one or more localized audio messages may include virtual environment data used to relive historical stories of the historical site. In the virtual environment data, sound data may be properly designed for simulating sound effects of different scenarios. For example, sound may be transmitted from different sound guiding holes to simulate sound effects of different directions. As another example, the volume and/or delay of sound may be adjusted to simulate sound effects at different distances.


Taking a speaker customized for AR as another example, the speaker may determine an audio message related to a POI based at least in part on sensor data obtained by sensors disposed in the speaker. Additionally, the audio message may be combined with real-world sound in ambient environment so as to enhance an audio experience of the user. The real-world sound in ambient environment may include sounds in all directions of the ambient environment, or may be sounds in a certain direction. Merely by way of example, FIG. 16 is a schematic diagram illustrating an exemplary speaker focusing on sounds in a certain direction according to some embodiments of the present disclosure. As illustrated in FIG. 16, when a user is proximate to a POI P, a speaker (e.g., the speaker 1400) worn by the user may focus on sound received from a virtual audio cone. The vertex of the virtual audio cone may be the speaker. The virtual audio cone may have any suitable size, which may be determined by an angle of the virtual audio cone. For example, the speaker may focus on sound of a virtual audio cone with an angle of, for example, 20°, 40°, 60°, 80°, 120°, 180°, 270°, 360°, etc. In some embodiments, to focus on sound within the range of the virtual audio cone, the speaker may improve audibility of most or all sound in the virtual audio cone. For example, an ANR technique may be used by the speaker so as to reduce or substantially prevent sound in other directions (e.g., sounds outside of the virtual audio cones) from being heard by the user. Additionally, the POI may be associated with virtual audio markers to which localized audio messages may be attached. The localized audio messages may be accessed when the user is proximate to or facing towards the POI. That is, the localized audio messages may be overlaid on the sound in the virtual audio cone so as to enhance an audio experience of the user. In some embodiments, a direction and/or a virtual audio cone of the sound focused by the speaker may be determined according to actual needs. For example, the speaker may focus on sound in a plurality of virtual audio cones in different directions simultaneously. As another example, the speaker may focus on sound in a specified direction (e.g., the north direction). As a further example, the speaker may focus on sound in a walking direction and/or a facing direction of the user.
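The virtual audio cone can be thought of as an angular gate around a chosen axis; the sketch below tests whether a sound's arrival direction falls inside a cone of a given opening angle using a dot product. The vectors and the 60-degree example are assumptions for illustration.

```python
import numpy as np

def in_audio_cone(sound_direction, cone_axis, cone_angle_deg):
    """True if the angle between the sound direction and the cone axis is within
    half of the cone's opening angle."""
    d = np.asarray(sound_direction, dtype=float)
    a = np.asarray(cone_axis, dtype=float)
    cos_angle = np.dot(d, a) / (np.linalg.norm(d) * np.linalg.norm(a))
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle_deg <= cone_angle_deg / 2

facing = (1.0, 0.0, 0.0)                                 # direction the user faces
print(in_audio_cone((1.0, 0.3, 0.0), facing, 60))        # True: inside the cone, keep
print(in_audio_cone((0.0, 1.0, 0.0), facing, 60))        # False: outside, candidate for ANR
```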


In 1530, the audio message may be replayed. In some embodiments, the audio message may be replayed by the processing engine 1420.


In some embodiments, the processing engine 1420 may replay the audio message via the speaker 1400 directly. In some embodiments, the processing engine 1420 may prompt the user to initiate a playback of the audio message. For example, the processing engine 1420 may output a prompt (e.g., a voice prompt via a sound guiding hole (e.g., one of the one or more sound guiding holes 30), a visual representation via a virtual user-interface) to the user. The user may respond to the prompt by interacting with the speaker 1400. For example, the user may interact with the speaker 1400 using, for example, gestures of his/her body (e.g., head, torso, limbs, eyeballs), voice command, etc.


Taking a speaker customized for AR as another example, the user may interact with the speaker via a virtual user-interface (UI). FIG. 17 is a schematic diagram illustrating an exemplary UI of the speaker. As illustrated in FIG. 17, the virtual UI may be present in a head position and/or a gaze direction of the user. In some embodiments, the speaker may provide a plurality of audio samples, information, or choices corresponding to spatially delineated zones (e.g., 1710, 1720, 1730, 1740) in an array defined relative to a physical position of the speaker. Each audio sample or piece of information provided to the user may correspond to an audio message to be replayed. In some embodiments, the audio samples may include a selection of an audio file or stream, such as a representative segment of the audio content (e.g., an introduction to an audio book, a highlight from a sporting broadcast, a description of the audio file or stream, a description of an audio pin, an indicator of the presence of an audio pin, an audio beacon, a source of an audio message). In some embodiments, the audio samples may include entire audio content (e.g., an entire audio file). In some embodiments, the audio samples, information, or choices may be used as prompts for the user. The user may respond to the prompts by interacting with the speaker. For example, the user may click on a zone (e.g., 1720) to initiate a playback of entire audio content corresponding to the audio sample presented in the zone. As another example, the user may shake his/her head to switch between different zones.
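The zone-based interaction described above could be organized as in the following sketch, where each spatially delineated zone maps to an audio sample and its full content; the zone identifiers, file names, and `player` interface are hypothetical and used only for illustration.

```python
# Hypothetical mapping from spatially delineated zones to audio samples and
# their full content; file names and the `player` interface are assumptions.
zones = {
    "zone_1710": {"sample": "audiobook_intro.wav", "full": "audiobook.wav"},
    "zone_1720": {"sample": "match_highlight.wav", "full": "match_broadcast.wav"},
    "zone_1730": {"sample": "audio_pin_preview.wav", "full": "audio_pin.wav"},
    "zone_1740": {"sample": "ad_snippet.wav", "full": "ad_full.wav"},
}

def on_zone_selected(zone_id, player):
    """Replay the full audio content for the zone the user clicked."""
    entry = zones.get(zone_id)
    if entry is not None:
        player.play(entry["full"])

def on_head_shake(current_zone_id):
    """Switch to the next zone when the user shakes his/her head."""
    ordered = list(zones)
    return ordered[(ordered.index(current_zone_id) + 1) % len(ordered)]
```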


It should be noted that the above statements are preferable embodiments and the technical principles thereof. A person having ordinary skill in the art will readily understand that this disclosure is not limited to the specific embodiments stated, and can make various obvious variations, adjustments, and substitutions within the protected scope of this disclosure. Therefore, although the above embodiments describe this disclosure in detail, this disclosure is not limited to those embodiments; there can be many other equivalent embodiments within the scope of the present disclosure, and the protected scope of this disclosure is determined by the following claims.

Claims
  • 1. A speaker, comprising: a housing; a transducer residing inside the housing and configured to generate vibrations, the vibrations producing a sound wave inside the housing and causing a leaked sound wave spreading outside the housing; at least one sound guiding hole located on the housing and configured to guide the sound wave inside the housing through the at least one sound guiding hole to an outside of the housing, the guided sound wave having a phase different from a phase of the leaked sound wave, the guided sound wave interfering with the leaked sound wave in a target region, and the interference reducing a sound pressure level of the leaked sound wave in the target region; one or more sensors configured to detect status information of a user, wherein at least one of the one or more sensors is configured to detect a point of interest (POI) that the user is proximate to or facing towards; and a controller configured to cause the transducer to output sound based on the detected status information of the user by determining an audio message related to the POI; outputting a prompt to the user; and causing the transducer to replay the audio message in response to the user responding to the prompt.
  • 2. The speaker of claim 1, wherein the status information includes at least one of a location of the user, a gesture of the user, a direction that the user faces, an acceleration of the user, or a speech of the user.
  • 3. The speaker of claim 1, wherein the one or more sensors include at least one of a locating sensor, an orientation sensor, an inertial sensor, an audio sensor, and a wireless transceiver.
  • 4. The speaker of claim 1, wherein the POI is a virtual audio marker with which the audio message is associated.
  • 5. The speaker of claim 1, wherein the prompt includes a voice prompt via the at least one sound guiding hole or a visual representation via a virtual user-interface.
  • 6. The speaker of claim 1, wherein the user responds to the prompt via a virtual user-interface.
  • 7. The speaker of claim 4, wherein the controller is further configured to determine, based on the detected status information of the user, whether the user is proximate to the POI or facing towards the POI.
  • 8. The speaker of claim 1, further comprising: a noise detection component configured to detect noise heard by the user; an active noise reduction (ANR) component configured to generate, according to the detected noise, an anti-noise acoustic signal to reduce the detected noise.
  • 9. The speaker of claim 8, wherein the anti-noise acoustic signal has a same amplitude, a same frequency, and a reverse phase as the detected noise.
  • 10. The speaker of claim 1, wherein: the housing includes a bottom or a sidewall; and the at least one sound guiding hole is located on the bottom or the sidewall of the housing.
  • 11. The speaker of claim 1, wherein the at least one sound guiding hole includes a damping layer, the damping layer being configured to adjust the phase of the guided sound wave in the target region.
  • 12. The speaker of claim 11, wherein the damping layer includes at least one of a tuning paper, a tuning cotton, a nonwoven fabric, a silk, a cotton, a sponge, or a rubber.
  • 13. The speaker of claim 1, wherein the guided sound wave includes at least two sound waves having different phases.
  • 14. The speaker of claim 13, wherein the at least one sound guiding hole includes two sound guiding holes located on the housing.
  • 15. The speaker of claim 14, wherein the two sound guiding holes are arranged to generate the at least two sound waves having different phases to reduce the sound pressure level of the leaked sound wave having different wavelengths.
  • 16. The speaker of claim 1, wherein: the housing includes a bottom or a sidewall; and the at least one sound guiding hole is located on the bottom or the sidewall of the housing.
  • 17. The speaker of claim 1, wherein a location of the at least one sound guiding hole is determined based on at least one of a vibration frequency of the transducer, a shape of the at least one sound guiding hole, the target region, or a frequency range within which the sound pressure level of the leaked sound wave is to be reduced.
  • 18. The speaker of claim 1, wherein the POI is an entity corresponding to a geographic or virtual location.
  • 19. The speaker of claim 1, wherein the controller is coupled to a memory including a plurality of audio messages or user data, and to cause the transducer to output sound based on the detected status information of the user, the controller is further configured to: determine the audio message from the memory based on at least one of the detected status information or the user data; and cause the transducer to replay the audio message.
  • 20. The speaker of claim 1, wherein to cause the transducer to output sound based on the detected status information of the user, the controller is further configured to: determine one or more audio messages based on the detected status information; cause the speaker to display a plurality of zones via a virtual user-interface, each of the plurality of zones corresponding to one of the one or more audio messages; receive, from the user, a selection of a zone from the plurality of zones; and cause the transducer to replay an audio message corresponding to the zone.
Priority Claims (4)
Number Date Country Kind
201410005804.0 Jan 2014 CN national
201910364346.2 Apr 2019 CN national
201910888067.6 Sep 2019 CN national
201910888762.2 Sep 2019 CN national
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a continuation-in-part of U.S. patent application Ser. No. 17/074,762 filed on Oct. 20, 2020, which is a continuation-in-part of U.S. patent application Ser. No. 16/813,915 (now U.S. Pat. No. 10,848,878) filed on Mar. 10, 2020, which is a continuation of U.S. patent application Ser. No. 16/419,049 (now U.S. Pat. No. 10,616,696) filed on May 22, 2019, which is a continuation of U.S. patent application Ser. No. 16/180,020 (now U.S. Pat. No. 10,334,372) filed on Nov. 5, 2018, which is a continuation of U.S. patent application Ser. No. 15/650,909 (now U.S. Pat. No. 10,149,071) filed on Jul. 16, 2017, which is a continuation of U.S. patent application Ser. No. 15/109,831 (now U.S. Pat. No. 9,729,978) filed on Jul. 6, 2016, which is a U.S. National Stage entry under 35 U.S.C. § 371 of International Application No. PCT/CN2014/094065, filed on Dec. 17, 2014, designating the United States of America, which claims priority to Chinese Patent Application No. 201410005804.0, filed on Jan. 6, 2014; the present application is also a continuation-in-part of U.S. patent application Ser. No. 17/170,920 filed on Feb. 9, 2021, which is a continuation of International Application No. PCT/CN2020/087002, filed on Apr. 26, 2020, which claims priority to Chinese Patent Application No. 201910888067.6, filed on Sep. 19, 2019, Chinese Patent Application No. 201910888762.2, filed on Sep. 19, 2019, and Chinese Patent Application No. 201910364346.2, filed on Apr. 30, 2019. Each of the above-referenced applications is hereby incorporated by reference.

US Referenced Citations (75)
Number Name Date Kind
2327320 Shapiro Aug 1943 A
4987597 Haertl Jan 1991 A
5327506 Stites, III Jul 1994 A
5430803 Kimura et al. Jul 1995 A
5692059 Kruger Nov 1997 A
5757935 Kang et al. May 1998 A
5790684 Niino et al. Aug 1998 A
6062337 Zinserling May 2000 A
6850138 Sakai Feb 2005 B1
8141678 Ikeyama et al. Mar 2012 B2
9226075 Lee Dec 2015 B2
9729978 Qi et al. Aug 2017 B2
10149071 Qi et al. Dec 2018 B2
10334372 Qi et al. Jun 2019 B2
10555106 Mehra Feb 2020 B1
10609465 Wakeland et al. Mar 2020 B1
10631075 Patil Apr 2020 B1
10897677 Walraevens et al. Jan 2021 B2
11122359 Zhang et al. Sep 2021 B2
11197106 Qi et al. Dec 2021 B2
20030048913 Lee et al. Mar 2003 A1
20040131219 Polk, Jr. Jul 2004 A1
20050251952 Johnson Nov 2005 A1
20060098829 Kobayashi May 2006 A1
20070041595 Carazo et al. Feb 2007 A1
20070098198 Hildebrandt May 2007 A1
20090095613 Lin Apr 2009 A1
20090141920 Suyama Jun 2009 A1
20090147981 Blanchard et al. Jun 2009 A1
20090190781 Fukuda Jul 2009 A1
20090208031 Abolfathi Aug 2009 A1
20090285417 Shin et al. Nov 2009 A1
20090290730 Fukuda et al. Nov 2009 A1
20100054492 Eaton et al. Mar 2010 A1
20100310106 Blanchard et al. Dec 2010 A1
20100322454 Ambrose et al. Dec 2010 A1
20110150262 Nakama et al. Jun 2011 A1
20110170730 Zhu Jul 2011 A1
20120020501 Lee Jan 2012 A1
20120070022 Saiki Mar 2012 A1
20120177206 Yamagishi Jul 2012 A1
20120300956 Horii Nov 2012 A1
20130051585 Karkkainen et al. Feb 2013 A1
20130108068 Poulsen et al. May 2013 A1
20130329919 He Dec 2013 A1
20140009008 Li et al. Jan 2014 A1
20140064533 Kasic, II Mar 2014 A1
20140185822 Kunimoto et al. Jul 2014 A1
20140185837 Kunimoto et al. Jul 2014 A1
20140274229 Fukuda Sep 2014 A1
20140355777 Nabata et al. Dec 2014 A1
20150030189 Nabata et al. Jan 2015 A1
20150256656 Horii Sep 2015 A1
20150264473 Fukuda Sep 2015 A1
20150326967 Otani Nov 2015 A1
20160037243 Lippert et al. Feb 2016 A1
20160150337 Nandy May 2016 A1
20160165357 Morishita et al. Jun 2016 A1
20160295328 Park Oct 2016 A1
20160329041 Qi et al. Nov 2016 A1
20170201823 Shetye Jul 2017 A1
20170223445 Bullen et al. Aug 2017 A1
20170230741 Matsuo et al. Aug 2017 A1
20180167710 Silver Jun 2018 A1
20180182370 Hyde et al. Jun 2018 A1
20180376231 Pfaffinger Dec 2018 A1
20190014425 Liao et al. Jan 2019 A1
20190052954 Rusconi Clerici Beltrami Feb 2019 A1
20190238971 Wakeland et al. Aug 2019 A1
20200137476 Shinmen et al. Apr 2020 A1
20200169801 Zhu May 2020 A1
20200252708 Zhu Aug 2020 A1
20200367008 Walsh et al. Nov 2020 A1
20210099027 Larsson et al. Apr 2021 A1
20210219059 Qi et al. Jul 2021 A1
Foreign Referenced Citations (18)
Number Date Country
201616895 Oct 2010 CN
201690580 Dec 2010 CN
102014328 Apr 2011 CN
102421043 Apr 2012 CN
202435600 Sep 2012 CN
103167390 Jun 2013 CN
103347235 Oct 2013 CN
204206450 Mar 2015 CN
106792304 May 2017 CN
109640209 Apr 2019 CN
2011367 Dec 2014 EP
2006332715 Dec 2006 JP
2007251358 Sep 2007 JP
20050030183 Mar 2005 KR
20090082999 Aug 2009 KR
20170133754 Dec 2017 KR
2004095878 Nov 2004 WO
2015087093 Jun 2015 WO
Non-Patent Literature Citations (9)
Entry
International Search Report in PCT/CN2014/094065 dated Mar. 17, 2015, 5 pages.
Written Opinion in PCT/CN2014/094065 dated Mar. 17, 2015, 10 pages.
First Office Action in Chinese Application No. 201410005804.0 dated Dec. 7, 2015, 9 pages.
Notice of Reasons for Refusal in Japanese Application No. 2016545828 dated Jun. 20, 2017, 10 pages.
The Extended European Search Report in European Application No. 14877111.6 dated Mar. 17, 2017, 6 pages.
First Examination Report in Indian Application No. 201617026062 dated Nov. 13, 2020, 6 pages.
International Search Report in PCT/CN2020/087002 dated Jul. 14, 2020, 4 pages.
Written Opinion in PCT/CN2020/087002 dated Jul. 14, 2020, 5 pages.
Notice of Preliminary Rejection in Korean Application No. 10-2022-7010046 dated Jun. 20, 2022, 15 pages.
Related Publications (1)
Number Date Country
20210219072 A1 Jul 2021 US
Continuations (5)
Number Date Country
Parent PCT/CN2020/087002 Apr 2020 US
Child 17170920 US
Parent 16419049 May 2019 US
Child 16813915 US
Parent 16180020 Nov 2018 US
Child 16419049 US
Parent 15650909 Jul 2017 US
Child 16180020 US
Parent 15109831 US
Child 15650909 US
Continuation in Parts (3)
Number Date Country
Parent 17170920 Feb 2021 US
Child 17219882 US
Parent 17074762 Oct 2020 US
Child 17170920 US
Parent 16813915 Mar 2020 US
Child 17074762 US