This application relates to a bone conduction device, and more specifically, relates to methods and systems for reducing sound leakage by a bone conduction device.
A bone conduction speaker, which may also be called a vibration speaker, may push human tissues and bones to stimulate the auditory nerve in the cochlea and enable people to hear sound. The bone conduction speaker is also called a bone conduction headphone.
An exemplary structure of a bone conduction speaker based on this principle is shown in
However, the mechanical vibrations generated by the transducer 122 may not only cause the vibration board 121 to vibrate, but may also cause the housing 110 to vibrate through the linking component 123. Accordingly, the mechanical vibrations generated by the bone conduction speaker may push human tissues through the vibration board 121, while at the same time the portions of the vibration board 121 and the housing 110 that are not in contact with human tissues may nevertheless push air. The sound generated by the air pushed by these portions may be called "sound leakage." In some cases, sound leakage is harmless. However, sound leakage should be avoided as much as possible if people intend to protect privacy when using the bone conduction speaker or try not to disturb others when listening to music.
To address the problem of sound leakage, Korean patent KR10-2009-0082999 discloses a bone conduction speaker with a dual magnetic structure and a double frame. As shown in
However, in this design, since the second frame 220 is fixed to the first frame 210, vibrations of the second frame 220 are inevitable. As a result, the sealing provided by the second frame 220 is unsatisfactory. Furthermore, the second frame 220 increases the overall volume and weight of the speaker, which in turn increases the cost, complicates the assembly process, and reduces the speaker's reliability and consistency.
The embodiments of the present application disclose methods and systems of reducing sound leakage of a bone conduction speaker.
In one aspect, the embodiments of the present application disclose a method of reducing sound leakage of a bone conduction speaker, including: providing a bone conduction speaker including a vibration board fitting human skin and passing vibrations, a transducer, and a housing, wherein at least one sound guiding hole is located in at least one portion of the housing; the transducer drives the vibration board to vibrate; the housing vibrates, along with the vibrations of the transducer, and pushes air, forming a leaked sound wave transmitted in the air; the air inside the housing is pushed out of the housing through the at least one sound guiding hole, interferes with the leaked sound wave, and reduces an amplitude of the leaked sound wave.
In some embodiments, one or more sound guiding holes may be located in an upper portion, a central portion, and/or a lower portion of a sidewall and/or the bottom of the housing.
In some embodiments, a damping layer may be applied in the at least one sound guiding hole in order to adjust the phase and amplitude of the guided sound wave through the at least one sound guiding hole.
In some embodiments, sound guiding holes may be configured to generate guided sound waves having a same phase that reduce the leaked sound wave having a same wavelength; sound guiding holes may be configured to generate guided sound waves having different phases that reduce the leaked sound waves having different wavelengths.
In some embodiments, different portions of a same sound guiding hole may be configured to generate guided sound waves having a same phase that reduce the leaked sound wave having same wavelength. In some embodiments, different portions of a same sound guiding hole may be configured to generate guided sound waves having different phases that reduce leaked sound waves having different wavelengths.
In another aspect, the embodiments of the present application disclose a bone conduction speaker, including a housing, a vibration board and a transducer, wherein: the transducer is configured to generate vibrations and is located inside the housing; the vibration board is configured to be in contact with skin and pass vibrations; at least one sound guiding hole may be located in at least one portion of the housing, and preferably, the at least one sound guiding hole may be configured to guide a sound wave inside the housing, resulting from vibrations of the air inside the housing, to the outside of the housing, the guided sound wave interfering with the leaked sound wave and reducing the amplitude thereof.
In some embodiments, the at least one sound guiding hole may be located in the sidewall and/or the bottom of the housing.
In some embodiments, preferably, the at least one sound guiding hole may be located in the upper portion and/or the lower portion of the sidewall of the housing.
In some embodiments, preferably, the sidewall of the housing is cylindrical and there are at least two sound guiding holes located in the sidewall of the housing, which are arranged evenly or unevenly in one or more circles. Alternatively, the housing may have a different shape.
In some embodiments, preferably, the sound guiding holes have different heights along the axial direction of the cylindrical sidewall.
In some embodiments, preferably, there are at least two sound guiding holes located in the bottom of the housing. In some embodiments, the sound guiding holes are distributed evenly or unevenly in one or more circles around the center of the bottom. Alternatively or additionally, one sound guiding hole is located at the center of the bottom of the housing.
In some embodiments, preferably, the sound guiding hole is a perforative hole. In some embodiments, there may be a damping layer at the opening of the sound guiding hole.
In some embodiments, preferably, the guided sound waves through different sound guiding holes and/or different portions of a same sound guiding hole have different phases or a same phase.
In some embodiments, preferably, the damping layer is a tuning paper, a tuning cotton, a nonwoven fabric, a silk, a cotton, a sponge, or a rubber.
In some embodiments, preferably, the shape of a sound guiding hole may be a circle, an ellipse, a quadrangle, a rectangle, or a slit. In some embodiments, the sound guiding holes may have a same shape or different shapes.
In some embodiments, preferably, the transducer includes a magnetic component and a voice coil. Alternatively, the transducer includes piezoelectric ceramic.
The design disclosed in this application utilizes the principle of sound wave interference: sound guiding holes placed in the housing guide sound wave(s) inside the housing to the outside of the housing, where the guided sound wave(s) interfere with the leaked sound wave, which is formed when the housing's vibrations push the air outside the housing. The guided sound wave(s) reduce the amplitude of the leaked sound wave and thus reduce the sound leakage. The design not only reduces sound leakage, but is also easy to implement; it does not increase the volume or weight of the bone conduction speaker, and barely increases the cost of the product.
The meanings of the mark numbers in the figures are as follows:
110, open housing; 121, vibration board; 122, transducer; 123, linking component; 210, first frame; 220, second frame; 230, moving coil; 240, inner magnetic component; 250, outer magnetic component; 260, vibration board; 270, vibration unit; 10, housing; 11, sidewall; 12, bottom; 21, vibration board; 22, transducer; 23, linking component; 24, elastic component; 30, sound guiding hole.
The following are further detailed illustrations of this disclosure. The examples below are for illustrative purposes only and should not be interpreted as limitations of the claimed invention. A variety of alternative techniques and procedures are available to those of ordinary skill in the art that would similarly permit one to successfully perform the intended invention. In addition, the figures show only the structures relevant to this disclosure, not the whole structure.
To explain the scheme of the embodiments of this disclosure, the design principles of this disclosure will be introduced here.
This disclosure applies the above-noted principles of sound wave interference to a bone conduction speaker and discloses a bone conduction speaker that can reduce sound leakage.
Furthermore, the vibration board 21 may be connected to the transducer 22 and configured to vibrate along with the transducer 22. The vibration board 21 may stretch out from the opening of the housing 10, touch the skin of the user, and pass vibrations to the auditory nerves through human tissues and bones, which in turn enables the user to hear sound. The linking component 23 may reside between the transducer 22 and the housing 10, configured to fix the vibrating transducer 22 inside the housing 10. The linking component 23 may include one or more separate components, or may be integrated with the transducer 22 or the housing 10. In some embodiments, the linking component 23 is made of an elastic material.
The transducer 22 may drive the vibration board 21 to vibrate. The transducer 22, which resides inside the housing 10, may vibrate. The vibrations of the transducer 22 may drive the air inside the housing 10 to vibrate, producing a sound wave inside the housing 10, which can be referred to as "sound wave inside the housing." Since the vibration board 21 and the transducer 22 are fixed to the housing 10 via the linking component 23, the vibrations may pass to the housing 10, causing the housing 10 to vibrate synchronously. The vibrations of the housing 10 may generate a leaked sound wave, which spreads outwards as sound leakage.
The sound wave inside the housing and the leaked sound wave are like the two sound sources in
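The cancellation of two such sound sources can be sketched numerically. The following minimal example (all values illustrative, not taken from this disclosure) superposes a leaked wave with a guided wave of equal amplitude and opposite phase:

```python
import numpy as np

# Two sinusoidal sound waves of the same frequency: the "leaked" wave radiated
# by the housing and a "guided" wave of equal amplitude but opposite phase
# (180 degrees apart), as emitted through a sound guiding hole.
t = np.linspace(0.0, 0.01, 1000)            # 10 ms of time samples
f = 1000.0                                  # 1 kHz tone (illustrative)
leaked = np.sin(2 * np.pi * f * t)          # leaked sound wave
guided = np.sin(2 * np.pi * f * t + np.pi)  # guided wave, shifted by pi

superposed = leaked + guided                # interference of the two waves

# With equal amplitudes and opposite phases, the superposition vanishes.
print(float(np.max(np.abs(superposed))))
```

In practice the two waves differ in amplitude and path length, so the cancellation is partial; the sections below quantify this dependence.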
In some embodiments, one sound guiding hole 30 is set on the upper portion of the sidewall 11. As used herein, the upper portion of the sidewall 11 refers to the portion of the sidewall 11 starting from the top of the sidewall (contacting with the vibration board 21) to about the ⅓ height of the sidewall.
Outside the housing 10, the sound leakage reduction is determined by the superposed sound pressure

∫∫_{S_hole} P ds+∫∫_{S_housing} P_d ds,  (1)

wherein S_hole is the area of the opening of the sound guiding hole 30, and S_housing is the area of the portion of the housing 10 (e.g., the sidewall 11 and the bottom 12) that is not in contact with the human face. The smaller the magnitude of this superposition, the greater the reduction of the sound leakage.
The pressure inside the housing may be expressed as
P=Pa+Pb+Pc+Pe, (2)
wherein Pa, Pb, Pc and Pe are the sound pressures of an arbitrary point inside the housing 10 generated by side a, side b, side c and side e (as illustrated in
The center of the side b, O point, is set as the origin of the space coordinates, and the side b can be set as the z=0 plane, so Pa, Pb, Pc and Pe may be expressed as follows:
wherein R(x′, y′)=√((x−x′)²+(y−y′)²+z²) is the distance between an observation point (x, y, z) and a point on side b (x′, y′, 0); Sa, Sb, Sc and Se are the areas of side a, side b, side c and side e, respectively;
PaR, PbR, PcR and PeR are acoustic resistances of air, which respectively are:
wherein r is the acoustic resistance per unit length, r′ is the acoustic mass per unit length, za is the distance between the observation point and side a, zb is the distance between the observation point and side b, zc is the distance between the observation point and side c, and ze is the distance between the observation point and side e.
Wa(x, y), Wb(x, y), Wc(x, y), We(x, y) and Wd(x, y) are the sound source power per unit area of side a, side b, side c, side e and side d, respectively, which can be derived from the following formulas (11):
Fe=Fa=F−k1 cos ωt−∫∫S
Fb=−F+k1 cos ωt+∫∫S
Fc=Fd=Fb−k2 cos ωt−∫∫S
wherein F is the driving force generated by the transducer 22; Fa, Fb, Fc, Fd, and Fe are the driving forces of side a, side b, side c, side d, and side e, respectively. As used herein, side d is the outside surface of the bottom 12, Sd is the area of side d, f is the viscous resistance formed in the small gaps of the sidewalls, and f=ηΔs(dv/dy).
L is the equivalent load on the human face when the vibration board acts on the human face, γ is the energy dissipated by the elastic component 24, k1 and k2 are the elastic coefficients of the linking component 23 and the elastic component 24, respectively, η is the fluid viscosity coefficient, dv/dy is the velocity gradient of the fluid, Δs is the cross-section area of a subject (board), A is the amplitude, φ is the region of the sound field, and δ is a high order minimum (which is generated by the incompletely symmetrical shape of the housing).
The sound pressure at an arbitrary point outside the housing, generated by the vibration of the housing 10, is expressed as:
wherein R(xd′, yd′)=√((x−xd′)²+(y−yd′)²+(z−zd)²) is the distance between the observation point (x, y, z) and a point on side d (xd′, yd′, zd).
Pa, Pb, Pc and Pe are functions of position. When a hole is set at an arbitrary position in the housing, if the area of the hole is S_hole, the sound pressure of the hole is ∫∫_{S_hole} P ds, wherein P=Pa+Pb+Pc+Pe as in formula (2).
Meanwhile, because the vibration board 21 fits human tissues tightly, the power it gives out is entirely absorbed by human tissues, so the only side that can push the air outside the housing to vibrate is side d, thus forming sound leakage. As described elsewhere, the sound leakage results from the vibrations of the housing 10. For illustrative purposes, the sound pressure generated by the housing 10 may be expressed as ∫∫_{S_d} P_d ds.
The interference between the leaked sound wave and the guided sound wave may result in a weakened sound wave, i.e., the superposition ∫∫_{S_d} P_d ds+∫∫_{S_hole} P ds may be made as small as possible.
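The surface integrals above can be approximated by splitting each surface into small patches and summing complex pressure times patch area. The following sketch (patch counts and pressure values are illustrative assumptions) shows how an opposite-phase hole pressure shrinks the superposition:

```python
import numpy as np

# Discretized version of the superposition to be minimized:
#   integral over S_d of P_d ds  +  integral over S_hole of P ds
# Each surface is split into equal patches, so each integral becomes a sum of
# (complex pressure x patch area). All values below are illustrative.
patch_area = 1e-6                        # 1 mm^2 patches
P_d = np.full(400, 0.8 + 0.1j)           # pressure on housing side d (400 patches)
P_hole = np.full(40, -8.0 - 1.0j)        # opposite-phase pressure at the hole

leak_without_hole = abs(np.sum(P_d) * patch_area)
leak_with_hole = abs(np.sum(P_d) * patch_area + np.sum(P_hole) * patch_area)

print(leak_without_hole, leak_with_hole)
```

Here the hole's contribution exactly cancels the housing's contribution; in a real device the cancellation is partial and depends on the hole's position, size, and damping.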
According to the formulas above, a person having ordinary skill in the art would understand that the effectiveness of reducing sound leakage is related to the dimensions of the housing of the bone conduction speaker, the vibration frequency of the transducer, the position, shape, quantity and size of the sound guiding hole(s) and whether there is damping inside the sound guiding hole(s). Accordingly, various configurations, depending on specific needs, may be obtained by choosing specific position where the sound guiding hole(s) is located, the shape and/or quantity of the sound guiding hole(s) as well as the damping material.
According to the embodiments in this disclosure, the effectiveness of reducing sound leakage after setting sound guiding holes is significant. As shown in
In the tested frequency range, after setting sound guiding holes, the sound leakage is reduced by about 10 dB on average. Specifically, in the frequency range of 1500 Hz˜3000 Hz, the sound leakage is reduced by over 10 dB. In the frequency range of 2000 Hz˜2500 Hz, the sound leakage is reduced by over 20 dB compared to the scheme without sound guiding holes.
A person having ordinary skill in the art can understand from the above-mentioned formulas that when the dimensions of the bone conduction speaker, the target regions for reducing sound leakage, and the frequencies of the sound waves differ, the position, shape, and quantity of the sound guiding holes need to be adjusted accordingly.
For example, in a cylindrical housing, according to different needs, a plurality of sound guiding holes may be set on the sidewall and/or the bottom of the housing. Preferably, the sound guiding holes may be set on the upper portion and/or lower portion of the sidewall of the housing. The quantity of the sound guiding holes set on the sidewall of the housing is no less than two. Preferably, the sound guiding holes may be arranged evenly or unevenly in one or more circles with respect to the center of the bottom. In some embodiments, the sound guiding holes may be arranged in at least one circle. In some embodiments, one sound guiding hole may be set on the bottom of the housing. In some embodiments, the sound guiding hole may be set at the center of the bottom of the housing.
The quantity of the sound guiding holes can be one or more. Preferably, multiple sound guiding holes may be set symmetrically on the housing. In some embodiments, there are 6-8 circularly arranged sound guiding holes.
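A symmetric circular arrangement such as the 6-8 holes mentioned above can be described with a small geometry helper (a hypothetical sketch; the radius and hole count are assumptions):

```python
import numpy as np

# Coordinates of n sound guiding holes arranged evenly in one circle of radius
# r around the center of the housing bottom (a hypothetical geometry helper).
def hole_positions(n, r):
    angles = 2 * np.pi * np.arange(n) / n
    return np.column_stack((r * np.cos(angles), r * np.sin(angles)))

# Six holes on a circle of radius 10 mm; each hole lies exactly r from center.
print(hole_positions(6, 0.01).round(4))
```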
The openings (and cross sections) of the sound guiding holes may be circles, ellipses, rectangles, or slits. A slit generally means a slit along a straight line, a curved line, or an arc. Different sound guiding holes in one bone conduction speaker may have the same or different shapes.
A person having ordinary skill in the art can understand that the sidewall of the housing need not be cylindrical, and the sound guiding holes can be arranged asymmetrically as needed. Various configurations may be obtained by different combinations of the shape, quantity, and position of the sound guiding holes. Some other embodiments are described below along with the figures.
In some embodiments, the leaked sound wave may be generated by a portion of the housing 10. The portion of the housing may be the sidewall 11 of the housing 10 and/or the bottom 12 of the housing 10. Merely by way of example, the leaked sound wave may be generated by the bottom 12 of the housing 10. The guided sound wave output through the sound guiding hole(s) 30 may interfere with the leaked sound wave generated by the portion of the housing 10. The interference may enhance or reduce a sound pressure level of the guided sound wave and/or leaked sound wave in the target region.
In some embodiments, the portion of the housing 10 that generates the leaked sound wave may be regarded as a first sound source (e.g., the sound source 1 illustrated in
For example, the sound field pressure of a point sound source may be expressed as P=(jωρ0Q0/4πr)·e^{j(ωt−kr)}, where ω denotes an angular frequency, ρ0 denotes the air density, r denotes the distance between a target point and the sound source, Q0 denotes the volume velocity of the sound source, and k denotes the wave number. It may be concluded that the magnitude of the sound field pressure of the point sound source is inversely proportional to the distance to the point sound source.
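The inverse-distance decay can be checked numerically, assuming the standard monopole pressure magnitude |P|=ωρ0Q0/(4πr); all parameter values below are illustrative:

```python
import numpy as np

# Magnitude of the sound field pressure of a point (monopole) sound source,
# assuming the standard expression P = j*omega*rho0*Q0/(4*pi*r)*e^{j(wt-kr)}:
# the magnitude |P| falls off as 1/r. Parameter values are illustrative.
rho0 = 1.21              # air density, kg/m^3
f = 1000.0               # frequency, Hz
omega = 2 * np.pi * f    # angular frequency
Q0 = 1e-6                # volume velocity of the source, m^3/s

def pressure_magnitude(r):
    return omega * rho0 * Q0 / (4 * np.pi * r)

# Halving the distance doubles the pressure magnitude.
print(pressure_magnitude(0.5) / pressure_magnitude(1.0))
```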
It should be noted that, the sound guiding hole(s) for outputting sound as a point sound source may only serve as an explanation of the principle and effect of the present disclosure, and the shape and/or size of the sound guiding hole(s) may not be limited in practical applications. In some embodiments, if the area of the sound guiding hole is large, the sound guiding hole may also be equivalent to a planar sound source. Similarly, if an area of the portion of the housing 10 that generates the leaked sound wave is large (e.g., the portion of the housing 10 is a vibration surface or a sound radiation surface), the portion of the housing 10 may also be equivalent to a planar sound source. For those skilled in the art, without creative activities, it may be known that sounds generated by structures such as sound guiding holes, vibration surfaces, and sound radiation surfaces may be equivalent to point sound sources at the spatial scale discussed in the present disclosure, and may have consistent sound propagation characteristics and the same mathematical description method. Further, for those skilled in the art, without creative activities, it may be known that the acoustic effect achieved by the two-point sound sources may also be implemented by alternative acoustic structures. According to actual situations, the alternative acoustic structures may be modified and/or combined discretionarily, and the same acoustic output effect may be achieved.
The two-point sound sources may be formed such that the guided sound wave output from the sound guiding hole(s) may interfere with the leaked sound wave generated by the portion of the housing 10. The interference may reduce a sound pressure level of the leaked sound wave in the surrounding environment (e.g., the target region). For convenience, the sound waves output from an acoustic output device (e.g., the bone conduction speaker) to the surrounding environment may be referred to as far-field leakage since they may be heard by others in the environment. The sound waves output from the acoustic output device to the ears of the user may be referred to as near-field sound since the distance between the bone conduction speaker and the user may be relatively short. In some embodiments, the sound waves output from the two-point sound sources may have a same frequency or frequency range (e.g., 800 Hz, 1000 Hz, 1500 Hz, 3000 Hz, etc.). In some embodiments, the sound waves output from the two-point sound sources may have a certain phase difference. In some embodiments, the sound guiding hole includes a damping layer. The damping layer may be, for example, a tuning paper, a tuning cotton, a nonwoven fabric, a silk, a cotton, a sponge, or a rubber. The damping layer may be configured to adjust the phase of the guided sound wave in the target region. The acoustic output device described herein may include a bone conduction speaker or an air conduction speaker. For example, a portion of the housing (e.g., the bottom of the housing) of the bone conduction speaker may be treated as one of the two-point sound sources, and at least one sound guiding hole of the bone conduction speaker may be treated as the other one of the two-point sound sources.
As another example, one sound guiding hole of an air conduction speaker may be treated as one of the two-point sound sources, and another sound guiding hole of the air conduction speaker may be treated as the other one of the two-point sound sources. It should be noted that, although the construction of two-point sound sources may be different in bone conduction speaker and air conduction speaker, the principles of the interference between the various constructed two-point sound sources are the same. Thus, the equivalence of the two-point sound sources in a bone conduction speaker disclosed elsewhere in the present disclosure is also applicable for an air conduction speaker.
In some embodiments, when the position and phase difference of the two-point sound sources meet certain conditions, the acoustic output device may output different sound effects in the near field (for example, the position of the user's ear) and the far field. For example, if the phases of the point sound sources corresponding to the portion of the housing 10 and the sound guiding hole(s) are opposite, that is, an absolute value of the phase difference between the two-point sound sources is 180 degrees, the far-field leakage may be reduced according to the principle of reversed phase cancellation.
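The reversed-phase cancellation of two closely spaced point sources can be sketched as follows (spacing, frequency, and observation distance are illustrative assumptions):

```python
import numpy as np

# Far-field pressure of two point sources a small distance d apart, driven in
# opposite phase (absolute phase difference of 180 degrees). Compared with a
# single source, the pair partially cancels at a distant observation point.
c = 343.0                      # speed of sound, m/s
f = 1000.0                     # frequency, Hz
k = 2 * np.pi * f / c          # wave number
d = 0.01                       # 1 cm spacing between the two sources
r = 1.0                        # 1 m observation distance (far field)

def p(r_):                     # unit-amplitude monopole with phase k*r
    return np.exp(-1j * k * r_) / r_

single = abs(p(r))             # leakage of one source alone
pair = abs(p(r) - p(r + d))    # minus sign encodes the opposite phase

print(pair / single)           # well below 1: far-field leakage is reduced
```

Because the spacing d is much smaller than the observation distance r, the two contributions arrive nearly equal and opposite, which is why the far field cancels while the near field (where the two distances differ strongly) does not.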
In some embodiments, the interference between the guided sound wave and the leaked sound wave at a specific frequency may relate to a distance between the sound guiding hole(s) and the portion of the housing 10. For example, if the sound guiding hole(s) are set at the upper portion of the sidewall of the housing 10 (as illustrated in
Merely by way of example, the low frequency range may refer to frequencies in a range below a first frequency threshold. The high frequency range may refer to frequencies in a range exceeding a second frequency threshold. The first frequency threshold may be lower than the second frequency threshold. The mid-low frequency range may refer to frequencies in a range between the first frequency threshold and the second frequency threshold. For example, the first frequency threshold may be 1000 Hz, and the second frequency threshold may be 3000 Hz. The low frequency range may refer to frequencies in a range below 1000 Hz, the high frequency range may refer to frequencies in a range above 3000 Hz, and the mid-low frequency range may refer to frequencies in a range of 1000-2000 Hz, 1500-2500 Hz, etc. In some embodiments, a middle frequency range and a mid-high frequency range may also be determined between the first frequency threshold and the second frequency threshold. In some embodiments, the mid-low frequency range and the low frequency range may partially overlap. The mid-high frequency range and the high frequency range may partially overlap. For example, the high frequency range may refer to frequencies in a range above 3000 Hz, and the mid-high frequency range may refer to frequencies in a range of 2800-3500 Hz. It should be noted that the low frequency range, the mid-low frequency range, the middle frequency range, the mid-high frequency range, and/or the high frequency range may be set flexibly according to different situations, and are not limited herein.
In some embodiments, the frequencies of the guided sound wave and the leaked sound wave may be set in a low frequency range (e.g., below 800 Hz, below 1200 Hz, etc.). In some embodiments, the amplitudes of the sound waves generated by the two-point sound sources may be set to be different in the low frequency range. For example, the amplitude of the guided sound wave may be smaller than the amplitude of the leaked sound wave. In this case, the interference may not reduce sound pressure of the near-field sound in the low-frequency range. The sound pressure of the near-field sound may be improved in the low-frequency range. The volume of the sound heard by the user may be improved.
In some embodiments, the amplitude of the guided sound wave may be adjusted by setting an acoustic resistance structure in the sound guiding hole(s) 30. The material of the acoustic resistance structure disposed in the sound guiding hole 30 may include, but is not limited to, plastics (e.g., high-molecular polyethylene, blown nylon, engineering plastics, etc.), cotton, nylon, fiber (e.g., glass fiber, carbon fiber, boron fiber, graphite fiber, graphene fiber, silicon carbide fiber, or aramid fiber), other single or composite materials, other organic and/or inorganic materials, etc. The thickness of the acoustic resistance structure may be 0.005 mm, 0.01 mm, 0.02 mm, 0.5 mm, 1 mm, 2 mm, etc. The acoustic resistance structure may be in a shape adapted to the shape of the sound guiding hole. For example, the acoustic resistance structure may have the shape of a cylinder, a sphere, a cube, etc. In some embodiments, the materials, thickness, and structures of the acoustic resistance structure may be modified and/or combined to obtain a desirable acoustic resistance structure. In some embodiments, the acoustic resistance structure may be implemented by the damping layer.
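The effect of attenuating the guided wave can be sketched by treating the acoustic resistance structure as a single amplitude factor (a simplification; a real damping layer also shifts the phase):

```python
# The acoustic resistance structure is modeled here as a single attenuation
# factor applied to the guided wave's amplitude (a simplification: a real
# damping layer also shifts the phase). Relative to a leaked wave of unit
# amplitude and opposite phase, attenuation leaves a larger residual leakage.
leaked_amp = 1.0

def far_field_residual(attenuation):
    guided_amp = leaked_amp * attenuation   # guided wave after the damping layer
    return abs(leaked_amp - guided_amp)     # residual after interference

for a in (1.0, 0.7, 0.3, 0.0):
    print(a, far_field_residual(a))
```

With full attenuation (factor 0.0) the residual equals the leakage without any hole, which matches the observation below that a near-zero guided amplitude leaves low-frequency leakage essentially unchanged.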
In some embodiments, the amplitude of the guided sound wave output from the sound guiding hole may be relatively low (e.g., zero or almost zero). The difference between the guided sound wave and the leaked sound wave may be maximized, thus achieving a relatively large sound pressure in the near field. In this case, the sound leakage of the acoustic output device having sound guiding holes may be almost the same as the sound leakage of the acoustic output device without sound guiding holes in the low frequency range (e.g., as shown in
The sound guiding holes 30 are preferably set at different positions of the housing 10.
The effectiveness of reducing sound leakage may be determined by the formulas and method as described above, based on which the positions of sound guiding holes may be determined.
A damping layer is preferably set in a sound guiding hole 30 to adjust the phase and amplitude of the sound wave transmitted through the sound guiding hole 30.
In some embodiments, different sound guiding holes may generate different sound waves having a same phase to reduce the leaked sound wave having the same wavelength. In some embodiments, different sound guiding holes may generate different sound waves having different phases to reduce the leaked sound waves having different wavelengths.
In some embodiments, different portions of a sound guiding hole 30 may be configured to generate sound waves having a same phase to reduce the leaked sound waves with the same wavelength. In some embodiments, different portions of a sound guiding hole 30 may be configured to generate sound waves having different phases to reduce the leaked sound waves with different wavelengths.
Additionally, the sound wave inside the housing may be processed to have substantially the same amplitude as, but an opposite phase to, the leaked sound wave, so that the sound leakage may be further reduced.
In this embodiment, the transducer 22 is preferably implemented based on the principle of electromagnetic transduction. The transducer 22 may include components such as a magnetizer, a voice coil, etc., which may be located inside the housing and may generate synchronous vibrations with a same frequency.
In some embodiments, the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 may also be approximately regarded as a point sound source. In some embodiments, the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 and the portion of the housing 10 that generates the leaked sound wave may constitute two-point sound sources. The two-point sound sources may be formed such that the guided sound wave output from the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 may interfere with the leaked sound wave generated by the portion of the housing 10. The interference may reduce a sound pressure level of the leaked sound wave in the surrounding environment (e.g., the target region) at a specific frequency or frequency range.
In some embodiments, the sound waves output from the two-point sound sources may have a same frequency or frequency range (e.g., 1000 Hz, 2500 Hz, 3000 Hz, etc.). In some embodiments, the sound waves output from the first two-point sound sources may have a certain phase difference. In this case, the interference between the sound waves generated by the first two-point sound sources may reduce a sound pressure level of the leaked sound wave in the target region. When the position and phase difference of the first two-point sound sources meet certain conditions, the acoustic output device may output different sound effects in the near field (for example, the position of the user's ear) and the far field. For example, if the phases of the first two-point sound sources are opposite, that is, an absolute value of the phase difference between the first two-point sound sources is 180 degrees, the far-field leakage may be reduced.
In some embodiments, the interference between the guided sound wave and the leaked sound wave may relate to frequencies of the guided sound wave and the leaked sound wave and/or a distance between the sound guiding hole(s) and the portion of the housing 10. For example, if the sound guiding hole(s) are set at the lower portion of the sidewall of the housing 10 (as illustrated in
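The frequency dependence of this cancellation can be sketched by giving the guided wave an extra path length delta set by the hole position (the 2 cm value is an illustrative assumption):

```python
import numpy as np

# Residual far-field pressure when an opposite-phase guided wave travels an
# extra path delta (set by the hole position) before reaching the observation
# point. The extra path adds a phase k*delta, so the depth of cancellation
# depends on frequency. The 2 cm path difference is an illustrative assumption.
c = 343.0                                      # speed of sound, m/s
delta = 0.02                                   # extra path length, m

def residual(f):
    k = 2 * np.pi * f / c                      # wave number at frequency f
    return abs(1.0 - np.exp(-1j * k * delta))  # 0 means perfect cancellation

for f in (500.0, 1000.0, 2000.0, 4000.0):
    print(f, residual(f))                      # cancellation degrades as f rises
```

For a fixed path difference, lower frequencies (longer wavelengths) cancel more deeply, which is why the position of the sound guiding hole(s) selects the frequency range in which leakage is reduced most.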
In this embodiment, the transducer 22 may preferably be implemented based on the principle of electromagnetic transduction. The transducer 22 may include components such as a magnetizer, a voice coil, etc., which may be placed inside the housing and may generate synchronous vibrations with the same frequency.
This illustrates that the effectiveness of reducing sound leakage can be adjusted by changing the positions of the sound guiding holes, while keeping other parameters relating to the sound guiding holes unchanged.
In this embodiment, the transducer 22 may preferably be implemented based on the principle of electromagnetic transduction. The transducer 22 may include components such as a magnetizer, a voice coil, etc., which may be placed inside the housing and may generate synchronous vibrations with the same frequency.
The shape of the sound guiding holes on the upper portion and the shape of the sound guiding holes on the lower portion may be different. One or more damping layers may be arranged in the sound guiding holes to reduce leaked sound waves of the same wavelength (or frequency), or to reduce leaked sound waves of different wavelengths.
In some embodiments, the sound guiding hole(s) at the upper portion of the sidewall of the housing 10 (also referred to as first hole(s)) may be approximately regarded as a point sound source. In some embodiments, the first hole(s) and the portion of the housing 10 that generates the leaked sound wave may constitute two-point sound sources (also referred to as first two-point sound sources). As for the first two-point sound sources, the guided sound wave generated by the first hole(s) (also referred to as first guided sound wave) may interfere with the leaked sound wave or a portion thereof generated by the portion of the housing 10 in a first region. In some embodiments, the sound waves output from the first two-point sound sources may have a same frequency (e.g., a first frequency). In some embodiments, the sound waves output from the first two-point sound sources may have a certain phase difference. In this case, the interference between the sound waves generated by the first two-point sound sources may reduce a sound pressure level of the leaked sound wave in the target region. When the position and phase difference of the first two-point sound sources meet certain conditions, the acoustic output device may output different sound effects in the near field (for example, the position of the user's ear) and the far field. For example, if the phases of the first two-point sound sources are opposite, that is, an absolute value of the phase difference between the first two-point sound sources is 180 degrees, the far-field leakage may be reduced according to the principle of reversed phase cancellation.
In some embodiments, the sound guiding hole(s) at the lower portion of the sidewall of the housing 10 (also referred to as second hole(s)) may also be approximately regarded as another point sound source. Similarly, the second hole(s) and the portion of the housing 10 that generates the leaked sound wave may also constitute two-point sound sources (also referred to as second two-point sound sources). As for the second two-point sound sources, the guided sound wave generated by the second hole(s) (also referred to as second guided sound wave) may interfere with the leaked sound wave or a portion thereof generated by the portion of the housing 10 in a second region. The second region may be the same as or different from the first region. In some embodiments, the sound waves output from the second two-point sound sources may have a same frequency (e.g., a second frequency).
In some embodiments, the first frequency and the second frequency may be in certain frequency ranges. In some embodiments, the frequency of the guided sound wave output from the sound guiding hole(s) may be adjustable. In some embodiments, the frequency of the first guided sound wave and/or the second guided sound wave may be adjusted by one or more acoustic routes. The acoustic routes may be coupled to the first hole(s) and/or the second hole(s). The first guided sound wave and/or the second guided sound wave may be propagated along an acoustic route having a specific frequency selection characteristic. That is, the first guided sound wave and the second guided sound wave may be transmitted to their corresponding sound guiding holes via different acoustic routes. For example, the first guided sound wave and/or the second guided sound wave may be propagated along an acoustic route with a low-pass characteristic to a corresponding sound guiding hole to output a guided sound wave of a low frequency. In this process, the high frequency component of the sound wave may be absorbed or attenuated by the acoustic route with the low-pass characteristic. Similarly, the first guided sound wave and/or the second guided sound wave may be propagated along an acoustic route with a high-pass characteristic to the corresponding sound guiding hole to output a guided sound wave of a high frequency. In this process, the low frequency component of the sound wave may be absorbed or attenuated by the acoustic route with the high-pass characteristic.
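Merely by way of illustration, the frequency selection characteristic of an acoustic route may be sketched as a first-order filter magnitude response (the lumped-filter model and the crossover frequency below are illustrative assumptions, not limitations):

```python
import math

def lowpass_gain(freq, fc):
    """Magnitude response of a first-order low-pass acoustic route
    with cutoff frequency fc (Hz)."""
    return 1.0 / math.sqrt(1.0 + (freq / fc) ** 2)

def highpass_gain(freq, fc):
    """Magnitude response of a first-order high-pass acoustic route
    with cutoff frequency fc (Hz)."""
    r = freq / fc
    return r / math.sqrt(1.0 + r ** 2)

FC = 800.0  # illustrative crossover frequency (Hz)
# route to the first hole(s): low-pass -> passes low frequencies
# route to the second hole(s): high-pass -> passes high frequencies
for f in (200.0, 4000.0):
    print(f"{f:6.0f} Hz: LP {lowpass_gain(f, FC):.2f}, HP {highpass_gain(f, FC):.2f}")
```

At 200 Hz the low-pass route passes the sound wave nearly unattenuated while the high-pass route suppresses it, and the situation reverses at 4 kHz, which is the frequency selection behavior described above.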
As shown in
As shown in
As shown in
In some embodiments, the interference between the leaked sound wave and the guided sound wave may relate to frequencies of the guided sound wave and the leaked sound wave and/or a distance between the sound guiding hole(s) and the portion of the housing 10. In some embodiments, the portion of the housing that generates the leaked sound wave may be the bottom of the housing 10. The first hole(s) may have a larger distance to the portion of the housing 10 than the second hole(s). In some embodiments, the frequency of the first guided sound wave output from the first hole(s) (e.g., the first frequency) and the frequency of the second guided sound wave output from the second hole(s) (e.g., the second frequency) may be different.
In some embodiments, the first frequency and the second frequency may be associated with the distance between the at least one sound guiding hole and the portion of the housing 10 that generates the leaked sound wave. In some embodiments, the first frequency may be set in a low frequency range. The second frequency may be set in a high frequency range. The low frequency range and the high frequency range may or may not overlap.
In some embodiments, the frequency of the leaked sound wave generated by the portion of the housing 10 may be in a wide frequency range. The wide frequency range may include, for example, the low frequency range and the high frequency range, or a portion of the low frequency range and the high frequency range. For example, the leaked sound wave may include a first frequency in the low frequency range and a second frequency in the high frequency range. In some embodiments, the leaked sound wave of the first frequency and the leaked sound wave of the second frequency may be generated by different portions of the housing 10. For example, the leaked sound wave of the first frequency may be generated by the sidewall of the housing 10, while the leaked sound wave of the second frequency may be generated by the bottom of the housing 10. As another example, the leaked sound wave of the first frequency may be generated by the bottom of the housing 10, while the leaked sound wave of the second frequency may be generated by the sidewall of the housing 10. In some embodiments, the frequency of the leaked sound wave generated by a portion of the housing 10 may relate to parameters including the mass, the damping, the stiffness, etc., of the different portions of the housing 10, the frequency of the transducer 22, etc.
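Merely by way of illustration, the dependence of the leaked sound wave's frequency on the mass and stiffness of a portion of the housing 10 may be sketched with a lumped mass-spring model (the parameter values below are hypothetical, chosen only to show the trend):

```python
import math

def natural_frequency(stiffness, mass):
    """Resonance frequency (Hz) of a housing portion modeled as a lumped
    mass-spring system: f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness / mass) / (2.0 * math.pi)

# hypothetical lumped parameters for two portions of the housing 10
sidewall_freq = natural_frequency(stiffness=2.0e4, mass=2.0e-3)  # lighter portion
bottom_freq = natural_frequency(stiffness=2.0e4, mass=8.0e-3)    # heavier portion
print(f"sidewall: {sidewall_freq:.1f} Hz, bottom: {bottom_freq:.1f} Hz")
```

With equal stiffness, the heavier portion resonates at a lower frequency, which is why different portions of the housing 10 may dominate the leaked sound wave in different frequency ranges.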
In some embodiments, the characteristics (amplitude, frequency, and phase) of the first two-point sound sources and the second two-point sound sources may be adjusted via various parameters of the acoustic output device (e.g., electrical parameters of the transducer 22, the mass, stiffness, size, structure, material, etc., of the portion of the housing 10, the position, shape, structure, and/or number (or count) of the sound guiding hole(s)) so as to form a sound field with a particular spatial distribution. In some embodiments, a frequency of the first guided sound wave is smaller than a frequency of the second guided sound wave.
A combination of the first two-point sound sources and the second two-point sound sources may improve sound effects both in the near field and the far field.
Referring to
After comparing calculation results and test results, the effectiveness of this embodiment is basically the same as that of embodiment one, and this embodiment can effectively reduce sound leakage.
The difference between this embodiment and the above-described embodiment three is that, to reduce sound leakage to a greater extent, the sound guiding holes 30 may be arranged on the upper, central, and lower portions of the sidewall 11. The sound guiding holes 30 are arranged evenly or unevenly in one or more circles. The sound guiding holes 30 form different circles, one of which is set along the circumference of the bottom 12 of the housing 10. The sizes of the sound guiding holes 30 are the same.
This scheme may achieve a relatively balanced effect of reducing sound leakage in various frequency ranges compared to schemes where the positions of the holes are fixed. The effect of this design on reducing sound leakage is relatively better than that of other designs where the heights of the holes are fixed, such as embodiment three, embodiment four, embodiment five, etc.
The sound guiding holes 30 in the above embodiments may be perforated holes without shields.
In order to adjust the effect of the sound waves guided from the sound guiding holes, a damping layer (not shown in the figures) may be located at the opening of a sound guiding hole 30 to adjust the phase and/or the amplitude of the sound wave.
There are multiple variations of materials and positions of the damping layer. For example, the damping layer may be made of materials which can damp sound waves, such as tuning paper, tuning cotton, nonwoven fabric, silk, cotton, sponge or rubber. The damping layer may be attached on the inner wall of the sound guiding hole 30, or may shield the sound guiding hole 30 from outside.
More preferably, the damping layers corresponding to different sound guiding holes 30 may be arranged to adjust the sound waves from different sound guiding holes to generate a same phase. The adjusted sound waves may be used to reduce a leaked sound wave having the same wavelength. Alternatively, different sound guiding holes 30 may be arranged to generate different phases to reduce leaked sound waves having different wavelengths (i.e., leaked sound waves with specific wavelengths).
In some embodiments, different portions of a same sound guiding hole can be configured to generate a same phase to reduce leaked sound waves on the same wavelength (e.g., using a pre-set damping layer with the shape of stairs or steps). In some embodiments, different portions of a same sound guiding hole can be configured to generate different phases to reduce leaked sound waves on different wavelengths.
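Merely by way of illustration, the phase adjustment produced by a stair- or step-shaped damping layer may be sketched by relating an extra acoustic path length to a phase shift (the frequency and step size below are illustrative assumptions, not limitations):

```python
import math

C = 343.0  # speed of sound in air (m/s)

def phase_shift(freq, extra_path):
    """Phase shift (radians) introduced by an extra acoustic path length,
    e.g., by one step of a stair-shaped damping layer."""
    wavelength = C / freq
    return 2.0 * math.pi * extra_path / wavelength

# a step of half a wavelength reverses the phase (pi radians) at 1 kHz
FREQ = 1000.0
half_wavelength = C / FREQ / 2.0
print(phase_shift(FREQ, half_wavelength))  # pi, i.e., a reversed phase
```

Because the phase shift depends on the wavelength, different step sizes of the same damping layer can target leaked sound waves of different wavelengths, as described above.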
The above-described embodiments are preferable embodiments with various configurations of the sound guiding hole(s) on the housing of a bone conduction speaker, but a person having ordinary skill in the art can understand that these embodiments do not limit the configurations of the sound guiding hole(s) to those described in this application.
In conventional bone conduction speakers, the housing is closed, so the sound source is sealed inside the housing. In the embodiments of the present disclosure, holes can be provided at proper positions of the housing, making the sound waves inside the housing and the leaked sound waves have substantially the same amplitude and substantially opposite phases in the space, so that the sound waves can interfere with each other and the sound leakage of the bone conduction speaker is reduced. Meanwhile, the volume and weight of the speaker do not increase, the reliability of the product is not compromised, and the cost is barely increased. The designs disclosed herein are easy to implement, reliable, and effective in reducing sound leakage.
In practical applications, the speaker as described elsewhere (e.g., the speaker in
The sensor module 1410 may include a plurality of sensors of various types. The plurality of sensors may detect status information of a user (e.g., a wearer) of the speaker. The status information may include, for example, a location of the user, a gesture of the user, a direction that the user faces, an acceleration of the user, a speech of the user, etc. A controller (e.g., the processing engine 1420) may process the detected status information, and cause one or more components of the speaker 1400 to implement various functions or methods described in the present disclosure. For example, the controller may cause at least one acoustic driver to output sound based on the detected status information. The sound output may originate from audio data from an audio source (e.g., a terminal device of the user, a virtual audio marker associated with a geographic location, etc.). The plurality of sensors may include a locating sensor 1411, an orientation sensor 1412, an inertial sensor 1413, an audio sensor 1414, and a wireless transceiver 1415. Merely for illustration, only one sensor of each type is illustrated in
The locating sensor 1411 may determine a geographic location of the speaker 1400. The locating sensor 1411 may determine the location of the speaker 1400 based on one or more location-based detection systems such as a global positioning system (GPS), a Wi-Fi location system, an infra-red (IR) location system, a bluetooth beacon system, etc. The locating sensor 1411 may detect changes in the geographic location of the speaker 1400 and/or a user (e.g., the user may wear the speaker 1400, or may be separated from the speaker 1400) and generate sensor data indicating the changes in the geographic location of the speaker 1400 and/or the user.
The orientation sensor 1412 may track an orientation of the user and/or the speaker 1400. The orientation sensor 1412 may include a head-tracking device and/or a torso-tracking device for detecting a direction in which the user is facing, as well as the movement of the user and/or the speaker 1400. Exemplary head-tracking devices or torso-tracking devices may include an optical-based tracking device (e.g., an optical camera), an accelerometer, a magnetometer, a gyroscope, a radar, etc. In some embodiments, the orientation sensor 1412 may detect a change in the user's orientation, such as a turning of the torso or an about-face movement, and generate sensor data indicating the change in the orientation of the body of the user.
The inertial sensor 1413 may sense gestures of the user or a body part (e.g., head, torso, limbs) of the user. The inertial sensor 1413 may include an accelerometer, a gyroscope, a magnetometer, or the like, or any combination thereof. In some embodiments, the accelerometer, the gyroscope, and/or the magnetometer may be independent components. In some embodiments, the accelerometer, the gyroscope, and/or the magnetometer may be integrated or collectively housed in a single sensor component. In some embodiments, the inertial sensor 1413 may detect an acceleration, a deceleration, a tilt level, a relative position in the three-dimensional (3D) space, etc. of the user or a body part (e.g., an arm, a finger, a leg, etc.) of the user, and generate sensor data regarding the gestures of the user accordingly.
The audio sensor 1414 may detect sound from the user, a smart device 1440, and/or ambient environment. In some embodiments, the audio sensor 1414 may include one or more microphones, or a microphone array. The one or more microphones or the microphone array may be housed within the speaker 1400 or in another device connected to the speaker 1400. In some embodiments, the one or more microphones or the microphone array may be generic microphones. In some embodiments, the one or more microphones or the microphone array may be customized for VR and/or AR.
In some embodiments, the audio sensor 1414 may be positioned so as to receive audio signals proximate to the speaker 1400, e.g., speech/voice input by the user to enable a voice control functionality. For example, the audio sensor 1414 may detect sounds of the user wearing the speaker 1400 and/or other users proximate to or interacting with the user. The audio sensor 1414 may further generate sensor data based on the received audio signals.
The wireless transceiver 1415 may communicate with other transceiver devices in distinct locations. The wireless transceiver 1415 may include a transmitter and a receiver. Exemplary wireless transceivers may include, for example, a Local Area Network (LAN) transceiver, a Wide Area Network (WAN) transceiver, a ZigBee transceiver, a Near Field Communication (NFC) transceiver, a Bluetooth (BT) transceiver, a Bluetooth Low Energy (BTLE) transceiver, or the like, or any combination thereof. In some embodiments, the wireless transceiver 1415 may be configured to detect an audio message (e.g., an audio cache or pin) proximate to the speaker 1400, e.g., in a local network at a geographic location or in a cloud storage system connected with the geographic location. For example, another user, a business establishment, a government entity, a tour group, etc. may leave an audio message at a particular geographic or virtual location, and the wireless transceiver 1415 may detect the audio message, and prompt the user to initiate a playback of the audio message.
In some embodiments, the sensor module 1410 (e.g., the locating sensor 1411, the orientation sensor 1412, and the inertial sensor 1413) may detect that the user moves toward or looks in a direction of a point of interest (POI). The POI may be an entity corresponding to a geographic or virtual location. The entity may include a building (e.g., a school, a skyscraper, a bus station, a subway station, etc.), a landscape (e.g., a park, a mountain, etc.), or the like. In some embodiments, the entity may be an object specified by a user. For example, the entity may be a favorite coffee shop of the user. In some embodiments, the POI may be associated with a virtual audio marker. One or more localized audio messages may be attached to the audio marker. The one or more localized audio messages may include, for example, a song, a pre-recorded message, an audio signature, an advertisement, a notification, or the like, or any combination thereof.
The processing engine 1420 may include a sensor data processing module 1421 and a retrieve module 1422. The sensor data processing module 1421 may process sensor data obtained from the sensor module 1410 (e.g., the locating sensor 1411, the orientation sensor 1412, the inertial sensor 1413, the audio sensor 1414, and/or the wireless transceiver 1415), and generate processed information and/or data. The information and/or data generated by the sensor data processing module 1421 may include a signal, a representation, an instruction, or the like, or any combination thereof. For example, the sensor data processing module 1421 may receive sensor data indicating the location of the speaker 1400, and determine whether the user is proximate to a POI or whether the user is facing towards a POI. In response to a determination that the user is proximate to the POI or the user is facing towards the POI, the sensor data processing module 1421 may generate a signal and/or an instruction used for causing the retrieve module 1422 to obtain an audio message (i.e., a localized audio message associated with the POI). The audio message may be further provided to the user via the speaker 1400 for playback.
Optionally or additionally, during the playback of the audio message, an active noise reduction (ANR) technique may be performed so as to reduce noise. As used herein, the ANR may refer to a method for reducing undesirable sound by generating additional sound specifically designed to cancel the noise in the audio message according to the reversed phase cancellation principle. The additional sound may have a reversed phase, a same amplitude, and a same frequency as the noise. Merely by way of example, the speaker 1400 may include an ANR component (not shown) configured to reduce the noise. The ANR component may receive sensor data generated by the audio sensor 1414, signals generated by the processing engine 1420 based on the sensor data, or the audio messages received via the wireless transceiver 1415, etc. The received data, signals, audio messages, etc. may include sound from a plurality of directions, which may include desired sound received from a certain direction and undesired sound (i.e., noise) received from other directions. The ANR component may analyze the noise, and perform an ANR operation to suppress or eliminate the noise.
In some embodiments, the ANR component may provide a signal to a transducer (e.g., the transducer 22, or any other transducers) disposed in the speaker to generate an anti-noise acoustic signal. The anti-noise acoustic signal may reduce or substantially prevent the noise from being heard by the user. In some embodiments, the anti-noise acoustic signal may be generated according to the noise detected by the speaker. In some embodiments, the noise may include background noise in the ambient environment around the user, or sound that is not intended to be collected when a user wears the audio device, for example, traffic noise, wind noise, etc. For example, the noise may be detected by a noise detection component (e.g., the audio sensor 1414) of the speaker. As the audio sensor is close to the ear of the user, the detected noise may also be referred to as the noise heard by the user. In some embodiments, the anti-noise acoustic signal may have a same amplitude, a same frequency, and a reverse phase as the detected noise.
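Merely by way of illustration, the reversed phase cancellation principle underlying the anti-noise acoustic signal may be sketched as follows (the sample rate, tone frequency, and amplitude are illustrative assumptions, and the detected noise is idealized as a single tone):

```python
import math

RATE = 8000  # sample rate (Hz), illustrative

def tone(freq, amp, n, phase=0.0):
    """Generate n samples of a sine tone with the given amplitude and phase."""
    return [amp * math.sin(2 * math.pi * freq * i / RATE + phase)
            for i in range(n)]

# detected noise and its anti-noise: same amplitude, same frequency,
# phase reversed (shifted by pi radians)
noise = tone(440.0, 0.5, 256)
anti_noise = tone(440.0, 0.5, 256, phase=math.pi)

# superposition of the noise and the anti-noise at the ear
residual = [n + a for n, a in zip(noise, anti_noise)]
print(max(abs(r) for r in residual))  # near zero after cancellation
```

The residual is essentially zero because a sine wave and its pi-shifted copy cancel sample by sample, which is the reversed phase cancellation the ANR component relies on.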
The processing engine 1420 may be coupled (e.g., via wireless and/or wired connections) to a memory 1430. The memory 1430 may be implemented by any storage device capable of storing data. In some embodiments, the memory 1430 may be located in a local server or a cloud-based server, etc. In some embodiments, the memory 1430 may include a plurality of audio files 1431 for playback by the speaker 1400 and/or user data 1432 of one or more users. The audio files 1431 may include audio messages (e.g., audio pins or caches created by the user or other users), audio information provided by automated agents, or other audio files available from network sources coupled with a network interface, such as a network-attached storage (NAS) device, a DLNA server, etc. The audio files 1431 may be accessible by the speaker 1400 over a local area network such as a wireless (e.g., Wi-Fi) or wired (e.g., Ethernet) network. For example, the audio files 1431 may include localized audio messages attached to virtual audio markers associated with a POI, which may be accessed when a user is proximate to or facing towards a POI.
The user data 1432 may be user-specific, community-specific, device-specific, location-specific, etc. In some embodiments, the user data 1432 may include audio information related to one or more users. Merely by ways of example, the user data 1432 may include user-defined playlists of digital music files, audio messages stored by the user or other users, information about frequently played audio files associated with the user or other similar users (e.g., those with common audio file listening histories, demographic traits, or Internet browsing histories), “liked” or otherwise favored audio files associated with the user or other users, a frequency at which the audio files 1431 are updated by the user or other users, or the like, or any combination thereof. In some embodiments, the user data 1432 may further include basic information of the one or more users. Exemplary basic information may include names, ages, careers, habits, preferences, etc.
The processing engine 1420 may also be coupled with a smart device 1440 that has access to user data (e.g., the user data 1432) or biometric information about the user. The smart device 1440 may include one or more personal computing devices (e.g., a desktop or laptop computer), wearable smart devices (e.g., a smart watch, smart glasses), a smart phone, a remote control device, a smart beacon device (e.g., a smart Bluetooth beacon system), a stationary speaker system, or the like, or any combination thereof. In some embodiments, the smart device 1440 may include a conventional user interface for permitting interaction with the user, and one or more network interfaces for interacting with the processing engine 1420 and other components in the speaker 1400. In some embodiments, the smart device 1440 may be utilized to connect the speaker 1400 to a Wi-Fi network, create a system account for the user, set up music and/or location-based audio services, browse content for playback, set assignments of the speaker 1400 or other audio playback devices, transport control (e.g., play/pause, fast forward/rewind, etc.) of the speaker 1400, select one or more speakers for content playback (e.g., a single-room playback or a synchronized multi-room playback), etc. In some embodiments, the smart device 1440 may further include sensors for measuring biometric information about the user. Exemplary biometric information may include travel, sleep, or exercise patterns, body temperature, heart rates, paces of gait (e.g., via accelerometers), or the like, or any combination thereof.
The retrieve module 1422 may be configured to retrieve data from the memory 1430 and/or the smart device 1440 based on the information and/or data generated by the sensor data processing module 1421, and determine an audio message for playback. For example, the sensor data processing module 1421 may analyze one or more voice commands from the user (obtained from the audio sensor 1414), and determine an instruction based on the one or more voice commands. The retrieve module 1422 may obtain and/or modify a localized audio message based on the instruction. As another example, the sensor data processing module 1421 may generate signals indicating that a user is proximate to a POI and/or the user is facing towards the POI. Accordingly, the retrieve module 1422 may obtain a localized audio message associated with the POI based on the signals. As a further example, the sensor data processing module 1421 may generate a representation indicating a characteristic of a location as a combination of factors from the sensor data, the user data 1432, and/or information from the smart device 1440. The retrieve module 1422 may obtain the audio message based on the representation.
In 1510, a point of interest (POI) may be detected. In some embodiments, the POI may be detected by the sensor module 1410 of the speaker 1400.
As used herein, the POI may be an entity corresponding to a geographic or virtual location. The entity may include a building (e.g., a school, a skyscraper, a bus station, a subway station, etc.), a landscape (e.g., a park, a mountain, etc.), or the like, or any combination thereof. In some embodiments, the entity may be an object specified by the user. For example, the entity may be a favorite coffee shop of the user. In some embodiments, the POI may be associated with a virtual audio marker. One or more localized audio messages may be attached to the audio marker. The one or more localized audio messages may include, for example, a song, a pre-recorded message, an audio signature, an advertisement, a notification, or the like, or any combination thereof.
In some embodiments, the sensor module 1410 (e.g., the locating sensor 1411, the orientation sensor 1412, and the inertial sensor 1413) may detect that a user wearing the speaker 1400 moves toward or looks in the direction of the POI. Specifically, the sensor module 1410 (e.g., the locating sensor 1411) may detect changes in a geographic location of the user, and generate sensor data indicating the changes in the geographic location of the user. The sensor module 1410 (e.g., the orientation sensor 1412) may detect changes in an orientation of the user (e.g., the head of the user), and generate sensor data indicating the changes in the orientation of the user. The sensor module 1410 (e.g., the inertial sensor 1413) may also detect gestures (e.g., via an acceleration, a deceleration, a tilt level, a relative position in the three-dimensional (3D) space, etc. of the user or a body part (e.g., an arm, a finger, a leg, etc.)) of the user, and generate sensor data indicating the gestures of the user. The sensor data may be transmitted, for example, to the processing engine 1420 for further processing. For example, the processing engine 1420 (e.g., the sensor data processing module 1421) may process the sensor data, and determine whether the user moves toward or looks in the direction of the POI.
In some embodiments, other information may also be detected. For example, the sensor module 1410 (e.g., the audio sensor 1414) may detect sound from the user, a smart device (e.g., the smart device 1440), and/or ambient environment. Specifically, one or more microphones or a microphone array may be housed within the speaker 1400 or in another device connected to the speaker 1400. The sensor module 1410 may detect sound using the one or more microphones or the microphone array. In some embodiments, the sensor module 1410 (e.g., the wireless transceiver 1415) may communicate with transceiver devices in distinct locations, and detect an audio message (e.g., an audio cache or pin) when the speaker 1400 is proximate to the transceiver devices. In some embodiments, other information may also be transmitted as part of the sensor data to the processing engine 1420 for processing.
In 1520, an audio message related to the POI may be determined. In some embodiments, the audio message related to the POI may be determined by the processing engine 1420.
In some embodiments, the processing engine 1420 (e.g., the sensor data processing module 1421) may generate information and/or data based at least in part on the sensor data. The information and/or data may include a signal, a representation, an instruction, or the like, or any combination thereof. Merely by way of example, the sensor data processing module 1421 may receive sensor data indicating a location of a user, and determine whether the user is proximate to or facing towards the POI. In response to a determination that the user is proximate to the POI or facing towards the POI, the sensor data processing module 1421 may generate a signal and/or an instruction causing the retrieve module 1422 to obtain an audio message (i.e., a localized audio message attached to an audio marker associated with the POI). As another example, the sensor data processing module 1421 may analyze sensor data related to a voice command detected from a user (e.g., by performing natural language processing), and generate a signal and/or an instruction related to the voice command. As a further example, the sensor data processing module 1421 may generate a representation by weighting the sensor data, user data (e.g., the user data 1432), and other available data (e.g., a demographic profile of a plurality of users with at least one common attribute with the user, a categorical popularity of an audio file, etc.). The representation may indicate a general characteristic of a location as a combination of factors from the sensor data, the user data, and/or information from a smart device.
Further, the processing engine 1420 (e.g., the retrieve module 1422) may determine an audio message related to the POI based on the generated information and/or the data. For example, the processing engine 1420 may retrieve an audio message from the audio files 1431 in the memory 1430 based on a signal and/or an instruction related to a voice command. As another example, the processing engine 1420 may retrieve an audio message based on a representation and relationships between the representation and the audio files 1431. The relationships may be predetermined and stored in a storage device. As a further example, the processing engine 1420 may retrieve a localized audio message related to a POI when a user is proximate to or facing towards the POI. In some embodiments, the processing engine 1420 may determine two or more audio messages related to the POI based on the information and/or the data. For example, when a user is proximate to or facing towards the POI, the processing engine 1420 may determine audio messages including “liked” music files, audio files accessed by other users at the POI, or the like, or any combination thereof.
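Merely by way of illustration, the proximity check that the sensor data processing module 1421 may perform can be sketched with a great-circle distance computation (the function names, coordinates, and the 50-meter trigger radius below are hypothetical assumptions, not part of any specific disclosed implementation):

```python
import math

EARTH_R = 6371000.0  # mean Earth radius (m)

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) between two latitude/longitude points
    given in degrees, using the haversine formula."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def is_proximate(user, poi, radius=50.0):
    """Return True when the user is within `radius` meters of the POI,
    i.e., when a localized audio message may be retrieved for playback."""
    return haversine(*user, *poi) <= radius

poi = (40.6892, -74.0445)       # hypothetical POI location (lat, lon)
near = (40.6893, -74.0444)      # a few meters from the POI
far_away = (40.7000, -74.0000)  # over a kilometer away
print(is_proximate(near, poi), is_proximate(far_away, poi))
```

When the check returns True, the retrieve module 1422 could then be signaled to obtain the localized audio message associated with the POI, as described above.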
Taking a speaker customized for VR as an example, the speaker may determine an audio message related to a POI based at least in part on sensor data obtained by sensors disposed in the speaker. For example, the POI may be a historical site associated with a virtual audio marker having one or more localized audio messages. When the user wearing the speaker is proximate to or facing towards the historical site, the localized audio messages may be recommended to the user via a virtual interface. The one or more localized audio messages may include virtual environment data used to relive historical stories of the historical site. In the virtual environment data, sound data may be properly designed for simulating sound effects of different scenarios. For example, sound may be transmitted from different sound guiding holes to simulate sound effects of different directions. As another example, the volume and/or delay of sound may be adjusted to simulate sound effects at different distances.
Taking a speaker customized for AR as another example, the speaker may determine an audio message related to a POI based at least in part on sensor data obtained by sensors disposed in the speaker. Additionally, the audio message may be combined with real-world sound in ambient environment so as to enhance an audio experience of the user. The real-world sound in ambient environment may include sounds in all directions of the ambient environment, or may be sounds in a certain direction. Merely by way of example,
In 1530, the audio message may be replayed. In some embodiments, the audio message may be replayed by the processing engine 1420.
In some embodiments, the processing engine 1420 may replay the audio message via the speaker 1400 directly. In some embodiments, the processing engine 1420 may prompt the user to initiate a playback of the audio message. For example, the processing engine 1420 may output a prompt (e.g., a voice prompt via a sound guiding hole (e.g., one of the one or more sound guiding holes 30), a visual representation via a virtual user-interface) to the user. The user may respond to the prompt by interacting with the speaker 1400. For example, the user may interact with the speaker 1400 using, for example, gestures of his/her body (e.g., head, torso, limbs, eyeballs), voice command, etc.
Taking a speaker customized for AR as another example, the user may interact with the speaker via a virtual user-interface (UI).
It should be noted that the above statements are preferable embodiments and technical principles thereof. A person having ordinary skill in the art would readily understand that this disclosure is not limited to the specific embodiments described herein, and that various obvious variations, adjustments, and substitutions may be made without departing from the protected scope of this disclosure. Therefore, although the above embodiments describe this disclosure in detail, this disclosure is not limited to those embodiments, and there may be many other equivalent embodiments within the scope of the present disclosure. The protected scope of this disclosure is determined by the following claims.
Foreign Application Priority Data

Number | Date | Country | Kind |
---|---|---|---|
201410005804.0 | Jan 2014 | CN | national |
201910364346.2 | Apr 2019 | CN | national |
201910888067.6 | Sep 2019 | CN | national |
201910888762.2 | Sep 2019 | CN | national |
The present application is a continuation-in-part of U.S. patent application Ser. No. 17/074,762 filed on Oct. 20, 2020, which is a continuation-in-part of U.S. patent application Ser. No. 16/813,915 (now U.S. Pat. No. 10,848,878) filed on Mar. 10, 2020, which is a continuation of U.S. patent application Ser. No. 16/419,049 (now U.S. Pat. No. 10,616,696) filed on May 22, 2019, which is a continuation of U.S. patent application Ser. No. 16/180,020 (now U.S. Pat. No. 10,334,372) filed on Nov. 5, 2018, which is a continuation of U.S. patent application Ser. No. 15/650,909 (now U.S. Pat. No. 10,149,071) filed on Jul. 16, 2017, which is a continuation of U.S. patent application Ser. No. 15/109,831 (now U.S. Pat. No. 9,729,978) filed on Jul. 6, 2016, which is a U.S. National Stage entry under 35 U.S.C. § 371 of International Application No. PCT/CN2014/094065, filed on Dec. 17, 2014, designating the United States of America, which claims priority to Chinese Patent Application No. 201410005804.0, filed on Jan. 6, 2014; the present application is also a continuation-in-part of U.S. patent application Ser. No. 17/170,920 filed on Feb. 9, 2021, which is a continuation of International Application No. PCT/CN2020/087002, filed on Apr. 26, 2020, which claims priority to Chinese Patent Application No. 201910888067.6, filed on Sep. 19, 2019, Chinese Patent Application No. 201910888762.2, filed on Sep. 19, 2019, and Chinese Patent Application No. 201910364346.2, filed on Apr. 30, 2019. Each of the above-referenced applications is hereby incorporated by reference.
U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
2327320 | Shapiro | Aug 1943 | A |
4987597 | Haertl | Jan 1991 | A |
5327506 | Stites, III | Jul 1994 | A |
5430803 | Kimura et al. | Jul 1995 | A |
5692059 | Kruger | Nov 1997 | A |
5757935 | Kang et al. | May 1998 | A |
5790684 | Niino et al. | Aug 1998 | A |
6062337 | Zinserling | May 2000 | A |
6850138 | Sakai | Feb 2005 | B1 |
8141678 | Ikeyama et al. | Mar 2012 | B2 |
9226075 | Lee | Dec 2015 | B2 |
9729978 | Qi et al. | Aug 2017 | B2 |
10149071 | Qi et al. | Dec 2018 | B2 |
10334372 | Qi et al. | Jun 2019 | B2 |
10555106 | Mehra | Feb 2020 | B1 |
10609465 | Wakeland et al. | Mar 2020 | B1 |
10631075 | Patil | Apr 2020 | B1 |
10897677 | Walraevens et al. | Jan 2021 | B2 |
11122359 | Zhang et al. | Sep 2021 | B2 |
11197106 | Qi et al. | Dec 2021 | B2 |
20030048913 | Lee et al. | Mar 2003 | A1 |
20040131219 | Polk, Jr. | Jul 2004 | A1 |
20050251952 | Johnson | Nov 2005 | A1 |
20060098829 | Kobayashi | May 2006 | A1 |
20070041595 | Carazo et al. | Feb 2007 | A1 |
20070098198 | Hildebrandt | May 2007 | A1 |
20090095613 | Lin | Apr 2009 | A1 |
20090141920 | Suyama | Jun 2009 | A1 |
20090147981 | Blanchard et al. | Jun 2009 | A1 |
20090190781 | Fukuda | Jul 2009 | A1 |
20090208031 | Abolfathi | Aug 2009 | A1 |
20090285417 | Shin et al. | Nov 2009 | A1 |
20090290730 | Fukuda et al. | Nov 2009 | A1 |
20100054492 | Eaton et al. | Mar 2010 | A1 |
20100310106 | Blanchard et al. | Dec 2010 | A1 |
20100322454 | Ambrose et al. | Dec 2010 | A1 |
20110150262 | Nakama et al. | Jun 2011 | A1 |
20110170730 | Zhu | Jul 2011 | A1 |
20120020501 | Lee | Jan 2012 | A1 |
20120070022 | Saiki | Mar 2012 | A1 |
20120177206 | Yamagishi | Jul 2012 | A1 |
20120300956 | Horii | Nov 2012 | A1 |
20130051585 | Karkkainen et al. | Feb 2013 | A1 |
20130108068 | Poulsen et al. | May 2013 | A1 |
20130329919 | He | Dec 2013 | A1 |
20140009008 | Li et al. | Jan 2014 | A1 |
20140064533 | Kasic, II | Mar 2014 | A1 |
20140185822 | Kunimoto et al. | Jul 2014 | A1 |
20140185837 | Kunimoto et al. | Jul 2014 | A1 |
20140274229 | Fukuda | Sep 2014 | A1 |
20140355777 | Nabata et al. | Dec 2014 | A1 |
20150030189 | Nabata et al. | Jan 2015 | A1 |
20150256656 | Horii | Sep 2015 | A1 |
20150264473 | Fukuda | Sep 2015 | A1 |
20150326967 | Otani | Nov 2015 | A1 |
20160037243 | Lippert et al. | Feb 2016 | A1 |
20160150337 | Nandy | May 2016 | A1 |
20160165357 | Morishita et al. | Jun 2016 | A1 |
20160295328 | Park | Oct 2016 | A1 |
20160329041 | Qi et al. | Nov 2016 | A1 |
20170201823 | Shetye | Jul 2017 | A1 |
20170223445 | Bullen et al. | Aug 2017 | A1 |
20170230741 | Matsuo et al. | Aug 2017 | A1 |
20180167710 | Silver | Jun 2018 | A1 |
20180182370 | Hyde et al. | Jun 2018 | A1 |
20180376231 | Pfaffinger | Dec 2018 | A1 |
20190014425 | Liao et al. | Jan 2019 | A1 |
20190052954 | Rusconi Clerici Beltrami | Feb 2019 | A1 |
20190238971 | Wakeland et al. | Aug 2019 | A1 |
20200137476 | Shinmen et al. | Apr 2020 | A1 |
20200169801 | Zhu | May 2020 | A1 |
20200252708 | Zhu | Aug 2020 | A1 |
20200367008 | Walsh et al. | Nov 2020 | A1 |
20210099027 | Larsson et al. | Apr 2021 | A1 |
20210219059 | Qi et al. | Jul 2021 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
201616895 | Oct 2010 | CN |
201690580 | Dec 2010 | CN |
102014328 | Apr 2011 | CN |
102421043 | Apr 2012 | CN |
202435600 | Sep 2012 | CN |
103167390 | Jun 2013 | CN |
103347235 | Oct 2013 | CN |
204206450 | Mar 2015 | CN |
106792304 | May 2017 | CN |
109640209 | Apr 2019 | CN |
2011367 | Dec 2014 | EP |
2006332715 | Dec 2006 | JP |
2007251358 | Sep 2007 | JP |
20050030183 | Mar 2005 | KR |
20090082999 | Aug 2009 | KR |
20170133754 | Dec 2017 | KR |
2004095878 | Nov 2004 | WO |
2015087093 | Jun 2015 | WO |
Other Publications

Entry |
---|
International Search Report in PCT/CN2014/094065 dated Mar. 17, 2015, 5 pages. |
Written Opinion in PCT/CN2014/094065 dated Mar. 17, 2015, 10 pages. |
First Office Action in Chinese Application No. 201410005804.0 dated Dec. 7, 2015, 9 pages. |
Notice of Reasons for Refusal in Japanese Application No. 2016545828 dated Jun. 20, 2017, 10 pages. |
The Extended European Search Report in European Application No. 14877111.6 dated Mar. 17, 2017, 6 pages. |
First Examination Report in Indian Application No. 201617026062 dated Nov. 13, 2020, 6 pages. |
International Search Report in PCT/CN2020/087002 dated Jul. 14, 2020, 4 pages. |
Written Opinion in PCT/CN2020/087002 dated Jul. 14, 2020, 5 pages. |
Notice of Preliminary Rejection in Korean Application No. 10-2022-7010046 dated Jun. 20, 2022, 15 pages. |
Prior Publication Data

Number | Date | Country | |
---|---|---|---|
20210219072 A1 | Jul 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2020/087002 | Apr 2020 | US |
Child | 17170920 | US | |
Parent | 16419049 | May 2019 | US |
Child | 16813915 | US | |
Parent | 16180020 | Nov 2018 | US |
Child | 16419049 | US | |
Parent | 15650909 | Jul 2017 | US |
Child | 16180020 | US | |
Parent | 15109831 | US | |
Child | 15650909 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17170920 | Feb 2021 | US |
Child | 17219882 | US | |
Parent | 17074762 | Oct 2020 | US |
Child | 17170920 | US | |
Parent | 16813915 | Mar 2020 | US |
Child | 17074762 | US |