Ambient sounds, also referred to as background noise, often include unwanted sounds that interfere with the ability of a person to accurately hear and process audible information. For example, a user may wish to listen to music, spoken word, or other audible content that is emitted as acoustic signals from headphones, ear pods, or similar accessories. Background noise of sufficient amplitude can superimpose itself on the desired audible signals, rendering a combination of sounds that is either unintelligible, or at least unenjoyable, to the user. Noise cancelation techniques, such as active noise cancelation (ANC) and adaptive filtering, represent a form of technology that may be integrated into wearable user equipment (such as headphones or ear pods) to reduce the effects of background noise while still clearly delivering desired audible content. For example, a set of headphones having noise cancelation functionality may include a speaker to deliver the desired audible content, one or more microphones (for example, to measure ambient noise), and processing logic to generate an anti-noise signal. The anti-noise signal may be mixed with signals carrying the desired audible content in order to cancel background noise from the acoustic signals that reach the user's ear(s). Such noise cancelation techniques substantially increase the complexity of the user equipment (for example, in terms of components needed and local processing resources) as compared to the nominal baseline need of a speaker. Further, the equipment to provide such noise cancelation technologies needs to be individually replicated for each user consuming the acoustic content since it is implemented at the user level.
Moreover, when the headphones are used in environments where there is little background noise and little need for noise cancelation, the noise cancelation either continues to operate (unnecessarily consuming power resources) or is turned off, becoming an inefficiently utilized idle resource.
The present disclosure is directed, in part, to systems and methods for ambient noise mitigation as a network service, substantially as shown and/or described in connection with at least one of the Figures, and as set forth more completely in the claims.
Systems and methods for ambient noise mitigation as a network service are provided. In contrast to available ambient sound mitigation technologies, embodiments of the present disclosure deliver an ambient sound cancelation signal to user equipment (UE) using an ambient sound mitigation server that may be hosted at the network edge of a telecommunications operator core network. Ambient sound mitigation is provided using a low latency network connection between the UE and the ambient sound mitigation server (e.g., a low latency network slice). In some embodiments, the network slice may be established to create an end-to-end logical channel between the UE and the ambient sound mitigation server using a low latency network protocol, such as 5G New Radio (NR) ultra-reliable low latency communications (URLLC). Hosting the ambient sound mitigation server at the network edge may reduce latency and increase reliability, for example by lowering the number of nodes on the data path of the network slice for a UE as compared to a data path through the operator core network.
The ambient sound mitigation server implements a sound wave prediction function to generate the cancelation signal received by the UE. The sound wave prediction function may receive inputs including, for example, a digitized audio signal representing ambient sounds within a venue, a listener position within the venue (e.g., a position of a user's UE), and a venue acoustic profile. Based on these inputs, the sound wave prediction function may predict at least a portion of the ambient sound expected to be received at the location of the UE at a given point in time, and deliver a cancelation signal to cancel that portion of the ambient sound as it is received at the location of the UE at that given point in time. The sound wave prediction function can adjust a phase and/or delay of the cancelation signal in order to adjust the amount of subtractive interference caused by the sum of the cancelation signal with local ambient sound. For example, in some embodiments, the sound wave prediction function may control the latency of the network slice carrying the cancelation signal to adjust the time of arrival of the cancelation signal as received at the UE.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in isolation as an aid in determining the scope of the claimed subject matter.
Aspects of the present disclosure are described in detail herein with reference to the attached Figures, which are intended to be exemplary and non-limiting, wherein:
In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which are shown, by way of illustration, specific illustrative embodiments in which the embodiments may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the embodiments, and it is to be understood that other embodiments may be utilized and that logical, mechanical, and electrical changes may be made without departing from the scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense.
Embodiments of the present disclosure provide for ambient sound mitigation as a network service. Background ambient sounds often interfere with the ability of a person to comprehend and/or enjoy audible content. Background ambient sounds can also represent a source of distraction. However, user devices used to deliver audible content typically do not incorporate ambient sound mitigation technologies, and those that do rely on additional supporting sensors and signal processing resources that increase both the complexity and expense of the devices.
In contrast to currently available ambient sound mitigation technologies, embodiments of the present disclosure deliver an ambient sound cancelation signal to user equipment (UE) using an ambient sound mitigation server that may be hosted at the network edge of a telecommunications operator core network. Moreover, ambient sound mitigation is provided using a low latency network connection between the UE and the ambient sound mitigation server. For example, in some embodiments a network slice may be established to create an end-to-end logical channel between the UE and the ambient sound mitigation server using a low latency network protocol, such as 5G New Radio (NR) ultra-reliable low latency communications (URLLC), that supports very low end-to-end latencies (e.g., from under 0.5 ms to 50 ms on the application layer and under 1 ms on the 5G radio interface). In some embodiments, individual end-to-end logical channels for a plurality of different UE to the ambient sound mitigation server may be created using URLLC network slicing. Further, hosting the ambient sound mitigation server at the network edge may reduce latency and increase reliability, for example by lowering the number of nodes on the data path of the network slice for a UE as compared to a data path through the operator core network.
In some embodiments, the ambient sound mitigation server implements a sound wave prediction function to generate the cancelation signal received by the UE. The sound wave prediction function may receive inputs including, for example, a digitized audio signal representing ambient sounds within a venue, a listener position within the venue (e.g., a position of a user's UE), and a venue acoustic profile. Based on these inputs, the sound wave prediction function may predict at least a portion of the ambient sound expected to be received at the location of the UE at a given point in time. Using that prediction, the ambient sound mitigation server may deliver a cancelation signal to cancel that portion of the ambient sound as it is received at the location of the UE at that given point in time. Moreover, the sound wave prediction function can adjust a phase and/or delay of the cancelation signal in order to adjust the amount of subtractive interference caused by the sum of the cancelation signal with local ambient sound. For example, in some embodiments, the sound wave prediction function may control the latency of the network slice carrying the cancelation signal to adjust the time of arrival of the cancelation signal as received at the UE.
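By way of a non-limiting sketch, the prediction-and-inversion idea described above might be modeled as follows. The helper names, the nominal speed of sound, and the simple single-path delay-and-attenuation model are illustrative assumptions, not the claimed implementation:

```python
SPEED_OF_SOUND_M_S = 343.0  # nominal value; a venue acoustic profile may refine it


def predict_ambient_at_ue(samples, distance_m, sample_rate_hz, c=SPEED_OF_SOUND_M_S):
    """Predict the ambient signal arriving at the UE as a delayed,
    attenuated copy of the signal captured near the source."""
    delay_samples = round(distance_m / c * sample_rate_hz)
    gain = 1.0 / max(distance_m, 1.0)  # crude spherical-spreading attenuation
    return [0.0] * delay_samples + [gain * s for s in samples]


def cancelation_signal(predicted):
    """Invert the predicted waveform so its acoustic sum with the actual
    ambient sound at the UE interferes destructively."""
    return [-s for s in predicted]
```

In practice, the prediction would also draw on the venue acoustic profile (multipath, temperature, humidity) rather than this single-path free-field model.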
In some embodiments, the UE receiving the cancelation signal may comprise (and/or be coupled to) a personal device that outputs the cancelation signal as an acoustic signal from personal wearable speakers (such as headphones or ear pods, for example). For example, the UE may include an application that captures ambient sounds in the proximity of the UE using a microphone, and sends a digitized audio signal representing the captured ambient sounds to the ambient sound mitigation server with an indication of the location of the UE. The resulting cancelation signal produced by the ambient sound mitigation server is played as an acoustic signal from the personal wearable speakers to cancel at least a portion of ambient sounds in the proximity of the UE from reaching the ears of the user. As explained in greater detail below, the resulting cancelation signal may be computed by the sound wave prediction function of the ambient sound mitigation server at least in part as a function of the location of the UE and a venue acoustic profile. The venue acoustic profile may comprise, for example, an acoustic map of a volume of a space of the venue where the UE is located. The venue acoustic profile may be actively generated using a calibration protocol that sends acoustic calibration signals (e.g., high frequency tones) into the venue and measures resulting return signals. In some embodiments, the venue acoustic profile may account for environmental characteristics that affect the speed of propagation of sound through air, such as temperature and/or humidity. Additionally or alternatively, the venue acoustic profile may be selected from one or more predefined or default profiles. The UE location may be correlated with the venue acoustic profile to determine parameters such as propagation delays and phase shifts corresponding to the ambient sounds to be canceled and/or multipath characteristics at the UE location (such as reverberations and/or echo effects, for example).
In some embodiments, the indication of UE location may be represented as an estimated distance of the UE from a source generating the ambient sounds to be canceled.
In other embodiments, the ambient sound mitigation server may be used for an open space ambient sound mitigation implementation. That is, the UE may receive from the ambient sound mitigation server a cancelation signal for broadcast as an acoustic signal from a speaker into an area proximate to the UE. In some embodiments, the speaker may be integral to the UE, or coupled to the UE (e.g., via a wired or wireless connection). For example, the UE may include an application that captures ambient sounds in the proximity of the UE using a microphone, and sends a digitized audio signal representing the captured ambient sounds to the ambient sound mitigation server with an indication of the location of the UE. The resulting cancelation signal received in return from the ambient sound mitigation server is broadcast into the area where the UE is located and cancels at least a portion of ambient sounds in the proximity of the speaker before they reach the ears of one or more users in that area.
Advantageously, embodiments presented herein provide technical solutions representing advancements over existing noise cancelation techniques. More specifically, one or more of the embodiments described herein provide ambient noise mitigation functionality to devices that do not themselves have integrated noise cancelation capabilities. For example, a standard smart phone and connected earbuds that do not have active noise cancelation can subscribe to ambient noise mitigation as a service (on an as-needed basis) from the phone's wireless connectivity provider. Advances in ambient noise mitigation algorithms and/or network latency control may be implemented at the ambient sound mitigation server (or other network node) so that the benefits of such advances can be realized without the need to replace hardware at the UE level. Moreover, by providing ambient noise mitigation as a network service from the ambient noise mitigation server, subscriber scaling may be realized without a corresponding need to increase computing resources at the UE level. That is, additional subscribers may be served by the ambient sound mitigation server as long as low latency bandwidth is available at the edge network. Further, the solutions provided by locating the ambient sound mitigation server at the core network edge facilitate low latency communications while reducing network congestion and consumption of processing resources within the network operator core itself. Consumers benefit by using the services of the ambient sound mitigation server without the need to substantially upgrade their UE.
Throughout the description provided herein several acronyms and shorthand notations are used to aid the understanding of certain concepts pertaining to the associated system and services. These acronyms and shorthand notations are intended to help provide an easy methodology of communicating the ideas expressed herein and are not meant to limit the scope of embodiments described in the present disclosure. Unless otherwise indicated, acronyms are used in their common sense in the telecommunication arts as one skilled in the art would readily comprehend. Further, various technical terms are used throughout this description. An illustrative resource that fleshes out various aspects of these terms can be found in Newton's Telecom Dictionary, 31st Edition (2018).
It should be understood that the UE discussed herein are, in general, forms of equipment and machines such as, but not limited to, Internet-of-Things (IoT) devices and smart appliances, autonomous or semi-autonomous vehicles including cars, trucks, trains, aircraft, urban air mobility (UAM) vehicles and/or drones, industrial machinery, robotic devices, exoskeletons, manufacturing tooling, thermostats, locks, smart speakers, lighting devices, smart receptacles, controllers, mechanical actuators, remote sensors, weather or other environmental sensors, wireless beacons, or any other smart device that operates at least in part based on service data received via a network. That said, in some embodiments, UE may also include handheld personal computing devices such as cellular phones, tablets, and similar consumer equipment, or stationary desktop computing devices, workstations, servers, and/or network infrastructure equipment. As such, the UE may include both mobile UE and stationary UE configured to request ambient noise mitigation service from a network.
As shown in
In particular, each UE 110 communicates with the operator core network 106 via the RAN 104 over one or both of uplink (UL) radio frequency (RF) signals and downlink (DL) radio frequency (RF) signals. The RAN 104 may be coupled to the operator core network 106 via a core network edge 105 that comprises wired and/or wireless network connections that may themselves include wireless relays and/or repeaters. In some embodiments, the RAN 104 is coupled to the operator core network 106 and/or network edge 105 at least in part by a backhaul network such as the Internet or other public or private network infrastructure. The network edge 105 comprises one or more network nodes or other elements of the operator core network 106 that define the boundary of the operator core network 106, including user plane functions 136 (as further discussed herein). In some embodiments, the network edge 105 may serve as the architectural demarcation point where the operator core network 106 connects to other networks such as, but not limited to RAN 104, the Internet, or other third-party networks.
As shown in
As further shown in
In embodiments, ambient sound mitigation data, such as the digitized audio signal representing the captured ambient sounds and/or the location of the UE, may be transmitted from the venue 103 (e.g., by UE 110) and transported to the ambient sound mitigation server 170 via the RAN 104 through a low latency network slice (e.g., using 5G NR URLLC) as illustrated at 120. In turn, the ambient sound mitigation server 170 computes a cancelation signal based on the ambient sound mitigation data and a venue acoustic profile. The cancelation signal is transported back through the low latency network slice 120 for acoustic emission by the acoustic emitter 116. In some embodiments, when UE 110 subscribes to the ambient sound mitigation service, the ambient sound mitigation server 170 may generate a profile to use the low latency network slice 120.
It should be understood that in some aspects, the operating environment 100 may not comprise a distinct operator core network 106, but rather may implement one or more features of the operator core network 106 within other portions of the network, or may not implement them at all, depending on various carrier preferences. The operating environment 100, in some embodiments, may be configured for wirelessly connecting UE 110 to other UE 110, other telecommunication networks, or a public switched telephone network (PSTN). Generally, each UE 110 is a device capable of unidirectional or bidirectional communication with RAN 104 using radio frequency (RF) waves. The operating environment 100 may be generally configured, in some embodiments, for wirelessly connecting UE 110 to data or services that may be accessible on one or more application servers or other functions, nodes, or servers (such as services from ambient sound mitigation server 170 or other servers of data network 107).
Still referring to
Notably, nomenclature used herein is used with respect to the 3GPP 5G architecture. In other aspects, one or more of the network functions of the operator core network 106 may take different forms, including consolidated or distributed forms that perform the same general operations. For example, the AMF 130 in the 3GPP 5G architecture is configured for various functions relating to security and access management and authorization, including registration management, connection management, paging, and mobility management; in other forms, such as a 4G architecture, the AMF 130 of
As shown in
The AMF 130 facilitates mobility management, registration management, and connection management for 3GPP devices such as a UE 110. ANDSP 132 facilitates mobility management, registration management, and connection management for non-3GPP devices. AUSF 134 receives authentication requests from the AMF 130 and interacts with UDM 144, for example, for SIM authentication. N3IWF 138 provides a secure gateway for non-3GPP network access, which may be used for providing connections for UE 110 access to the operator core network 106 over a non-3GPP access network. SMF module 140 facilitates initial creation of protocol data unit (PDU) sessions using session establishment procedures. The PCF 142 maintains and applies policy control decisions and subscription information. Additionally, in some aspects, the PCF 142 maintains quality of service (QoS) policy rules. For example, the QoS rules stored in a unified data repository 146 can identify a set of access permissions, resource allocations, or any other QoS policy established by an operator. In some embodiments, the PCF 142 maintains subscription information indicating one or more services and/or micro-services subscribed to by each UE 110. Such subscription information may include subscription information pertaining to a subscription for ambient sound mitigation services provided by the ambient sound mitigation server 170. UDM 144 manages network user data including, but not limited to, data storage management, subscription management, policy control, and core network 106 exposure. NWDAF 148 collects data (for example, from UE, other network functions, application functions and operations, administration, and maintenance (OAM) systems) that can be used for network data analytics. The OSS 152 is responsible for the management and orchestration of the operator core network 106, and the various physical and virtual network functions, controllers, compute nodes, and other elements that implement the operator core network 106.
Some aspects of operating environment 100 include the UDR 146 storing information relating to access control and service and/or micro-service subscriptions, for example subscription information pertaining to a subscription for ambient sound mitigation services provided by the ambient sound mitigation server 170. The UDR 146 may be configured to store information relating to such subscriber information and may be accessible by multiple different NFs in order to perform desirable functions. For example, the UDR 146 may be accessed by the AMF 130 in order to determine subscriber information pertaining to the ambient sound mitigation server 170, accessed by the PCF 142 to obtain policy related data, and accessed by the NEF 150 to obtain data that is permitted for exposure to third party applications (such as an application 112 executed by UE 110, for example). Other functions of the NEF 150 include monitoring of UE related events and posting information about those events for use by external entities, and providing an interface for provisioning UEs (via PCF 142) and reporting provisioning events to the UDR 146. Although depicted as a unified data management module, UDR 146 can be implemented as a plurality of network function (NF) specific data management modules.
The UPF 136 is generally configured to facilitate user plane operation relating to packet routing and forwarding, interconnection to a data network (e.g., DN 107), policy enforcement, and data buffering, among other operations. As discussed in greater detail herein, in accordance with one or more embodiments, the UPF 136 may implement URLLC protocols to provide an extremely low latency network slice (e.g., a communication path) between the UE 110, acoustic sensor 114 and/or acoustic emitter 116 located in venue 103, and the ambient sound mitigation server 170. Using network slicing (e.g., using 5G software-defined networking (SDN) and/or 5G network slice selection function (NSSF)), the UPF 136 may establish a dedicated URLLC network slice that operates, in essence, as a distinct network (for example, establishing its own QoS, provisioning, and/or security) within the same physical network architecture of the core network edge 105 that may be used to establish other network slices. Using the URLLC protocols, the RAN 104 may reserve network capacity for uplink and/or downlink communications between the UE 110 and the ambient sound mitigation server 170 without the latency that otherwise might be introduced from sending scheduling requests and waiting for access grants, thus reducing the latency involved in sending uplink ambient sound mitigation data to the ambient sound mitigation server 170 and/or providing, in the downlink, the cancelation signal for output by the acoustic emitter 116. In embodiments where one or more portions of the operating environment 100 are not structured according to the 3GPP 5G architecture, the UPF 136 may take other forms to establish an extremely low latency network slice that are equivalent in function to the URLLC network slice described herein.
As previously discussed and now illustrated
Returning to
Because ambient sounds travel as a propagating wave, the phase of the ambient sound at any instant in time will vary both as a function of time and as a function of distance between the UE 110 and the source of the ambient sound. Moreover, the amplitude of the ambient sound will attenuate as a function of that distance. Accordingly, the wave cancelation signal generator 212 may further use the UE position data 252 to determine an estimate of the distance between the source of the ambient sound and the UE 110 and control the transmission timing of the cancelation signal 256 (and/or the network slice latency) such that the acoustic cancelation signal emitted by the acoustic emitter 116 will be out of phase (ideally by 180 degrees) with the ambient sound then arriving at the UE 110. The amplitude of the acoustic cancelation signal emitted by the acoustic emitter 116 may be controlled by the sound wave cancelation estimator 210 to approximately match that of the ambient sound then arriving at the UE 110, given an estimated attenuation of the ambient sound due to the distance traveled.
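The timing and amplitude relationships described in this paragraph can be sketched for a single ambient tone as follows. The helper name, the nominal speed of sound, and the 1/d spreading model are illustrative assumptions rather than part of the disclosure:

```python
SPEED_OF_SOUND_M_S = 343.0  # nominal speed of sound in air


def cancelation_timing(distance_m, frequency_hz):
    """For a single ambient tone, estimate when to emit the cancelation
    signal so it arrives 180 degrees out of phase at the UE, and how to
    scale its amplitude to match the attenuated ambient sound."""
    period_s = 1.0 / frequency_hz
    propagation_delay_s = distance_m / SPEED_OF_SOUND_M_S
    # Shift emission by half a period so the arriving wave is antiphase.
    emission_delay_s = (propagation_delay_s + period_s / 2.0) % period_s
    amplitude_scale = 1.0 / max(distance_m, 1.0)  # crude 1/d spreading loss
    return emission_delay_s, amplitude_scale
```

For a 100 Hz tone from a source 343 m away, for instance, the propagation delay is one second, so the emission is offset by half of the 10 ms period.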
In different implementations, the UE position data 252 used by the wave cancelation signal generator 212 may be based on different types of data to represent the location of the UE 110. For example, the UE 110 may comprise an active positioning technology, such as a global navigation satellite system receiver (e.g., a global positioning system (GPS) receiver), an ultra-wide band (UWB) localization receiver, or other positioning technology, and transmit UE position data 252 based on coordinates determined using such technologies. In some embodiments, the UE 110 may comprise a range finding technology, such as a laser or ultrasonic range finder, that may be used to determine a distance from the UE 110 to the ambient sound source, and include that distance as the UE position data 252. In some embodiments, the UE position data 252 may comprise a distance based on user-entered data. For example, the application 112 of the UE 110 may display a user interface (such as shown in
In some embodiments, the wave cancelation signal generator 212 may control the network latency control function 214 in order to generate the latency control signal 258. For example, the wave cancelation signal generator 212 may indicate a specific timing delay of the network slice supporting the ambient mitigation services for UE 110 that will result in the cancelation signal 256 arriving out of phase with ambient sounds within a target threshold. The network latency control function 214 may correlate that timing delay to corresponding network parameters (such as a 5QI code, for example) and generate a latency control signal 258 (e.g., in the format of a message or control command) that will cause the RAN 104 and/or other component of the UPF 136 to adjust the network slice 120 to provide the specified timing delay.
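One way such a correlation might look is sketched below. The table is loosely modeled on the delay-critical 5QI packet delay budgets in 3GPP TS 23.501; the specific values and the selection logic are assumptions for illustration only:

```python
# (packet delay budget in seconds, 5QI value) — illustrative entries only,
# ordered from tightest to loosest delay budget.
DELAY_CRITICAL_5QI = [
    (0.005, 85),
    (0.010, 82),
    (0.030, 84),
]


def select_5qi(required_delay_s):
    """Pick the loosest delay-critical 5QI whose packet delay budget still
    delivers no later than the requested timing delay (the server can pad
    any residual slack itself). Returns None if no profile qualifies."""
    for budget_s, five_qi in reversed(DELAY_CRITICAL_5QI):
        if budget_s <= required_delay_s:
            return five_qi
    return None
```

Choosing the loosest qualifying budget conserves slice resources while still guaranteeing the cancelation signal is not late.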
The venue acoustic profile 254 may be used by the wave cancelation signal generator 212 to account for characteristics of the venue's structure and/or environment. For example, the speed of propagation of ambient sound in the venue 103 may be affected by factors such as the temperature and/or humidity of the environment. Accordingly, the venue acoustic profile 254 may include propagation data that can be used to estimate the speed of sound in the venue, and adjust the timing of the cancelation signal 256 such that the acoustic cancelation signal emitted by the acoustic emitter 116 will be out of phase (ideally by 180 degrees, or at least within a phase threshold) with the ambient sound then arriving at the UE 110. In some embodiments, the venue acoustic profile 254 may comprise such data derived from measurements of sound propagation delays. For example, as shown in
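For instance, the propagation data might carry a temperature-adjusted estimate of the speed of sound. The linear dry-air approximation below is a common rule of thumb; the humidity term is a deliberately rough illustrative correction, not a claimed formula:

```python
def speed_of_sound_m_s(temp_c, relative_humidity=0.0):
    """Approximate speed of sound in air at temp_c degrees Celsius.

    c ~= 331.3 + 0.606 * T is the standard linear approximation for dry
    air; the humidity term (relative_humidity in [0, 1]) adds a small,
    purely illustrative correction, since moist air carries sound
    slightly faster than dry air.
    """
    c_dry = 331.3 + 0.606 * temp_c
    return c_dry * (1.0 + 0.0012 * relative_humidity)
```

A 10 m propagation path at 20 degrees C then implies a delay of roughly 10 / 343.4, about 29 ms, which the timing of the cancelation signal would need to absorb.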
The venue acoustic profile 254 may be used by the wave cancelation signal generator 212 to further account for characteristics of the venue's structure that produce multipath characteristics in the ambient sounds, such as reverberations and/or echo effects, for example. For example, in some embodiments, the venue acoustic profile generator 216 may execute the calibration protocol discussed above to generate an acoustic map of a volume of the venue 103. For example, the measurements of the calibration signals 286 returned to the venue acoustic profile generator 216 may be evaluated for phase shifts and/or multipath signal summations incurred by the calibration signal(s) 284 due to interactions with surfaces of structural elements such as floors, walls, pillars, ceilings, and other surfaces. The results may be represented by the acoustic map and stored in the venue acoustic profile 254 for the venue 103 in the venue acoustic profile data 218. In other embodiments, the venue acoustic profile 254 may include an acoustic map based on one or more predefined and/or default profiles. For example, one venue acoustic profile 254 may include predefined generic "small room" acoustic map parameters accounting for structural surfaces in close proximity. Another venue acoustic profile 254 may include predefined generic "open space" acoustic map parameters accounting for a venue with few, or no, structural surfaces.
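Conceptually, an acoustic map of this kind could be reduced to a set of propagation paths, each with a delay and a gain, with the multipath signal at a location predicted by summing delayed copies. This is a toy representation under that assumption; a real acoustic map would be considerably richer:

```python
def predict_multipath(samples, paths, sample_rate_hz):
    """Sum delayed, attenuated copies of a source signal over each
    (delay_s, gain) propagation path — direct path plus reflections."""
    out = [0.0] * len(samples)
    for delay_s, gain in paths:
        offset = round(delay_s * sample_rate_hz)
        for i, s in enumerate(samples):
            if i + offset < len(out):
                out[i + offset] += gain * s
    return out
```

A "small room" profile would contribute many short-delay paths (dense early reflections), while an "open space" profile might reduce to little more than the direct path.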
In some embodiments, wave cancelation signal generator 212 may execute a machine learning model or other logic trained and/or programmed to input the venue acoustic profile 254, UE position data 252, and digitized audio signal 250 and predict at least a portion of the ambient sound expected to be received at the location of the UE 110 at a given point in time. Using that prediction, the wave cancelation signal generator 212 may generate the cancelation signal 256 to cancel that portion of the ambient sound as it is received at the location of the UE 110 at that given point in time.
In some embodiments, one or more aspects of the ambient sound mitigation service provided by the ambient sound mitigation server 170 may be controlled via an application 112 executed by the UE 110. For example, the wave cancelation signal generator 212 may receive a control input from the application 112 of the UE 110 indicating a frequency band, noise characteristic, or similar identification representative of the portion of the ambient sound that the wave cancelation signal generator 212 may target for cancelation. For example, the application 112 may provide a user interface on the UE 110 from which a user may select a baseline noise profile to target for cancelation (e.g., corresponding to white, pink, blue, and black noise colors as defined by American National Standard T1.523-2001, Telecom Glossary 2000).
The application layer 310 facilitates execution of the UE 110 operating system and executables (including applications such as application 112) by one or more processors or controllers of the UE 110. The application layer 310 may provide a direct user interaction environment for the UE 110 and/or a platform for implementing mission specific processes relevant to the operation of the UE 110. TEE 320 facilitates a secure area of the processor(s) of UE 110. That is, TEE 320 provides an environment in the UE 110 where isolated execution and confidentiality features are enforced. Example TEEs include Arm TrustZone technology, Software Guard Extensions (SGX) technology, or similar.
As shown in
The method 500 at 510 includes establishing at least one low latency network slice for at least one user equipment (UE) coupled to a radio access network, wherein the radio access network is configured to communicate with the at least one UE over one or both of uplink (UL) radio frequency (RF) signals and downlink (DL) RF signals. In some embodiments, the radio access network comprises a 5G New Radio (NR) base station. As previously discussed, the radio access network may be coupled to a network operator core (e.g., an operator core network of a telecommunications network comprising at least one radio access network). In such embodiments, the radio access network communicates uplink (UL) and downlink (DL) signals between one or more UE within a coverage area of the radio access network and the network operator core. The at least one low latency network slice may comprise an ultra-reliable low latency communications (URLLC) network slice. The low latency network slice may be used to couple the UE(s) to an ambient sound mitigation server hosted at the network edge of the telecommunications operator core network. Ambient sound mitigation may be provided using the low latency network connection between the UE and the ambient sound mitigation server.
The method 500 at 512 includes generating a cancelation signal based on ambient sound mitigation data received by the radio access network, the ambient sound mitigation data including acoustic sensor data representing an ambient sound signal, wherein the cancelation signal is generated to comprise a phase shift with respect to the ambient sound signal computed at least in part as a function of a location of the at least one UE. The location of the at least one UE may at least in part indicate a distance between the at least one UE and a source producing the ambient sound signal. Moreover, the phase shift may be dynamically adjusted at least in part by controlling a latency characteristic of the at least one low latency network slice. In some embodiments, the cancelation signal may be further based on a venue acoustic profile for a venue in which the at least one UE is located. For example, the method may include estimating a change in the speed of sound due to changes in environmental characteristics and adjusting the phase shift at least in part based on the change in the speed of sound. In some embodiments, the method may include generating a venue acoustic profile for a venue in which the at least one UE is located based on broadcasting a calibration signal into the venue, and generating the cancelation signal further based on the venue acoustic profile. The venue acoustic profile may comprise an acoustic map of the venue.
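To illustrate the distance-dependent phase computation for a single frequency component, the sketch below assumes straight-line propagation at a nominal speed of sound; the function name and the nominal constant are illustrative assumptions rather than elements of the disclosure:

```python
import math

NOMINAL_SPEED_OF_SOUND_M_S = 343.0  # dry air near 20 degrees C; a venue
                                    # acoustic profile could refine this

def cancelation_phase_shift(distance_m, frequency_hz,
                            speed_of_sound=NOMINAL_SPEED_OF_SOUND_M_S):
    """Phase (radians) for the anti-noise component of one tone.

    The ambient tone accumulates 2*pi*f*d/c radians of phase over the
    straight-line path from the source to the UE; the cancelation signal
    is offset a further pi radians so the two destructively interfere.
    """
    propagation_phase = 2.0 * math.pi * frequency_hz * distance_m / speed_of_sound
    return (propagation_phase + math.pi) % (2.0 * math.pi)
```

In this simplified model, dynamically trimming the slice latency plays the same role as adjusting the computed phase: both shift when the anti-noise waveform arrives relative to the ambient wavefront at the UE.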
The method 500 at 514 includes causing at least one acoustic emitter to emit an acoustic cancelation signal based on the cancelation signal. In some embodiments, the cancelation signal may be transmitted as an acoustic signal from personal wearable speakers (such as headphones or ear pods, for example). The resulting cancelation signal produced by the ambient sound mitigation server is played as an acoustic signal from the personal wearable speakers to cancel at least a portion of the ambient sounds in the proximity of the UE from reaching the ears of the user. In some embodiments, the cancelation signal may be broadcast as an acoustic signal from one or more speakers into an open space or area of the venue proximate to the UE. The resulting cancelation signal broadcast into the area where the UE is located may be used to cancel at least a portion of ambient sounds in the proximity of the speaker from reaching the ears of one or more users in that area.
The method 600 at 610 includes transmitting an acoustic calibration signal into a venue. As shown in
The method 600 at 614 includes computing acoustic propagation data corresponding to the ambient sound based on the one or more characteristics. Based on one or more metrics derived using the returned measurements of the calibration signals (e.g., propagation delays and/or relative phase shifts as a function of frequency), the venue acoustic profile generator may compute the acoustic propagation data (for example, an estimate of the speed of sound in the venue, phase shifts and/or multipath signal summations incurred by the calibration signal(s) due to interactions with surfaces of structural elements). The propagation data may be stored in a venue acoustic profile associated with the venue. The method 600 at 616 includes adjusting the cancelation signal based on the acoustic propagation data. In some embodiments, the venue acoustic profile generator may re-execute the calibration protocol of method 600 periodically (e.g., once per minute) to refresh the propagation data to account for changes in the environmental conditions at the venue over time.
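A minimal sketch of the kind of computation 614 might perform, assuming a direct-path calibration measurement and the standard dry-air temperature approximation; both function names are hypothetical:

```python
import math

def estimate_speed_of_sound(path_length_m, measured_delay_s):
    """Direct-path estimate: calibration speaker-to-microphone distance
    divided by the measured propagation delay of the calibration signal."""
    return path_length_m / measured_delay_s

def speed_of_sound_from_temperature(temp_c):
    """Standard dry-air approximation: c = 331.3 * sqrt(1 + T/273.15) m/s."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)
```

Periodically re-running the calibration protocol (e.g., once per minute, as noted above) would refresh such estimates as venue temperature and other environmental conditions drift.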
Referring to
The implementations of the present disclosure may be described in the general context of computer code or machine-useable instructions, including computer-executable instructions such as program components, being executed by a computer or other machine, such as a personal data assistant or other handheld device. Generally, program components, including routines, programs, objects, components, data structures, and the like, refer to code that performs particular tasks or implements particular abstract data types. Implementations of the present disclosure may be practiced in a variety of system configurations, including handheld devices, consumer electronics, general-purpose computers, specialty computing devices, etc. Implementations of the present disclosure may also be practiced in distributed computing environments where tasks are performed by remote-processing devices that are linked through a communications network.
With continued reference to
Computing device 700 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computing device 700 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
Computer storage media includes non-transient RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Computer storage media and computer-readable media do not comprise a propagated data signal or signals per se.
Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
Memory 712 includes computer-storage media in the form of volatile and/or nonvolatile memory. Memory 712 may be removable, non-removable, or a combination thereof. Exemplary memory includes solid-state memory, hard drives, optical-disc drives, etc. Computing device 700 includes one or more processors 714 that read data from various entities such as bus 710, memory 712 or I/O components 720. One or more presentation components 716 present data indications to a person or other device. Exemplary one or more presentation components 716 include a display device, speaker, printing component, vibrating component, etc. I/O ports 718 allow computing device 700 to be logically coupled to other devices including I/O components 720, some of which may be built into computing device 700. Illustrative I/O components 720 include a microphone, joystick, game pad, satellite dish, scanner, printer, wireless device, etc.
Radio(s) 724 represents a radio that facilitates communication with a wireless telecommunications network. For example, radio(s) 724 may be used to establish communications with components of the core network edge 105. Illustrative wireless telecommunications technologies include CDMA, GPRS, TDMA, GSM, and the like. Radio 724 might additionally or alternatively facilitate other types of wireless communications including Wi-Fi, WiMAX, LTE, and/or VoIP communications. As can be appreciated, in various embodiments, radio(s) 724 can be configured to support multiple technologies and/or multiple radios can be utilized to support multiple technologies. A wireless telecommunications network might include an array of devices, which are not shown so as to not obscure more relevant aspects of the embodiments described herein. Components such as a base station, a communications tower, or even access points (as well as other components) can provide wireless connectivity in some embodiments.
Referring to
Cloud computing environment 810 includes one or more controllers 820 comprising one or more processors and memory. The controllers 820 may comprise servers of a data center. In some embodiments, the controllers 820 are programmed to execute code to implement at least one or more aspects of the ambient sound mitigation server, including the wave phase cancelation signal generator, network latency control function, and/or the venue acoustic profile generator.
For example, in one embodiment the wave phase cancelation signal generator, network latency control function, and/or the venue acoustic profile generator are virtualized network functions (VNFs) 830 running on a worker node cluster 825 established by the controllers 820. The cluster of worker nodes 825 may include one or more orchestrated Kubernetes (K8s) pods that realize one or more containerized applications 835 for the wave phase cancelation signal generator, network latency control function, and/or the venue acoustic profile generator. In some embodiments, the UE 110 may be coupled to the controllers 820 of the cloud-computing environment 810 by RAN 104 and core network edge 105. In some embodiments, venue acoustic profile data 218 may be implemented at least in part as one or more data store persistent volumes 840 in the cloud-computing environment 810.
In various alternative embodiments, system and/or device elements, method steps, or example implementations described throughout this disclosure (such as the UE, RAN, Core Network Edge, Operator Core Network, Ambient Sound Mitigation Server, Acoustic Sensor(s) and/or Acoustic Emitter(s), or any of the sub-parts thereof, for example) may be implemented at least in part using one or more computer systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or similar devices comprising a processor coupled to a memory and executing code to realize those elements, processes, or examples, said code stored on a non-transient hardware data storage device. Therefore, other embodiments of the present disclosure may include elements comprising program instructions resident on computer readable media which, when implemented by such computer systems, enable them to implement the embodiments described herein. As used herein, the term “computer readable media” refers to tangible memory storage devices having non-transient physical forms. Such non-transient physical forms may include computer memory devices, such as but not limited to: punch cards, magnetic disk or tape, any optical data storage system, flash read only memory (ROM), non-volatile ROM, programmable ROM (PROM), erasable-programmable ROM (E-PROM), random access memory (RAM), or any other form of permanent, semi-permanent, or temporary memory storage system or device having a physical, tangible form. Program instructions include, but are not limited to, computer executable instructions executed by computer system processors and hardware description languages such as Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL).
As used herein, the terms “function”, “unit”, “node” and “module” are used to describe computer processing components and/or one or more computer executable services being executed on one or more computer processing components. In the context of this disclosure, such terms used in this manner would be understood by one skilled in the art to refer to specific network elements and are not used as nonce words or intended to invoke 35 U.S.C. 112(f).
Many different arrangements of the various components depicted, as well as components not shown, are possible without departing from the scope of the claims below. Embodiments in this disclosure are described with the intent to be illustrative rather than restrictive. Alternative embodiments will become apparent to readers of this disclosure after and because of reading it. Alternative means of implementing the aforementioned can be completed without departing from the scope of the claims below. Certain features and sub-combinations are of utility and may be employed without reference to other features and sub-combinations and are contemplated within the scope of the claims.
In the preceding detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown, by way of illustration, embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the preceding detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
Number | Date | Country
---|---|---
20240135910 A1 | Apr 2024 | US