The present description relates generally to processing audio signals, including, for example, near-field audio source detection for electronic devices.
An electronic device may include multiple microphones. The multiple microphones may produce audio signals which include sound from a source, such as a user speaking to the device.
Certain features of the subject technology are set forth in the appended claims. However, for purposes of explanation, several embodiments of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and can be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
An electronic device may include multiple microphones. The microphones may produce audio signals corresponding to sounds from one or more audio sources. For example, the audio sources may include sources that are external to the electronic device, such as one or more of a user who is speaking to the device, a bystander who is not the user of the device but whose voice may be captured by device microphones, and/or environmental noise (e.g., wind, traffic, and the like). The audio sources that are external to the electronic device may be far-field audio sources for one or more (e.g., all) of the microphones of the electronic device. A far-field audio source may be a source for which the sound received from the audio source at the various microphones of the electronic device differs in phase, but has substantially the same energy. In one or more implementations, a direction-of-arrival of a far-field audio source can be determined based on the different phases of the received sound at the various microphones of the device, and a presumption that the energy of the received sound is substantially the same at the various microphones (e.g., an assumption that the audio source is a far-field source).
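By way of a hypothetical illustration (not taken from this description), the following Python sketch estimates a far-field direction-of-arrival for two microphones from the per-frequency phase differences of the received sound, under the stated far-field assumption that the received energy is substantially the same at both microphones. The function name, geometry, and parameter values are illustrative assumptions, and phase wrapping at high frequencies is ignored for simplicity.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # meters per second, approximate

    def estimate_far_field_doa(x1, x2, mic_spacing, sample_rate):
        """Estimate a far-field DOA (radians) from the phase difference
        between two equal-length microphone signals."""
        X1 = np.fft.rfft(x1)
        X2 = np.fft.rfft(x2)
        cross = X1 * np.conj(X2)  # cross-spectrum of the two channels
        freqs = np.fft.rfftfreq(len(x1), d=1.0 / sample_rate)
        # The per-frequency phase difference implies a time delay tau:
        # phase = 2*pi*f*tau, so tau = phase / (2*pi*f).
        phase = np.angle(cross[1:])             # skip the DC bin
        tau = phase / (2.0 * np.pi * freqs[1:])
        tau_est = np.median(tau)                # robust average over frequency
        # For a far-field source, tau = (d/c) * sin(theta).
        sin_theta = np.clip(SPEED_OF_SOUND * tau_est / mic_spacing, -1.0, 1.0)
        return np.arcsin(sin_theta)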
However, audio signals generated by the microphones may also include portions that correspond to sounds from one or more near-field audio sources. For example, near-field audio sources can include audio sources that are internal and/or integral to the electronic device. For example, near-field audio sources can include sound-generating components of the electronic device, such as one or more speakers of the electronic device, one or more fans (e.g., cooling fans) of the electronic device, and/or one or more haptic components (e.g., piezoelectric components that generate haptic feedback) of the electronic device.
Aspects of the subject technology provide for distinguishing, with an electronic device having multiple microphones, near-field and far-field audio sources. Because the relative locations of device components with respect to the various microphones of an electronic device are known and fixed, near-field impulse response functions can be predetermined for each microphone/near-field source pair, each of which can also have a direction-of-arrival label. In one or more implementations, far-field impulse response functions can also be predetermined for one or more far-field locations at which an audio source may be expected to be located at one or more times during operation of the electronic device.
In one or more implementations, using at least the near-field impulse response functions, the electronic device can identify audio signals that correspond to one or more near-field source directions-of-arrival. As described in further detail hereinafter, once the near-field and/or far-field audio sources have been distinguished in the audio signals, various device operations can leverage the audio signals and labels corresponding to the distinguished audio sources, such as for residual echo suppression, blind source separation, automatic noise cancellation, acoustic scene mapping, voice assistance, audio and/or video conferencing, telephony, or the like.
In the example of
As shown in
Electronic device 100 may be implemented as, for example, a computing device such as a desktop computer or a laptop computer, a smartphone, a peripheral device (e.g., a digital camera, headphones), a tablet device, a smart speaker, a set-top box, a content streaming device, a wearable device such as a watch or a band, a wireless headset device, wireless headphones, one or more wireless earbuds (or any in-ear, against-the-ear, or over-the-ear device), and/or the like, or any other appropriate device that includes one or more sound-generating components and multiple microphones.
Although not shown in
As is discussed further below, microphones 104 and 106 and/or other microphones of the electronic device 100 may be used, in conjunction with the architectures/components described herein, for detection of audio from near-field audio sources and/or operation of the electronic device 100 based on the detection of near-field audio sources.
In the example of
As shown in
In one example use case, speaker 102 may be driven by the electronic device 100 to play back music, or audio content corresponding to video content that is playing on a display of electronic device 100 or a display of another electronic device. In this example use case, the far-field audio source 112 may be a user of the electronic device 100 speaking an audio command to a voice assistant application running on the electronic device 100. For example, the user of the electronic device 100 may speak a voice command to the voice assistant application running on the electronic device to raise or lower the volume of the audio output 114, or a voice command to stop, rewind, or fast forward playback of the audio content.
In another example use case, the electronic device 100 may be used to conduct a call or an audio and/or video conference with a remote participant. In this example use case, the speaker 102 may be driven to generate audio output 114 corresponding to the voice or voices of the remote participant. In this example use case, the far-field audio source 112 may be the user of the electronic device 100 speaking to the remote participant via the electronic device 100. For example, the microphone 104 and/or the microphone 106 may receive the voice input from the user of the electronic device and generate audio signals corresponding to the voice input. The electronic device 100 may process the audio signals and transmit a portion of the audio signals corresponding to the voice input from the user to a remote device of the remote participant.
However, as shown in
For example, in the use case in which the electronic device 100 is being used to conduct a call or an audio and/or video conference, it is undesirable for the voice content from the remote participant that is output from the speaker 102 in the audio output 114 to be re-transmitted (e.g., echoed) back to the remote participant as an echo of the remote participant's own voice.
In one or more implementations, echo-suppression operations may be performed by the electronic device 100 to suppress such an echo in the portion of the audio signals from the microphones 104 and 106 that is transmitted to the remote participant. Echo-suppression operations can also be performed in other use cases, such as in the example use case described above in which a voice command to a voice assistant application is provided by a user while the speaker 102 outputs audio content. In this use case, echo-suppression operations can help prevent the sound 116 and/or the sound 118 from interfering with detection of the voice command by the electronic device 100 and/or from causing misinterpretation of the voice command by the electronic device 100.
In the example use case of a call or audio and/or video conference and/or of a voice command during audio output from the speaker, the audio output from the speaker 102 may be generated by the electronic device 100 based on audio output signals corresponding to the desired audio output. In one or more implementations, echo-suppression operations for these audio outputs can be performed by suppressing or cancelling a portion of the audio signals of the microphones 104 and 106 that matches the audio output signals. In one or more implementations, echo-suppression operations may also, or alternatively, include suppressing or cancelling a portion of the audio signals generated by the microphone 106 (e.g., the microphone furthest from the speaker 102) that corresponds to the audio signals from the microphone 104 (e.g., the microphone nearest the speaker 102), since the sound 116 at the microphone 104 may be dominant in the audio signals from the microphone 104 due to the proximity of the microphone 104 to the speaker 102.
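As a hypothetical sketch of such echo suppression (this description does not mandate a particular algorithm), a conventional normalized least-mean-squares (NLMS) adaptive filter can cancel the portion of a microphone signal that matches the known audio output signal; the function name, tap count, and step size below are illustrative assumptions.

    import numpy as np

    def nlms_echo_cancel(mic, ref, num_taps=256, mu=0.5, eps=1e-8):
        """Suppress the portion of `mic` that matches the known speaker
        reference `ref` (float arrays), returning echo-suppressed samples."""
        w = np.zeros(num_taps)                 # adaptive estimate of the echo path
        out = np.copy(mic)                     # first num_taps samples pass through
        for n in range(num_taps, len(mic)):
            x = ref[n - num_taps:n][::-1]      # most recent reference samples
            echo_est = w @ x                   # predicted echo at the microphone
            e = mic[n] - echo_est              # residual after cancellation
            w += (mu / (x @ x + eps)) * e * x  # normalized LMS weight update
            out[n] = e
        return out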
However, even when the audio output 114 is suppressed using known audio output signals and/or audio signals from one or more microphones in close proximity to the speaker(s) generating the audio output, a residual echo of the audio output 114 can remain in the echo-suppressed audio signals.
In accordance with aspects of the subject technology, the electronic device 100 may perform residual echo suppression operations to remove this residual echo from the echo-suppressed audio signals (which may be referred to herein as initial echo-suppressed audio signals, in some examples). It is also appreciated that the operations, described herein as residual echo-suppression operations when applied to initial echo-suppressed audio signals, can also be applied to audio signals from one or more microphones without performing a prior echo-suppression operation to provide direct informed echo-suppression. This can be helpful, for example, in electronic devices in which it is not feasible (e.g., due to mechanical, electrical, and/or spatial constraints) to place a microphone in close proximity to each speaker (e.g., an electronic device in which two or more microphones are uniformly distributed about (e.g., equidistant from) a speaker).
In addition to residual echo suppression and direct informed echo-suppression, the operations described herein can be applied to remove noise from microphone-generated audio signals when the noise is not known a priori (e.g., in contrast with the use cases in which the noise received by the microphones of a device is generated by one or more speakers of that device). For example,
In the examples of
Because the relative locations of the speaker 102, the sound-generating component 108, the microphone 104, and the microphone 106 are fixed, and because the speaker 102 and the sound-generating component 108 are within the near field of both the microphone 104 and the microphone 106, a near-field impulse response may be obtained for each microphone/sound-generating component pair. For example, speaker 102 may be driven (e.g., during manufacturing of the electronic device 100) to generate a broadband audio output while input audio signals are generated by the microphone 104 and the microphone 106. In this way, a frequency-dependent transfer function between the speaker 102 and each of the microphones 104 and 106 can be measured. As another example, sound-generating component 108 may be operated (e.g., during manufacturing of the electronic device 100) to generate sound while input audio signals are generated by the microphone 104 and the microphone 106. In this way, a frequency-dependent transfer function between the sound-generating component 108 and each of the microphones 104 and 106 can be measured.
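A minimal sketch of such a measurement, assuming a broadband noise stimulus and a conventional H1 transfer-function estimator (the estimator choice and all names are assumptions, not taken from this description):

    import numpy as np
    from scipy.signal import csd, welch

    def measure_transfer_function(stimulus, mic_signal, sample_rate, nperseg=1024):
        """Estimate the frequency-dependent transfer function between a driven
        sound-generating component and one microphone (H1 = Sxy / Sxx)."""
        f, Sxy = csd(stimulus, mic_signal, fs=sample_rate, nperseg=nperseg)
        _, Sxx = welch(stimulus, fs=sample_rate, nperseg=nperseg)
        return f, Sxy / Sxx

    # Hypothetical factory-calibration flow: drive the speaker with broadband
    # noise, record each microphone, and estimate one transfer function per
    # microphone (variable names are placeholders):
    # f, H_mic104 = measure_transfer_function(noise, rec_104, sample_rate)
    # f, H_mic106 = measure_transfer_function(noise, rec_106, sample_rate)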
In one or more implementations, a multi-dimensional impulse response vector can be stored for each sound-generating component (e.g., the speaker 102, the sound-generating component 108 and/or any other sound-generating components) of the electronic device 100, with each dimension of the multi-dimensional impulse response vector corresponding to one of the microphones of the electronic device 100. In this way, near-field impulse response information for each sound-generating component and the microphones of the electronic device 100 can be generated and stored. During operation of the electronic device 100, when audio input is received from one or more (e.g., near-field) components of the electronic device and one or more external (e.g., far-field) audio sources (e.g., as in the use cases of
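For illustration only, near-field impulse response information of this kind could be organized as follows, with one frequency response per microphone and one entry per sound-generating component; the structure, names, and placeholder values are hypothetical.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class NearFieldResponse:
        """Near-field impulse response information for one sound-generating
        component: one frequency response per microphone, plus a DOA label."""
        component: str         # e.g., "speaker_102" or "fan_108"
        doa_label: float       # labeled direction-of-arrival, in radians
        responses: np.ndarray  # shape (num_mics, num_freq_bins), complex

    # Placeholder responses; in practice these would be measured at the factory.
    H_speaker = np.zeros((2, 513), dtype=complex)
    H_fan = np.zeros((2, 513), dtype=complex)

    nf_table = [
        NearFieldResponse("speaker_102", doa_label=0.35, responses=H_speaker),
        NearFieldResponse("fan_108", doa_label=2.10, responses=H_fan),
    ]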
In one or more implementations, the near-field impulse response information for each sound-generating component may be stored with a label such as a direction-of-arrival corresponding to that sound-generating component. In one or more implementations, the near-field impulse response information for the sound-generating components and the microphones of the electronic device 100 may be used to distinguish components of audio signals that correspond to near-field audio sources and far-field audio sources. In one or more implementations, the electronic device 100 may also store one or more far-field impulse response functions corresponding to one or more far-field locations and the microphones of the electronic device 100, and can use the one or more far-field impulse response functions to distinguish components of audio signals that correspond to different far-field audio sources (e.g., between a user speaking into the electronic device 100 and one or more external noise sources). In one or more implementations, the electronic device 100 may also label portions of the audio signals from the microphones 104 and 106 that correspond to one or more of the direction-of-arrival labels stored with the near-field impulse response information and/or the far-field impulse response information. In this way, subsequent processing of the audio signals can select, emphasize, and/or suppress desired portions of the audio signals using the labels, for various device operations.
In the examples of
Although the location 302 at which contact with an external object 300 occurs may not be known a priori, and may change over time and/or in different use cases, the electronic device 100 may generate and/or store near-field response information for one or more locations 304 on the housing 301, and use the near-field response information for the one or more locations 304 on the housing 301 to identify portions of the audio signals from the microphones 104 and 106 that correspond to sound 314 and sound 318 from the contact.
For example, near-field impulse response information for the one or more locations 304 on the housing of the electronic device 100 may be used to classify near-field noise generated at any location on the housing (e.g., at location 302 due to external contact with the housing) as contact noise generated at one of the one or more locations (e.g., even if the location 302 of the contact is not exactly the same as any of the one or more locations 304 for which the near-field impulse response information was generated). In this way, the near-field impulse response information for the one or more locations 304 may be used to identify and/or remove portions of the audio signals from the microphone 104 and the microphone 106 caused by contact noise.
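One hypothetical way to perform such a classification is a nearest-neighbor comparison between the observed inter-microphone response and the stored responses for the one or more locations 304; the distance metric and all names below are illustrative assumptions.

    import numpy as np

    def classify_contact_location(observed_rtf, location_rtfs):
        """Return the stored housing-location label whose relative transfer
        function (RTF) best matches the observed inter-microphone RTF.

        observed_rtf:  complex per-frequency ratio between two microphone
                       channels, estimated from the current audio signals.
        location_rtfs: dict mapping a location label to its stored RTF.
        """
        def distance(a, b):
            # Normalize so that overall level differences do not matter.
            a = a / (np.linalg.norm(a) + 1e-12)
            b = b / (np.linalg.norm(b) + 1e-12)
            return np.linalg.norm(a - b)

        return min(location_rtfs,
                   key=lambda loc: distance(observed_rtf, location_rtfs[loc]))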
The architecture of
In the example of
As shown in
In the example of
In one or more implementations, the operations of the far-field based operations module 404 may include transmitting the far-field signal 416 to a remote device of a remote participant in a call or an audio and/or video conference, providing the far-field signal 416 to a voice assistant application and executing a voice command in the far-field signal 416 with the voice assistant application, and/or providing the far-field signal 416 to an audio signal recorder application, a dictation application, or any other application or process that utilizes audio input to the electronic device from the external environment of the electronic device.
As shown in
For example, the residual echo suppression module 402 may remove a residual-echo portion of the audio signals 410 from the processed audio signals 512 (e.g., frequency space audio signals and/or initial echo-suppressed audio signals) by identifying, with the DOA estimation module 506, various portions of the processed audio signals 512 corresponding to various respective directions-of-arrival. In one or more implementations, identifying the various portions of the processed audio signals 512 that correspond to the various respective directions-of-arrival may include binning the various portions of the processed audio signals 512 into time-frequency bins, and generating a DOA map that indicates the DOA of the dominant audio source for each of the time-frequency bins. For example, the DOA map may map (e.g., based on a correspondence between a shape of a time-frequency response in the audio signals and a time-frequency response in the NF impulse response information for one or more labeled directions-of-arrival) one or more of the various portions (e.g., the time-frequency bins) of the processed audio signals 512 to a labeled direction-of-arrival for the dominant audio source detected in that portion (e.g., bin).
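A simplified sketch of building such a DOA map follows, assuming two microphones and stored per-frequency inter-microphone ratios labeled by direction-of-arrival; the matching rule (nearest labeled response per time-frequency bin) and all names are illustrative assumptions.

    import numpy as np
    from scipy.signal import stft

    def build_doa_map(mic_signals, labeled_rtfs, sample_rate, nperseg=512):
        """Assign each time-frequency bin the labeled DOA whose stored
        inter-microphone ratio best matches the observed ratio.

        mic_signals:  array of shape (2, num_samples).
        labeled_rtfs: dict mapping a DOA label to a complex array of
                      per-frequency ratios on the same STFT grid
                      (nperseg // 2 + 1 frequency bins).
        """
        _, _, X1 = stft(mic_signals[0], fs=sample_rate, nperseg=nperseg)
        _, _, X2 = stft(mic_signals[1], fs=sample_rate, nperseg=nperseg)
        observed = X2 / (X1 + 1e-12)           # per-bin inter-mic ratio
        labels = list(labeled_rtfs)
        dists = np.stack([np.abs(observed - labeled_rtfs[lab][:, None])
                          for lab in labels])  # (num_labels, freq, time)
        doa_map = np.argmin(dists, axis=0)     # dominant labeled DOA per bin
        return doa_map, labels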
In this way, and because the labeled DOA(s) of the speaker 102 (e.g., and/or any other near-field sound-generating components such as other speakers, fans, haptic components, etc. of the electronic device) are known, the DOA map can be used to identify time-frequency bins of the processed audio signals 512 in which the speaker 102 (e.g., and/or any other near-field sound-generating components such as other speakers, fans, haptic components, etc.) contributes to (e.g., is the dominant contributor to) the audio input. The DOA map and the pre-labeled DOAs can thus be used to identify time-frequency bins in which any speaker or sound-generating component was active and dominant, and/or to isolate the time-frequency bins when any individual speaker or sound-generating component was dominant and active.
The NF masking module 508 may generate a mask (e.g., an NF mask) using a predetermined direction-of-arrival (e.g., a look direction 510 or “Look Dir”) corresponding to the speaker 102 and/or any other near-field audio sources, and the identified various portions of the processed audio signals 512 corresponding to the various respective directions-of-arrival (e.g., in the DOA map). The NF masking module 508 may then apply the mask to the processed audio signals 512. For example, the NF mask may be a vector, an array, or other structure of mask values (e.g., gain values) having a high value for time-frequency bins in which the processed audio signals 512 do not include contributions from the speaker 102 and/or any other sound-generating components (e.g., from a direction-of-arrival of the speaker 102 and/or any other sound-generating components), and a low value for the time-frequency bins in which the processed audio signals 512 do include contributions from the speaker 102 and/or any other sound-generating components (e.g., from a direction-of-arrival of the speaker 102 and/or any other sound-generating components). In this example, applying the NF mask may include passing any time-frequency bins with high mask values through to the far-field based operations module 404 in the far-field signal 416 (e.g., and removing time-frequency bins with low mask values). In this way, the electronic device 100 (e.g., the residual echo suppression module 402) can suppress a residual-echo portion of the processed audio signals 512 using a near-field impulse response 403 for the speaker 102 and/or any other speakers of the electronic device, a fan-noise portion of the processed audio signals 512 using a near-field impulse response 403 corresponding to relative locations of the microphones and a fan of the electronic device 100, and/or any other noise portion of the processed audio signals 512 that corresponds to a direction-of-arrival for which a near-field impulse response 403 is available.
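Continuing the hypothetical sketch, an NF mask can be derived from the DOA map by assigning a low gain to time-frequency bins whose dominant labeled DOA matches a near-field look direction and a high gain to the remaining bins; the function and parameter names are illustrative assumptions.

    import numpy as np
    from scipy.signal import istft

    def apply_nf_mask(X, doa_map, labels, look_dirs, floor_gain=0.0):
        """Suppress bins dominated by a labeled near-field source.

        X:         STFT of one (e.g., initial echo-suppressed) channel,
                   shape (freq, time), matching doa_map.
        doa_map:   per-bin index into labels of the dominant DOA.
        labels:    list of DOA labels, as returned by build_doa_map.
        look_dirs: collection of labels corresponding to near-field sources.
        """
        bin_labels = np.asarray(labels)[doa_map]           # label per bin
        near_field = np.isin(bin_labels, list(look_dirs))  # True for NF bins
        mask = np.where(near_field, floor_gain, 1.0)       # low gain for NF bins
        return X * mask

    # The masked STFT can then be converted back to a time-domain far-field
    # signal, e.g.: _, ff_signal = istft(masked, fs=sample_rate, nperseg=512)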
The architecture of
In the examples of
For example
As illustrated in
For example, in one or more implementations, the audio processing and/or device control operations 600 may include operating the electronic device 100 based on the audio signals 410 and the identified portion of the audio signals by removing the identified portion of the audio signals from the audio signals, and operating the electronic device 100 based on a remaining portion of the audio signals. For example, the audio processing and/or device control operations 600 may include operating the electronic device 100 based on the remaining portion of the audio signals by transmitting the remaining portion of the audio signals (e.g., far-field signal 416) to a remote device (e.g., as part of a telephone call or an audio and/or video conferencing session).
As another example, in one or more implementations, the audio processing and/or device control operations 600 may include operating the electronic device 100 based on the remaining portion of the audio signals by determining whether voice activity is present in the remaining portion of the audio signals, and providing the remaining portion of the audio signals to a voice assistant application at the electronic device 100, if the voice activity is present.
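As a stand-in for whatever voice-activity detector an implementation actually uses, a simple energy-based check could gate the remaining portion of the audio signals before it is provided to the voice assistant application; the frame size and threshold below are illustrative assumptions.

    import numpy as np

    def has_voice_activity(remaining, sample_rate, frame_ms=20, thresh_db=-45.0):
        """Return True if any short frame of the remaining (near-field-
        suppressed) signal exceeds an energy threshold."""
        frame = int(sample_rate * frame_ms / 1000)
        n = len(remaining) // frame
        frames = np.reshape(remaining[:n * frame], (n, frame))
        rms_db = 10.0 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
        return bool(np.any(rms_db > thresh_db))  # True -> route to assistant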
In one or more implementations, the audio processing and/or device control operations 600 may include removing the identified portion of the audio signals from the audio signals by applying, to the audio signals, a gain mask (e.g., the NF mask of
In the example of
In one or more implementations, DOA estimation module 506 may identify an additional portion of the audio signals (e.g., the audio signals 410 and/or the processed audio signals 512) corresponding to an additional sound-generating component of the electronic device 100 (e.g., the sound-generating component 108) using additional near-field impulse response information for the additional sound-generating component (e.g., an additional NF impulse response 403 for the sound-generating component 108 stored in memory of the electronic device 100).
In one or more implementations, DOA estimation module 506 may also identify an additional portion of the audio signals corresponding to a far-field audio source using far-field impulse response information (e.g., a far-field impulse response 509) for a far-field location and two or more microphones (e.g., microphones 104, 106, and/or 401). For example, the far-field impulse response information for the far-field location and the plurality of microphones may include a predetermined far-field impulse response 509 determined (e.g., during manufacturing of the electronic device) by generating a known sound at the far-field location, obtaining an audio signal with each of the microphones, determining a transfer function based on the known sound and each of the obtained audio signals and/or a combination of the obtained signals, and storing the obtained transfer functions as the far-field impulse response 509, labeled with a DOA for the far-field location, for each microphone and/or a combination of the microphones.
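A hypothetical sketch of storing such labeled far-field responses, reusing the illustrative measure_transfer_function sketch above (the table layout and all names are assumptions):

    import numpy as np

    far_field_table = {}  # DOA label -> responses, shape (num_mics, num_freq_bins)

    def store_far_field_response(doa_label, stimulus, mic_recordings, sample_rate):
        """Estimate one transfer function per microphone from a known sound
        generated at a far-field location, and store the result labeled
        with the DOA of that location."""
        responses = [measure_transfer_function(stimulus, rec, sample_rate)[1]
                     for rec in mic_recordings]  # one recording per microphone
        far_field_table[doa_label] = np.stack(responses)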
In one or more implementations, DOA estimation module 506 may also identify an additional portion of the audio signals corresponding to a contact between the electronic device and an external object (e.g., the external object 300 of
In one or more implementations, the audio processing and/or device control operations 600 may include audio processing operations that identify multiple audio sources in the audio signals 410 and/or the processed audio signals 512, and multi-source audio operations that utilize the identifications of the multiple sources. For example,
In the example of
For example, the architecture of
In one or more implementations, one or more components of the echo suppression module 400, the residual echo suppression module 402, the far-field based operations module 404, the initial processing module 504, the DOA estimation module 506, the NF masking module 508, the audio processing and/or device control operations 600, the thresholding operation 700 and/or the multi-source audio operations 702 of
In the example of
At block 804, the electronic device (e.g., DOA estimation module 506) identifies a portion (e.g., one or more time-frequency bins) of the audio signals corresponding to a sound-generating component (e.g., a speaker, such as speaker 102, or another sound-generating component such as sound-generating component 108) of the electronic device using near-field impulse response information (e.g., NF impulse response(s) 403) for the sound-generating component and the microphones. In one or more implementations, the microphones include at least a first microphone (e.g., microphone 104) and a second microphone (e.g., microphone 106), and the near-field impulse response information includes a first transfer function between the sound-generating component and the first microphone and a second transfer function between the sound-generating component and the second microphone. In one or more implementations, the near-field impulse response information includes a transfer function (e.g., an NF IR 403) between each of one or more sound-generating components and each of two or more microphones.
At block 806, the electronic device may be operated based on the audio signals and the identified portion of the audio signals. For example, operating the electronic device based on the audio signals and the identified portion of the audio signals may include performing any or all of the operations of the far-field based operations module 404 of
In one or more implementations, the process 800 may also include identifying an additional portion (e.g., one or more additional time-frequency bins) of the audio signals corresponding to a far-field audio source (e.g., far-field audio source 110 and/or far-field audio source 112) external to the electronic device using far-field impulse response information (e.g., one or more FF IRs 509) for a far-field location and the microphones.
As discussed herein, operating an electronic device based on audio input can include performing residual echo suppression, in one or more implementations.
In the example of
At block 904, the electronic device may receive audio signals (e.g., audio signals 410 and/or processed audio signals 512) from microphones (e.g., microphone 104, microphone 106, and/or one or more microphones 401) of the electronic device while driving the speaker to generate the audio output (e.g., as indicated in
At block 906, the electronic device (e.g., residual echo suppression module 402) may generate echo-suppressed audio signals (e.g., far-field signals 416) by removing a residual-echo portion of the audio signals using a near-field impulse response (e.g., near-field impulse response 403) corresponding to relative locations of the speaker and the microphones. For example, the near-field impulse response corresponding to the relative locations of the speaker and the microphones may be a frequency-dependent near-field impulse response corresponding to the relative locations of the speaker and the microphones. For example, the frequency-dependent near-field impulse response corresponding to the relative locations of the speaker and the microphones may include a sub-band near-field impulse response for the speaker and each of the microphones (e.g., stored as a multi-dimensional vector as discussed herein in connection with some examples).
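For illustration, a measured time-domain impulse response could be reduced to such a sub-band representation as follows; the band-averaging scheme and names are assumptions rather than the disclosed implementation, and the impulse response is assumed to be longer than the number of bands.

    import numpy as np

    def to_subband_response(impulse_response, num_bands=64):
        """Convert a time-domain near-field impulse response into one
        complex coefficient per frequency sub-band."""
        H = np.fft.rfft(impulse_response)
        edges = np.linspace(0, len(H), num_bands + 1, dtype=int)
        # Average the fine-grained frequency response within each band.
        return np.array([H[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])

    # Stacking one sub-band response per microphone yields the multi-
    # dimensional vector discussed above, e.g., shape (num_mics, num_bands).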
In one or more implementations, removing the residual-echo portion of the audio signals includes generating (e.g., by echo suppression module 400) initial echo-suppressed audio signals (e.g., initial echo-suppressed audio signals 412) by cancelling a portion of the audio signals corresponding to the audio output from the speaker, and removing (e.g., by residual echo suppression module 402) the residual-echo portion of the audio signals from the initial echo-suppressed audio signals using the near-field impulse response corresponding to the relative locations of the speaker and the microphones (e.g., as described above in connection with
In one or more implementations, removing the residual-echo portion of the audio signals from the initial echo-suppressed audio signals includes identifying (e.g., by DOA estimation module 506) various portions (e.g., time-frequency bins) of the initial echo-suppressed audio signals corresponding to various respective directions-of-arrival, generating (e.g., by NF masking module 508) a mask (e.g., a NF mask) using a predetermined direction-of-arrival of the speaker (e.g., a look direction 510) and the identified various portions of the initial echo-suppressed audio signals corresponding to the various respective directions-of-arrival (e.g., in a DOA map), and applying (e.g., by NF masking module 508) the mask to the initial echo-suppressed audio signals (e.g., as described above in connection with
In one or more implementations, the process 900 may also include suppressing (e.g., by NF masking module 508) a fan-noise portion of the initial echo-suppressed audio signals using a near-field impulse response corresponding to relative locations of the microphones and a fan (e.g., sound-generating component 108) of the electronic device. In one or more implementations, the process 900 may also include suppressing (e.g., by NF masking module 508) a contact-noise portion of the initial echo-suppressed audio signals using a near-field impulse response corresponding to relative locations of the microphones and one or more locations on a housing (e.g., the housing 301) of the electronic device.
As described above, one aspect of the present technology is the gathering and use of data available from specific and legitimate sources for providing user information in association with processing audio and/or non-audio signals. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to identify a specific person. Such personal information data can include demographic data, location-based data, online identifiers, telephone numbers, email addresses, home addresses, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used for operating an electronic device based on audio input. Accordingly, use of such personal information data may facilitate transactions (e.g., online transactions). Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used, in accordance with the user's preferences, to provide insights into their general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that those entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities would be expected to implement and consistently apply privacy practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. Such information regarding the use of personal data should be prominently and easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate uses only. Further, such collection/sharing should occur only after receiving the consent of the users or other legitimate basis specified in applicable law. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations which may serve to impose a higher standard. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of operating an electronic device based on audio input, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing identifiers, controlling the amount or specificity of data stored (e.g., collecting location data at city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods such as differential privacy.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
The bus 1008 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1000. In one or more implementations, the bus 1008 communicatively connects the one or more processing unit(s) 1012 with the ROM 1010, the system memory 1004, and the permanent storage device 1002. From these various memory units, the one or more processing unit(s) 1012 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 1012 can be a single processor or a multi-core processor in different implementations.
The ROM 1010 stores static data and instructions that are needed by the one or more processing unit(s) 1012 and other modules of the electronic system 1000. The permanent storage device 1002, on the other hand, may be a read-and-write memory device. The permanent storage device 1002 may be a non-volatile memory unit that stores instructions and data even when the electronic system 1000 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 1002.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 1002. Like the permanent storage device 1002, the system memory 1004 may be a read-and-write memory device. However, unlike the permanent storage device 1002, the system memory 1004 may be a volatile read-and-write memory, such as random access memory. The system memory 1004 may store any of the instructions and data that one or more processing unit(s) 1012 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 1004, the permanent storage device 1002, and/or the ROM 1010. From these various memory units, the one or more processing unit(s) 1012 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 1008 also connects to the input and output device interfaces 1014 and 1006. The input device interface 1014 enables a user to communicate information and select commands to the electronic system 1000. Input devices that may be used with the input device interface 1014 may include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output device interface 1006 may enable, for example, the display of images generated by electronic system 1000. Output devices that may be used with the output device interface 1006 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid state display, a projector, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM.
The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that not all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station”, “receiver”, “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to”, “operable to”, and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include”, “have”, or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.