This disclosure generally relates to processing antenna signals with machine learning.
Antennas are widely used to transmit and receive radio frequency (RF) signals over one or more channels. When multiple antennas are used by a receiving device (e.g., a mobile terminal), these are often formed into an antenna array to improve reception performance. Each antenna member of an antenna array can be referred to as an antenna element. The received RF signals are typically characterized by attributes such as center frequency, bandwidth, and modulation scheme. After receiving the RF signals, the receiving device may convert the RF signals to data samples (e.g., via an analog-to-digital converter (ADC)), transfer the data samples to a processing unit via a high-speed data interface, and perform downstream data processing tasks using the data samples.
In one aspect, a method for processing RF signals is provided. The method can be implemented in a system having an antenna array and one or more processors. The method includes receiving one or more RF signals from one or more antenna channels. The method includes obtaining, from the one or more RF signals, a plurality of unlabeled data samples. The method includes generating an input tensor representation of the plurality of data samples. The method includes pretraining a first machine learning network using the input tensor representation to obtain one or more embeddings. The method includes training a second machine learning network using the one or more embeddings. The second machine learning network is configured to perform one or more signal processing tasks.
In some implementations, to pretrain the first machine learning network using the input tensor representation, the method includes causing the first machine learning network to perform at least one of: tensor reconstruction, channel in-painting, time-channel ordering, de-noising, Simple framework for Contrastive Learning of Visual Representations (SimCLR), contrastive predictive coding, Barlow twins, or array covariance matrix estimation.
In some implementations, the tensor reconstruction includes modifying the input tensor representation to obtain a modified tensor representation, encoding the modified tensor representation using an encoder of the first machine learning network to obtain a latent representation, decoding the latent representation using a decoder of the first machine learning network to obtain a reconstructed tensor representation corresponding to the input tensor representation, calculating a loss function between the input tensor representation and the reconstructed tensor representation, making adjustments to one or more parameters of the encoder to reduce the loss function below a threshold value, and obtaining the one or more embeddings based on the adjustments.
In some implementations, to encode the modified tensor representation, the method includes obtaining a convolutional stem output based on the modified tensor representation, scaling the convolutional stem output by a pooling factor, and downsampling the scaled convolutional stem output based on a stride number.
In some implementations, the channel in-painting includes randomly setting one or more unlabeled data samples to zero.
In some implementations, the latent representation has less dimensionality than the input tensor representation.
In some implementations, the first machine learning network is pretrained using self-supervised learning.
In some implementations, the signal processing includes at least one of: beamforming weight detection, bandwidth regression, blind channel detection, signal detection from noise, joint signal detection, interference detection, signal classification, direction-of-arrival estimation, or channel estimation.
In some implementations, to generate the input tensor representation of the plurality of data samples, the method includes obtaining, from the plurality of data samples, a plurality of data frames in a time domain, performing a short-time Fourier transform (STFT) on the plurality of data frames to obtain a joint time-and-frequency-domain representation of the plurality of data samples, and normalizing the joint time-and-frequency-domain representation of the plurality of data samples.
In some implementations, the input tensor representation includes at least one of: a first dimension representing a plurality of center frequencies, a second dimension representing the one or more antenna channels, a third dimension representing sampling times, or a fourth dimension representing one or more quadrature channels.
The details of one or more embodiments of the disclosure are set forth in the accompanying drawings and the description below. Other features and advantages of the invention will become apparent from the description, the drawings, and the claims.
Like reference numbers and designations in the various drawings indicate like elements.
Digital antenna arrays are used in modern wireless communications applications to receive RF signals having large data volume. For example, some digital antenna arrays can receive RF signals from software-defined radio (SDR) and provide data samples at each antenna element at tens or hundreds of megasamples per second, with each sample typically having 24 to 28 bits of precision and requiring 32 bits to transfer. Furthermore, the total data rate out of an antenna array can scale linearly with the bandwidth of the RF signals and with the number of antenna elements. This means that data samples from a wideband antenna array can rapidly saturate a high-speed interface and computing resources as the bandwidth and the number of antenna elements increase. While beamforming can be used to reduce the data rate by forming a weighted sum of signals received by multiple antenna elements into a single signal, the operation of beamforming can lead to loss of information about signals from some directions. Accordingly, there is a need to reduce the volume of data samples from an antenna array without significantly losing content of the signals.
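For illustration, a back-of-envelope calculation shows how quickly the aggregate rate grows; the per-element sample rate and the 16-element array size here are assumed example values (the 32 bits per transferred sample follows the figure given above):

```python
# Assumed example figures: 100 megasamples/second per antenna element,
# 32 bits transferred per sample, 16 antenna elements.
sample_rate_sps = 100e6
bits_per_sample = 32
num_elements = 16

# The aggregate rate scales linearly with both the sample rate (i.e., the
# bandwidth) and the number of antenna elements.
rate_bps = sample_rate_sps * bits_per_sample * num_elements
print(rate_bps / 1e9)  # 51.2 (Gbit/s)
```

Doubling either the bandwidth or the element count doubles the aggregate rate, which is why a wideband array can quickly saturate a high-speed interface.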
Within the received RF signals, there can be multiple degrees of spatial redundancy, spectral sparsity, and temporal structure, which can be exploited to reduce the volume of data samples output by an antenna array. For example, the received signal spectrum at each antenna element of an array can be similar, with each antenna element having a position-dependent spectral amplitude and phase offset relative to other antenna elements in the array. These offsets are usually structured functions of space and frequency and can be captured by a number of coefficients in a spatial Fourier representation, with each coefficient corresponding to a multipath direction-of-arrival. Similarly, in a typical terrestrial environment, a limited fraction of the spectrum may be occupied by actual RF signals, while a considerable portion of data samples output by the ADC may result from noise between the RF signals. Moreover, the types of RF signals that need to be represented can have structure that distinguishes them from random noise. Accordingly, the number of bits needed to encode the data in the RF signals can be less than the number actually output by the antenna array. Because of these characteristics of the RF signals received by antenna arrays, the data samples obtained from the RF signals can potentially be compressed into representations (known as embeddings) that have fewer degrees of freedom and less dimensionality.
Machine learning networks, such as neural networks and Siamese networks used for signal encoding and/or decoding (encoder-decoder neural networks), can be used to process the data samples obtained from the RF signals. In order to train a machine learning network to perform a data processing task (referred to hereinafter as “downstream data processing task” or simply “downstream task”), labeled training data samples are needed. Labeling data samples (e.g., done manually by human operators) can consume significant labor and resources. It can be useful to have a mechanism for the machine learning network to leverage unlabeled training data. A network learning to compress unlabeled data is one way to use this kind of data in the training process.
As described in detail below, this disclosure provides techniques for pretraining a first machine learning network (also referred to as an embedding network) to perform a pretext task (e.g., a task that is related to the downstream data processing task) with unlabeled data samples. Applying self-supervised learning (SSL), the first machine learning network is used to output compressed representations of the RF signal distributions, and these output embeddings (e.g., information learned from the pretraining) are then provided to train a second machine learning network (also referred to as a downstream-task-specific network) to perform a task of downstream signal data processing. As discussed below, pretraining can improve the training performance by, e.g., expediting the training process and reducing the number of training data samples needed, which helps avoid overloading the high-speed interface. Because pretraining is based on SSL with little to no human involvement, having pretraining before training can reduce the effort and resources for processing the data from RF signals.
In the process 100, one or more RF signals are received by one or more antennas 102, which can be an antenna array with multiple antenna elements. The one or more RF signals are received from one or more antenna channels, which can correspond to, e.g., one or more RF signal transmitters, one or more beams, one or more signal bandwidths, one or more carrier frequencies, or one or more modulation schemes, or any suitable combination of these.
The received RF signals are processed by one or more processors 104 to obtain a plurality of data samples from the one or more RF signals at a plurality of center frequencies. The processing can utilize one or more analog-to-digital converters (ADCs) to convert the RF signals to digital samples at a sampling rate. After the sampling process, each data sample can be identified by a combination of one or more variables, and the data samples can be grouped to form multiple training sets, where the grouping can be based on center frequency or other criteria depending on implementation. For example, in a scenario where RF signals are received from 4 antenna channels, sampled 5,000,000 times, and grouped into 115 training sets, a data sample obtained from a given RF signal can be identified by a combination of: (i) the training set in which that data sample has been grouped; (ii) the antenna channel that receives the given RF signal; (iii) the time at which the data sample is obtained from the given RF signal; and (iv) the quadrature channel (either the in-phase component or the quadrature component) in case the RF signals are I/Q modulated. As such, each data sample can be represented as an element of a tensor space with a shape of [115, 4, 5,000,000, 2], where each of the variables in (i)-(iv) is a dimension of the tensor space. Depending on the configurations of the antennas and the processors, the data samples may be identified with more or fewer dimensions. Because these data samples are directly obtained from the RF signals without further processing, these data samples together are referred to as a raw dataset.
The one or more processors 104 generate an input tensor representation 106 of some or all of the data samples in the raw dataset. Depending on the pretext task, the complexity of the embedding network, the nature of the data samples, and/or the computing resources available for pretraining, the size and dimensionality (e.g., number of dimensions) of the input tensor representation 106 can vary. For example, for a pretext task that pretrains the embedding network to reconstruct spectrogram data with minimized error, the input tensor representation 106 can have a shape of [4, 65536, 2], meaning that a subset of the raw dataset is represented by the input tensor representation 106.
To obtain data samples for the input tensor representation 106, the one or more processors 104 can perform various data processing operations. As a non-limiting example, the one or more processors 104 pad the time dimension (5,000,000) with 46272 zeros to obtain a modified tensor space with a shape of [115, 4, 5,046,272, 2]. The one or more processors 104 then divide the time dimension (5,046,272) into 77 chunks of 65536 time samples each, resulting in a tensor with a shape of [115, 4, 77, 65536, 2]. Following these operations, the dimensions of (115) and (77) are merged ("collapsed") to become a single dimension (8855). This creates a tensor with a shape of [8855, 4, 65536, 2]. In doing so, the one or more processors 104 have obtained 8855 tensor items as the input tensor representation 106, with each tensor item having a shape of [4, 65536, 2] to identify data samples corresponding to 4 antenna channels, 65536 sampling times, and 2 quadrature channels. Each tensor item here can be considered a frame of the raw dataset.
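The padding, chunking, and collapsing operations above can be sketched with NumPy. This is an illustrative, non-limiting sketch: it uses a scaled-down stand-in for the raw dataset (3 sets, 50 time samples, chunks of 8) so the example stays small, but the steps mirror the ones described:

```python
import numpy as np

# Scaled-down stand-in for the raw dataset: 3 training sets, 4 antenna
# channels, 50 time samples, 2 quadrature channels (the text uses a
# [115, 4, 5,000,000, 2] tensor space).
raw = np.zeros((3, 4, 50, 2))
chunk = 8                          # stands in for the 65536-sample chunks

# Pad the time dimension with zeros so it divides evenly into chunks
# (6 zeros here; 46272 in the text's example).
pad = (-raw.shape[2]) % chunk
padded = np.pad(raw, ((0, 0), (0, 0), (0, pad), (0, 0)))

# Divide the time dimension into chunks, then collapse the training-set and
# chunk dimensions into one. Because those two dimensions are not adjacent,
# a transpose brings them together before the final reshape.
sets, ch, t, q = padded.shape
chunked = padded.reshape(sets, ch, t // chunk, chunk, q)      # [3, 4, 7, 8, 2]
frames = chunked.transpose(0, 2, 1, 3, 4).reshape(-1, ch, chunk, q)
# frames.shape == (21, 4, 8, 2): each item is one frame of the raw dataset,
# shaped [antenna channels, sampling times, quadrature channels].
```

With the full-size figures, the same sequence yields 115 × 77 = 8855 frames of shape [4, 65536, 2].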
In some implementations, the input tensor representation 106 undergoes further pre-processing of compression and dimensionality reduction. An advantage of doing so is to facilitate the learning process of the embedding network 108. For example, a frame with a shape of [4, 65536, 2] can be pre-processed into a joint time-frequency representation, e.g., via a short-time Fourier transform (STFT), that is suitable for detecting signal information in noise. To do so, the trailing dimension of 2 (i.e., the dimension that represents a quadrature channel) is first absorbed into the time dimension by converting the tensor representation to be complex with real and imaginary parts. This results in a tensor with a shape of [4, 65536]. The time dimension is then reshaped into two dimensions with 32 time chunks and 2048 continuous time steps, resulting in a tensor of shape [4, 32, 2048]. After applying a Hann window function along the time dimension (2048), a discrete Fourier transform (DFT) is performed on the time dimension, resulting in 4-channel Hann-windowed STFTs with 32 time chunks and 2048 frequency bins, e.g., a complex tensor having a shape of [4, 32, 2048]. The real and imaginary parts of the complex tensor are separated back into a new trailing dimension with a size of 2, giving a tensor with a shape of [4, 32, 2048, 2] with all entries being real. The trailing quadrature dimension (2) is then merged with the dimension of antenna channels (4), resulting in a set of 8-channel training examples, each of a shape of [8, 32, 2048]. Each training example is then normalized (e.g., standardized) by subtracting the mean of the example set and dividing by the standard deviation of the example set. After the normalization, the set of training examples has zero mean and unit variance across the channel (8), time (32), and frequency (2048) dimensions.
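The STFT pre-processing steps above can be sketched end-to-end with NumPy on a single frame (random data stands in for real samples; the normalization shown standardizes one example rather than the full example set, a simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# One frame: 4 antenna channels, 65536 time samples, 2 quadrature channels.
frame = rng.standard_normal((4, 65536, 2))

# Absorb the quadrature dimension by forming complex samples.
x = frame[..., 0] + 1j * frame[..., 1]           # [4, 65536] complex

# Reshape time into 32 chunks of 2048 continuous steps, apply a Hann
# window along the time dimension, and run a DFT on each chunk.
x = x.reshape(4, 32, 2048)
x = x * np.hanning(2048)
spec = np.fft.fft(x, axis=-1)                     # [4, 32, 2048] complex

# Separate real/imag into a trailing dimension, then merge that quadrature
# dimension with the antenna-channel dimension: 4 channels * 2 -> 8 channels.
spec = np.stack([spec.real, spec.imag], axis=-1)  # [4, 32, 2048, 2] real
example = spec.transpose(0, 3, 1, 2).reshape(8, 32, 2048)

# Standardize to zero mean and unit variance.
example = (example - example.mean()) / example.std()
```

The result is one 8-channel training example of shape [8, 32, 2048], ready to be fed to the embedding network.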
It is noted that the above-described operations may vary depending on applications. It is also noted that not all operations are required for all implementations. For example, in some scenarios where the raw dataset has relatively fewer data samples, it is possible to omit the pre-processing operations.
The items of the input tensor representation 106 can be used to pretrain an embedding network 108, which can be an encoder-decoder neural network or a Siamese network. While the below description is primarily based on scenarios that use encoder-decoder neural networks, other types of machine learning networks for pretraining and/or training can be used similarly. The data samples in the input tensor representation 106 are unlabeled, meaning that the embedding network 108 executes SSL through the pretraining process with little or no human intervention.
Through the pretraining process, the one or more processors 104 obtain one or more embedding network parameters, referred to as embeddings hereinafter. These embeddings, labeled as embeddings 110 in
The training of the second machine learning network 112 can be partially or fully supervised. For example, the second machine learning network 112 can receive an input dataset 114, which can be derived from the raw dataset or obtained from other sources. Some or all of data samples in the input dataset 114 can be labeled, e.g., by manual input or by automated software/hardware tools. Labeling a data sample can help the second machine learning network 112 track the data sample, compare the output of the second machine learning network 112 with a target output, and make adjustments to improve performance. After training, the second machine learning network 112 can be deployed to perform one or more of the downstream tasks 116.
The pretext task in the example of block diagram 200 is for the encoder 203 to create a latent representation 204 from a modified (e.g., noise-corrupted) version of the input tensor 201, with the latent representation 204 having less dimensionality than the input tensor 201. Ideally, the pretraining should enable the encoder 203 to create the latent representation 204 as if no modifications were made to the input tensor 201. Besides the pretext task described with reference to
In more detail, the input tensor 201 undergoes transformation 202, which modifies some data samples (e.g., corrupting the data samples to simulate noise or interference) represented by the input tensor 201. The modified data samples, forming a modified tensor, are input to the encoder 203, which creates the latent representation 204. The latent representation 204 is then decoded by a decoder 205 to output a reconstructed tensor 206. The input tensor 201 and the reconstructed tensor 206 are then compared to obtain a loss function 207, which describes the difference between the input tensor 201 and the reconstructed tensor 206. The difference is caused by the transformation 202, which modifies the input tensor 201. Specifically, the modification to the input tensor 201 is propagated to the latent representation 204 through the encoding process and further propagated to the reconstructed tensor 206 through the decoding process.
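One concrete choice for the transformation 202 is the channel in-painting described earlier: randomly setting samples to zero so the network must learn to fill them back in. A minimal sketch (the 10% drop rate is an assumed example value):

```python
import numpy as np

rng = np.random.default_rng(0)

def inpaint_corrupt(tensor, drop_prob=0.1, rng=rng):
    """Randomly zero a fraction of entries; the pretext task is then to
    reconstruct ("in-paint") the zeroed-out values from what remains."""
    mask = rng.random(tensor.shape) >= drop_prob
    return tensor * mask

# One 8-channel training example, as produced by the STFT pre-processing.
x = rng.standard_normal((8, 32, 2048))
x_mod = inpaint_corrupt(x)   # same shape, ~10% of entries zeroed
```

The corrupted tensor `x_mod` plays the role of the modified tensor fed to the encoder 203, while the uncorrupted `x` serves as the reconstruction target.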
The loss function 207 can be represented by, e.g., a mean-squared magnitude of difference between the reconstructed tensor 206 and the corresponding input tensor 201 for a set of training examples. For example, for each training data sample (represented as an input tensor) in a training set, the difference between the input tensor and the reconstructed tensor is calculated. Over the entire training set, the squared magnitudes of these differences are averaged to obtain the loss function 207. In some implementations, the pretraining minimizes the loss function 207 using the Adam variant of stochastic gradient descent.
The loss function 207 is input to an optimizer 208 that adjusts the weights of the encoder 203 and the decoder 205. The process repeats and the weights are updated until the loss function is sufficiently small, e.g., lower than a threshold. This would indicate that the encoder-decoder neural network has learned information to conduct a mapping from the modified input data samples to a latent representation that includes enough information for reconstructing the unmodified input data samples with an acceptable level of fidelity. The information learned from the pretraining process, including the weights applied to the encoder 203, can be considered embeddings, such as the one or more embeddings 110 of
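Putting the pieces together, the pretraining loop can be sketched with a deliberately tiny, purely linear stand-in for the encoder and decoder (all dimensions, the corruption rate, and the learning rate are assumed toy values, and plain gradient descent stands in for the Adam optimizer; a real embedding network would be convolutional, as described below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 256 frames flattened to 64 features, an 8-dim latent space.
N, D, K = 256, 64, 8
X = rng.standard_normal((N, D))

W_enc = 0.1 * rng.standard_normal((D, K))   # encoder weights
W_dec = 0.1 * rng.standard_normal((K, D))   # decoder weights
lr = 5.0

losses = []
for step in range(300):
    X_mod = X * (rng.random(X.shape) >= 0.1)  # transformation: random zeroing
    Z = X_mod @ W_enc                          # latent representation
    X_rec = Z @ W_dec                          # reconstructed tensor
    err = X_rec - X
    losses.append(np.mean(err ** 2))           # mean-squared loss
    # Gradients of the loss with respect to both weight matrices.
    g = 2.0 * err / err.size
    grad_dec = Z.T @ g
    grad_enc = X_mod.T @ (g @ W_dec.T)
    W_dec -= lr * grad_dec                     # optimizer weight updates
    W_enc -= lr * grad_enc
```

After training, the loss has fallen below its initial value, and the learned `W_enc` plays the role of the embeddings carried forward to the downstream-task-specific network.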
The architecture 300 is based on a convolutional residual block structure with squeeze-and-excitation with a squeeze reduction ratio of 8. The encoder 330 first increases the channel count of the input data from 8 to 32 using a convolutional stem 302, while keeping the time and frequency resolutions (e.g., values of the dimensions) unchanged. The encoder 330 then uses a pooling layer 303 to scale (e.g., reduce) the STFT time resolution by a pooling factor of 2. Further, the encoder 330 uses one or more (e.g., two) layers 304 of strided convolution to downsample the tensor output by the pooling layer 303, arriving at the latent representation 305.
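The shape bookkeeping of the encoder path can be sketched as follows. This is a simplified stand-in: a 1×1 channel mixing stands in for the convolutional stem, average pooling for the pooling layer, and decimation for the strided convolutions; the residual and squeeze-and-excitation blocks are omitted entirely:

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_stem(x, w):
    """1x1 channel mixing raising channel count 8 -> 32; time and frequency
    resolutions are left unchanged (stand-in for the convolutional stem)."""
    return np.einsum('oc,ctf->otf', w, x)      # x: [C_in, T, F]

def pool_time(x, factor=2):
    """Average-pool the STFT time dimension by the pooling factor."""
    c, t, f = x.shape
    return x.reshape(c, t // factor, factor, f).mean(axis=2)

def strided_downsample(x, stride=2):
    """Crude stand-in for a strided convolution: keep every stride-th bin."""
    return x[:, ::stride, ::stride]

x = rng.standard_normal((8, 32, 2048))          # [channels, time, freq]
w = 0.1 * rng.standard_normal((32, 8))
h = conv_stem(x, w)                              # -> [32, 32, 2048]
h = pool_time(h, factor=2)                       # -> [32, 16, 2048]
for _ in range(2):                               # two strided layers
    h = strided_downsample(h, stride=2)
latent = h                                       # -> [32, 4, 512]
```

The latent representation ends up far smaller than the input, which is the dimensionality reduction that makes the embeddings compact.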
The decoder 350 operates in a transposed fashion such that strided layers perform an upsampling operation in the STFT space. That is, the operations performed by blocks 306-308 of the decoder 350 can be considered reversed operations of those performed by blocks 302-304 of the encoder 330. With these operations, the architecture 300 obtains a reconstructed tensor 309.
As discussed earlier, the embedding network weights learned from the pretraining process can be applied to a downstream-task-specific network to train the downstream-task-specific network for performing the downstream data processing task. The training of the downstream-task-specific network is described below with reference to
The training illustrated in
The encoder 403 is provided with an input tensor 401 as a representation of training data samples. The input tensor 401 can be obtained from labeled data samples of a smaller size than the input tensor 201 of
Different from the decoder 205 in
With the loss function 407 calculated, the optimizer 408 operates to adjust the weights of the encoder 403 and the decoder 405. Similar to the weight adjustment described with reference to
In the training process, the input data samples can be labeled, e.g., with identifying information corresponding to each data sample, which propagates through the encoding and decoding processes and is reproduced in the downstream task output 406. The downstream task target output 402 can also have labels that identify the data samples. Labeling is helpful for matching the output of the decoder 405 with a corresponding target output, which ensures accuracy in the computation of the loss function 407. With the labeling, the training of the downstream-task-specific network can be considered supervised learning (or semi-supervised learning), as opposed to SSL in the pretraining described with reference to
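The division of labor in this supervised stage — an encoder carried over from pretraining plus a freshly initialized, task-specific decoder trained against labeled targets — can be sketched with a linear toy model (all shapes and the learning rate are assumed values; here the targets are constructed to be exactly learnable through the latent bottleneck, and the encoder is kept frozen for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)

N, D, K, T = 128, 64, 8, 4        # labeled frames, features, latent, targets
W_enc = 0.1 * rng.standard_normal((D, K))    # stands in for pretrained weights
W_head = 0.01 * rng.standard_normal((K, T))  # fresh task-specific decoder

X = rng.standard_normal((N, D))              # labeled input samples
M_true = rng.standard_normal((K, T))
Y = (X @ W_enc) @ M_true                     # downstream task target output

lr = 2.0
losses = []
for step in range(300):
    Z = X @ W_enc                  # pretrained encoder (frozen here)
    out = Z @ W_head               # downstream task output
    err = out - Y
    losses.append(np.mean(err ** 2))
    g = 2.0 * err / err.size
    W_head -= lr * (Z.T @ g)       # only the task head is updated
```

Because the encoder already maps inputs to an informative latent space, only the small task head needs supervised data, which is the training-efficiency benefit of pretraining discussed above.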
In the example deployment, the input tensor 501 can represent data samples obtained from the raw dataset and can additionally or alternatively represent other data samples obtained from other RF signals. Because both the encoder 503 and the decoder 505 are configured with parameters (e.g., weights of the encoder and decoder machine learning networks) learned from the pretraining and/or training process, the downstream task output 506 that is obtained from the deployment has values that are close to a target downstream task output (e.g., differences between values of the downstream task output 506 and the target output are within a specific threshold value).
As an example, a downstream task can be signal bandwidth regression, which involves mapping the STFT outcome of RF signals to a function of frequency bins, with the function taking a value proportional to the signal bandwidth for a bin at the center frequency and a value of zero everywhere else in the frequency domain. An example of signal bandwidth regression performed in the scenario of
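A regression target of this form can be sketched directly: zero at every frequency bin except the bin containing the signal's center frequency, where the value is proportional to the signal bandwidth (the proportionality constant, bin count, and bin index below are assumed example values):

```python
import numpy as np

def bandwidth_target(num_bins, center_bin, bandwidth_hz, scale=1e-6):
    """Bandwidth-regression target: proportional to the signal bandwidth at
    the center-frequency bin, zero everywhere else in the frequency domain."""
    target = np.zeros(num_bins)
    target[center_bin] = scale * bandwidth_hz
    return target

# A 5 MHz-wide signal centered at bin 512 of a 2048-bin spectrum.
t = bandwidth_target(2048, center_bin=512, bandwidth_hz=5e6)
```

During training, the downstream-task-specific network learns to map STFT inputs to functions of this shape; at deployment, the bin index and value of the nonzero peak reveal the signal's center frequency and bandwidth.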
At 702, the method 700 involves receiving one or more RF signals from one or more antenna channels. The one or more RF signals can be obtained, e.g., by the one or more antennas 102 of
At 704, the method 700 involves obtaining, from the one or more RF signals, a plurality of unlabeled data samples. The RF signals can be received at a plurality of center frequencies.
At 706, the method 700 involves generating an input tensor representation of the plurality of unlabeled data samples. The input tensor representation can be similar to the input tensor representation 106 of
At 708, the method 700 involves pretraining a first machine learning network using the input tensor representation to obtain one or more embeddings. The first machine learning network can be similar to the embedding network 108 of
At 710, the method 700 involves training a second machine learning network using the one or more embeddings. The second machine learning network can be similar to the second machine learning network 112 of
In the wireless communication system 800, a user equipment (UE) 801 communicates with a base station 802 via a plurality of O-RAN distributed units (DUs) or RAN intelligent controllers (RICs) 803. Each of the O-RAN DUs/RICs 803 has an array of antennas that receive RF signals from the UE 801 either directly or after reflection by reflecting surface 805. In such communication, the UE 801 frequently transmits sounding reference signals (SRSs) to the base station 802 to provide updated information about the channels between the UE 801 and the O-RAN DUs/RICs 803. These updates can help the base station 802 to adjust beamforming taps (e.g., weight parameters).
The wireless communication system 800 can be used as a multi-static radar, for sensing applications, or for digital twins. Depending on the application, the O-RAN can perform one or more downstream tasks to process the signals received by the arrays of the DUs/RICs 803 and use a machine learning network to facilitate the performance of the downstream tasks. The machine learning network can be pretrained/trained according to the operations described previously with reference to
The computing device 900 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device 950 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart-phones, mobile embedded radio systems, radio diagnostic computing devices, and other similar computing devices. The components shown here, their connections and relationships, and their functions, are meant to be examples only, and are not meant to be limiting.
The computing device 900 includes a processor 902, a memory 904, a storage device 906, a high-speed interface 908 connecting to the memory 904 and multiple high-speed expansion ports 910, and a low-speed interface 912 connecting to a low-speed expansion port 914 and the storage device 906. Each of the processor 902, the memory 904, the storage device 906, the high-speed interface 908, the high-speed expansion ports 910, and the low-speed interface 912, are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 902 can process instructions for execution within the computing device 900, including instructions stored in the memory 904 or on the storage device 906 to display graphical information for a GUI on an external input/output device, such as a display 916 coupled to the high-speed interface 908. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. In addition, multiple computing devices may be connected, with each device providing portions of the operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). In some implementations, the processor 902 is a single-threaded processor. In some implementations, the processor 902 is a multi-threaded processor. In some implementations, the processor 902 is a quantum computer.
The memory 904 stores information within the computing device 900. In some implementations, the memory 904 is a volatile memory unit or units. In some implementations, the memory 904 is a non-volatile memory unit or units. The memory 904 may also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 906 is capable of providing mass storage for the computing device 900. In some implementations, the storage device 906 may be or include a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid-state memory device, or an array of devices, including devices in a storage area network or other configurations. Instructions can be stored in an information carrier. The instructions, when executed by one or more processing devices (for example, processor 902), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices such as computer- or machine-readable mediums (for example, the memory 904, the storage device 906, or memory on the processor 902). The high-speed interface 908 manages bandwidth-intensive operations for the computing device 900, while the low-speed interface 912 manages lower bandwidth-intensive operations. Such allocation of functions is an example only. In some implementations, the high-speed interface 908 is coupled to the memory 904, the display 916 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 910, which may accept various expansion cards (not shown). In some implementations, the low-speed interface 912 is coupled to the storage device 906 and the low-speed expansion port 914. The low-speed expansion port 914, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet) may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 900 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 920, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer such as a laptop computer 922. It may also be implemented as part of a rack server system 924. Alternatively, components from the computing device 900 may be combined with other components in a mobile device, such as a mobile computing device 950. Each of such devices may include one or more of the computing device 900 and the mobile computing device 950, and an entire system may be made up of multiple computing devices communicating with each other.
The mobile computing device 950 includes a processor 952, a memory 964, an input/output device such as a display 954, a communication interface 966, and a transceiver 968, among other components. The mobile computing device 950 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 952, the memory 964, the display 954, the communication interface 966, and the transceiver 968, are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
The processor 952 can execute instructions within the mobile computing device 950, including instructions stored in the memory 964. The processor 952 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 952 may provide, for example, for coordination of the other components of the mobile computing device 950, such as control of user interfaces, applications run by the mobile computing device 950, and wireless communication by the mobile computing device 950.
The processor 952 may communicate with a user through a control interface 958 and a display interface 956 coupled to the display 954. The display 954 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 956 may include appropriate circuitry for driving the display 954 to present graphical and other information to a user. The control interface 958 may receive commands from a user and convert them for submission to the processor 952. In addition, an external interface 962 may provide communication with the processor 952, so as to enable near area communication of the mobile computing device 950 with other devices. The external interface 962 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.
The memory 964 stores information within the mobile computing device 950. The memory 964 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 974 may also be provided and connected to the mobile computing device 950 through an expansion interface 972, which may include, for example, a SIMM (Single In Line Memory Module) card interface. The expansion memory 974 may provide extra storage space for the mobile computing device 950, or may also store applications or other information for the mobile computing device 950. Specifically, the expansion memory 974 may include instructions to carry out or supplement the processes described above, and may include secure information also. Thus, for example, the expansion memory 974 may be provided as a security module for the mobile computing device 950, and may be programmed with instructions that permit secure use of the mobile computing device 950. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.
The memory may include, for example, flash memory and/or NVRAM memory (nonvolatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier such that the instructions, when executed by one or more processing devices (for example, processor 952), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums (for example, the memory 964, the expansion memory 974, or memory on the processor 952). In some implementations, the instructions can be received in a propagated signal, for example, over the transceiver 968 or the external interface 962.
The mobile computing device 950 may communicate wirelessly through the communication interface 966, which may include digital signal processing circuitry in some cases. The communication interface 966 may provide for communications under various modes or protocols, such as GSM (Global System for Mobile communications) voice calls, SMS (Short Message Service), EMS (Enhanced Messaging Service), MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, GPRS (General Packet Radio Service), LTE, or 5G/6G cellular, among others. Such communication may occur, for example, through the transceiver 968 using a radio frequency. In addition, short-range communication may occur, such as using a Bluetooth, Wi-Fi, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 970 may provide additional navigation- and location-related wireless data to the mobile computing device 950, which may be used as appropriate by applications running on the mobile computing device 950.
The mobile computing device 950 may also communicate audibly using an audio codec 960, which may receive spoken information from a user and convert it to usable digital information. The audio codec 960 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 950. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, among others) and may also include sound generated by applications operating on the mobile computing device 950.
The mobile computing device 950 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 980. It may also be implemented as part of a smart-phone 982, personal digital assistant, or other similar mobile device.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. For example, various forms of the flows shown above may be used, with steps re-ordered, added, or removed.
Embodiments of the invention and all of the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the invention can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a tablet computer, a mobile telephone, a personal digital assistant (PDA), a mobile audio player, a Global Positioning System (GPS) receiver, to name just a few. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the invention can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Embodiments of the invention can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the invention, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the invention have been described. Other embodiments are within the scope of the following claims. For example, the steps recited in the claims can be performed in a different order and still achieve desirable results.
This application claims the benefit of U.S. Provisional Patent Application No. 63/341,852, filed May 13, 2022, and U.S. Provisional Patent Application No. 63/465,354, filed May 10, 2023, the contents of which are incorporated herein by reference.
This invention was made with government support under Agreement HR00112190100 awarded by the U.S. Defense Advanced Research Projects Agency (DARPA). The government has certain rights in the invention.
Number | Date | Country
---|---|---
63341852 | May 2022 | US
63465354 | May 2023 | US