The present disclosure relates generally to communication systems. More particularly, the present disclosure relates to implementing systems and methods for controlling communications based on machine learned information.
Wireless communication systems exist today, and are used in various applications. Some of these wireless communication systems comprise cognitive radios. A cognitive radio is a radio that is configured to dynamically use wireless channels to avoid user interference and congestion. During operations, the cognitive radio detects which of the wireless channels is (are) available, and then configures its parameters to facilitate use of the available communication channel(s).
The present disclosure concerns implementing systems and methods for operating a quantum processor. The methods comprise: training one or more quantum neural networks using modulation class data to make decisions as to a modulation classification for a signal based on one or more feature inputs for the signal; obtaining, by the quantum processor, principal components of real and imaginary components of a signal received by a communication device; and performing first quantum neural network operations by the quantum processor using the principal components as inputs to the trained one or more quantum neural networks to generate a plurality of scores, each said score representing a likelihood that the received signal was modulated using a given modulation type of a plurality of different modulation types.
The present disclosure also concerns a quantum circuit, comprising: one or more quantum neural networks trained using modulation class data to make decisions as to a modulation classification for a signal based on one or more feature inputs for the signal; and a quantum processor. The quantum processor is configured to: obtain principal components of real and imaginary components of a signal; and perform first quantum neural network operations using the principal components as inputs to the one or more quantum neural networks to generate a plurality of scores. Each score represents a likelihood that the signal was modulated using a given modulation type of a plurality of different modulation types.
The present disclosure further concerns a communication device comprising: a wireless communications circuit configured to receive a signal; and a quantum circuit. The quantum circuit comprises: one or more quantum neural networks trained using modulation class data to make decisions as to a modulation classification for the signal based on one or more feature inputs for the signal; and a quantum processor. The quantum processor is configured to: obtain principal components of real and imaginary components of the signal; and perform first quantum neural network operations using the principal components as inputs to the one or more quantum neural networks to generate a plurality of scores. Each score represents a likelihood that the signal was modulated using a given modulation type of a plurality of different modulation types.
The present solution will be described with reference to the following drawing figures, in which like numerals represent like items throughout the figures.
It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure, but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The present solution may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the present solution is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present solution should be or are in any single embodiment of the present solution. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment of the present solution. Thus, discussions of the features and advantages, and similar language, throughout the specification may, but do not necessarily, refer to the same embodiment.
Furthermore, the described features, advantages and characteristics of the present solution may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the present solution can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the present solution.
Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment of the present solution. Thus, the phrases “in one embodiment”, “in an embodiment”, and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
As used in this document, the singular form “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise. Unless defined otherwise, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art. As used in this document, the term “comprising” means “including, but not limited to”.
Signal recognition and classification has been accomplished using feature-based, expert-system-driven (i.e., non-machine-learning) techniques. The feature-based, expert-system-driven techniques are relatively computationally slow and costly to implement. The present solution provides an alternative solution for signal recognition and classification that overcomes the drawbacks of conventional feature-based, expert-system-driven solutions.
The present solution employs a machine-learning based approach which provides a faster, less-expensive path to adding new signals to a list of recognized signals, offers better recognition performance at lower Signal-to-Noise Ratios (SNRs), and recognizes signals and sources thereof faster and with an improved accuracy as compared to that of the conventional feature-based, expert-system-driven solutions. In some scenarios, the machine-learning based approach uses expert systems and/or deep learning to facilitate signal recognition and classification. The deep learning can be implemented by one or more neural networks (e.g., Residual Neural Network(s) (ResNet(s)), Convolutional Neural Network(s) (CNN(s)), and/or quantum neural network(s)). The individual neural networks may be comprised of numerous stacked layers of various modalities to provide a plurality of layers of convolution. The neural networks are trained to automatically recognize and determine modulation types of received wireless communication signals from digitally sampled and processed data. This training can be achieved, for example, using a dataset that includes information for signals having SNRs from −20 dB to +30 dB and being modulated in accordance with a plurality of analog and/or digital modulation schemes (e.g., phase shift keying, and/or amplitude shift keying).
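To make the deep-learning branch concrete, the following is a minimal sketch of a small one-dimensional convolutional classifier operating on raw I/Q sample frames. The frame length, layer sizes, and all identifiers are illustrative assumptions rather than the architecture actually described herein; only the two-channel (real/imaginary) input and the per-modulation-class output scores follow the description above.

```python
# Minimal sketch (assumptions: 1024-sample I/Q frames, 24 modulation classes,
# layer sizes chosen for illustration only).
import torch
import torch.nn as nn

class ModulationClassifier(nn.Module):
    def __init__(self, num_classes: int = 24, frame_len: int = 1024):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 64, kernel_size=7, padding=3),   # 2 channels: I and Q
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (frame_len // 4), 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                  # one score per modulation class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 2, frame_len) real/imaginary sample streams
        return self.head(self.features(x))

model = ModulationClassifier()
scores = model(torch.randn(8, 2, 1024))                   # (8, 24) raw class scores
probs = torch.softmax(scores, dim=1)                      # per-class confidence scores
```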
The machine-learning based approach of the present solution can be implemented to provide a cognitive, automated system to optimize signal classification analyses by modulation choices from data with various SNRs (e.g., SNRs from −20 dB to +30 dB). Subsystems can include, but are not limited to, a tuner, a digitizer, a classifier, and/or an optimizer. Fast recognition and labeling of Radio Frequency (RF) signals in the vicinity is a needed function for Signal Intelligence (SIGINT) devices, spectrum interference monitoring, dynamic spectrum access, and/or mesh networking.
Artificial intelligence (AI) algorithms and/or game theoretic analysis may be used to help solve the problem of signal classification through supervised classification. The game theory analysis provides a flexible framework to model strategies for improved decision-making optimization. Classification strategies may be based on different supervised gradient descent learning algorithms and/or different neural network structures. Individual trainable networks are the potential decisions of a one-sided, single-player game vs. nature, and the action is to choose the goodness-of-fit weighting of the composite network for optimal decision making. A novel, game-theoretic perspective has been derived for solving the problem of supervised classification that takes the best signal modulation prediction derived from supervised classification models. In this one-player game, the system determines the modulation type by choosing the best network model from a plurality of network models at a given time. Within this formulation, a reward matrix (weighted or non-weighted) is used for consistent classification factors that results in higher accuracy and precision compared to using individual machine learning models alone.
The reward matrix comprises an M×C matrix, where M is the number of machine learned models and C is the number of modulation classes. The reward matrix holds goodness-of-fit-predicted class scores or responses, arranged according to the number of signals and modulation classes. These goodness-of-fit-predicted class scores are used in a linear program to optimally choose which machine learned model to use per signal. For example, a machine learned model can be selected in accordance with a linear program of the general form

optimize fᵀx subject to Ax ≤ b,

where x represents a decision variable, A represents coefficients in a reward matrix, b represents coefficients which satisfy the constraints, and f represents a linear objective function of constants.
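As a classical illustration of that selection step, the sketch below solves a small linear program with SciPy. The maximin formulation and the reward values are assumptions chosen to be consistent with the variable definitions above; they are not the disclosure's exact equation or data.

```python
# Hypothetical sketch: pick one of M machine learned models per signal from an
# M x C reward matrix of goodness-of-fit class scores (values are made up).
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.81, 0.10, 0.05],      # model 1 scores for C = 3 modulation classes
              [0.60, 0.25, 0.10],      # model 2
              [0.40, 0.35, 0.20]])     # model 3
M, C = A.shape

# Decision variable x[i] = weight on model i.  Maximize the minimum expected
# reward v over all classes:  max v  s.t.  (A^T x)_j >= v,  sum(x) = 1,  x >= 0.
# linprog minimizes, so minimize -v with variables z = [x_0..x_{M-1}, v].
c = np.concatenate([np.zeros(M), [-1.0]])
A_ub = np.hstack([-A.T, np.ones((C, 1))])          # v - (A^T x)_j <= 0 for each class j
b_ub = np.zeros(C)
A_eq = np.concatenate([np.ones(M), [0.0]]).reshape(1, -1)
bounds = [(0, None)] * M + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds, method="highs")
x, v = res.x[:M], res.x[M]
print("model weights:", np.round(x, 3), "guaranteed value:", round(v, 3))
print("selected model:", int(np.argmax(x)))
```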
The communication devices can include, but are not limited to, cognitive radios configured to recognize and classify radio signals by modulation type at various Signal-to-Noise Ratios (SNRs). Each of the cognitive radios comprises a cognitive sensor employing machine learning algorithms. In some scenarios, the machine learning algorithms include neural networks which are trained and tested using an extensive representative dataset consisting of 24 or more digital and analog modulations. The neural networks learn from the time domain amplitude and phase information of the modulation schemes present in the training dataset. The machine learning algorithms facilitate making preliminary estimations of modulation types and/or signal sources based on machine learned signal feature/characteristic sets. Linear programming optimization may be used to determine the best modulation classification based on prediction scores output from one or more machine learning algorithms.
A communication device can significantly reduce the potentially massive memory storage requirements needed for training data by using compressive sensing techniques, such as but not limited to the Discrete Wavelet Transform (DWT). Data compression techniques such as the DWT transform signals into sparse coefficient sets, and many of the coefficients may be discarded without negatively affecting performance of the communication device. Through this discarding of coefficients, the storage requirements become significantly less than what would have been required for the original signal data.
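The following is a brief sketch of that compression idea using the PyWavelets package; the wavelet family, decomposition level, and retention threshold are illustrative assumptions.

```python
# Illustrative sketch: compress a sampled signal with the DWT by discarding
# small coefficients (wavelet choice and threshold are assumptions).
import numpy as np
import pywt

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4096)
signal = np.cos(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)

coeffs = pywt.wavedec(signal, "db4", level=5)              # multilevel DWT
flat = np.concatenate(coeffs)
threshold = np.quantile(np.abs(flat), 0.90)                # keep roughly the 10% largest coefficients
kept = [pywt.threshold(c, threshold, mode="hard") for c in coeffs]

reconstructed = pywt.waverec(kept, "db4")[: signal.size]
kept_fraction = sum(np.count_nonzero(c) for c in kept) / flat.size
print(f"retained {kept_fraction:.1%} of coefficients, "
      f"relative reconstruction error {np.linalg.norm(signal - reconstructed) / np.linalg.norm(signal):.3f}")
```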
Referring now to
A user 220 of communication device 202 is a primary user of wireless channel 206. As such, the user 220 has first rights to communicate information over wireless channel 206 via communication device 202 for a given amount of time (e.g., X microseconds, where X is an integer). User 220 has licensed use of the wireless channel 206 to another user 222. User 222 constitutes a secondary user. Accordingly, user 222 is able to use the wireless channel 206 to communicate information to/from communication device 204 during the time in which the wireless channel is not being used by the primary user 220 for wireless communications. Detection of the primary user by the secondary user is critical to the cognitive radio environment. The present solution provides a novel solution for making such detections by secondary users in a shorter amount of time as compared to conventional solutions. The novel solution will become evident as the discussion progresses.
During operations, the communication device 204 monitors communications on wireless channel 206 to sense spectrum availability, i.e., to determine an availability of the wireless channel. The wireless channel 206 is available when the primary user 220 is not transmitting and has not transmitted a signal thereover for a given amount of time. The wireless channel 206 is unavailable when the primary user 220 is transmitting a signal thereover or has transmitted a signal thereover within a given amount of time. When a determination is made that the wireless channel 206 is unavailable, the communication device 204 performs operations to transition to another wireless channel 210. This channel transition may be achieved by changing an operational mode of the communication device 204 and/or by changing channel parameter(s) of the communication device 204. The communication device 204 may transition back to wireless channel 206 when a determination is made that the primary user 220 is no longer using the same for communications.
Referring now to
As shown by 310, the communication device continues to monitor the first wireless channel(s). The communication device continues to use the wireless channel(s) for a given period of time or until the primary user once again starts using the same for communications, as shown by 312-314. When the communication device detects that the primary user is once again using the wireless channel(s) [312: YES], then the communication device stops transmitting signals on the first wireless channel(s) as shown by 316. Subsequently, 318 is performed where method 300 ends or other operations are performed (e.g., return to 304).
A detailed block diagram of an illustrative architecture for a communication device is provided in
Communication device 400 implements a machine learning algorithm to facilitate determinations as to an available/unavailable state of wireless channel(s) (e.g., wireless channel 206 of
The communication device 400 can be implemented as hardware, software and/or a combination of hardware and software. The hardware includes, but is not limited to, one or more electronic circuits. The electronic circuits can include, but are not limited to, passive components (e.g., resistors and capacitors) and/or active components (e.g., amplifiers and/or microprocessors). The passive and/or active components can be adapted to, arranged to and/or programmed to perform one or more of the methodologies, procedures, or functions described herein.
As shown in
The cognitive sensor 402 is generally configured to determine a source of a signal transmitted over a wireless channel. This determination is made via a signal source classification application 422 using information 424. Information 424 comprises outputs generated by one or more machine learning algorithms. The machine learning algorithm(s) can employ supervised machine learning. Supervised machine learning algorithms are well known in the art. In some scenarios, the machine learning algorithm(s) include(s), but is (are) not limited to, a deep learning algorithm (e.g., a Residual Neural Network (ResNet), a Convolutional Neural Network (CNN), and/or a Recurrent Neural Network (RNN) (e.g., a Long Short-Term Memory (LSTM) neural network)). The machine learning process implemented by the present solution can be built using Commercial-Off-The-Shelf (COTS) tools and/or a commercially available Field Programmable Gate Array (FPGA) (e.g., a Xilinx Virtex-7 FPGA).
Each machine learning algorithm is provided one or more feature inputs for a received signal, and makes a decision as to a modulation classification for the received signal. In some scenarios, the machine learning algorithms include neural networks that produce outputs h_i in accordance with mathematical equations of the general form

h_i = L(w_1·x_1 + w_2·x_2 + . . . + w_i·x_i),

where L( ) represents a likelihood ratio function, w_1, w_2, . . . , w_i each represent a weight, and x_1, x_2, . . . , x_i represent signal features and/or the raw data. The signal features can include, but are not limited to, a center frequency, a change in frequency over time, a phase, a change in phase over time, an amplitude, an average amplitude over time, a data rate, and a wavelength. The output h_i includes a set of confidence scores. Each confidence score indicates a likelihood that a signal was modulated using a respective type of modulation. The set of confidence scores is stored in any format selected in accordance with a given application.
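For illustration only, the sketch below turns a handful of such signal features into a normalized set of per-modulation confidence scores. The feature values, the weight matrix, and the softmax-style normalization are assumptions, not trained parameters from the disclosure.

```python
# Hypothetical sketch: turn signal features into per-modulation confidence scores.
# The feature values, weight matrix, and softmax normalization are illustrative
# assumptions, not the disclosure's trained parameters.
import numpy as np

modulations = ["BPSK", "QPSK", "8PSK", "16QAM"]
features = np.array([2.4e9, 1.2e6, 0.7, 0.3])           # e.g., center freq, data rate, phase, amplitude
features = (features - features.mean()) / features.std()

rng = np.random.default_rng(1)
W = rng.normal(size=(len(modulations), features.size))   # one weight vector per modulation class

h = W @ features                                         # weighted feature sums
scores = np.exp(h) / np.exp(h).sum()                     # normalized confidence scores
print(dict(zip(modulations, np.round(scores, 3))))
```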
For example, as shown in table 500 of
The cognitive sensor 402 then performs operations to either (i) select the modulation class associated with the highest confidence score or (ii) select one of the modulation classes for the signal based on results of an optimization algorithm. The optimization algorithm can include, but is not limited to, a game theory based optimization algorithm. The game theory based optimization algorithm will be discussed in detail below.
Once the modulation class has been decided for the received signal, the cognitive sensor 402 then makes a decision as to whether the source of the signal was a primary user of the wireless spectrum. This decision is made based on the modulation class, a bit rate and/or a center frequency of the signal. For example, a decision is made that the primary user is the source of a signal when the signal comprises a 10 MHz-wide BPSK signal with a center frequency of 2.4 GHz. The present solution is not limited in this regard.
At least some of the hardware entities 414 perform actions involving access to and use of memory 412, which can be a Random Access Memory (RAM) and/or a disk drive. Hardware entities 414 can include a disk drive unit 416 comprising a computer-readable storage medium 418 on which is stored one or more sets of instructions 420 (e.g., software code) configured to implement one or more of the methodologies, procedures, or functions described herein. The instructions 420 can also reside, completely or at least partially, within the memory 412, within the cognitive sensor 402, and/or within the processor 406 during execution thereof by the communication device 400. The memory 412, cognitive sensor 402 and/or the processor 406 also can constitute machine-readable media. The term “machine-readable media”, as used here, refers to a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions 420. The term “machine-readable media”, as used here, also refers to any medium that is capable of storing, encoding or carrying a set of instructions 420 for execution by the processor 406 and that cause the processor 406 to perform any one or more of the methodologies of the present disclosure.
Referring now to
The signal classifier 612 uses one or more machine learning algorithms to (i) detect the radio signal y(t) in the presence of noise and (ii) make a decision 614 as to the modulation classification that should be assigned thereto. For example, as shown in
Once the modulation classification has been decided for the signal, the signal classifier 612 performs further operations to determine whether the primary user is the source of the signal. These operations can involve: obtaining the modulation class assigned to the signal; obtaining a bit rate and center frequency for the radio signal; and comparing the modulation class, bit rate and center frequency to pre-stored source information to determine whether a match exists therebetween by a certain amount (e.g., the bit rates match by at least 70% and/or the center frequencies match by at least 50%). If a match exists, then the signal classifier 612 decides that the primary user is the source of the radio signal and is using the band B_L. Otherwise, the signal classifier 612 decides that someone other than the primary user is the source of the radio signal and is trying to encroach on the band B_L. If the radio signal y(t) is detected and the primary user is using the band B_L, then a decision is made that the band B_L is unavailable. Otherwise, a decision is made that the band B_L is available.
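A classical sketch of that matching logic is shown below. The similarity metric and the stored source record are assumptions; the 70%/50% thresholds and the modulation-class comparison follow the example above.

```python
# Sketch of the primary-user check described above.  The similarity metric and
# the stored source record layout are assumptions; the 70% / 50% thresholds and
# the modulation-class comparison follow the description.
from dataclasses import dataclass

@dataclass
class SourceRecord:
    modulation: str
    bit_rate_bps: float
    center_freq_hz: float

PRIMARY_USER = SourceRecord("BPSK", 10e6, 2.4e9)

def similarity(measured: float, stored: float) -> float:
    return 1.0 - abs(measured - stored) / max(measured, stored)

def is_primary_user(modulation: str, bit_rate_bps: float, center_freq_hz: float) -> bool:
    return (modulation == PRIMARY_USER.modulation
            and similarity(bit_rate_bps, PRIMARY_USER.bit_rate_bps) >= 0.70
            and similarity(center_freq_hz, PRIMARY_USER.center_freq_hz) >= 0.50)

# Band B_L is unavailable only if the detected signal appears to come from the primary user.
band_available = not is_primary_user("BPSK", 9.8e6, 2.41e9)
print("band available:", band_available)
```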
An illustrative architecture for a neural network implementing the present solution is provided in
Another illustrative architecture for a neural network implementing the present solution is provided in
The present solution is not limited to the neural network architectures shown in
As noted above, the modulation class may be selected based on results of a game theory analysis of the machine learned models. The following discussion explains an illustrative game theory optimization algorithm.
Typical optimization of a reward matrix in a one-sided, “game against nature” with a goal of determining the highest minimum gain is performed using linear programming techniques. In most cases, an optimal result is obtained, but occasionally one or more constraints eliminate possible feasible solutions. In this case, a more brute-force subset summing approach can be used. Subset summing computes the optimal solution by determining the highest gain decision after iteratively considering all subsets of the possible decision alternatives.
A game theory analysis can be understood by considering an exemplary tactical game. Values for the tactical game are presented in the following TABLE 1, which is described in a document entitled “Updating Optimal Decisions Using Game Theory and Exploring Risk Behavior Through Response Surface Methodology” written by J. D. Jordan. The unitless values range from −5 to 5, and indicate the reward received for performing a given action in a particular scenario. The actions for the player correspond to the rows in TABLE 1, while the potential scenarios correspond to the columns in TABLE 1. For example, the action of firing a mortar at an enemy truck yields a positive reward of 4, but firing a mortar on a civilian truck yields a negative reward of −4, i.e., a loss. The solution can be calculated from a linear program, with the results indicating that the best choice for the player is to advance rather than fire mortar or do nothing. In examples with very large reward matrices, the enhancement technique of subset summing may also be applied. Since there are four scenarios in this example (enemy truck, civilian truck, enemy tank, or friendly tank), there are 2^4 = 16 subsets of the four scenarios. One of these subsets considers none of the scenarios, which is impractical. So in practice, there are always 2^P − 1 subsets, where P is the number of columns (available scenarios) in a reward matrix. TABLE 1 is reproduced from Jordan, J. D. (2007), Updating Optimal Decisions Using Game Theory and Exploring Risk Behavior Through Response Surface Methodology.
The goal of linear programming is to maximize an objective function over a set constrained by linear inequalities, as expressed by the following mathematical equations (3)-(9).
where z represents the value of the game or the objective function, v represents the value of the constraints, w_1 represents the optimal probability solution for the choice ‘Fire Mortar’, w_2 represents the optimal probability solution for the choice ‘Advance’, w_3 represents the optimal probability solution for the choice ‘Do Nothing’, and i represents the index of the decision choice. Using a simplex algorithm to solve the linear program yields the mixed strategy {0.2857, 0.7143, 0}. To maximize minimum gain, the player should fire a mortar approximately 29% of the time, advance 71% of the time, and do nothing none of the time.
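The maximin computation can be sketched classically as follows. Because TABLE 1 is not reproduced here in full, the reward matrix below uses placeholder values (only the mortar-versus-truck entries follow the text), so the computed strategy will match {0.2857, 0.7143, 0} only when the actual TABLE 1 values are substituted.

```python
# Sketch of the one-sided "game against nature" solved as a linear program.
# The reward matrix below is a placeholder; substitute the TABLE 1 values to
# reproduce the mixed strategy discussed in the text.
import numpy as np
from scipy.optimize import linprog

# rows: Fire Mortar, Advance, Do Nothing; columns: the four scenarios
R = np.array([[ 4, -4,  5, -5],
              [ 2,  1,  3, -1],
              [ 0,  0,  0,  0]], dtype=float)          # illustrative values only
n_actions, n_scenarios = R.shape

# Variables z = [w_1..w_n, v]; maximize v s.t. (R^T w)_j >= v, sum(w) = 1, w >= 0.
c = np.concatenate([np.zeros(n_actions), [-1.0]])
A_ub = np.hstack([-R.T, np.ones((n_scenarios, 1))])
b_ub = np.zeros(n_scenarios)
A_eq = np.concatenate([np.ones(n_actions), [0.0]]).reshape(1, -1)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n_actions + [(None, None)], method="highs")
w, v = res.x[:n_actions], res.x[n_actions]
print("mixed strategy:", np.round(w, 4), "value of the game:", round(v, 3))
```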
In scenarios with very large reward matrices, the optional technique of subset summing may be applied. The subset summing algorithm reduces a constrained optimization problem to solving a series of simpler, reduced-dimension constrained optimization problems. Specifically, for a reward matrix consisting of P scenarios (columns), a set of 2^P − 1 new reward matrices are created by incorporating unique subsets of the scenarios. To illustrate the generation of the subsets to be considered, the following mathematical equation (10) shows an example of constraints from the example of TABLE 1 where each row in the equation corresponds to a row in the reward matrix A. Each new reduced reward matrix is formed by multiplying A element-wise by a binary matrix. Each of the 2^P − 1 binary matrices has a unique set of columns which are all-zero. The element-wise multiplication serves to mask out specific scenarios, leaving only specific combinations, or subsets, of the original scenarios to be considered. This operation increases the run time, but may be a necessary trade-off for improved accuracy. This method also ensures that the correct answer is found by computing the proper objective function. If, for example, A represents a reward matrix, then the solution for computing all combinations of rows is:
One reason for running all combinations of decisions, 2^P − 1, where P is the number of columns in a reward matrix, is that one or more constraints eliminate(s) possible feasible solutions, as shown in
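A classical sketch of the subset summing enhancement is given below. The element-wise masking of column subsets follows the description above; the maximin linear program and the reward values are the same illustrative assumptions used in the earlier sketch.

```python
# Classical sketch of subset summing: evaluate all 2^P - 1 non-empty column
# subsets of the reward matrix, solve the maximin LP on each masked matrix,
# and keep the best decision.  Reward values are illustrative placeholders.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def maximin(R: np.ndarray):
    n, p = R.shape
    c = np.concatenate([np.zeros(n), [-1.0]])
    A_ub = np.hstack([-R.T, np.ones((p, 1))])
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(p), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * n + [(None, None)], method="highs")
    return res.x[:n], res.x[n]

R = np.array([[4.0, -4.0, 5.0, -5.0],
              [2.0,  1.0, 3.0, -1.0],
              [0.0,  0.0, 0.0,  0.0]])
P = R.shape[1]

best = None
for k in range(1, P + 1):
    for cols in combinations(range(P), k):          # each non-empty column subset
        mask = np.zeros(P)
        mask[list(cols)] = 1.0
        strategy, value = maximin(R * mask)         # element-wise masking of scenarios
        if best is None or value > best[0]:
            best = (value, cols, strategy)

print(f"best value {best[0]:.3f} from scenario subset {best[1]}, strategy {np.round(best[2], 3)}")
```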
The above TABLE 1 can be modified in accordance with the present solution. For example, as shown in
The reward matrix illustrated by table 1100 can be constructed and solved using a linear program. For example, an interior-point algorithm can be employed. A primal standard form can be used to calculate optimal tasks and characteristics in accordance with the following mathematical equation (11).
Referring now to
Next in 1310, a determination is made by the communication device as to whether a given wireless channel (e.g., wireless channel 206 or 210 of
The communication device performs operations in 1312 to selectively use the given wireless channel for communicating signals based on results of the determination made in 1310. For example, the given wireless channel is used by a secondary user (e.g., user 222 of
The communication devices can include, but are not limited to, cognitive radios configured to recognize and classify radio signals by modulation type at various SNRs. Each of the cognitive radios comprises a cognitive sensor employing machine learning algorithms. In some scenarios, the machine learning algorithms include quantum neural networks which are trained and tested using an extensive representative dataset consisting of 24 or more digital and analog modulations. The quantum neural networks learn from the time domain amplitude and phase information of the modulation schemes present in the training dataset. The quantum machine learning algorithms facilitate making preliminary estimations of modulation types and/or signal sources based on machine learned signal feature/characteristic sets. Linear programming optimization may be used to determine the best modulation classification based on prediction scores output from one or more machine learning algorithms. The communication device can significantly reduce the potentially massive memory storage requirements needed for training data by using compressive sensing techniques, such as but not limited to the DWT.
The quantum neural network layers constructed and built in blocks 1402, 1404 are trained in block 1406 to make decisions as to a modulation classification for a received signal based on one or more feature inputs for the received signal. The quantum neural network layers can be trained using modulation class data.
An illustrative quantum neural network 1500 is shown in the figures. The kets |0⟩ and |1⟩ are the basis vectors for the two-dimensional complex vector space of single qubits.
In the quantum neural network layers, a unitary operator is applied on a qubit by a sequence of rotations on a Bloch sphere model of a qubit. The qubit is first rotated about the Z axis in accordance with a function ƒ(x) as shown by block 1510, followed by rotations about the Y axis in accordance with a function g1(θ) and a function g2(θ) as shown by blocks 1512, 1514. Multiple g(θ) layers are used for parameter angle encoding. The first rotation about the Z axis handles the initial data, and the rotations about the Y axis are iteratively updated by the system. The initial data comprises a signal vector 1502 including principal components of the real and imaginary components of a received signal. Principal component analysis and principal components are well known. Any known or to be known principal component analysis can be used here to obtain the initial data. A prediction vector 1504 is output from block 1514 as a result of the qubit rotations. The prediction vector 1504 comprises a classification of one of a plurality of classification types.
The prediction vector 1504 is passed to block 1516 where deep learning optimization operations are performed by a plurality of solvers to minimize the function g(θ). The solvers can include, but are not limited to, Adam, Stochastic Gradient Descent, and/or RMSprop. The minimized function g(θ) is passed to block 1518 in which cost function operations are performed to facilitate updating of the Y axis rotation angle θ. The cost function may be configured to minimize parameters over a dataset. The cost function can include, but is not limited to, a cross-entropy cost function which measures the difference between the predicted value and the target value.
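The single-qubit layer and its training loop can be sketched with PennyLane as follows. The initial Hadamard (needed so the Z-axis data encoding affects the measurement), the binary toy labels, and the mapping from the Pauli-Z expectation to a probability are assumptions added to make the sketch runnable; the RZ data rotation, the two trainable RY rotations, the Adam solver, and the cross-entropy cost follow the description above.

```python
# Minimal sketch of the single-qubit layer described above, using PennyLane.
# The Hadamard state preparation, toy data, and probability mapping are
# illustrative assumptions; the disclosure's actual encodings may differ.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def layer(x, theta):
    qml.Hadamard(wires=0)       # assumed state preparation so the Z encoding is observable
    qml.RZ(x, wires=0)          # f(x): encode a principal component as a Z rotation
    qml.RY(theta[0], wires=0)   # g1(theta): trainable Y rotation
    qml.RY(theta[1], wires=0)   # g2(theta): trainable Y rotation
    return qml.expval(qml.PauliZ(0))

def predict(x, theta):
    return (1.0 - layer(x, theta)) / 2.0          # map <Z> in [-1, 1] to a probability

def cross_entropy(theta, xs, ys):
    eps = 1e-9
    loss = 0.0
    for x, y in zip(xs, ys):
        p = predict(x, theta)
        loss = loss - (y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return loss / len(xs)

xs = np.array([0.1, 0.4, 2.6, 3.0], requires_grad=False)   # toy principal-component inputs
ys = np.array([0.0, 0.0, 1.0, 1.0], requires_grad=False)   # toy class labels
theta = np.array([0.01, 0.01], requires_grad=True)

opt = qml.AdamOptimizer(stepsize=0.1)             # Adam; SGD and RMSProp optimizers also exist
for _ in range(100):
    theta = opt.step(lambda t: cross_entropy(t, xs, ys), theta)
print("trained angles:", theta)
print("predictions:", [round(float(predict(x, theta)), 2) for x in xs])
```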
The quantum subset summing approximation may be implemented using a quantum processing circuit 1700. In this regard, the quantum processing circuit 1700 comprises registers 1702, 1704, 1708, 1712, 1714, a quantum adder circuit 1710, a quantum comparison circuit 1716, a quantum amplitude amplification circuit 1718, and a max likelihood estimation circuit 1720. The quantum adder circuit 1710 is generally configured to assemble complex data sets for comparison and processing. The quantum comparison circuit 1716 is generally configured to implement conditional statements in quantum computation.
The quantum processing circuit 1700 is configured to perform subset summing of a reward matrix in true quantum superposition. Table 1750 of
In order to subset sum the reward matrix, the quantum processing circuit 1700 is configured to compute a row sum of all possible subset matrices using different combinations of n qubits stored in the column subset register 1702 and m qubits stored in the row subset register 1704. The m qubits stored in the row subset register 1704 represent goodness-of-fit metrics for each of the solvers used to assign modulation classes to signals (e.g., Adam, Stochastic Gradient Descent, and/or RMSprop). The n qubits stored in the column subset register 1702 represent goodness-of-fit metrics and probabilities that are returned from the algorithm that helps select a modulation class. The information stored in register 1702 includes the parameters based on modulation classes. So, for example, if there are three solvers solving the problem as to which of the twenty-four modulation classes should be assigned to a received signal, then there would be twenty-four rows (one corresponding to each modulation class) and three columns (one corresponding to each solver). The number of subset combinations of rows and columns scales exponentially with the number of rows and columns in the reward matrix. However, if this is done in quantum superposition, the system matches the exponential scaling of the state space of a set of qubits in superposition to the exponentially scaling subset summing problem.
Each of the registers 1702 and 1704 is configured so that a zero value indicates that a row or column is not included in a given subset, and a one value indicates that the row or column is included in the given subset. Each of the quantum logic gates 1706 is configured to convert an initial zero state into an equal superposition of zero and one. If this is done with all row and column register qubits, then the system generates an equal superposition over all possible bitstrings of zeros and ones, i.e., over all possible subsets.
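A Qiskit sketch of this superposition-preparation step is shown below; the register sizes are illustrative assumptions. Applying one Hadamard gate per register qubit converts the all-zero state into an equal superposition over every possible row/column selection bitstring.

```python
# Sketch (using Qiskit) of preparing the equal superposition over all row/column
# subsets: one Hadamard per register qubit maps |0...0> to a uniform superposition
# over every bitstring, i.e., every possible subset selection.  Register sizes are
# illustrative assumptions.
from qiskit import QuantumCircuit, QuantumRegister

n_columns, n_rows = 3, 4                      # e.g., 3 solvers and 4 modulation classes
col_subset = QuantumRegister(n_columns, "col")
row_subset = QuantumRegister(n_rows, "row")
qc = QuantumCircuit(col_subset, row_subset)

qc.h(col_subset)                              # equal superposition over column subsets
qc.h(row_subset)                              # equal superposition over row subsets
# Subsequent multi-controlled adder operations would then accumulate, in
# superposition, the row sums for every subset selection at once.
print(qc.draw(output="text"))
```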
The reward matrix data is loaded into a reward matrix register 1708. Next, multi-controlled adder operations are performed by the quantum adder circuit 1710 to add up only those reward matrix register elements corresponding to columns and rows that are in the one state. Since the qubits are in superposition, the quantum adder circuit 1710 is simultaneously performing the adder operation for all exponentially many subset combinations. The value of each row sum is output to a subset sum register 1712. Each subset has a plurality of rows associated therewith, and the quantum adder circuit 1710 outputs the sum of each row within each subset.
The quantum comparison circuit 1716 is configured to compare the row sums for each subset to identify the highest row sum. The qubit (corresponding to the highest row sum) in a row action register 1714 is flipped to a one to indicate that this is the row with the highest sum in a given subset. Each time a qubit is assigned a one value its probability amplitude in an overall superposition increases.
Next, amplitude amplification operations are performed with a feedback loop of a maximum likelihood estimation circuit 1720. This is performed to amplify the highest of the probability amplitudes to arbitrarily close to one and to de-amplify the remaining probability amplitudes to close to zero, so that after a plurality of rounds a certain accuracy is ensured.
The quantum processing circuit 1700 makes a decision as to which machine learned model of a plurality of machine learned models to use per signal based on qubit values in the row action register. For example, the quantum processing circuit 1700 may determine that a Stochastic Gradient Descent based solver is to be used for a first signal, and/or an RMSprop based solver is to be used for a second signal.
This comparison is performed to determine whether the qubit string a_n is greater than, less than, or equal to the qubit string b_n. The comparison operation is achieved using a plurality of quantum subtraction circuits U_s. Each quantum subtraction circuit is configured to subtract a quantum state |a_i⟩ from a quantum state |b_i⟩ via XOR (⊕) operations, and pass the result |b_i−a_i⟩ to a quantum gate circuit E_q. A quantum state for a control bit c is also passed to a next quantum subtraction circuit for use in a next quantum subtraction operation. The last quantum subtraction circuit outputs a decision bit s_1. If the qubit string a_n is greater than the qubit string b_n, then the output bit s_1 is set to a value of 1. If the qubit string a_n is less than the qubit string b_n, then the output bit s_1 is set to a value of 0.
The quantum gate circuit E_q orders the subtraction results and uses the ordered subtraction results |b_0−a_0⟩, |b_1−a_1⟩, . . . , |b_{n−1}−a_{n−1}⟩ to determine whether the qubit string a_n is equal to the qubit string b_n. If so, an output bit s_2 is set to a value of 1. Otherwise, the output bit s_2 is set to a value of 0.
Quantum adder circuit(s) 1900, 2000 comprise a quantum ripple-carry addition circuit configured to compute the sum of two qubit strings a_n and b_n. The quantum ripple-carry addition circuits are shown in
The qubit string a_n can be written as a_n = a_{n−1}, . . . , a_0, where a_0 is the lowest-order bit. Qubit string b_n can be written as b_n = b_{n−1}, . . . , b_0, where b_0 is the lowest-order bit. Qubit string a_n is stored in a memory location A_n, and qubit string b_n is stored in a memory location B_n. C_n represents a carry bit. The MAJ gate writes C_{n+1} into A_n, and continues a computation using C_{n+1}. When done using C_{n+1}, the UMA gate is applied, which restores a_n to A_n, restores c_n to A_{n−1}, and writes s_n to B_n.
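The MAJ/UMA construction can be sketched in Qiskit as below (a Cuccaro-style ripple-carry adder). The register sizes and example operands are illustrative assumptions; after the MAJ blocks ripple the carry up and the UMA blocks unwind it, register B holds the sum bits s_n and register A is restored, as described above.

```python
# Sketch of a ripple-carry (Cuccaro-style) quantum adder built from MAJ and UMA
# blocks, using Qiskit.  Register sizes and input values are illustrative.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

def maj(qc, c, b, a):            # majority: writes the next carry onto wire a
    qc.cx(a, b)
    qc.cx(a, c)
    qc.ccx(c, b, a)

def uma(qc, c, b, a):            # un-majority and add: restores a/c, writes the sum onto b
    qc.ccx(c, b, a)
    qc.cx(a, c)
    qc.cx(c, b)

n = 3
A = QuantumRegister(n, "a")      # holds a_n
B = QuantumRegister(n, "b")      # holds b_n, overwritten with the sum
carry = QuantumRegister(1, "c")  # incoming carry c_0
out = ClassicalRegister(n, "s")
qc = QuantumCircuit(carry, A, B, out)

qc.x(A[0]); qc.x(A[1])           # a = 011 (binary) = 3
qc.x(B[1])                       # b = 010 (binary) = 2

maj(qc, carry[0], B[0], A[0])    # ripple the carry up through the MAJ blocks...
for i in range(1, n):
    maj(qc, A[i - 1], B[i], A[i])
for i in reversed(range(1, n)):  # ...then unwind with UMA blocks, producing the sum bits
    uma(qc, A[i - 1], B[i], A[i])
uma(qc, carry[0], B[0], A[0])

qc.measure(B, out)               # measuring B yields a + b = 5 (mod 2**n) when executed
```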
Both circuits of
Referring now to
Method 2100 begins with 2102 and continues with 2104 where one or more quantum neural networks are trained using modulation class data to make decisions as to a modulation classification for a signal based on one or more feature inputs for the signal. The modulation class data can include, but is not limited to, the data of table 500 shown in
In next block 2106, the quantum processor obtains principal components of real and imaginary components of a signal received by a communication device. Techniques for performing a principal component analysis of real and imaginary numbers are well known. Any known or to be known principal component analysis technique can be used here. The quantum processor performs first quantum neural network operations in block 2108 using the principal components as inputs to the trained quantum neural network(s). The first quantum neural network operations may be performed in accordance with the quantum circuit 1500 shown in
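A classical sketch of the principal-component step in block 2106 is shown below; the frame dimensions and the number of retained components are illustrative assumptions.

```python
# Sketch of the principal-component step: stack the real and imaginary parts of
# the received samples and keep the leading principal components as the quantum
# network's inputs.  Frame length and component count are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
frames = rng.standard_normal((256, 128)) + 1j * rng.standard_normal((256, 128))  # 256 I/Q frames

X = np.hstack([frames.real, frames.imag])       # real and imaginary components as features
pca = PCA(n_components=4)                       # keep a handful of principal components
signal_vector = pca.fit_transform(X)            # inputs for the quantum neural network

print(signal_vector.shape, pca.explained_variance_ratio_.round(3))
```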
In blocks 2110-2114, the quantum processor performs operations to: assign a modulation class to the received signal based on the scores; check a given wireless channel for availability based at least on the modulation class assigned to the signal; and cause the given wireless channel to be used for communicating signals when a determination is made that the given wireless channel is available. The operations of block 2114 may involve: (re) configuring a channel parameter setting of a communication device; and/or controlling communication operations of the communication device such that signals are received and transmitted on the given channel.
In blocks 2116-2120, the quantum processor performs second quantum neural network operations for selecting which machine learned model of a plurality of machine learned models to use for deep learning optimization (e.g., in block 1516 of
In block 2122, reward matrix data is loaded into a reward matrix register (e.g., reward matrix register 1708 of
The second neural network operations may further comprise the operations of blocks 2118-2122 in
As evident from the above discussion, the present solution concerns methods for operating a quantum processor. The methods comprise: training one or more quantum neural networks using modulation class data to make decisions as to a modulation classification for a signal based on one or more feature inputs for the signal; obtaining, by the quantum processor, principal components of real and imaginary components of a signal received by a communication device; performing first quantum neural network operations by the quantum processor using the principal components as inputs to the trained one or more quantum neural networks to generate a plurality of scores; assigning a modulation class to the received signal based on the plurality of scores; determining whether a given wireless channel is available based at least on the modulation class assigned to the signal; and/or causing the given wireless channel to be used for communicating signals when a determination is made that the given wireless channel is available. Each score represents a likelihood that the received signal was modulated using a given modulation type of a plurality of different modulation types.
The methods also comprise performing second quantum neural network operations that comprise computing a row sum of all possible subset matrices using different combinations of qubits stored in a column subset register and qubits stored in a row subset register. The qubits stored in the row subset register represent goodness-of-fit metrics for each of a plurality of solvers configured to facilitate a minimization of a function implemented by a quantum neural network used to facilitate the first quantum neural network operations. The qubits stored in the column subset register include parameters associated with the plurality of different modulation types. An initial zero state of each qubit in the row subset register and the column subset register is converted into an equal superposition of zero and one.
The second neural network operations may also comprise: loading reward matrix data into a reward matrix register; adding only reward matrix register elements corresponding to columns and rows that are in a one state; outputting a value of each row sum to a subset sum register; comparing row sums to identify a highest row sum; assigning a one value for a qubit in a row action register that is associated with the highest row sum; and/or selecting which machine learned model of a plurality of machine learned models to use for deep learning optimization for the signal based on qubit values in the row action register.
The present solution also concerns a quantum circuit. The quantum circuit comprises: one or more quantum neural networks trained using modulation class data to make decisions as to a modulation classification for a signal based on one or more feature inputs for the signal; and a quantum processor. The quantum processor is configured to: obtain principal components of real and imaginary components of a signal; perform first quantum neural network operations using the principal components as inputs to the one or more quantum neural networks to generate a plurality of scores; assign a modulation class to the received signal based on the plurality of scores; determine an availability of a given wireless channel based at least on the modulation class assigned to the signal; and/or cause the given wireless channel to be used for communicating signals when the given wireless channel is available. Each score represents a likelihood that the signal was modulated using a given modulation type of a plurality of different modulation types.
The quantum processor may also be configured to perform second quantum neural network operations that comprise computing a row sum of all possible subset matrices using different combinations of qubits stored in a column subset register and qubits stored in a row subset register. The qubits stored in the row subset register represent goodness-of-fit metrics for each of a plurality of solvers configured to facilitate a minimization of a function implemented by a quantum neural network used to facilitate the first quantum neural network operations. The qubits stored in the column subset register include parameters associated with the plurality of different modulation types.
The second neural network operations may also comprise: converting an initial zero state of each qubit in the row subset register and the column subset register into an equal superposition of zero and one; loading reward matrix data into a reward matrix register; adding only reward matrix register elements corresponding to columns and rows that are in a one state; outputting a value of each row sum to a subset sum register; comparing row sums to identify a highest row sum; assigning a one value for a qubit in a row action register that is associated with the highest row sum; and/or selecting which machine learned model of a plurality of machine learned models to use for deep learning optimization for the signal based on qubit values in the row action register.
The present solution also concerns a communication device comprising: a wireless communications circuit configured to receive a signal; and a quantum circuit. Wireless communications circuits are well known. Any known or to be known wireless communication circuit can be used here. The quantum circuit comprises: one or more quantum neural networks trained using modulation class data to make decisions as to a modulation classification for the signal based on one or more feature inputs for the signal; and a quantum processor. The quantum processor is configured to: obtain principal components of real and imaginary components of the signal; perform first quantum neural network operations using the principal components as inputs to the one or more quantum neural networks to generate a plurality of scores (wherein each said score represents a likelihood that the signal was modulated using a given modulation type of a plurality of different modulation types); and select which machine learned model of a plurality of machine learned models to use for deep learning optimization for the signal based on qubit values in a row action register that were obtained via a quantum subset summing approximation.
Although the present solution has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. In addition, while a particular feature of the present solution may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present solution should not be limited by any of the above described embodiments. Rather, the scope of the present solution should be defined in accordance with the following claims and their equivalents.
The present application is a Continuation-in-Part Application of and claims the benefit of U.S. Non-Provisional patent application Ser. No. 18/462,760, which was filed on Sep. 7, 2023 and is a continuation application that claims priority to and the benefit of U.S. Non-Provisional patent application Ser. No. 17/200,257 which was filed on Mar. 12, 2021. The content of these Non-Provisional Patent Applications is incorporated herein by reference in its entirety.
Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 17/200,257 | Mar 2021 | US
Child | 18/462,760 | | US

Relation | Number | Date | Country
--- | --- | --- | ---
Parent | 18/462,760 | Sep 2023 | US
Child | 18/940,999 | | US