This description relates to systems and methods for transmitting wireless signals for electronic communications and, in particular, to increasing the data rate of, and reducing the computational complexity of, wireless communications.
In multiple access communications, multiple user devices transmit signals over a given communications channel to a receiver. These signals are superimposed, forming a combined signal that propagates over that channel. The receiver then performs a separation operation on the combined signal to recover one or more individual signals from the combined signal. For example, each user device may be a cell phone belonging to a different user and the receiver may be a cell tower. By separating signals transmitted by different user devices, the different user devices may share the same communications channel without interference.
A transmitter may transmit different symbols by varying a state of a carrier or subcarrier, such as by varying an amplitude, phase, and/or frequency of the carrier. Each symbol may represent one or more bits. These symbols can each be mapped to a discrete value in the complex plane, producing Quadrature Amplitude Modulation, or each symbol can be assigned to a discrete frequency, producing Frequency Shift Keying. The symbols are then sampled at the Nyquist rate, which is at least twice the symbol transmission rate. The resulting signal is converted to analog through a digital-to-analog converter, and then translated up to the carrier frequency for transmission. When different user devices send symbols at the same time over the communications channel, the sine waves represented by those symbols are superimposed to form a combined signal that is received at the receiver.
A method for achieving high data rates includes generating a set of symbols based on an incoming data vector. The set of symbols includes K symbols, K being a positive integer. A first transformation matrix including an equiangular tight frame (ETF) transformation or a nearly equiangular tight frame (NETF) transformation is generated, having dimensions N×K, where N is a positive integer and has a value less than K. A second transformation matrix having dimensions K×K is generated based on the first transformation matrix. A third transformation matrix having dimensions K×K is generated by performing a series of unitary transformations on the second transformation matrix. A first data vector is transformed into a second data vector having a length N based on the third transformation matrix and the set of symbols. A signal representing the second data vector is sent to a transmitter for transmission of a signal representing the second data vector to a receiver.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
Some known approaches to wireless signal communication include orthogonal frequency-division multiplexing (OFDM), which is a method of encoding digital data on multiple carrier frequencies. OFDM methods have been adapted to permit signal communications that cope with conditions of communication channels such as attenuation, interference, and frequency-selective fading. As the number of transmitters within a communication system (such as an OFDM system) increases, however, bandwidth can become overloaded, causing transmission speeds to suffer. A need therefore exists for improved systems, apparatuses and methods that facilitate the “fast” (i.e., high-speed) wireless communication of signals for a given number of subcarriers of a communications channel (e.g., within an OFDM system).
Some embodiments set forth herein address the foregoing challenges by generating an equiangular tight frame (ETF) or a nearly equiangular tight frame (NETF) and converting/transforming the ETF or NETF into a unitarily equivalent and sufficiently sparse matrix, such that it can be applied (e.g., as a code for transforming an input vector and/or symbols to be transmitted) in a high-speed manner. As used herein, an ETF refers to a sequence or set of unit vectors whose pair-wise inner products achieve equality in the Welch bound, and thus yields an optimal packing in a projective space. The ETF can include more unit vectors than there are dimensions of the projective space, and the unit vectors can be selected such that they are as close to mutually orthogonal as possible (i.e., such that the maximum of the magnitude of all the pair-wise dot-products is as small as possible). An NETF is an approximation of an ETF, and can be used in place of an ETF where an ETF does not exist, or where the ETF cannot be constructed exactly, for a given pair of dimensions. Systems and methods set forth herein facilitate the application of fast ETFs and fast NETFs by transforming the ETFs/NETFs into unitarily equivalent forms having sparse decompositions, which can therefore be applied in a high-speed manner.
In some embodiments, instead of an N×N unitary matrix, an N×K (where K>N) ETF or NETF is used, such that K symbols are accommodated by N subcarriers. Using an N×K ETF or NETF can produce multi-user interference, but in a very predictable and controllable way that can be effectively mitigated. Since applying an N×K matrix to a length-K vector can involve O(KN) multiplications (i.e., a high computational complexity), embodiments set forth herein include transforming the ETF or NETF into a sufficiently sparse unitary matrix that can be applied in a high-speed manner, with a computational complexity of, for example, O(N).
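As a point of reference only, the following is a minimal NumPy sketch of the dense baseline described above; the random unit-norm frame and the symbol block are illustrative stand-ins (not the patent's construction), and the single matrix-vector product is the O(KN) step that the sparse decomposition described below is designed to replace.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 96                       # K symbols carried on N < K subcarriers

# Stand-in for an N x K ETF/NETF: random unit-norm columns. A real ETF/NETF
# would be constructed as described in the sections that follow.
A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
A /= np.linalg.norm(A, axis=0)

symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=K)  # a K-symbol block

# Dense baseline: one O(N*K) matrix-vector product per block, producing the
# length-N vector that is mapped onto the N subcarriers.
tx_vector = A @ symbols
```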
In some embodiments, once an ETF/NETF has been defined, the ETF/NETF is “completed” to form a unitary matrix. As defined herein, “completion” refers to the addition of (K−N) rows to the N×K ETF/NETF so that the resulting K×K matrix is unitary. A series of unitary transformations can then be performed on the completed matrix to render it sparse, but still a unitarily equivalent ETF/NETF. In other words, the result is a unitary matrix, with N of the K columns forming a sparse ETF/NETF (also referred to herein as a fast ETF or a fast NETF). Because this ETF/NETF is sparse, it can be decomposed into a small number of operations. In some such implementations, a block of K symbols is received at or introduced into the system, and acted on using the fast ETF/NETF, resulting in a length-N vector. A fast N×N unitary transformation (e.g., as shown and discussed below with reference to
In some embodiments, a block of K symbols is received at or introduced into the system, and is first acted on by a nonlinear transformation (e.g., as shown and described in U.S. patent application Ser. No. 16/459,254, filed Jul. 1, 2019 and titled “Communication System and Method using Orthogonal Frequency Division Multiplexing (OFDM) with Non-Linear Transformation,” the entire contents of which are herein incorporated by reference in their entirety for all purposes), followed by the fast ETF/NETF to generate a length N vector, followed by a fast N×N unitary transformation (e.g., as shown and discussed below with reference to
Some embodiments set forth herein facilitate an increase to the data rate of digital communications, without increasing the bandwidth, the modulation, or the number of subcarriers used, using one or more ETFs and/or NETFs. An example method includes receiving or generating a data vector of length K>N, such that the incoming data vector is
where
The application of the N×K NETF A will have a computational complexity of O(NK). To reduce the computational complexity, however, a large class of fast NETFs can be identified. For example, a matrix A ∈ ℂN×K can be identified such that the following are true:
The diagonal elements of the example matrix (0.02) are exactly 1, and the off-diagonal elements have magnitude μ, which is equal to the Welch bound.
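For reference, the Welch bound for K unit vectors in an N-dimensional complex space (the quantity μ referred to above) is:

```latex
\mu = \sqrt{\frac{K - N}{N\,(K - 1)}}
```

For the (N, K)=(3, 4) example worked later in this description, μ=1/3.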
The rows of the example matrix (0.02) can be normalized so that AA† ∝ IN (where IN is the N×N identity matrix). In other words, a matrix may be identified that is both an NETF and a matrix that can be applied “fast” (i.e., with a computational complexity no worse than O(N log N) or O(K log K)).
Completing NETFs
In some embodiments, given an NETF, rows can be added to the NETF to complete it, thereby transforming it into a unitary matrix. In other words, given the NETF A ∈ ℂN×K, it is desirable to find a B ∈ ℂ(K−N)×K so that the completed matrix Ã is defined as follows, with the N×K “A” matrix stacked on top of the (K−N)×K “B” matrix to form a larger, combined matrix:
Consider again the matrix A†A (0.02, above). The eigenvalues of the matrix A†A are
with multiplicity N and 0 with multiplicity K−N. So, consider the matrix
If
where
To find this matrix, the eigen-decomposition of B†B is first computed:
It follows, then, that:
B† ≡ U√(D′),  (0.0.9)
where D′ is D truncated to only include the columns with non-zero eigenvalues. Now, the matrix Ã†Ã in (0.0.4) satisfies:
As such, redefining
it can be observed that à is the unitary completion of the NETF A.
Making the Completed NETF Fast
In some embodiments, a fast version of the unitary completion of the NETF, Ã, can be identified such that the columns in the first N rows still form an NETF. For example, taking any K×K unitary matrix whose first N rows form an NETF in its columns (in other words, the first N components of each column form an NETF), a unitary transform of the form U⊕W can be performed, where U is any element of U(N) and W is any element of U(K−N); the result has the property that its first N rows still form a column NETF.
For example, given a completed NETF of the form
where A is the NETF, the completed NETF can be acted on with a unitary of the form
where U is an N×N unitary matrix and W is a (K−N)×(K−N) unitary matrix. The resulting matrix
will still be unitary and will have the property that taking only the first N rows yields the N×K matrix UA; then, because
A†A→A†U†UA=A†A, (0.0.15)
(cf (0.0.2)), the NETF property of the first N rows has been preserved.
As such, given an NETF A and completing it to produce Ã, any desired transformation can be applied, using unitary matrices that act, for example, only on the first N rows, and the result will still include an NETF in the first N rows.
Next, observe that, given any vector
where a, b ∈ ℂ, a matrix can be constructed of the form:
This matrix will be unitary, and acts on
Using matrices of the form of (0.0.17), components of the initial matrix à can be zeroed out one-by-one, until it has the following form:
Matrix (0.0.19) is a matrix in which, in the first N rows, the nth row begins with (n−1) leading zeros, and everything else is allowed to remain non-zero. Generating the matrix (0.0.19) will include on the order of ½N(N−1) unitary operations, for example corresponding to a computational complexity of O(N²). Since the generation of the matrix (0.0.19) is work that can be performed only once (e.g., as a pre-processing step), a polynomial computational complexity is acceptable. Notice that, since only unitary matrices were used in acting on the first N rows, the columns of the first N rows still form an NETF. This NETF can, in turn, be used as a “fast” or high-speed NETF.
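For illustration, a minimal sketch of the elementary zeroing unitary of the form (0.0.17) and of its embedding into a K×K identity is shown below (the helper names are hypothetical); the 2×2 block sends (a, b) to (√(|a|²+|b|²), 0), which is how components of Ã are zeroed out one-by-one.

```python
import numpy as np

def zeroing_u2(a, b):
    """Return a 2x2 unitary G with G @ [a, b] = [r, 0], r = sqrt(|a|^2 + |b|^2)."""
    r = np.hypot(abs(a), abs(b))
    if r == 0:
        return np.eye(2, dtype=complex)
    return np.array([[np.conj(a), np.conj(b)],
                     [-b,          a        ]], dtype=complex) / r

def embed(G, i, j, K):
    """Embed the 2x2 unitary G into a K x K identity, acting on rows/columns i and j."""
    J = np.eye(K, dtype=complex)
    J[np.ix_([i, j], [i, j])] = G
    return J

# Example: zero the (2, 1) entry of a matrix using its (1, 1) entry
# (0-based rows 0 and 1 of column 0).
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
J = embed(zeroing_u2(M[0, 0], M[1, 0]), 0, 1, 4)
assert abs((J @ M)[1, 0]) < 1e-12
```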
Starting with this new (unitary) completed Ã matrix (with the triangle of zeros as shown above), the completed Ã matrix can be acted on with a unitary (of the form (0.0.17)) that zeroes out one of the non-zero components in the last K−N rows. In other words, a matrix J1 ∈ U(K) can be applied to the completed Ã matrix as follows:
This new matrix can then be acted on with another unitary matrix J2 that zeros out another non-zero component in the last K−N rows, as follows:
The foregoing process can proceed until all of the lower off-diagonal elements of Ã are zero. Note that because Ã is unitary, there is no need to separately zero anything above the diagonal, since zeroing the lower off-diagonals will automatically zero the upper off-diagonals. Zeroing all of the lower off-diagonals can involve on the order of N(K−N)+½(K−N)(K−N−1) unitary operations. If Δ≡K−N, this is ΔN+½Δ(Δ−1), which has a computational complexity of O(N).
As a result of the foregoing process, Ã will have been transformed to the identity matrix using J1, J2, . . . , JΔN+½Δ(Δ−1). In other words, if Ã0 is denoted as the unitary matrix with the triangle of zeros as in (0.0.19), the following is obtained:
Therefore,
Since Ã0 is a completed NETF (i.e., the columns of the first N rows form an NETF), and Ã0 can be formed by the product of O(N) unitary matrices, each of which is sparse, a fast NETF has been identified. Note that, at this stage, only O(N) work has been performed (i.e., a computational complexity of O(N)). It is possible to leave additional values of the triangle of zeros in Ã0 non-zero, and include them as Ji matrices, if additional computing resources are available (e.g., if reducing work or computational complexity is not a priority). In some implementations, a block of data of length K enters the system, and as a first step, a fast NETF is applied to reduce the block of data to size N<K with O(N) work, and then a fast unitary is applied, with O(N log N) work, for example to add security features, with an associated complexity of O(N log N+N)=O(N log N).
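The run-time application of the sparse factors can be sketched as follows, assuming the Ji factors from the zeroing process are stored as (i, j, G) triples with G the embedded 2×2 unitary (illustrative names, not the patent's API). Because Ã0 = J1†J2† . . . , the adjoint factors are applied from right to left, at a cost of one 2×2 multiply per factor (O(N) work per block); the first N outputs form the length-N vector, and the subsequent fast N×N unitary (the layered construction discussed below) adds O(N log N).

```python
import numpy as np

def apply_fast_netf(j_factors, x, N):
    """Apply the sparse completed NETF A~0 = J1^H J2^H ... Jm^H to a length-K
    block x, keeping the first N outputs as the length-N transmit vector.

    j_factors: list of (i, j, G) with G a 2x2 unitary acting on components i and j.
    """
    y = np.asarray(x, dtype=complex).copy()
    for i, j, G in reversed(j_factors):          # rightmost factor acts first
        y[[i, j]] = G.conj().T @ y[[i, j]]       # one 2x2 multiply per sparse factor
    return y[:N]
```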
System Overview
Each of the memories 126 and 154, of the one or more signal transmitters 120 and the one or more signal receivers 150, respectively, can store instructions, readable by the associated processing circuitry (124 and 152, respectively) to perform method steps. Alternatively or in addition, instructions and/or data (e.g., symbols 156, transformation matrices 158, transformed symbols 160, data vectors 162, permutations 164, and/or primitive transformation matrices 166) can be stored in storage media 112 and/or 114 (e.g., memory of a remote compute device) and accessible to the signal transmitter(s) 120 and/or signal receiver(s) 150, respectively, for example via wireless or wired communication. For example, one or more signal transmitters 120 can store and/or access instructions to generate a set of symbols that includes K symbols, where K is a positive integer, and to generate a first transformation matrix. The first transformation matrix can include an equiangular tight frame (ETF) transformation or a nearly equiangular tight frame (NETF) transformation, having dimensions N×K, where N is a positive integer and has a value less than K. The one or more signal transmitters 120 can also store and/or access instructions to generate a second transformation matrix based on the first transformation matrix (e.g., by adding K−N rows to the first transformation matrix), the second transformation matrix having dimensions K×K. The one or more signal transmitters 120 can also store and/or access instructions to generate a third transformation matrix by performing a series of unitary transformations on the second transformation matrix (e.g., having a computational complexity or cost of O(N log2 N) arithmetic operations or O(N) arithmetic operations), the third transformation matrix having dimensions K×K, and to produce a set of transformed symbols based on the third transformation matrix and the set of symbols (e.g., by encoding each symbol from the set of symbols based on at least one layer from the set of layers). The second transformation matrix and/or the third transformation matrix can include a unitary transformation. Producing the set of transformed symbols can have a computational complexity of O(N log2 N) arithmetic operations. The one or more signal transmitters 120 can include a set of antenna arrays (not shown) configured to perform Multiple Input Multiple Output (MIMO) operations. In some embodiments, the one or more signal transmitters 120 also stores instructions to decompose the third transformation matrix into a set of layers, each layer from the set of layers including a permutation and a primitive transformation matrix having dimensions M×M, M being a positive integer having a value smaller than N.
Although not shown in
In some embodiments, the sparse transformation matrix of method 300 is a third transformation matrix, and the method 300 also includes generating a first transformation matrix via a processor of a compute device operatively coupled to the transmitter. The first transformation matrix includes an ETF transformation or an NETF transformation, the first transformation matrix having dimensions N×K, where K is a positive integer having a value larger than N. A second transformation matrix can be generated, based on the first transformation matrix, having dimensions K×K. The third transformation matrix can be generated by performing a series of unitary transformations on the second transformation matrix.
Example of Fast ETF Construction
An example algorithm for producing a fast ETF is set forth in this section. Suppose a vector (N, K)=(3, 4), for which the Welch bound is μ=1/3.
Next, suppose that the following NETF is generated based on the vector (N, K) (rounding to two decimal places):
It can be directly confirmed, taking the absolute value of each term, that:
Next, B†B can be constructed as follows:
The Eigen-decomposition of matrix (0.0.29), B†B=UDU†, can then be found, where
Multiplying these to obtain B†:
à can then be constructed, as follows:
The matrix à is unitary. Next, the lower off-diagonal triangle in the upper N×N block can be zeroed out. For example, the (1, 1) component can be used to zero the (2, 1) component. The unit unitary matrix for this is:
The (2, 1) component can be eliminated by acting on Ã with this matrix expanded to the appropriate 4×4 form. In other words,
Next, the (3, 1) component can be removed by constructing the same matrix from the (1, 1) and (3, 1) components. This multiplication is:
Additional matrix operations can be performed, in a similar manner, until arriving at:
The matrix (0.0.37) is the pattern of zeros that will be obtained as part of a pre-processing step. The first three rows of Ã0 will be the 3×4 ETF that will be used.
Next, a sparse decomposition of Ã0 can be constructed. Note that there are three terms to eliminate: the (4, 1) component, the (4, 2) component, and the (4, 3) component. To accomplish this, a unitary is constructed (from the (1, 1) and (4, 1) components):
This results in J1Ã0 being:
Continuing with this process, the (2, 2) component is used to eliminate the (4, 2) component, with the matrix:
Multiplying J1Ã0 with J2 results in:
Next, the (4, 3) component can be eliminated with the (3, 3) component, using:
This gives:
Then, to set the correct phase on the last term, the matrix (0.0.43) can be multiplied by a final term J4:
Such multiplication results in:
such that:
Ã0=J1†J2†J3†J4†, (0.0.46)
and the multiplication (0.0.46) can be performed to confirm that it matches (0.0.37).
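The bookkeeping of this worked example can be checked numerically with the sketch below, which re-uses the zeroing helpers sketched earlier; a random 4×4 unitary stands in for the completed matrix Ã constructed above (the specific numerical entries are not reproduced here). The script forms the matrix with the triangle of zeros as in (0.0.37), builds J1 through J4, and confirms that J1†J2†J3†J4† reproduces Ã0 as in (0.0.46).

```python
import numpy as np

def zeroing_u2(a, b):
    """2x2 unitary G with G @ [a, b] = [r, 0], r = sqrt(|a|^2 + |b|^2)."""
    r = np.hypot(abs(a), abs(b))
    if r == 0:
        return np.eye(2, dtype=complex)
    return np.array([[np.conj(a), np.conj(b)], [-b, a]], dtype=complex) / r

def embed(G, i, j, K):
    """Embed the 2x2 unitary G into a K x K identity at rows/columns i and j."""
    J = np.eye(K, dtype=complex)
    J[np.ix_([i, j], [i, j])] = G
    return J

rng = np.random.default_rng(3)
N, K = 3, 4

# Stand-in for the completed NETF constructed above: any 4x4 unitary exercises
# the same bookkeeping.
Z = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
A_tilde = np.linalg.qr(Z)[0]

# Pre-processing: zero the (2,1), (3,1), (3,2) components (1-based) of the
# upper N x N block to obtain the matrix with the triangle of zeros, A~0.
M = A_tilde.copy()
for col in range(N - 1):
    for row in range(col + 1, N):
        M = embed(zeroing_u2(M[col, col], M[row, col]), col, row, K) @ M
A0 = M

# Sparse decomposition: J1, J2, J3 zero the (4,1), (4,2), (4,3) components,
# and J4 fixes the phase of the (4,4) component.
Js = []
for col in range(N):
    J = embed(zeroing_u2(M[col, col], M[K - 1, col]), col, K - 1, K)
    Js.append(J)
    M = J @ M
J4 = np.eye(K, dtype=complex)
J4[K - 1, K - 1] = np.conj(M[K - 1, K - 1])     # unit-modulus phase correction
Js.append(J4)
M = J4 @ M

assert np.allclose(M, np.eye(K))                # J4 J3 J2 J1 A~0 = I
recon = np.eye(K, dtype=complex)
for J in Js:
    recon = recon @ J.conj().T                  # A~0 = J1^H J2^H J3^H J4^H, (0.0.46)
assert np.allclose(recon, A0)
```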
In an example implementation of the foregoing, and given a vector
Example Fast Unitary Transformations—System and Methods
At 530, a signal representing the plurality of transformed symbols is sent to a plurality of transmitters, which transmits a signal representing the plurality of transformed symbols to a plurality of receivers. The method 500A also includes, at 540, sending a signal representing the arbitrary transformation to a second compute device for transmission of the arbitrary transformation to the plurality of signal receivers prior to transmission of the plurality of transformed symbols, for recovery of the plurality of symbols at the plurality of signal receivers.
At 530, a signal representing the second plurality of transformed symbols is sent to a plurality of transmitters, which transmits a signal representing the second plurality of transformed symbols to a plurality of receivers. The method 500B also includes, at 540, sending a signal representing the arbitrary transformation to a second compute device for transmission of the arbitrary transformation to the plurality of signal receivers prior to transmission of the second plurality of transformed symbols, for recovery of the plurality of symbols at the plurality of signal receivers.
In some embodiments, the plurality of signal receivers includes a plurality of antenna arrays, and the plurality of signal receivers and the plurality of signal transmitters are configured to perform Multiple Input Multiple Output (MIMO) operations. In some embodiments, the arbitrary transformation includes a unitary transformation. In some embodiments, the arbitrary transformation includes one of a Fourier transform, a Walsh transform, a Haar transform, a slant transform, or a Toeplitz transform.
In some embodiments, each primitive transformation matrix from the at least one primitive transformation matrix has a dimension (e.g., a length) with a magnitude of 2, and a number of iterations of the iterative process is log2 N. In some embodiments, any other appropriate lengths can be used for the primitive transformation matrix. For example, the primitive transformation matrix can have a length greater than 2 (e.g., 3, 4, 5, etc.). In some embodiments, the primitive transformation matrix includes a plurality of smaller matrices having diverse dimensions. For example, the primitive transformation matrix can include block-U(m) matrices, where m can be different values within a single layer or between different layers.
The fast matrix operations in the methods 500A and 500B (e.g., 520) can be examined in greater detail with reference to Discrete Fourier Transform (DFT). Without being bound by any particular theory or mode of operation, the DFT of a vector
where
Generally, a DFT involves N² multiplies when carried out using naive matrix multiplication, as illustrated by Equation (18). The roots of unity ωN, however, have a set of symmetries that can reduce the number of multiplications. To this end, the sum in Equation (18) can be separated into even and odd terms, as (assuming for now that N is a multiple of 2):
In addition:
So Bk can be written as:
Now k runs over twice the range of n. But consider the following equation:
As a result, the “second half” of the k values in the N/2 point Fourier transform can be readily computed.
In the DFT, the original sum to obtain Bk involves N multiplications. The above analysis breaks the original sum into two sets of sums, each of which involves N/2 multiplications. Now the sums over n run from 0 to N/2−1, instead of being over the even or odd indices. This allows one to break them apart into even and odd terms again, in exactly the same way as done above (assuming N/2 is also a multiple of 2). This results in four sums, each of which has N/4 terms. If N is a power of 2, the break-down process can continue all the way down to 2-point DFT multiplications.
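The even/odd splitting described above is the basis of the radix-2 fast Fourier transform. A minimal recursive sketch is shown below (it follows NumPy's sign convention for the roots of unity, which may differ from the ωN convention used above):

```python
import numpy as np

def fft_radix2(x):
    """Recursive radix-2 DFT mirroring the even/odd split described above.

    len(x) must be a power of 2; the naive O(N^2) sum becomes O(N log N).
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    even = fft_radix2(x[0::2])                 # N/2-point DFT of even-indexed samples
    odd = fft_radix2(x[1::2])                  # N/2-point DFT of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(N // 2) / N)
    # The "second half" of the k values reuses the same two N/2-point results.
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

# Sanity check against NumPy's FFT:
x = np.random.default_rng(4).standard_normal(16)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```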
The analysis above can be extended beyond the context of the DFT as follows. First, a permutation is performed on the incoming values in a vector to generate a permuted vector. Permutations are usually O(1) operations. Then, a series of U(2) matrix multiplies is performed on the pairs of elements of the permuted vector. The U(2) values in the first column of the DFT example above are all:
The U(2) matrix multiplication can be performed using other matrices as well (other than the one shown in (23)). For example, any matrix A ∈ U(2)⊕U(2)⊕ . . . ⊕U(2) can be used, where ⊕ designates a direct sum, giving this matrix a block diagonal structure.
The combination of one permutation and one series of U(2) matrix multiplications can be regarded as one layer as described herein. The process can continue with additional layers, each of which includes one permutation and multiplications by yet another matrix in U(2)⊕ . . . ⊕U(2). In some embodiments, the layered computations can repeat about log(N) times. In some embodiments, the number of layers can be any other value (e.g., within the available computational power).
The result of the above layered computations includes a matrix of the form:
Alog N Plog N . . . A2 P2 A1 P1
where Ai represents the ith series of matrix multiplications and Pi represents the ith permutation in the ith layer.
Because the permutations and the Ai matrices are all unitary, the inverse can also be readily computed. In the above layered computation, the permutations are computationally free, and the computational cost comes from the multiplications in the Ai matrices. More specifically, the computation includes a total of 2N multiplications in each Ai, and there are log(N) of the Ai matrices. Accordingly, the computation includes a total of 2N*log(N), or O(N*log(N)), operations, which is comparable to the complexity of OFDM.
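A minimal sketch of this layered construction is shown below, with illustrative helper names: each layer applies a permutation followed by a direct sum of U(2) blocks, and log2(N) such layers transform a length-N vector with O(N log N) multiplies.

```python
import numpy as np

def random_layers(N, rng):
    """Build layers (P_i, A_i) for a fast unitary A_L P_L ... A_1 P_1, where each
    A_i is a direct sum of U(2) blocks (block sizes other than 2 work the same way).
    """
    assert N % 2 == 0
    layers = []
    for _ in range(int(np.log2(N))):
        perm = rng.permutation(N)                              # P_i
        blocks = [np.linalg.qr(rng.standard_normal((2, 2))
                               + 1j * rng.standard_normal((2, 2)))[0]
                  for _ in range(N // 2)]                      # U(2) blocks of A_i
        layers.append((perm, blocks))
    return layers

def apply_layers(layers, x):
    """Apply A_L P_L ... A_1 P_1 to x using O(N log N) multiplies."""
    y = np.asarray(x, dtype=complex).copy()
    for perm, blocks in layers:
        y = y[perm]                                            # permutation: no multiplies
        for b, Q in enumerate(blocks):
            y[2 * b:2 * b + 2] = Q @ y[2 * b:2 * b + 2]        # one 2x2 multiply per block
    return y

rng = np.random.default_rng(5)
N = 8
layers = random_layers(N, rng)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = apply_layers(layers, x)
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))        # unitary: norm preserved
```

Because each permutation and each U(2) block is unitary, the inverse is obtained by applying the conjugate-transposed blocks and the inverse permutations in reverse order.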
The layered computation can be applied with any other block-U(m) matrices. For example, the Ai matrix can be Ai=U(3)⊕ . . . ⊕U(3) or Ai=U(4)⊕ . . . ⊕U(4). Any other value of m can also be used. In addition, any combination of permutations and block-U(m) matrices can also be used in this layered computation.
In some embodiments, the permutation and the block-U(m) transformation within one layer can be performed in a non-consecutive manner. For example, after the permutation, any other operations can be performed next before the block-U(m) transformation. In some embodiments, a permutation is not followed by another permutation because permutations are a closed subgroup of the unitary group. In some embodiments, a block-U(m) transformation is not followed by another block-U(m) transformation because they also form a closed subgroup of the unitary group. In other words, denote Bn as a block-U(n) and P as permutation, then operations like PBn″PBnPBn′Bn
The layered approach to construct unitary matrices can also ensure the security of the resulting communication systems. The security of the resulting communication can depend on the size of the matrix space of fast unitary matrices compared to the full group U(N).
In some embodiments, the transmitters 710 can be substantially identical to the signal transmitter 720 illustrated in
The system 700 also includes a processor 730 operably coupled to the signal transmitters 710. In some embodiments, the processor 730 includes a single processor. In some embodiments, the processor 730 includes a group of processors. In some embodiments, the processor 730 can be included in one or more of the transmitters 710. In some embodiments, the processor 730 can be separate from the transmitters 710. For example, the processor 730 can be included in a compute device configured to process the incoming data 701 and then direct the transmitters 710 to transmit signals representing the incoming data 701.
The processor 730 is configured to generate a plurality of symbols based on an incoming data 701 and decompose a unitary transformation matrix of size N×N into a set of layers, where N is a positive integer. Each layer includes a permutation and at least one primitive transformation matrix of size M×M, where M is a positive integer smaller than or equal to N.
The processor 730 is also configured to encode each symbol from the plurality of symbols using at least one layer from the set of layers to produce a plurality of transformed symbols. A signal representing the plurality of transformed symbols is then sent to the plurality of transmitters 710 for transmission to the plurality of signal receivers 720. In some embodiments, each transmitter in the transmitters 710 can communicate with any receiver in the receivers 720.
In some embodiments, the processor 730 is further configured to send a signal representing one of: (1) the unitary transformation matrix, or (2) an inverse of the unitary transformation matrix, to the receivers 720, prior to transmission of the signal representing the transformed symbols to the signal receivers 720. This signal can be used by the signal receivers 720 to recover the symbols generated from the incoming data 701. In some embodiments, the unitary transformation matrix can be used for symbol recovery. In some embodiments, the recovery can be achieved by using the inverse of the unitary transformation matrix.
In some embodiments, the fast unitary transformation matrix includes one of a Fourier matrix, a Walsh matrix, a Haar matrix, a slant matrix, or a Toeplitz matrix. In some embodiments, the primitive transformation matrix has a dimension (e.g., a length) with a magnitude of 2 and the set of layers includes log2 N layers. In some embodiments, any other length can be used as described above. In some embodiments, the signal receivers 720 are configured to transmit a signal representing the plurality of transformed symbols to a target device.
As discussed above, in some embodiments, ETFs (a more general category of transformation than an orthogonal matrix) are used for symbol transformation prior to transmission. An ETF is a set of K unit vectors in N<K dimensions for which every pair-wise dot product has an absolute value equal to the “Welch bound.” The Welch bound is a lower limit on the largest absolute value of the pair-wise dot products of any such set of vectors. In other words, given N dimensions, if more than N unit vectors are used, it is not possible for them all to be orthogonal to each other. An ETF (or an NETF) is a selection of K>N unit vectors in N dimensions that are as close as possible to all being mutually orthogonal. NETFs exist in all dimensions, for any number of vectors.
Performing Direct Sequence Spread Spectrum (DSSS) and/or Code Division Multiple Access (CDMA) with ETFs or NETFs can disrupt perfect orthogonality, which can in turn introduce multi-user, or inter-code, interference. ETF or NETF-based DSSS/CDMA does, however, provide a parameter that engineers can use to design a system. For example, in a system with multiple users, where users are only transmitting a certain percentage of the time, multi-user interference can be significantly reduced. Under such a circumstance, the system designer can add users without an associated compromise to performance. The system designer can also design in other forms of redundancy (e.g., forward error correction (FEC)), to make transmissions more tolerant of interference, or to reduce the impact of interference on signal integrity.
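As a small illustration of spreading with a non-orthogonal frame and partially active users, the sketch below uses a random unit-norm frame as a stand-in for an ETF/NETF (the constructions described above would be used in practice); each active user's symbol is spread by one column, the channel superimposes the spread signals, and despreading by correlation recovers the symbol plus a residual multi-user interference term whose individual contributions are bounded by the pair-wise dot products (the Welch bound, for an ETF).

```python
import numpy as np

rng = np.random.default_rng(6)
N, K = 32, 48                                   # 48 users sharing 32 chips

# Stand-in spreading codes: random unit-norm columns (a real ETF/NETF built as
# described above would minimize the worst-case pair-wise correlation).
A = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
A /= np.linalg.norm(A, axis=0)

active = rng.random(K) < 0.5                    # only some users transmit at a time
symbols = np.where(active, rng.choice([-1.0, 1.0], size=K), 0.0)

combined = A @ symbols                          # superimposed signal on the channel
despread = A.conj().T @ combined                # user j correlates with its own code

user = int(np.flatnonzero(active)[0])
residual = despread[user] - symbols[user]       # multi-user interference term
print(abs(residual))                            # each cross term is bounded by the
                                                # pair-wise dot products of the codes
```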
Code maps set forth herein, in some embodiments, facilitate the construction of viable spreading codes within the complex space ℂN. Building ETFs or NETFs based on pseudo-noise (PN) codes (e.g., PN codes used in known DSSS/CDMA systems) can be challenging; by building codes in ℂN, as set forth herein, however, a large diversity of numerical methods within complex linear algebra becomes available for use. In some implementations, a known algorithm is used to construct ETFs and NETFs in ℂN, and then a code map is applied such that, while the result may still be an ETF or NETF, it also constitutes a valid set of spreading codes.
Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device (computer-readable medium, a non-transitory computer-readable storage medium, a tangible computer-readable storage medium, see for example, storage media 112 and 114 in
Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the processing of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in special purpose logic circuitry.
To provide for interaction with a user, implementations may be implemented on a computer having a display device, e.g., a liquid crystal display (LCD or LED) monitor, a touchscreen display, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the implementations. It should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The implementations described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different implementations described.
This application is a continuation of U.S. patent application Ser. No. 16/560,447, filed Sep. 4, 2019, and titled “Communication System and Method for Achieving High Data Rates Using Modified Nearly-Equiangular Tight Frame (NETF) Matrices,” the disclosure of which is incorporated by reference herein in its entirety for all purposes. This application is related to U.S. Pat. No. 10,020,839, issued on Jul. 10, 2018 and titled “RELIABLE ORTHOGONAL SPREADING CODES IN WIRELESS COMMUNICATIONS;” to U.S. patent application Ser. No. 16/459,262, filed on Jul. 1, 2019 and titled “COMMUNICATION SYSTEM AND METHOD USING LAYERED CONSTRUCTION OF ARBITRARY UNITARY MATRICES;” to U.S. patent application Ser. No. 16/459,245, filed on Jul. 1, 2019 and titled “Systems, Methods and Apparatus for Secure and Efficient Wireless Communication of Signals using a Generalized Approach within Unitary Braid Division Multiplexing;” and to U.S. patent application Ser. No. 16/459,254, filed on Jul. 1, 2019 and titled “Communication System and Method using Orthogonal Frequency Division Multiplexing (OFDM) with Non-Linear Transformation;” the disclosures of each of which are incorporated by reference herein in their entireties for all purposes.
The United States Government holds a nonexclusive, irrevocable, royalty-free license in the invention with power to grant licenses for all United States Government purposes.
Number | Name | Date | Kind |
---|---|---|---|
5237587 | Schoolcraft | Aug 1993 | A |
5345599 | Paulraj et al. | Sep 1994 | A |
5555268 | Fattouche et al. | Sep 1996 | A |
6389138 | Li et al. | May 2002 | B1 |
7376173 | Yedidia et al. | May 2008 | B2 |
7454084 | Faber et al. | Nov 2008 | B2 |
8737509 | Yu et al. | May 2014 | B2 |
9648444 | Agee | May 2017 | B2 |
10020839 | Robinson et al. | Jul 2018 | B2 |
10491262 | Robinson et al. | Nov 2019 | B2 |
10637705 | Shattil | Apr 2020 | B1 |
10735062 | Robinson | Aug 2020 | B1 |
10771128 | Sitaram et al. | Sep 2020 | B1 |
10819387 | Robinson et al. | Oct 2020 | B2 |
10833749 | Robinson | Nov 2020 | B1 |
10873361 | Robinson | Dec 2020 | B2 |
10917148 | Robinson | Feb 2021 | B2 |
10965352 | Robinson | Mar 2021 | B1 |
11018715 | Robinson et al. | May 2021 | B2 |
11025470 | Robinson | Jun 2021 | B2 |
11050604 | Robinson | Jun 2021 | B2 |
20020009209 | Inoue et al. | Jan 2002 | A1 |
20030185309 | Pautler et al. | Oct 2003 | A1 |
20030210750 | Onggosanusi et al. | Nov 2003 | A1 |
20040059547 | Aftelak | Mar 2004 | A1 |
20040105489 | Kim et al. | Jun 2004 | A1 |
20040253986 | Hochwald et al. | Dec 2004 | A1 |
20060109897 | Guo et al. | May 2006 | A1 |
20060274825 | Ciofi et al. | Dec 2006 | A1 |
20070091999 | Nissan-Cohen et al. | Apr 2007 | A1 |
20070098063 | Reznic et al. | May 2007 | A1 |
20070115797 | Reznic et al. | May 2007 | A1 |
20070281746 | Takano et al. | Dec 2007 | A1 |
20090046801 | Pan et al. | Feb 2009 | A1 |
20090316802 | Tong et al. | Dec 2009 | A1 |
20100119001 | Walton et al. | May 2010 | A1 |
20100150266 | Mondal et al. | Jun 2010 | A1 |
20100202553 | Kotecha et al. | Aug 2010 | A1 |
20100246656 | Hammerschmidt | Sep 2010 | A1 |
20100329393 | Higuchi | Dec 2010 | A1 |
20110134902 | Ko et al. | Jun 2011 | A1 |
20110235728 | Karabinis | Sep 2011 | A1 |
20120093090 | Han et al. | Apr 2012 | A1 |
20120236817 | Chen et al. | Sep 2012 | A1 |
20120257664 | Yue et al. | Oct 2012 | A1 |
20120307937 | Higuchi | Dec 2012 | A1 |
20130064315 | Heath, Jr. et al. | Mar 2013 | A1 |
20130100965 | Ohmi et al. | Apr 2013 | A1 |
20130223548 | Kang et al. | Aug 2013 | A1 |
20140056332 | Soualle et al. | Feb 2014 | A1 |
20150003500 | Kesling et al. | Jan 2015 | A1 |
20150049713 | Lan et al. | Feb 2015 | A1 |
20150171982 | Wang et al. | Jun 2015 | A1 |
20150304130 | Logothetis et al. | Oct 2015 | A1 |
20160309396 | Chai et al. | Oct 2016 | A1 |
20160337156 | Milleth et al. | Nov 2016 | A1 |
20170180020 | Namgoong et al. | Jun 2017 | A1 |
20170237545 | Khandani | Aug 2017 | A1 |
20170288902 | Rusek et al. | Oct 2017 | A1 |
20170294946 | Wang et al. | Oct 2017 | A1 |
20170302415 | Park et al. | Oct 2017 | A1 |
20170331539 | Pham et al. | Nov 2017 | A1 |
20190075091 | Shattil et al. | Mar 2019 | A1 |
20190097694 | Jeon et al. | Mar 2019 | A1 |
20190115960 | Jiang et al. | Apr 2019 | A1 |
20190158206 | Li et al. | May 2019 | A1 |
20190215222 | Cheng et al. | Jul 2019 | A1 |
20190260444 | Hauzner et al. | Aug 2019 | A1 |
20190268035 | Robinson et al. | Aug 2019 | A1 |
20190280719 | Yu | Sep 2019 | A1 |
20190349042 | Ramireddy et al. | Nov 2019 | A1 |
20190349045 | Varatharaajan et al. | Nov 2019 | A1 |
20190379430 | Pekoz et al. | Dec 2019 | A1 |
20200007204 | Jeon et al. | Jan 2020 | A1 |
20200014407 | Smith et al. | Jan 2020 | A1 |
20200366333 | Robinson | Nov 2020 | A1 |
20210006288 | Robinson et al. | Jan 2021 | A1 |
20210006303 | Robinson | Jan 2021 | A1 |
20210006317 | Robinson | Jan 2021 | A1 |
20210006446 | Robinson | Jan 2021 | A1 |
20210006451 | Robinson | Jan 2021 | A1 |
20210036901 | Robinson | Feb 2021 | A1 |
20210159950 | Robinson | May 2021 | A1 |
20210184899 | Robinson | Jun 2021 | A1 |
Number | Date | Country |
---|---|---|
1813435 | Aug 2006 | CN |
101179539 | May 2008 | CN |
101795257 | Aug 2010 | CN |
103634065 | Mar 2014 | CN |
103716111 | Apr 2014 | CN |
1826915 | Aug 2007 | EP |
1883168 | Jan 2008 | EP |
3211812 | Aug 2017 | EP |
2003-198500 | Jul 2003 | JP |
2013-162293 | Aug 2013 | JP |
10-2010-0131373 | Dec 2010 | KR |
10-2013-0118525 | Oct 2013 | KR |
WO 2006025382 | Mar 2006 | WO |
WO 2008024773 | Feb 2008 | WO |
WO 2008098225 | Aug 2008 | WO |
Entry |
---|
International Search Report and Written Opinion for PCT Application No. PCT/US2017/061489, dated Feb. 26, 2018, 8 pages. |
Wu et al., “Practical Physical Layer Security Schemes for MIMO-OFDM Systems Using Precoding Matrix Indices,” IEEE Journal on Selected Areas in Communications, Sep. 2013, vol. 31, Issue 9, pp. 1687-1700. |
Ericsson, “Signature design for NoMA,” 3GPP TSG-RAN WG1 Meeting #93, Tdoc R1-1806241, Busan, South Korea, May 21-25, 2018, pp. 1-5. |
International Search Report and Written Opinion for International Application No. PCT/US2020/039879, dated Oct. 9, 2020, 10 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/039606, dated Nov. 25, 2020, 18 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/043686, dated Dec. 3, 2020, 24 pages. |
Invitation to Pay Additional Fees for International Application No. PCT/US2020/039606 dated Sep. 21, 2020, 12 pages. |
Invitation to Pay Additional Fees for International Application No. PCT/US2020/043686 dated Oct. 7, 2020, 16 pages. |
Invitation to Pay Additional Fees for International Application No. PCT/US2020/049031 dated Nov. 11, 2020, 13 pages. |
Ma et al., “Secure Communication in TDS-OFDM System Using Constellation Rotation and Noise Insertion,” IEEE Transactions on Consumer Electronics, Aug. 2010, vol. 56, No. 3, pp. 1328-1332. |
Non-Final Office Action for U.S. Appl. No. 16/459,254 dated Nov. 5, 2020, 9 pages. |
Huang et al., “Multi-dimensional encryption scheme based on physical layer for fading channel,” IET Communications, Oct. 2018, vol. 12, Issue 19, pp. 2470-2477. |
Huo and Gong, “A New Efficient Physical Layer OFDM Encryption Scheme,” IEEE INFOCOM 2014, IEEE Conference on Computer Communications, pp. 1024-1032. |
International Search Report and Written Opinion for International Application No. PCT/US2020/040393, dated Sep. 3, 2020, 12 pages. |
Liu et al., “Piecewise Chaotic Permutation Method for Physical Layer Security in OFDM-PON,” IEEE Photonics Technology Letters, Nov. 2016, vol. 28, No. 21, pp. 2359-2362. |
Huawei, “Initial Performance Evaluation of Multi-user MIMO Scheme for E-UTRA Downlink,” 3GPP TSG RAN WG1, R1-051094, San Diego, USA, Oct. 10-14, 2005, 7 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/051927, dated Jan. 15, 2021, 19 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2020/049031, dated Jan. 18, 2021, 20 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2021/017043, dated May 14, 2021, 12 pages. |
Sung et al., “M-PSK Codebook Based Clustered MIMO-OFDM SDMA with Efficient Codebook Search,” IEEE, 2012, 5 pages. |
Tseng et al., “Hardware Implementation of the Joint Precoding Sub-System and MIMO Detection Preprocessor in IEEE 802.11n/ac WLAN,” Journal of Signal Processing Systems (2019) 91:875-884. |
Xie et al., “Geometric Mean Decomposition Based Hybrid Precoding for Millimeter-Wave Massive MIMO,” China Communications, May 2018, vol. 15, Issue 5, pp. 229-238. |
Zhang et al., “A Novel Precoder Design for MIMO Multicasting with MMSE-DFE Receivers,” IEEE, 2014, 5 pages. |
Zheng et al., “Performance Comparison on the Different Codebook for SU/MU MIMO,” IEEE 802.16 Broadband Wireless Access Working Group, IEEE 0802.16m-08/1183r1, Sep. 12, 2008, pp. 1-9. |
Strohmer et al., “Grassmannian frames with applications to coding and communication,” Appl. Comput. Harmon. Anal. (2003)14: 257-275. |
Tsiligianni et al., “Construction of Incoherent Unit Norm Tight Frames With Application to Compressed Sensing,” IEEE Transactions on Information Theory, Apr. 2014, vol. 60, No. 4. pp. 2319-2330. |
Number | Date | Country | |
---|---|---|---|
20210067211 A1 | Mar 2021 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16560447 | Sep 2019 | US |
Child | 16909175 | US |