SYSTEM AND METHOD UNIFYING LINEAR AND NONLINEAR PRECODING FOR TRANSCEIVING DATA

Information

  • Patent Application
  • Publication Number
    20190349224
  • Date Filed
    December 28, 2017
  • Date Published
    November 14, 2019
Abstract
A method for precoding an information symbol conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels over a subcarrier frequency, the number of transmitters (N) is different than the number of active receivers (K) for that subcarrier frequency, the method comprising: receiving by said transmitters information pertaining to supportabilities of said receivers to decode non-linearly precoded data; determining a precoding scheme defining for which of said receivers said data to be transmitted by said transmitters shall be precoded using at least one of linear precoding and non-linear precoding, according to said supportabilities; constructing a signal by applying a reversible mapping to said information symbol, said reversible mapping includes elements each respectively associated with a particular one of said receivers, such that those said receivers supporting the decoding of said non-linearly precoded data are capable of reversing said reversible mapping to said information symbol, while for those said receivers not supporting the decoding of said non-linearly precoded data said information symbol is unaffected by said reversible mapping; constructing a precoder characterized by N≠K such that said precoder is configured to perform regularized generalized inversion of a communication channel matrix.
Description
FIELD OF THE DISCLOSED TECHNIQUE

The disclosed technique relates to communication systems and methods in general, and to a system and method for employing linear and nonlinear precoding, in particular.


BACKGROUND OF THE DISCLOSED TECHNIQUE

In multi-user communications where a centralized transmitter transmits data to a plurality of independent (e.g., non-cooperative) receivers (users), the transmitted data may be subject to inter-user noise, known as crosstalk, which interferes with the communication between different communication entities. Attaining an effective means to eliminate or at least partially reduce crosstalk is therefore of high importance. Crosstalk may generally occur in both wireless and wire-line communication systems that utilize linear precoding (LP) and nonlinear precoding (NLP) techniques, and particularly in the Gigabit Internet "G.fast" wire-line standard.


Crosstalk cancellation techniques that employ precoding of data prior to its transmission are known in the art as "vectoring". Crosstalk cancellation typically requires taking power restrictions into account, which often involves hardware-related considerations. Additionally, the number of bits is limited to predefined constellation sizes. The linear precoder may eliminate crosstalk in part or fully by using an inverse of a channel matrix. Linear precoding, however, may typically require equalization of gains introduced by the inversion operation (i.e., the gains must be suppressed to satisfy power restrictions, which in turn causes a diminished bitrate). NLP schemes avoid this problem by use of the Tomlinson-Harashima precoding (THP) scheme, which works through the modulus operation, or by seeking a perturbation vector associated with transmission symbols, thereby reducing power consumption. NLP schemes work seamlessly at the receiver after application of the modulus operation.


Systems and method that combine linear precoding and nonlinear precoding, in general, are known in the art. A World Intellectual Property Organization (WIPO) Patent Cooperation Treaty (PCT) International Publication Number WO 2014/054043 A1 to Verbin et al, to the same present Applicant, entitled “Hybrid Precoder” is directed to a hybrid precoder system and method employing linear precoding and nonlinear precoding to provide far-end crosstalk (FEXT) cancellation that enhances performance and lowers complexity during transmission and reception of data between transmitters and receivers of the communication system. The hybrid precoder system and method employs linear precoding and non-linear precoding for transmitting data between at least two transmitters and a plurality of receivers via a plurality of communication channels over a plurality of subcarrier frequencies. The at least two transmitters are communicatively coupled, respectively, with the plurality of receivers. The hybrid precoder system includes a linear precoder, a non-linear precoder, a controller, and an input selector. The linear precoder is for linearly precoding the data. The non-linear precoder is for non-linearly precoding the data. The controller is coupled with the linear precoder, and with the non-linear precoder. The input selector as well, is coupled with the linear precoder and with the non-linear precoder. The controller at least partly evaluates channel characteristics of at least part of the communication channels. The controller further determines a precoding scheme selection that defines for at least part of the communication channels, over which of the subcarrier frequencies the data to be transmitted shall be precoded using either one of linear precoding and non-linear precoding, according to determined channel characteristics. The input selector selects which of the linear precoded data and the non-linear precoded data is outputted by the hybrid precoder system, according to the precoding scheme selection.


U.S. Patent Application Publication No.: US 2017/0279490 A1 to Maes, entitled "Non-linear Precoding with a Mix of NLP Capable and NLP Non-capable Lines" is directed at a method for achieving crosstalk mitigation in the presence of nonlinear precoding (NLP) non-capable and NLP capable multiple customer premises equipment (CPE). Maes provides a particular solution to the general interoperability problem of using different precoding-capable CPE units, where the number of active CPE units is equal to the total number of CPE units.


SUMMARY OF THE DISCLOSED TECHNIQUE

It is an object of the disclosed technique to provide a method for precoding an information symbol conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels over a subcarrier frequency, where the number of transmitters (N) is different than the number of active receivers (K) for that subcarrier frequency. The method includes the following steps. The method initiates with a step of receiving by the transmitters, information pertaining to supportabilities of the receivers to decode non-linearly precoded data. The method continues with a step of determining a precoding scheme defining for which of the receivers the data to be transmitted by the transmitters shall be precoded using at least one of linear precoding and non-linear precoding, according to the supportabilities. The method continues with a step of constructing a signal by applying a reversible mapping to the information symbol, where the reversible mapping includes elements each respectively associated with a particular one of the receivers, such that those receivers supporting the decoding of non-linearly precoded data are capable of reversing the reversible mapping to the information symbol, while for those receivers not supporting the decoding of non-linearly precoded data the information symbol is unaffected by the reversible mapping. The method continues with a step of constructing a precoder characterized by N≠K such that the precoder is configured to perform regularized generalized inversion of a communication channel matrix.


It is a further object of the disclosed technique to provide a hybrid precoder system for precoding an information symbol conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels over a subcarrier frequency, where the number of transmitters (N) is different than the number of active receivers (K) for that subcarrier frequency. The hybrid precoder system includes a controller and a processor (coupled therebetween). The controller is configured for receiving information pertaining to supportabilities of the receivers to decode non-linearly precoded data, and for determining a precoding scheme defining for which of the receivers the data to be transmitted by the transmitters shall be precoded using at least one of linear precoding and non-linear precoding, according to the supportabilities. The processor is configured for constructing a signal for transmission, according to the determined precoding scheme, by applying a reversible mapping to the information symbol, where the reversible mapping includes elements each respectively associated with a particular one of the receivers, such that those receivers supporting the decoding of non-linearly precoded data are capable of reversing the reversible mapping to the information symbol, while for those receivers not supporting the decoding of non-linearly precoded data the information symbol is unaffected by the reversible mapping. The processor is further configured for constructing a precoder characterized by N≠K such that the precoder is configured to perform regularized generalized inversion of a communication channel matrix.


It is a further object of the disclosed technique to provide a method for nonlinear precoding of an information symbol at a given precoder input. The information symbol is in a symbol space having a given symbol space size. The nonlinear precoding involves modulo arithmetic and has a plurality of inputs. The method includes the following steps. The method includes an initial step of determining a reference symbol space size which is common to all of the inputs. The method continues with the steps of determining a modulus value according to the reference symbol space size, adapting the given symbol space size according to the reference symbol space size, and nonlinearly precoding the information symbol according to the modulus value, common to all of the inputs.


It is a further object of the disclosed technique to provide a system for nonlinear precoding of an information symbol at a given precoder input, where the information symbol is in a symbol space having a given symbol space size. The nonlinear precoding involves modulo arithmetic and has a plurality of inputs. The system includes a controller and a processor (coupled therebetween). The controller is configured for determining a reference symbol space size that is common to all of the inputs, and for determining a modulus value according to the reference symbol space size. The processor is configured for adapting the given symbol space size according to the reference symbol space size, and for nonlinearly precoding the information symbol according to the modulus value, common to all of the inputs.


It is another object of the disclosed technique to provide a method for nonlinear precoding of an information symbol conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels defining a channel matrix H over a particular subcarrier frequency. The method includes the steps of determining a weighting matrix G, whose number of rows is equal to the number of transmitters; then determining a modified channel matrix equal to HG; and constructing a nonlinear precoder for performing nonlinear precoding of the modified channel matrix.


It is a further object of the disclosed technique to provide a system for nonlinear precoding of an information symbol conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels defining a channel matrix H over a particular subcarrier frequency. The system includes a processor configured for determining a weighting matrix G, whose number of rows is equal to the number of transmitters; for determining a modified channel matrix equal to HG; and for constructing a nonlinear precoder for performing nonlinear precoding of the modified channel matrix.





BRIEF DESCRIPTION OF THE DRAWINGS

The disclosed technique will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:



FIG. 1 is a schematic diagram illustrating an overview of a communication system, showing a system of the disclosed technique, constructed and operative according to an embodiment of the disclosed technique;



FIG. 2A is a schematic diagram illustrating a prior art zero-forcing (ZF) linear precoding scheme;



FIG. 2B is a schematic diagram illustrating a prior art ZF nonlinear vector precoding scheme;



FIG. 2C is a schematic diagram illustrating a prior art QR nonlinear precoding scheme, generally referenced 50;



FIG. 3A is a schematic diagram illustrating an overview of a general hybrid-interoperability precoding scheme supporting both linear and nonlinear precoding, constructed and operative according to the embodiment of the disclosed technique;



FIG. 3B is a schematic diagram illustrating an overview of another general hybrid-interoperability precoding scheme including permutations supporting both linear and nonlinear precoding, constructed and operative according to the embodiment of the disclosed technique;



FIG. 3C is a schematic diagram illustrating an overview of a further general hybrid-interoperability precoding scheme including permutations supporting both linear and nonlinear vector precoding, constructed and operative according to the embodiment of the disclosed technique;



FIG. 4 is a schematic diagram illustrating an example of a specific implementation of the general hybrid-interoperability precoding scheme, utilizing a QR nonlinear precoder and permutations, configured and operative in accordance with the disclosed technique;



FIG. 5 is a schematic diagram illustrating a partition of vector variables into two groups, one group associated with linear precoding (LP) and the other group associated with nonlinear precoding (NLP);



FIG. 6 is a schematic diagram illustrating an example permutation configuration in the specific implementation of the general hybrid-interoperability precoding scheme of FIG. 4;



FIG. 7 is a schematic diagram illustrating an example configuration of an internal structure of the permutation block in FIG. 6;



FIG. 8 is a schematic illustration detailing a partition of a lower-diagonal matrix L having the dimensions (K1+K2)×(K1+K2) into three matrices;



FIG. 9 is a schematic illustration detailing an example configuration of an internal structure of the preprocessing block in FIG. 6;



FIG. 10 is a schematic illustration detailing an example configuration of an internal structure of preprocessing block and permutation block in FIG. 6, shown in a particular 5-user configuration;



FIG. 11 is a schematic illustration showing a particular implementation of NLP/LP control mechanisms in an internal structure of preprocessing block;



FIG. 12 is a schematic illustration showing another particular implementation of the preprocessing block of FIG. 6;



FIG. 13 is a schematic illustration showing a further particular implementation of the preprocessing block of FIG. 6;



FIG. 14A is a table showing a database of supportabilities and activity levels of each CPE unit at a particular point in time, constructed and operative in accordance with the disclosed technique;



FIG. 14B is a schematic diagram showing a graph of a particular example of activity levels of CPE units ordered according to (relative) communication link quality as a function of subcarrier frequency at a particular point in time, in accordance with the disclosed technique;



FIG. 15 is a schematic block diagram illustrating a specific implementation of the general hybrid-interoperability precoding scheme, specifically showing delineation into two paths, constructed and operative in accordance with the disclosed technique;



FIG. 16 is a schematic block diagram illustrating another specific implementation of the general hybrid-interoperability precoding scheme, specifically showing delineation into two paths, constructed and operative in accordance with the disclosed technique;



FIG. 17A is a schematic diagram illustrating an example of a specific implementation of the general hybrid-interoperability precoding scheme, utilizing different scalar factors, configured and operative in accordance with the disclosed technique;



FIG. 17B is a schematic block diagram of a method for a specific implementation of nonlinear precoding utilizing different scalar factors, configured and operative in accordance with the disclosed technique;



FIG. 18 is a schematic block diagram of a method for a hybrid-interoperability precoding scheme supporting both linear and nonlinear precoding, constructed and operative according to the embodiment of the disclosed technique;



FIG. 19 is a schematic block diagram of a system for nonlinear precoding exhibiting a modulus size that is constellation-independent, constructed and operative in accordance with another embodiment of the disclosed technique;



FIG. 20 is a schematic illustration detailing an example configuration of an internal structure of a vectoring processor in the system of FIG. 19, constructed and operative in accordance with an embodiment of FIG. 19 of the disclosed technique;



FIG. 21 is a schematic illustration of Tomlinson-Harashima precoding used per subcarrier frequency being applied to chosen users only, constructed and operative in accordance with an embodiment of FIG. 19 of the disclosed technique;



FIG. 22A is a schematic diagram showing an example of a non-scaled 4-QAM constellation, constructed and operative in accordance with the disclosed technique;



FIG. 22B is a schematic diagram showing an example of a non-scaled 16-QAM constellation;



FIG. 22C is a schematic illustration showing a boundary of a modulo operation representing a square of size r;



FIG. 22D is a schematic diagram showing an example of τ=8=2^3 chosen as a constant modulo for all constellations; and



FIG. 23 is a schematic block diagram of a method for nonlinear precoding where the modulus size is constellation-independent, constructed and operative in accordance with the embodiment of FIG. 19 of the disclosed technique.





DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

The disclosed technique overcomes the disadvantages of the prior art by proposing a general solution to the interoperability problem between a data providing entity communicatively coupled with multiple data subscriber entities in a communication network, where part of the data subscriber entities do not support nonlinear precoding (NLP) while another part does. The disclosed technique generally relates to multi-user multiple input multiple output (MIMO) communications systems in which there is a data provider side, typically embodied in the form of a data providing entity, such as a distribution point (DP) that is interconnected via a plurality of communication channels to a plurality of data subscriber entities (i.e., a data subscriber side), typically embodied in the form of multiple corresponding customer premises equipment (CPE) units. The terms “data provider side”, “data provider”, “transmitter side”, “distribution point”, and “distribution point unit” (DPU) used herein are interchangeable. The terms “data subscriber side”, “data subscriber”, “receiver side”, “CPE”, “CPE unit”, and “CPE receiver unit” used herein are interchangeable. The disclosed technique proposes a system and a method configured and operative to unite or unify linear and nonlinear schemes, for a particular subcarrier frequency, at the data provider side, in such a way that enables interoperability in the use of nonlinear precoding for CPE units supporting nonlinear precoding and linear precoding for CPE units not supporting nonlinear precoding.


The system and method of the disclosed technique are configured and operative for precoding an information symbol conveying data for transmission between a plurality of transceivers at the data provider side (i.e., operating in the downstream (DS) direction as transmitters) and a plurality of transceivers at the data subscriber side (i.e., operating in the DS direction as receivers) via a plurality of communication channels over a subcarrier frequency (denoted herein interchangeably as 'tone'). A symbol generally refers to a waveform, a signal, or a state of a communication medium (e.g., link, channel) that transpires over a particular time period (e.g., a time slot). A symbol typically encodes bits (i.e., an "information symbol"). According to one (typical) implementation the communication channels are wire-lines (e.g., physical wire conductors such as twisted pairs). According to another implementation the communication channels are realized by the transmission and reception of signals (e.g., via antennas in wireless communication techniques) propagating through a wireless medium (e.g., air). The communication channels, whether wired or wireless, are susceptible to interference between the communication channels known as crosstalk, more specifically known as far-end crosstalk (FEXT). The communication channels may also be susceptible to near-end crosstalk (NEXT). Precoding is used for mitigating the effects of FEXT, while adaptive filtering may be used for mitigating the effects of NEXT.


The prior art teaches a specific solution, limited to a very special case where the number of transmitters at the transmitter side is equal to the number of active receivers at the receiver side at any particular subcarrier frequency. The disclosed technique offers a solution to the general case, where the number of transmitters is not necessarily equal to the number of active receivers at the receiver side at a particular subcarrier frequency. An 'active receiver' (e.g., an active CPE unit) is a receiver that (i) is switched on and is ready to receive data or in a process of receiving data; (ii) is switched on and is ready to receive data or in a process of receiving data at a particular subcarrier frequency (or plurality thereof) and not for other subcarrier frequencies (i.e., the receiver is 'active' at particular subcarrier frequencies and 'inactive' at other subcarrier frequencies); or (iii) is either one of (i) and (ii) stipulated by a decision rule determined by at least one criterion related to communication performance (e.g., max-mean rate, max-min rate, minimal bit loading value, etc.). Examples for (ii) include situations where a particular active CPE unit is unable to receive information at high subcarrier frequencies due to a low signal-to-noise ratio (SNR), or due to the peculiarity of the communication channel or binder structure, etc. Thus, a CPE unit may be active at some subcarrier frequencies, and inactive at others. The system and method of the disclosed technique provide a general solution to the more difficult interoperability problem for the general case, where the number of transmitters (N) is not necessarily equal to the number of active receivers (K). The general solution also solves the special case where N is equal to K.


The following is a succinct summary of the system and method of the disclosed technique; the summary is followed by a comprehensive description. The system includes a controller and a processor implemented at the transmitter side (e.g., in the DPU). The controller is typically embodied in the form of, and interchangeably denoted herein, a 'vectoring control entity' (VCE). The processor is typically embodied in the form of, and interchangeably denoted herein, a 'vectoring processing entity' (VPE). The controller is configured and operative for receiving information pertaining to supportabilities of the receivers (i.e., CPE units) to decode nonlinearly precoded data. In general, a 'supportability' of a receiver defines whether that receiver supports the decoding of nonlinearly precoded data (and linearly precoded data). It is assumed herein that if a particular receiver does not support the decoding of nonlinearly precoded data, then its default supportability is the capability to decode linearly precoded data. The controller is further configured and operative for determining a precoding scheme defining for which of the receivers (at the receiver side) the data to be transmitted by the transmitters (at the transmitter side) shall be precoded using at least one of linear precoding (LP) and nonlinear precoding (NLP), according to the supportabilities of the receivers.


The processor is configured and operative for constructing a signal for transmission, according to the determined precoding scheme, by applying a reversible mapping (i.e., a reversible transformation) to the information symbol. A reversible mapping is a function or algorithm that can be reversed (i.e., reversing the mapping yields the operand, that is, the object of the mapping operation). As will be described in greater detail hereinbelow, a reversible mapping can be realized by various entities and techniques. Several such techniques include the use of modulo arithmetic, the use of a perturbation vector, the use of transformations in a lattice reduction technique, and the like. The reversible mapping includes elements (e.g., represented by matrix elements) each respectively associated with a particular one of the receivers, whereby those receivers supporting the decoding of nonlinearly precoded data are capable of reversing the reversible mapping to the information symbol, while for those receivers not supporting the decoding of nonlinearly precoded data the information symbol is unaffected by the reversible mapping. The processor is then configured and operative to construct a precoder characterized by N≠K such that the precoder is configured to perform regularized generalized inversion of a communication channel matrix. The communication channel matrix (or simply "channel matrix") represents the channel information conveyed between the transmitter side and the receiver side. Regularized generalized inversion, which will be discussed in greater detail hereinbelow and used in the context of the disclosed technique, relates to a generalization of generalized inversion. Basically, generalized inversion of the channel matrix essentially involves finding a matrix that serves as an inverse of the channel matrix that is not necessarily invertible. An example of a generalized inverse is the Moore-Penrose pseudoinverse. Regularization of the generalized inverse, known herein as "regularized generalized inversion", involves use of the principles of regularization by introducing a regularization term into the mathematical expression representing the generalized inverse.


According to the disclosed technique there is thus provided a method for precoding an information symbol conveying data for transmission between a plurality of transmitters (i.e., defining a transmitter side) and a plurality of receivers (i.e., defining a receiver side) via a plurality of communication channels over a subcarrier frequency. The number of transmitters (N) is different than the number of active receivers (K) for that subcarrier frequency. The method includes the following steps, beginning with an initial step of receiving, by the transmitter side (e.g., the transmitters), information pertaining to supportabilities of the receivers to decode nonlinearly precoded data. The method proceeds with the step of determining a precoding scheme defining for which of the receivers the data to be transmitted by the transmitters shall be precoded using at least one of linear precoding and nonlinear precoding, according to the supportabilities. The method proceeds with the step of constructing a signal by applying a reversible mapping to the information symbol. The reversible mapping includes elements each respectively associated with a particular one of the receivers, such that those receivers supporting the decoding of nonlinearly precoded data are capable of reversing the reversible mapping to the information symbol, whereas the information symbol is unaffected by the reversible mapping for those receivers not supporting the decoding of the nonlinearly precoded data. The method proceeds with the step of constructing a precoder characterized by N≠K such that the precoder is configured to perform regularized generalized inversion of a communication channel matrix. The method proceeds with the step of transmitting the signal by the transmitters.


At the receiver side, the CPE units are configured and operative to receive the signal from the transmitters. Both types of CPE units, namely, those supporting the decoding of linearly precoded data as well as those supporting the decoding of nonlinearly precoded data are configured and operative to perform equalization by multiplication of the received signal by a scalar (i.e., not necessarily the same scalar for each CPE unit). The CPE units supporting the decoding of nonlinearly precoded data are further configured and operative to reverse the reversible mapping (e.g., by applying modulo operation in accordance with the selected reversible mapping).


In other words, users who have hardware (e.g., DSL modems), software, firmware, and the like supporting nonlinear precoding (e.g., modulo arithmetic capable) may choose to use at least one of nonlinear precoding and linear precoding, whereas users who don't have hardware (software, firmware, etc.) supporting nonlinear precoding may still use linear precoding. A particular CPE unit whose supportability includes nonlinear precoding is not necessarily limited only to nonlinear precoding as that CPE unit may opt to employ linear precoding for the benefit of system performance, or alternatively, for the reduction of nonlinear precoder dimensionality (consequently reducing computational complexity).


Particularly, in the case of the system employing orthogonal frequency-division multiplexing (OFDM) for encoding data on multiple subcarriers, all users (corresponding to CPE units) may be divided into three groups at every tone: (1) a group of CPE unit(s) (user(s)) employing nonlinear precoding; (2) a group of CPE unit(s) employing linear precoding; and (3) a group of CPE unit(s) that are inactive (i.e., are not precoded at that particular tone and are thus excluded from the transmitted signal). There are also derivative logical groups, e.g., a group that is an intersection of groups (1) and (2), and the like. The communication channels (e.g., lines) of the inactive CPE unit(s), at a specific tone, are exploited to transmit information for the benefit of other CPE unit(s). The division of CPE unit(s) (user(s)) between these three groups may vary from tone to tone since the channel matrix and the signal-to-noise ratio (SNR) are usually frequency dependent. For the above-mentioned group (2) of users, which employ linear precoding, modulo arithmetic (e.g., the modulus operation) is not utilized (applied) at the receiver side (CPE units).
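
As an illustration only, the per-tone division described above can be sketched as follows; the data structures, names, and allocation rule below are hypothetical placeholders and not part of the disclosed technique (in particular, a real controller may also divert NLP-capable units to linear precoding for performance reasons):

# Hypothetical per-tone grouping of CPE units into (1) NLP, (2) LP, and (3) inactive groups.
# 'supports_nlp' and 'active_per_tone' stand in for the supportability and activity
# information held by the controller; the allocation rule here is deliberately simplistic.
supports_nlp = {1: True, 2: True, 3: False, 4: True, 5: False}
active_per_tone = {0: {1, 2, 3, 4, 5}, 1: {1, 2, 3, 5}, 2: {2, 4}}

def group_users(tone):
    active = active_per_tone[tone]
    nlp_group = {u for u in active if supports_nlp[u]}   # group (1): NLP-precoded users
    lp_group = active - nlp_group                        # group (2): LP-precoded users
    inactive_group = set(supports_nlp) - active          # group (3): excluded at this tone
    return nlp_group, lp_group, inactive_group

for tone in sorted(active_per_tone):
    print(tone, group_users(tone))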


The ability to allocate CPE units that support nonlinear precoding among linear and nonlinear precoding schemes also improves system performance by diverting nonlinear-precoding-enabled CPE units exhibiting large power and coding gain losses (typically belonging to small bit constellations) to utilize linear precoding. In addition, this allocation enables part of the CPE units to process data via linear precoding, while enabling the remaining (active) CPE units to utilize techniques of nonlinear precoding, such as vector precoding, which reduces the dimensionality of the search space for a perturbation vector (i.e., given that the complexity of such a search is known to lie between polynomial and exponential in dimension size). Consequently, the allocation of the CPE units into three groups for every tone (i) effectively facilitates attainment of a solution of the interoperability problem between different CPE units, which either have or don't have nonlinear precoding supporting hardware, and (ii) serves as an instrument to achieve system performance optimization.


Where the reversible mapping is implemented via use of modulo arithmetic, the disclosed technique also proposes an option of utilizing a constant modulus size for NLP, and particularly for Tomlinson-Harashima precoding (THP) schemes. This causes a constant power requirement to be satisfied automatically, facilitates an increase in the hardware efficiency of executing the modulus operation, and averts the need for different modulus values for different constellation sizes. The disclosed technique is applicable to any number of CPE units, which may be split arbitrarily between NLP-supporting CPE units and CPE units not supporting NLP.
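
For illustration only, the constant-modulus idea can be sketched as follows, assuming square QAM constellations scaled to a common reference symbol space and the example value τ=8 mentioned later in conjunction with FIG. 22D; the scaling rule in scale_to_reference is an assumption chosen purely for demonstration:

import numpy as np

TAU = 8.0   # constant modulus chosen for all constellations (cf. the example tau = 8 = 2^3)

def mod_tau(x, tau=TAU):
    # Symmetric modulo applied separately to the real and imaginary parts.
    m = lambda v: v - tau * np.floor(v / tau + 0.5)
    return m(x.real) + 1j * m(x.imag)

def scale_to_reference(symbols, bits):
    # Assumed adaptation rule (illustration only): scale a square 2^bits-QAM constellation,
    # whose unscaled points lie on odd integers, so that it spans the reference width TAU.
    side = 2 ** (bits // 2)                 # points per dimension: 2 for 4-QAM, 4 for 16-QAM
    return symbols * (TAU / (2 * side))

s4 = np.array([1 + 1j, -1 - 1j])            # non-scaled 4-QAM symbols
s16 = np.array([3 - 1j, -3 + 3j])           # non-scaled 16-QAM symbols
x = np.concatenate([scale_to_reference(s4, 2), scale_to_reference(s16, 4)])

# A single modulus TAU now applies to every precoder input regardless of constellation size:
perturbed = x + TAU * np.array([1 - 1j, 0, -1 + 0j, 2 + 2j])
assert np.allclose(mod_tau(perturbed), x)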


Notation: The notation used herein for the operation diag(A) applied to a matrix A yields a vector equal to its diagonal, and the operation diag(a) applied to a vector a yields a matrix with a diagonal a and all other non-diagonal elements being zeroes. The notation for the component-wise multiplication (also known as the Hadamard product) is a⊙b, which signifies that every component of the product, c, is constructed by multiplication of components of a and b: c_k = a_k·b_k. The Hermitian conjugation of a matrix R (i.e., matrix transpose and complex conjugation of every element) is denoted by R^H. The notation R^-H is used herein to denote (R^-1)^H. In the Figures and Detailed Description, the prime symbol, ′, denotes the Hermitian conjugation; e.g., H′ signifies a Hermitian conjugation of matrix H. The inverse of a diagonal matrix D is also a diagonal matrix, given by simple inversion of the diagonal components: for G = inv(D), G_ii = 1/D_ii, and all non-diagonal components are zero. Vectors and matrices are represented in bold-italics.
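
As a purely illustrative aid, the notational conventions above can be mirrored in a short NumPy sketch (the numerical values below are arbitrary and serve only to demonstrate the operations):

import numpy as np

A = np.array([[1 + 2j, 3 + 0j],
              [4 + 0j, 5 - 1j]])
a = np.array([2 + 0j, 5 + 0j])
b = np.array([3 + 0j, 7 + 1j])

d = np.diag(A)                   # diag(A): vector equal to the diagonal of matrix A
Da = np.diag(a)                  # diag(a): diagonal matrix built from vector a
c = a * b                        # Hadamard (component-wise) product a ⊙ b: c_k = a_k * b_k
AH = A.conj().T                  # Hermitian conjugation A' (transpose and complex conjugation)

D = np.diag([2.0, 4.0])
G = np.diag(1.0 / np.diag(D))    # inverse of a diagonal matrix: G_ii = 1 / D_ii
assert np.allclose(G @ D, np.eye(2))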


Without loss of generality, the disclosed technique will be described in the context of a wire-line communication system, though the principles of the disclosed technique likewise apply to wireless communication systems. Reference is now made to FIG. 1, which is a schematic diagram illustrating an overview of a communication system, generally referenced 100, showing a system of the disclosed technique, generally referenced 102, constructed and operative according to an embodiment of the disclosed technique. FIG. 1 shows a distribution point unit (DPU) 104 (also denoted herein interchangeably as "network side entity", and "distribution point" (DP)), communicatively coupled to at least one (typically a plurality of) customer premises equipment (CPE) unit(s) 1061, 1062, 1063, . . . , 106N-1, 106N (also denoted herein interchangeably as a "node" or "nodes" in plural, where N≥1 is an integer) via a plurality of N communication lines (also denoted herein interchangeably as "communication channels") 1081, 1082, 1083, . . . , 108N-1, 108N (typically and at least partially passing through a binder 110, in the wire-line case). DPU 104 includes a controller 112 and a processor 114, embodying the system of the disclosed technique, generally referenced 102. DPU 104 further includes a plurality of N network-side fast transceiver unit (FTU-O) transceivers (XCVRs) 1161, 1162, 1163, . . . , 116N-1, 116N.


At the receiver side, each one of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N respectively includes a corresponding transceiver (XCVR), also denoted herein interchangeably as "remote-end fast transceiver unit" (FTU-R) 1181, 1182, 1183, . . . , 118N-1, 118N. Particularly, each transceiver (XCVR) 1161, 1162, 1163, . . . , 116N-1, 116N at the network side is communicatively coupled with its respective CPE unit 1061, 1062, 1063, . . . , 106N-1, 106N via its respective communication line 1081, 1082, 1083, . . . , 108N-1, 108N (i.e., index-wise). Each remote-end fast transceiver unit is configured and operative to receive data from, and to transmit data to, its respective FTU-O at DPU 104. Specifically, each transceiver at the network side, i.e., FTU-Oi (where i is an integer running index), is configured to be in communication with a corresponding transceiver at the receiver side, i.e., FTU-Ri.


The terms “communication channel”, “communication line”, “communication link” or simply “link” are interchangeable and are herein defined as a communication medium (e.g., physical conductors, air) (whether wired or wireless) configured and operative to communicatively couple between DPU 104 and CPE units 1061, 1062, 1063, . . . , 106N-1, 106N. The communications channels are configured and operative to be propagation media for signals (information symbols) for wireless as well as wire-line communication methods (e.g., xDSL, G.fast services). DPU 104 is typically embodied as a multiple-link enabled device (e.g., a multi-port device) having a capability of communicating with a plurality of nodes (e.g., CPE units). Alternatively, DPU 104 is a single-link device (not shown) having a capability of communicating with one node (e.g., a CPE unit). A transmission from DPU 104 to at least one of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N is defined herein as a downstream (DS) direction. Conversely, a transmission from at least one of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N to DPU 104 is defined herein as an upstream (US) direction.


Each one of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N is partly characterized by its respective inherent supportability 1201, 1202, 1203, . . . , 120N-1, 120N to decode at least one type of precoded data, namely, linear precoded (LP) data, and nonlinear precoded (NLP) data (or both linear and nonlinear precoded data). It is assumed that if a particular CPE unit does not support the decoding of NLP data, it supports by default the decoding of LP data. In a typical case, it is assumed that all CPE units possess supportability to decode LP data. The point is which of the CPE units further possess supportability to decode NLP data. The system and method of the disclosed technique are configured and operative to receive information pertaining to supportabilities 1201, 1202, 1203, . . . , 120N-1, 120N to decode LP and especially NLP data (represented in FIG. 1 as "N/LP"). Information pertaining to the supportabilities of CPE units is typically acquired in an initialization process, in which initial communication parameters are determined between the DPU and the CPE units. Initialization typically involves a plurality of steps or phases, such as a handshake and discovery phase, a training phase, and a channel evaluation and analysis phase. The initialization phase, and specifically the handshaking and discovery phase, involves the CPE units communicating to DPU 104 information pertaining to their supported capabilities to decode nonlinear precoded data. For example, DPU 104 is configured and operative to send a message (request) to each of the CPE units to report their respective supportability. In response, the CPE units are configured and operative to reply in a return message, specifying information pertaining to their respective supportability (e.g., 'NLP supported'/'NLP not supported'). Once initialization has been completed, communication system 100 enters a data exchange phase; in particular, when bearer data (e.g., payload data) is being transmitted, this is what is typically known as "showtime". Alternatively, information pertaining to the supportabilities of the CPE units is determined independently from the CPE units (e.g., in an initial setup of system 100, by controller 112 receiving information such as a lookup table of supportabilities, and the like).


DPU 104 (controller 112 thereof) is further configured and operative to continually determine activity levels 1221, 1222, 1223, . . . , 122N-1, 122N (represented "ACT./INACT." in FIG. 1) of each of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N. The activity level generally defines a current degree of operation or function of a CPE unit. An 'active receiver' (interchangeably denoted herein as an "active CPE unit") is a receiver that is switched on and ready to receive data or in a process of transceiving (i.e., receiving and/or transmitting) data per tone. An 'inactive receiver' (interchangeably denoted herein as an "inactive CPE unit") is a receiver that is not ready to receive data or otherwise unable to communicate with the network side (e.g., switched off, not functioning, malfunctioning, not initialized, not connected, etc.). The activity level will be discussed in greater detail hereinbelow in conjunction with FIGS. 14A and 14B. The maximum number of active receivers (K) is at most the total number of CPE units (N) per tone. Typically, in the routine operative state of system 100, at any particular time, the number of active receivers is usually less than the total number of CPE units (K<N) per tone.


Following the determination of the supportabilities (and activity levels) of the CPE units, the system and method of the disclosed technique are configured and operative to determine a precoding scheme defining for which of the CPE units the data transmitted by the DPU shall be precoded using at least one of linear precoding and nonlinear precoding, according to the determined supportabilities (and activity levels). In other words, signals transmitted from transceivers 1161, 1162, 1163, . . . , 116N-1, 116N of DPU 104 to respective transceivers 1181, 1182, 1183, . . . , 118N-1, 118N of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N (respectively) have to take into account respective supportabilities 1201, 1202, 1203, . . . , 120N-1, 120N and respective activity levels 1221, 1222, 1223, . . . , 122N-1, 122N. As will be described in greater detail hereinbelow, the precoding scheme defines how a signal transmitted by the DPU is precoded given the supportabilities and activity levels of the CPE units.


At this stage, for the purpose of highlighting the differences between the solution of the disclosed technique and known solutions of the prior art, reference is now further made to FIGS. 2A, 2B, and 2C. FIG. 2A is a schematic diagram illustrating a prior art zero-forcing (ZF) linear precoding scheme, generally referenced 10. FIG. 2B is a schematic diagram illustrating a prior art ZF nonlinear vector precoding scheme, generally referenced 30. FIG. 2C is a schematic diagram illustrating a prior art QR nonlinear precoding scheme, generally referenced 50. FIGS. 2A, 2B, and 2C show a transmitter side, a communication channel (represented by block H), and a receiver side. The precoding schemes shown in FIGS. 2A, 2B, and 2C are described in the context of vector precoding that utilizes a perturbation vector, c.


With reference to linear precoding scheme 10 of FIG. 2A, at the transmitter side, information symbols denoted by a vector s constitute an input 12 intended for precoding. All components of a perturbation vector 13, c, which are all zero for the linear precoding (LP) scheme, are added 14 to vector s (i.e., which remains the same) and inputted 15 into a linear precoder block 16. Hence, for LP a perturbation is not added, and c is equal to the zero vector. The linear precoder (linear precoder block 16) is denoted by P, which is given by: P=pinv(H)*D. The operator inv( ) denotes an inverse of a matrix (inscribed between parentheses), and pinv( ) denotes a pseudoinverse of a matrix (inscribed between parentheses). LP block 16 applies a pseudoinverse operation to a channel matrix H and calculates a product with a diagonal matrix D. Diagonal matrix D is used to scale the precoding matrix. LP block 16 yields a signal or group of signals as precoded information symbols denoted by an output 17, o, which is then communicated via a communication channel 18 to a receiver side. Output 17 from the transmitter side, o, to the communication lines, may further be scaled by a scalar gain factor, a (not shown). The signal received at the receiver side, denoted by a received vector 22, r, is the sum of a vector 19, y, and an additive noise vector 20, n, denoting that the precoded information symbols propagated through the communication channel include additive 21 noise 20. Scaling at the receiver side is performed independently for every component by an inv(D) block 24, the result of which is an estimated output symbol 26, S″. No modulus operation is performed at the receiver side.
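
For illustration only, the zero-forcing linear precoding chain of FIG. 2A can be sketched in NumPy as follows; the channel, noise, and scaling values are arbitrary placeholders and not values taken from the disclosed technique:

import numpy as np

rng = np.random.default_rng(0)
K = N = 3                                   # square case shown in FIG. 2A
H = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))   # channel matrix (placeholder)
D = np.diag([1.0, 0.8, 1.2])                # diagonal scaling matrix (placeholder values)

s = np.array([1 + 1j, -1 + 1j, 1 - 1j])     # information symbols (vector s)
c = np.zeros(K)                             # perturbation vector: all zeros for LP
P = np.linalg.pinv(H) @ D                   # linear precoder P = pinv(H) * D
o = P @ (s + c)                             # precoded output of the transmitter side

n = 0.01 * (rng.normal(size=K) + 1j * rng.normal(size=K))    # additive noise
r = H @ o + n                               # received vector r = y + n
s_est = np.linalg.inv(D) @ r                # per-component scaling by inv(D); no modulo for LP
# s_est approximates s up to the additive noise term.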


With reference to nonlinear precoding scheme 30 of FIG. 2B, a perturbation vector 33, c, is determined and added 34 to input information symbols 32 denoted by a vector s, constituting an input 35 intended for precoding. Perturbation vector c is a nonzero vector, which forms an essential part of a nonlinear precoder (NLP). Each component of perturbation vector c is constructed as a signed integer (i.e., for complex constellations, a pair of signed integers i1+j*i2, where j is the imaginary unit) multiplied by a modulus, possibly having different values per component (as is common in the case when the NLP is the Tomlinson-Harashima precoder (THP)). The nonlinear precoder (NLP block 36), denoted by P, is given by P=inv(H)*D. NLP block 36 applies an inverse operation to a channel matrix H and calculates a product with a diagonal matrix D. Diagonal matrix D is used to scale the precoding matrix and is predefined for THP. NLP block 36 yields a signal or group of signals as precoded information symbols denoted by an output 37, o, which is then communicated via a communication channel 38 to a receiver side. An output 37 to the communication channel (e.g., the communication lines) is denoted by o, representing the precoded symbols. Output vector o has N components: o ∈ ℂ^(N×1) (where ℂ denotes the complex field). The received vector 42, r, is the sum of a vector 39, y, and an additive noise vector 40, n, denoting that the precoded information symbols propagated through the communication channel include additive 41 noise 40. Scaling at the receiver side is performed independently for every component by an inv(D) block 44, an output 45 of which is provided to a modulus operation block 46. The notation mod( ) denotes the modulo operation (also interchangeably denoted the "modulus" operation) applied to an operand (inscribed between parentheses) per component (i.e., real and imaginary parts). The modulus operation, with its corresponding modulus value, is determined by the constellation (i.e., simultaneous transmission of constellations of different sizes for different users entails different moduli values). The output of modulus operation block 46 results in an estimated symbol 48, ŝ.
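
A minimal NumPy sketch of the nonlinear vector precoding chain of FIG. 2B follows; the perturbation integers are chosen here arbitrarily merely to demonstrate reversibility (a real nonlinear precoder selects them to reduce transmit power), and the modulus value and channel are placeholders:

import numpy as np

def mod_tau(x, tau):
    # Symmetric modulo applied separately to the real and imaginary parts.
    m = lambda v: v - tau * np.floor(v / tau + 0.5)
    return m(x.real) + 1j * m(x.imag)

rng = np.random.default_rng(1)
K = 3
H = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))   # square, invertible channel (placeholder)
D = np.eye(K)                               # scaling matrix (identity here for simplicity)
tau = 4.0                                   # modulus value (constellation dependent in THP)

s = np.array([1 + 1j, -1 - 1j, 1 - 1j])     # information symbols
ints = np.array([1 - 1j, 0 + 0j, -1 + 0j])  # signed (complex) integers: arbitrary, for illustration
c = tau * ints                              # perturbation vector c = integer * modulus
P = np.linalg.inv(H) @ D                    # nonlinear precoder core P = inv(H) * D
o = P @ (s + c)                             # precoded output

r = H @ o                                   # received vector (noise omitted for clarity)
s_est = mod_tau(np.linalg.inv(D) @ r, tau)  # scale by inv(D), then the modulus reverses c
assert np.allclose(s_est, s)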


With reference to the nonlinear precoding scheme 50 of FIG. 2C, at the transmitter side, information symbols denoted by a vector s constitute an input 52 intended for precoding. FIG. 2C illustrates a Tomlinson-Harashima precoder. All components of the perturbation vector 53, c, which are not all zero for the nonlinear precoding (NLP) scheme, are added 54 to vector s (i.e., which remains the same) and inputted 55 into a nonlinear precoder block 56. The nonlinear precoder (NLP block 56) is given by inv(R′)*D. NLP block 56 applies an inverse operation to a matrix R and calculates a product with a diagonal matrix D, where D is the diagonal of matrix R^H (i.e., D=diag(diag(R^H))). Diagonal matrix D is used to scale the precoding matrix and is predefined for THP. An output 57 of NLP block 56 is inputted to a block 58, Q (i.e., of the QR decomposition process), which yields a signal or group of signals as precoded information symbols denoted by an output 59, o=Q*inv(R^H)*D, which in turn is then communicated via a communication channel 60 to a receiver side. An output 59 to the communication channel (e.g., the communication lines) is denoted by o, representing the precoded symbols.


The QR precoding scheme in FIG. 2C employs QR decomposition of a channel matrix H which symbolizes a communication channel (denoted by block 60), mathematically represented by H=(QR)′=R′*Q′. Matrix Q is an orthonormal matrix and R is an upper triangular matrix. A received vector 65, r, at the receiver side is the sum of a vector 62, y, and an additive noise vector 63, n, denoting that the precoded information symbols propagated through the communication channel include additive 64 noise 63. Scaling is performed independently for every component by an inv(D) block 66, an output 68 of which is provided to a modulus operation block 70. The output of modulus operation block 70 results in an estimated output symbol 72, ŝ.
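
For illustration, the QR-based nonlinear precoding cascade of FIG. 2C can be sketched in NumPy as shown below; the perturbation values are arbitrary placeholders (a practical THP implementation selects them sequentially inside a feedback loop), and the sketch only demonstrates that the channel cascade collapses to D*(s+c), so that scaling by inv(D) and the modulus operation recover s:

import numpy as np

def mod_tau(x, tau):
    m = lambda v: v - tau * np.floor(v / tau + 0.5)
    return m(x.real) + 1j * m(x.imag)

rng = np.random.default_rng(2)
K = 3
H = rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K))   # channel matrix (placeholder)

Q, R = np.linalg.qr(H.conj().T)             # QR decomposition of H' so that H = (QR)' = R' Q'
D = np.diag(np.diag(R.conj().T))            # D = diag(diag(R^H))
tau = 4.0

s = np.array([1 + 1j, -1 + 1j, -1 - 1j])
c = tau * np.array([0 + 0j, 1 + 0j, -1 + 1j])    # lattice perturbation (placeholder values)
o = Q @ np.linalg.inv(R.conj().T) @ D @ (s + c)  # output o = Q * inv(R^H) * D applied to (s + c)

y = H @ o                                   # H * o = R^H Q^H Q inv(R^H) D (s + c) = D * (s + c)
s_est = mod_tau(np.linalg.inv(D) @ y, tau)  # per-component scaling by inv(D), then the modulus
assert np.allclose(s_est, s)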


In conjunction with FIG. 1, reference is now further made to FIGS. 3A, 3B, and 3C. FIG. 3A is a schematic diagram illustrating an overview of a general hybrid-interoperability precoding scheme supporting both linear and nonlinear precoding, generally referenced 150A, constructed and operative according to the embodiment of the disclosed technique. FIG. 3B is a schematic diagram illustrating an overview of another general hybrid-interoperability precoding scheme including permutations supporting both linear and nonlinear precoding, generally referenced 150B, constructed and operative according to the embodiment of the disclosed technique. FIG. 3C is a schematic diagram illustrating an overview of a further general hybrid-interoperability precoding scheme including permutations supporting both linear and nonlinear vector precoding, generally referenced 150C, constructed and operative according to the embodiment of the disclosed technique. FIGS. 3A, 3B, and 3C illustrate different implementations of hybrid-interoperability precoding constructed and operative in accordance with the principles of the disclosed technique. Similarly numbered reference numbers in FIGS. 3A, 3B, and 3C, differentiated respectively by suffixes A, B and C, represent similar (but not necessarily identical) entities. Since the different implementations shown in FIGS. 3A, 3B, and 3C are similar in some respects and dissimilar in others, for the purpose of simplifying the explanation of the principles and particulars of the disclosed technique, reference is specifically made to general hybrid-interoperability precoding scheme 150A of FIG. 3A, with further reference being made to FIGS. 3B and 3C pertaining to differences in implementation.


In hybrid-interoperability precoding scheme 150A, information symbols (represented by vector s) are inputted to a reversible mapping block 154A. Reversible mapping block 154A is configured and operative to apply a reversible mapping to the information symbols. A reversible mapping (i.e., a reversible transformation) is defined as a function or algorithm that can be reversed so as to yield the operand, that is, the object of the reversible mapping. For example, a reversible mapping is an association between two sets S1 and S2 in which to every element of S1 there is an associated element in S2, and to every element in S2 there is the same associated element in S1. Reversible mapping block 154A may be implemented by various techniques, such as dirty paper coding techniques of applying a modulo operation to the information symbols, vector precoding by the use of a perturbation vector, lattice precoding techniques, and the like.


The reversible mapping includes elements, each of which is respectively associated with a particular one of receivers (CPE units) 1061, 1062, 1063, . . . , 106N-1, 106N. For example, a reversible mapping is represented by a vector of rank N: W={w1, w2, w3, . . . , wN-1, wN}, where each i-th vector element, wi, is associated with the i-th CPE unit (i.e., index-wise). An example of a simple reversible mapping is an offset function, where information symbol vector s is vector-added (i.e., element-wise) with an offset function (vector), i.e.: s+W. For an NLP-supporting CPE unit, its associated reversible mapping element (of the reversible mapping (vector)) is offset by a nonzero set value (e.g., an integer value). For example, for CPE unit 1064, its associated reversible mapping element w4≠0 equals the reversible offset integer. For a CPE unit not supporting NLP, its associated reversible mapping element is zero. For example, if CPE unit 1065 does not support NLP, its associated reversible mapping element w5=0. In general, for those CPE units supporting NLP, their respectively associated (e.g., index-wise) reversible mapping elements (e.g., of a reversible mapping vector) are nonzero, while for those CPE units not supporting NLP, their respectively associated reversible mapping elements are zero.
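
A short sketch of such an offset-style reversible mapping follows; the supportability pattern, modulus value, and offset integers are hypothetical and serve only to demonstrate that NLP-capable receivers can reverse the mapping while symbols of units not supporting NLP remain unaffected:

import numpy as np

def mod_tau(x, tau):
    m = lambda v: v - tau * np.floor(v / tau + 0.5)
    return m(x.real) + 1j * m(x.imag)

tau = 4.0
s = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j, 1 + 1j])       # one symbol per CPE unit
nlp_supported = np.array([True, True, False, True, False])      # supportabilities (example only)

# Reversible offset mapping W: nonzero (integer * modulus) elements only for NLP-capable units,
# zero elements for CPE units not supporting NLP.
ints = np.array([1 - 1j, -1 + 0j, 0 + 0j, 2 + 1j, 0 + 0j])
W = np.where(nlp_supported, tau * ints, 0)
mapped = s + W                                                  # s + W, prior to precoding

# NLP-capable receivers reverse the mapping via the modulus operation;
# LP-only receivers see their information symbol unaffected, since their element of W is zero.
assert np.allclose(mod_tau(mapped[nlp_supported], tau), s[nlp_supported])
assert np.allclose(mapped[~nlp_supported], s[~nlp_supported])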


According to one implementation, hybrid-interoperability precoding scheme 150A employs dirty paper coding techniques, in which case the reversible mapping is an offset function typically represented by a vector whose integer elements are nonzero for those NLP-supporting CPE units, and zero for those CPE units not supporting NLP (as exemplified above). According to another implementation, hybrid-interoperability precoding scheme 150A employs vector precoding, in which case the reversible mapping is a perturbation vector whose elements are nonzero (i.e., at least one vector element or component is nonzero) for those NLP-supporting CPE units, and zero for those CPE units not supporting NLP. According to yet another implementation, hybrid-interoperability precoding scheme 150A employs lattice techniques for precoding, in which symbols are reversibly mapped to lattice points having known boundaries. According to this implementation, lattice precoding (or lattice dirty-paper coding) is employed. Generally, a lattice is a regular arrangement of a set of distanced-apart points (a discrete subgroup of the n-dimensional real space ℝ^n). A symbol constellation S is a subset of size 2^D of a D-dimensional lattice. Information symbols are reversibly mapped to the symbol constellation. Lattice precoding involves performing modulo reduction in relation to a precoding lattice Λp into a fundamental (e.g., Voronoi) region V(Λp) of precoding lattice Λp (practical for a finite number of points). The symbol constellation is an intersection of the lattice symbol space (e.g., 2ℤ^2, where D=2) and the fundamental region V(Λp) of precoding lattice Λp, namely S=2ℤ^2∩V(Λp). (A Voronoi region of a lattice is a region having a Voronoi site, where all points are distanced closer to that Voronoi site than to another Voronoi site in the lattice.) The symbol constellation is extended in a periodic manner via addition (or other reversible mapping or transformation) as: S+Λp={s+d|s∈S∩V(Λp), d∈Λp}, where each point v∈S+Λp is equivalent modulo Λp. Thus, symbols propagating via the communication channel ("channel symbols") from transmitter side to receiver side consequently fall within V(Λp). At the receiver side, NLP-supporting CPE units reverse the reversible mapping by reducing the received signal to the region V(Λp) via the modulo operation mod(Λp). Generally, reversible mapping block 154A is configured and operative to apply a reversible mapping to the input information symbols as described, and to output a result 156A to a precoding matrix block 158A, P.


Precoding matrix block 158A, P, is configured and operative to perform regularized generalized inversion (hereby denoted "reg.-gen.-inv.") of a channel matrix H. Regularized generalized inversion is a generalization of generalized inversion, which in turn is a generalization of inversion. Inversion of a matrix, or "matrix inversion", of an invertible matrix A is a procedure of finding a matrix B such that AB=I, where I represents the n×n identity matrix. In the simplest (and less typical) case, where the number of transmitters (N) is equal to the number of active receivers (K) (i.e., N=K), precoding matrix 158A is reduced to a conventional inverse of the channel matrix. At any point in time, however, there is no assurance that the number of active receivers (users) would be equal to the total number of transmitters. This case can be encapsulated by N≠K. In this case (N≠K) channel matrix H is noninvertible by conventional inversion, as non-square matrices (i.e., m×n, where m≠n) don't have a conventional inverse. (Conventional matrix inversion is limited to regular (non-degenerate, square n×n, and non-singular) matrices.)


Generalized inversion of the channel matrix H essentially involves finding a matrix that serves as an inverse of the channel matrix that is not necessarily invertible. An example of a generalized inverse is generally given by:

P = A^H(HA^H)^-1  (1A),


where H ∈ ℂ^{K×N} and A ∈ ℂ^{K×N}. In the case where A = H, an example of a generalized inverse of channel matrix H is a pseudoinverse (notation: pinv(H)), such as the Moore-Penrose pseudoinverse given by:






P = H^H (H H^H)^{-1}  (1B),


employed when the total number of active receivers (users), K, is less than (also viable when equal to) the total number of transmitters N, i.e., K≤N, where H ∈ ℂ^{K×N}. This case reduces to the simple case when K=N, for which we obtain pinv(H)=inv(H), i.e., the pseudoinverse is a conventional ("simple") inverse of channel matrix H. It is noted that for the case K<N, the construction of the pseudoinverse (right inverse) is not unique. For example, given






H = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}

there may be found matrices

P_A = \begin{pmatrix} 0.5 & -0.5 \\ 0.5 & -0.5 \\ 0 & 1 \end{pmatrix} \quad and \quad P_B = \begin{pmatrix} 1 & -2 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}

such that

H P_A = H P_B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
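The following short numerical sketch (illustrative only) verifies this example: both P_A and P_B are right inverses of H, and P_A coincides with the Moore-Penrose pseudoinverse pinv(H):

```python
import numpy as np

# Sketch verifying the non-uniqueness of the generalized inverse for the
# illustrative 2x3 channel matrix H above (K=2 < N=3).
H = np.array([[1., 1., 1.],
              [0., 0., 1.]])

P_A = np.array([[0.5, -0.5],
                [0.5, -0.5],
                [0.0,  1.0]])          # Moore-Penrose pseudoinverse of H

P_B = np.array([[1., -2.],
                [0.,  1.],
                [0.,  1.]])            # another right inverse of H

assert np.allclose(H @ P_A, np.eye(2))
assert np.allclose(H @ P_B, np.eye(2))
assert np.allclose(P_A, np.linalg.pinv(H))   # P_A is indeed pinv(H)
```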






It is emphasized that the disclosed technique is not restricted to the specific pseudoinverse or to the method of pseudoinverse construction. In general, one may observe that for the case K<N there may be an infinite number of pseudoinverses satisfying the relation HP=I, where H ∈ ℂ^{K×N}, P ∈ ℂ^{N×K}, and I is the K×K identity matrix. To demonstrate this, let: P = A^H(HA^H)^{-1}, where H ∈ ℂ^{K×N} and A ∈ ℂ^{K×N}. A is an arbitrary matrix such that the K×K matrix (HA^H) is invertible. Invertibility implies:






HP = HA^H (HA^H)^{-1} = I  (2).


It may be seen that the arbitrary factor A^H is a generality of a matched filter (for the Moore-Penrose pseudoinverse, A=H), while the term HA^H is a generality of a correlator (for the Moore-Penrose pseudoinverse this term is HH^H), and consequently the term (HA^H)^{-1} is a generality of a de-correlator. Since there are an infinite number of possible matrices A ∈ ℂ^{K×N}, there may be an infinite number of generalized inverses. For the case of K=N the generalized inverse reduces to the conventional inverse matrix: A^H(HA^H)^{-1} = A^H(A^H)^{-1}(H)^{-1} = H^{-1}. Returning to the above-presented purely illustrative example, for







H = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix},




we obtain PA by taking






A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix} (= H),





which leads to a particular case of the Moore-Penrose pseudoinverse, and we obtain PB by taking







A = \begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}.




Regularization of the generalized inverse, known herein as “regularized generalized inversion” involves use of the principles of regularization by introducing a regularization term to the mathematical expression representing the generalized inverse. A regularized generalized inverse is generally given by:






P = A^H (HA^H + βI)^{-1}  (3A),


where the scalar β≥0 is the regularization factor, and the identity matrix I (also interchangeably denoted herein 1) is the regularization term. A typically employed regularized generalized pseudoinverse of the disclosed technique, for A=H, is given by:






P = H^H (HH^H + βI)^{-1}  (3B).


The regularization factor controls the impact of the regularization term, and can be selected to optimize (e.g., maximize) the signal-to-interference-plus-noise ratio (SINR) at the receiver side. Note that the case where β=0 reduces equation (3B) to the Moore-Penrose pseudoinverse of equation (1B). In an analogous manner to the selection of the pseudoinverse, the disclosed technique is likewise not limited to a particular regularized generalized inverse. Precoding matrix block 158A is configured and operative to perform precoding in the sense of the disclosed technique, i.e., regularized generalized inversion of the channel matrix according to equation (3B), and to produce an output 160A, denoted by output vector o representing precoded symbols, thenceforward communicated via communication channel 162A to the receiver side.
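As a minimal numerical sketch (with an arbitrarily assumed random channel), the regularized generalized pseudoinverse of equation (3B) may be computed as follows; setting β=0 recovers the Moore-Penrose pseudoinverse of equation (1B):

```python
import numpy as np

# Sketch of the regularized generalized pseudoinverse of equation (3B),
# P = H^H (H H^H + beta*I)^(-1), for an assumed random K x N channel (K < N).
rng = np.random.default_rng(0)
K, N = 3, 5
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))

def reg_gen_inv(H, beta):
    K = H.shape[0]
    return H.conj().T @ np.linalg.inv(H @ H.conj().T + beta * np.eye(K))

P0 = reg_gen_inv(H, beta=0.0)               # beta = 0: Moore-Penrose case
assert np.allclose(P0, np.linalg.pinv(H))

P = reg_gen_inv(H, beta=0.1)                # beta > 0 trades residual crosstalk
print(np.linalg.norm(H @ P - np.eye(K)))    # for a lower precoder gain
```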


At the receiver side, a received vector 180A, r, is the sum of a vector 164A, y, and an additive noise vector 166A, n, denoting that the precoded information symbols propagated through the communication channel include additive 168A noise 166A. For each one of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N (FIG. 1), received vector 180A (FIG. 3A) can follow one of two different paths: 184A ("a") or 188A ("b"), according to its respective supportability 1201, 1202, 1203, . . . , 120N-1, 120N. Specifically, for a CPE unit supporting the decoding of NLP data, received vector 180A follows path 184A and enters an inverse mapping block 186A; otherwise (i.e., for a CPE unit not supporting the decoding of NLP data) the received vector follows path 188A. This is illustratively shown by a receiver supportability-dependent pseudo-block 182A (i.e., not a true operational block).


Inverse mapping block 186A is configured and operative to "reverse" the reversible mapping applied at the transmitter side by reversible mapping block 154A. For example, for a reversible mapping that is a function, the inverse mapping is a corresponding inverse function. If hybrid-interoperability precoding scheme 150A employs dirty paper coding or vector precoding techniques, inverse mapping block 186A is typically embodied as a modulo operation block (e.g., a modulus operation applied to an operand (e.g., an integer offset)). In the case hybrid-interoperability precoding scheme 150A employs lattice techniques, the inverse mapping is as aforementioned (modulo reduction with respect to the precoding lattice).


For a CPE unit supporting the decoding of NLP data, the output of inverse mapping block 186A (following path 184A (“a”)) results in an estimated output symbol 190A, ŝ. For a CPE unit not supporting the decoding of NLP data, received vector 180A follows path 188A (“b”) the output of which is an estimated output symbol 190A, ŝ.


With reference now to hybrid-interoperability precoding scheme 150B of FIG. 3B, information symbols 152B are inputted to a reversible mapping block 154B, which is the same as 154A of FIG. 3A. Reversible mapping block 154B is configured and operative to apply a reversible mapping to input information symbols 152B and to output a result 156B to a permutation block 157B, which in turn is configured and operative to permute information symbol elements in vector s according to a permutation matrix Π^T. Vector s includes vector elements each of which is associated with a particular CPE unit. The permutation introduces additional degrees of freedom, which facilitates the optimization of performance. A particular example permutation, which will be hereinafter discussed in greater detail in conjunction with FIGS. 6 and 7, involves grouping information symbol elements associated with different CPE units according to their respective supportability so as to form two successive aggregate groups. Alternatively, arbitrary permutations of information symbol elements are also viable.


Precoding matrix block 158B, P, is configured and operative to perform regularized generalized inversion of a permuted channel matrix W=ΠTH, where ΠT represents the transpose of permutation matrix Π and H represents the channel matrix (in accordance with the principles of equations (3A) and (3B)). The precoding matrix is denoted by P=reg.−gen.−inv(W)D, where D represents a scaling matrix that depends on the permutation used. Precoding matrix block 158B is configured and operative to perform precoding thereby producing an output 160B that is communicated via a communication channel 162B that is represented by a block 162B and given by H=ΠW, to the receiver side.


At the receiver side, a received signal 180B, represented by a vector r, is a sum of a vector 164B, y, and an additive 168B noise vector 166B, n (respectively similar to 180A, 164A, and 166A of FIG. 3A). Received signal 180B is inputted to a block 181B configured and operative to perform scaling by inverting the permuted diagonal D_Π (i.e., inv(D_Π), where D_Π = ΠDΠ^T) and to output a signal to a receiver supportability-dependent pseudo-block 182B (whose operation is identical to 182A). Two paths, 184B ("a") and 188B ("b"), including inverse mapping block 186B, are respectively identical with paths 184A, 188A and inverse mapping block 186A (of FIG. 3A). Receiver supportability-dependent pseudo-block 182B outputs an estimated output symbol 190B, ŝ.


In an alternative optional implementation to hybrid-interoperability precoding scheme 150B, reversible mapping block 154B and permutation block 157B are in reversed order (i.e., information symbols 152B are inputted first to permutation block 157B, an output of which is provided to reversible mapping block 154B) (not shown).


According to a different implementation, with reference being made to hybrid-interoperability precoding scheme 150C of FIG. 3C, information symbols 152C are inputted to a permutation block 157C configured and operative to permute information symbol elements in vector s according to a permutation matrix Π^T and to output permuted information symbols 153C. Vector s includes vector elements each of which is associated with a particular CPE unit. An adder 155C is configured and operative to combine a vector of permuted information symbols 153C with a perturbation vector 154C, denoted by c, an output 156C of which is inputted to a precoding block 158C. Perturbation vector 154C constitutes the reversible mapping in the nonlinear vector precoding scheme of FIG. 3C. The reversible mapping, in this case implemented by a perturbation vector c, includes vector elements each respectively associated with a particular one of the receivers, such that those receivers supporting the decoding of nonlinearly precoded data are configured to apply an inverse mapping to the reversible mapping, whereas for the i-th vector elements of c associated with receivers not supporting the decoding of nonlinearly precoded data, the information symbols are unaffected by the reversible mapping (i.e., ci=0).


The following operational blocks and procedures of general hybrid-interoperability precoding scheme 150C in FIG. 3C are substantially similar to those respectively in general hybrid-interoperability precoding scheme 150B in FIG. 3B. Precoding matrix block 158C, P, is configured and operative to perform regularized generalized inversion of a permuted channel matrix W=Π^T H, where Π^T represents the transpose of permutation matrix Π and H represents the channel matrix (in accordance with the principles of equations (3A) and (3B)). The precoding matrix is given by P=reg.−gen.−inv(W)D, where D represents a scaling matrix that depends on the permutation used. Precoding matrix block 158C is configured and operative to perform precoding, thereby producing an output 160C that is communicated via a communication channel 162C, represented by a block 162C and given by H=ΠW, to the receiver side.


At the receiver side, a received signal 180C, represented by a vector r, is a sum of a vector 164C, y, and an additive 168C noise vector 166C, n (respectively similar to 180B, 164B, and 166B of FIG. 3B). Received signal 180C is inputted to a block 181C configured and operative to perform scaling by inverting the permuted diagonal D_Π (i.e., inv(D_Π), where D_Π = ΠDΠ^T) and to output a signal to a receiver supportability-dependent pseudo-block 182C (whose operation is identical to 182B). Two paths, 184C ("a") and 188C ("b"), including inverse mapping block 186C, are respectively identical with paths 184B, 188B and inverse mapping block 186B (of FIG. 3B). Receiver supportability-dependent pseudo-block 182C outputs an estimated output symbol 190C, ŝ.


To explicate the particulars of the disclosed technique in greater detail, an example of a specific implementation will now be described. Reference is now made to FIGS. 4 and 5. FIG. 4 is a schematic diagram illustrating an example of a specific implementation of the general hybrid-interoperability precoding scheme, utilizing a QR nonlinear precoder and permutations, generally referenced 200. FIG. 5 is a schematic illustration of a partition of vector variables into two groups, linear precoding (LP) and nonlinear precoding (NLP), generally referenced 250. The example specific implementation of the disclosed technique involves dividing users (CPE units) into two groups at every frequency tone (given, without loss of generality the use of orthogonal frequency-division multiplexing (OFDM) transmission or flat fading environment): those which use LP (linear precoder) and those which use NLP (nonlinear precoder). We herein denote information symbols for these groups s1 and s2, respectively. The following description discusses the disclosed technique in the context of one particular subcarrier frequency (interchangeably herein “frequency tone” or simply “tone”). Generally, the partition of users into different groups (linear and nonlinear) may differ at different frequency tones (e.g., in consideration with performance optimization issues). It is emphasized that the general precoding scheme according to the disclosed technique involving NLP supporting CPE units and NLP non-supporting CPE units, does not necessitate any specific ordering of the CPE units (e.g., general hybrid-interoperability precoding scheme 150A, FIG. 3A).


As an example of the dynamic nature of this division, users with relatively small constellation sizes (e.g., 1 or 2 bits) are assigned to a first group employing LP, while the remaining users with relatively larger constellation sizes are assigned to a second group employing NLP. Alternatively, constellations are not assigned for those users having a low signal-to-noise ratio (SNR), thus not allowing the loading of constellations of larger size than a predetermined value (e.g., the bit loading is less than one bit). This state effectively corresponds to the case where K<N. The precoder employed in this case is determined according to the regularized generalized inverse of the channel matrix (utilizing communication channels of inactive CPE units at a specific tone). Noteworthy also in this case is the analogous wireless implementation, where the number of transmitting antennas is greater than the number of receiving antennas.


The dynamic division of users into groups involves allocating: group 1 for users employing LP and having no supportability for inverse mapping functionality (e.g., modulo operation capability, whether in hardware, software, or firmware), or having such functionality but preferably using linear precoding due to optimization considerations; and group 2 for users employing NLP and having supportability for inverse mapping functionality. This dynamic division merges system optimization (i.e., according to different criteria, such as max-mean rate, max-min rate, etc.) with resolution of the interoperability issues. This issue is elaborated further hereinbelow. In the description that follows, the operation per frequency tone is described, assuming principles of an OFDM system.
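A possible (purely illustrative) grouping rule is sketched below in Python; the threshold, flag names and data layout are assumptions for the example and not the claimed decision algorithm:

```python
# Sketch of a possible grouping rule per frequency tone (illustrative only).
def divide_users(users, min_nlp_bits=3):
    """users: list of dicts with 'id', 'supports_nlp' and 'bit_loading' keys."""
    lp_group, nlp_group = [], []
    for u in users:
        if u["supports_nlp"] and u["bit_loading"] >= min_nlp_bits:
            nlp_group.append(u["id"])     # group 2: NLP (modulo-capable) users
        else:
            lp_group.append(u["id"])      # group 1: LP users (or NLP users
                                          # forced to LP for optimization)
    return lp_group, nlp_group

users = [{"id": 1, "supports_nlp": False, "bit_loading": 2},
         {"id": 2, "supports_nlp": True,  "bit_loading": 6},
         {"id": 3, "supports_nlp": False, "bit_loading": 1},
         {"id": 4, "supports_nlp": True,  "bit_loading": 8},
         {"id": 5, "supports_nlp": False, "bit_loading": 2}]
print(divide_users(users))   # -> ([1, 3, 5], [2, 4]) as in the 5-user example
```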


With reference to FIG. 4, implementation of general hybrid-interoperability precoding scheme 200 initiates by information symbols 202, s, being inputted into a permutation block 204, which is configured and operative to permute information symbol elements in vector s according to a permutation matrix Π. Each information symbol element in vector s is associated with a particular CPE unit. The permutation takes into consideration the division of CPE units into two groups (those that employ LP and those that employ NLP). In its general form, the permutation arbitrarily intermixes between information symbol elements. Alternatively, permutation block 204 permutes the information symbol elements into two successive aggregate groups, namely, a first aggregate group of symbol elements s1 successively followed by a second aggregate group of symbol elements s2. These information symbols are described in terms of a complex vector per user, represented as:






s = [s_1^T, s_2^T]^T, where s_1 ∈ ℂ^{K_1×1}, s_2 ∈ ℂ^{K_2×1}, K_1+K_2=K  (4).


With further reference to FIG. 5, which shows different vector variables (shown also in FIG. 4) whose vector elements are partitioned into two distinct and separate aggregate groups, where each group is associated with either one of the NLP supporting CPE units (an "NLP group") or the NLP non-supporting CPE units (an "LP group"). Specifically, FIG. 5 illustrates the partitioning of vectors c, s, m, y, r (which are all complex vectors of ℂ^{(K_1+K_2)×1}) into two groups of sizes K1 and K2. Partitioning of a vector into groups means that the vector's elements (also known as components) are allocated to these groups. The first K1 components are associated with the LP group (i.e., the group of CPE units employing LP and not supporting NLP), whilst the remaining K2 components are related to the NLP group (i.e., the group of CPE units supporting NLP).


To elucidate the role of permutations in the system and method of the disclosed technique, reference is now further made to FIG. 6, which is a schematic diagram illustrating an example permutation configuration in the specific implementation of the general hybrid-interoperability precoding scheme of FIG. 4, generally referenced 300. FIG. 6 shows a particular example of a permutation configuration for five users. Permutation block 304 (FIG. 6) is a particular example of permutation block 204 (FIG. 4), in which there are a total of five CPE units (users): {1,2,3,4,5} (in "natural ordering"), where the group of users {1,3,5} is the LP group and the group of users {2,4} is the NLP group. Similarly to permutation block 204 (FIG. 4), permutation block 304 (FIG. 6) is configured and operative to permute CPE units (users) into two aggregate groups according to supportability, e.g.: {{1,3,5},{2,4}}.


Further detail of permutation block 304 is described in conjunction with FIG. 7, which is a schematic diagram illustrating an example configuration of an internal structure of the permutation block of FIG. 6. Permutation block 304 includes two permutation stages: a stage 1, referenced 330, and a stage 2, referenced 332. Input information symbols 3021, 3022, 3023, 3024, and 3025 (collectively denoted "3021-5") are inputted into permutation block 304 (specifically, into stage 1). As shown in FIG. 7, input information symbols s(1), s(2), s(3), s(4), s(5) are in the natural ordering. Stage 1 (330) is configured and operative to divide the input information symbols into two aggregate groups: the LP group and the NLP group.
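A minimal sketch of this stage-1 grouping, assuming the 5-user example and 0-based indices, builds Π0^T by stacking the LP indices before the NLP indices:

```python
import numpy as np

# Sketch: build the stage-1 permutation of the 5-user example, grouping the
# LP users {1,3,5} before the NLP users {2,4} (indices here are 0-based).
supports_nlp = np.array([False, True, False, True, False])
order = np.concatenate([np.flatnonzero(~supports_nlp),
                        np.flatnonzero(supports_nlp)])      # [0, 2, 4, 1, 3]

Pi0_T = np.eye(5)[order]          # rows of I reordered: Pi0_T @ s == s[order]
s = np.arange(1, 6, dtype=float)  # natural ordering s(1)..s(5)
print(Pi0_T @ s)                  # -> [1. 3. 5. 2. 4.], i.e. {{LP},{NLP}}
```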


Generally, an arbitrary permutation of information symbol elements is achieved by permuting their indices via a permutation matrix Π0, i.e.:





[s_1^T, s_2^T]^T = Π_0^T s  (5).


The example given in FIGS. 6 and 7, shows 5 users of whom the group of users {2,4} (i.e., CPE units 1062, 1064) support the decoding of NLP data, while the group of users {1,3,5} (i.e., CPE units 1061, 1063, 1065) don't support the decoding of NLP data (i.e., the LP group of users). The permutation Π0T permutes the natural ordered list of users 1 through 5 {1,2,3,4,5} into two successive aggregate groups of different supportabilities, thusly: {1,3,5,2,4} (i.e., {{LP},{NLP}}). Stage 2 includes permutation blocks 334 and 336. In stage 2 each of these two aggregate groups of users may be further permuted within each group by employing permutation blocks 334 and 336 respectively implementing permutation matrices Π1T and Π2T, namely:






s_1(Π_1) = Π_1^T s_1 and s_2(Π_2) = Π_2^T s_2  (6).


The following short notations are hereby introduced: x_1 = s_1(Π_1) and x_2 = s_2(Π_2). In general, the permutation matrix corresponding to the operation of permutation block 204 (FIG. 4) and of permutation block 304 (in the particular example shown in FIG. 6) is:










Π_{K×K} = Π_0 \begin{pmatrix} Π_1 & 0_{K_1×K_2} \\ 0_{K_2×K_1} & Π_2 \end{pmatrix}.  (7)







Permutation block 204 (FIG. 4) outputs a signal 206 (i.e., permuted information symbols represented in FIG. 7 as a vector x=[x(1),x(2),x(3),x(4),x(5)]T in the 5-user example) in which the outputted permuted symbols can generally be represented by:











[x_1^T, x_2^T]^T = [s_1(Π_1)^T, s_2(Π_2)^T]^T = Π^T s = \begin{pmatrix} Π_1^T & 0_{K_1×K_2} \\ 0_{K_2×K_1} & Π_2^T \end{pmatrix} Π_0^T s.  (8)







Output signal 206 (FIG. 4), having permuted information symbols (FIG. 7), is added 210 with a perturbation vector 208, c. For the first K1 components of perturbation vector 208, denoted as c1, the values are zeroes: c1=0, signifying that no perturbation is added 210 to the first K1 components of output vector 206. The remaining K2 components are denoted as c2≠0, signifying that a perturbation value is added 210 to the K2 components of output vector 206. Vector c is defined by the following expressions: c = [c_1^T, c_2^T]^T, c_1 = 0 (meaning 0_{K_1×1}), c_2 ∈ ℂ^{K_2×1}, where K_1+K_2=K. Vector x is defined by the following expressions: x = [x_1^T, x_2^T]^T, x_1 ∈ ℂ^{K_1×1}, x_2 ∈ ℂ^{K_2×1}, where K_1+K_2=K.


The perturbation vector, which can be any addition that may be eliminated by modulo arithmetic operations at the receiver, is one particular example of a reversible mapping. The employment of the perturbation vector approach effectively connotes that for every k-th index the point (sk+ck) belongs to an expanded constellation set for the k-th user. Generally, various algorithms may be used to determine the perturbation vector. Particularly, perturbation vector c is attained by taking into account system constraints that serve several objectives. An example objective involves minimizing the output power of the precoder signal (thus aiding to avoid intensive pre-scaling of the transmitted signal, consequently increasing the SNR at the receiver). Another example objective concerns the minimization of the bit error rate (BER) of the received signal.
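As an illustrative sketch of a search-based determination of the perturbation vector (one of the example objectives above, minimizing the precoder output power), the fragment below exhaustively searches a small assumed candidate set M·{−1,0,1}² for the NLP users only; the modulus M, the candidate range, and the zero-forcing pseudoinverse precoder are assumptions for the example:

```python
import numpy as np
from itertools import product

# Sketch of a search-based choice of the perturbation vector c: candidate
# lattice offsets M*(a+jb) are tried only for the NLP users, choosing the
# combination that minimizes the precoder output power ||P(s+c)||^2.
rng = np.random.default_rng(1)
K, N, M = 4, 6, 8.0
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
P = np.linalg.pinv(H)                         # ZF precoder (pseudoinverse)
s = rng.choice([-3., -1., 1., 3.], K) + 1j * rng.choice([-3., -1., 1., 3.], K)
nlp_users = [2, 3]                            # indices of NLP-supporting users

best_c, best_pow = np.zeros(K, complex), np.inf
grid = [a + 1j * b for a in (-1, 0, 1) for b in (-1, 0, 1)]
for combo in product(grid, repeat=len(nlp_users)):
    c = np.zeros(K, complex)
    c[nlp_users] = M * np.array(combo)        # c_i = 0 for the LP users
    power = np.linalg.norm(P @ (s + c)) ** 2
    if power < best_pow:
        best_pow, best_c = power, c
print(best_pow, best_c)
```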


Alternatively, the permutation matrix is applied directly to the vector sum of the symbol vector and the perturbation vector, s+c, since x + c = ΠTs + c = ΠT(s+Πc) = ΠT(s+{tilde over (c)}), where {tilde over (c)} represents an auxiliary perturbation vector. The perturbation vector and the auxiliary perturbation vector are related thusly: {tilde over (c)}=c(Π)=Πc. Arranging c as c=[c1T, c2T]T we obtain:






{tilde over (c)}=Π[c1T,c2T]T  (9).


Perturbation vector c1, whose components are all zero, is added to s1; c1(Π1) is also a zero vector. Obtaining an explicit expression for perturbation vector c in terms of the auxiliary perturbation vector {tilde over (c)} (by multiplying the auxiliary vector by ΠT from the left and using the identity ΠTΠ=I) yields: c=ΠT{tilde over (c)}. An output 212 (FIG. 4) from adding operation 210, x+c, constitutes an input to a block 214 configured and operative to perform the pseudo-inversion operation pinv(R′)*D, and to output a signal 216 (FIG. 4) represented by a vector m, referenced 256 in FIG. 5. In the 5-user example of FIG. 6, a pre-processing block 306 performs the same operation as block 214 (note that L=R′). Outputted signal 216, represented by vector m, is partitioned (FIG. 5) into two sub-vectors 2561 and 2562 of vector 256, corresponding to two aggregate groups of vector elements, namely, a first aggregate group, sub-vector m1, successively followed by a second aggregate group, sub-vector m2. Block 214 (FIG. 4), in conjunction with a Q block 218, a permutation block 204, and a perturbation vector 208, c, collectively constitute a precoder 220, denoted by P, which in turn is configured and operative to perform pseudo-inversion of a communication channel matrix H, in accordance with a ZF (zero-forcing) scheme involving permutations. Analogously, in the 5-user example of FIG. 6, permutation block 304, a pre-processing block 306, an orthonormal matrix "Q" block 308, and a plurality of scalar power gain blocks 3101-5 collectively constitute a precoder 312, which in turn is configured and operative to perform inversion of a communication channel 314.


Block 214 (FIG. 4) and pre-processing block 306 (in the 5-user example in FIG. 6) utilize a diagonal matrix D ∈ ℂ^{K×K} (i.e., D=diag(d)) configured and operative to perform power scaling for precoder 220 (e.g., for meeting power constraints). Vector d is defined by the following expressions: d = [d_1^T, d_2^T]^T, where d_1 ∈ ℂ^{K_1×1} and d_2 ∈ ℂ^{K_2×1}, and where K_1+K_2=K. Q block 218 (FIG. 4) and Q block 308 (in the 5-user example in FIG. 6) are configured to perform the following QR decomposition for a Hermitian conjugated and permuted channel according to:






H^H = QRΠ^T, hence H = ΠR^H Q^H  (10),


signifying that the QR decomposition is constructed from:






H^H Π = QR  (11),


where H ∈ ℂ^{K×N} is the channel matrix, R ∈ ℂ^{K×K} is an upper triangular matrix, Q ∈ ℂ^{N×K} is an orthogonal matrix such that Q^H Q = I_{K×K}, and Π ∈ ℂ^{K×K} is a permutation matrix. The multiplication expression ΠX permutes the rows of a matrix X; the multiplication expression XΠ^T permutes the columns of matrix X. In particular, the determination of matrix R in the QR factorization process depends not only on the channel matrix but also on the permutation matrix Π. In each row and each column of permutation matrix Π, exactly one element is equal to 1 and all other elements are zero. The position (i.e., index value) of this element (the 1) is different for every row and column, so that these positions determine the permutation sequence. With reference to the 5-user example of FIGS. 6 and 7, let us consider the 5-dimensional vector (1,2,3,4,5) and its 5-element permutation into {2,4,1,3,5}, represented by the following matrix:










Π = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}, \quad Π \begin{pmatrix} a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{pmatrix} = \begin{pmatrix} a_2 \\ a_4 \\ a_1 \\ a_3 \\ a_5 \end{pmatrix}.  (12)







In particular, this also demonstrates an example of:










Π = \begin{pmatrix} Π_1 & 0_{2×3} \\ 0_{3×2} & Π_2 \end{pmatrix},  (13)







where the 2×2 permutation matrix Π1 permutes the first two indices and the 3×3 permutation matrix Π2 permutes the last three indices.
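The following sketch (illustrative, with an assumed random channel) builds the permutation matrix of equation (12), confirms the reordering it performs, and carries out the permuted thin QR factorization of equations (10)-(11):

```python
import numpy as np

# Sketch: the 5x5 permutation matrix of equation (12) and the permuted QR
# factorization of equation (11), H^H * Pi = Q * R, for an assumed channel.
Pi = np.array([[0, 1, 0, 0, 0],
               [0, 0, 0, 1, 0],
               [1, 0, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 0, 0, 1]], dtype=float)
a = np.arange(1, 6, dtype=float)
assert np.allclose(Pi @ a, [2, 4, 1, 3, 5])       # (a1..a5) -> (a2,a4,a1,a3,a5)

rng = np.random.default_rng(2)
K, N = 5, 7
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
Q, R = np.linalg.qr(H.conj().T @ Pi)              # thin QR: Q is N x K, R is K x K
assert np.allclose(H.conj().T @ Pi, Q @ R)                # equation (11)
assert np.allclose(H, Pi @ R.conj().T @ Q.conj().T)       # H = Pi R^H Q^H, eq. (10)
```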


The construction of a precoder P, represented by precoder matrix P ∈ ℂ^{N×K} (referenced 220 in FIG. 4 and 312 in FIG. 6), is achieved according to the following equations:






P = αQR^{-H}DΠ^T, o = P(s+c)  (14),


P ∈ ℂ^{N×K} is a generalized vector precoder matrix, s is the information symbol vector, and c is the perturbation vector. For the LP group of CPE units the respective components of c are zero, while for the NLP group of CPE units the respective components of c are nonzero. Alternatively, there might be NLP supporting CPE units whose respective components of c are constrained (forced) to be zero for purposes of optimization (e.g., dimensionality reduction by decreasing the number of nonzero components of c, thus reducing computational load), effectively treating such NLP supporting CPE units as part of the LP group of CPE units. The perturbation vector is added for the NLP group of CPE units, for example, via search-based criteria, implicitly via the THP scheme, and the like.
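A minimal numerical sketch of equation (14) follows; the channel, the permutation, α, and the choice D = diag(diag(R^H)) (noted further below in connection with the THP scaling) are assumptions for the example, and the assertion verifies that the effective channel HP reduces to the permuted diagonal αΠDΠ^T:

```python
import numpy as np

# Sketch of the precoder of equation (14), P = alpha * Q * R^-H * D * Pi^T.
rng = np.random.default_rng(3)
K, N, alpha = 4, 6, 0.5
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))
Pi = np.eye(K)[[2, 0, 3, 1]]                    # an arbitrary K x K permutation
Q, R = np.linalg.qr(H.conj().T @ Pi)            # H^H * Pi = Q * R
L = R.conj().T                                  # L = R^H is lower triangular
D = np.diag(np.diag(L))                         # assumed scaling D = diag(diag(R^H))
P = alpha * Q @ np.linalg.inv(L) @ D @ Pi.T     # R^-H = L^-1

# Effective channel: H @ P = alpha * Pi * D * Pi^T, a permuted diagonal, so
# each receiver observes only its own scaled symbol (plus noise).
assert np.allclose(H @ P, alpha * Pi @ D @ Pi.T)
```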


Scalar power gain blocks 3101, 3102, 3103, 3104, and 3105 (collectively denoted 3101-5) (FIG. 6) apply a scalar factor α, which is common to all users (and may be used optionally); matrices Q and R are determined according to equation (11), and matrix D is used for power scaling. Specifically, matrix R^H is a lower triangular matrix, hereby denoted interchangeably by L (i.e., L=R^H). The input to Q block 218 in FIG. 4 and Q block 308 in FIG. 6, denoted by vector m, is given by:






m = R^{-H}DΠ^T(s+c) = L^{-1}DΠ^T(s+c)  (16),


which means that:









Lm = D \begin{pmatrix} Π_1^T & 0_{K_1×K_2} \\ 0_{K_2×K_1} & Π_2^T \end{pmatrix} Π_0^T (s+c) = D[(s_1(Π_1))^T, (s_2(Π_2))^T + c_2^T]^T.  (17)







Since L is lower diagonal, for the LP group we obtain:






L_1 m_1 = D_1 s_1(Π_1)  (18),


or equivalently,






m_1 = L_1^{-1} D_1 s_1(Π_1)  (18*),


where D1=diag(d1).


Reference is now further made to FIG. 8, which is a schematic illustration, generally referenced 350, detailing a partition of a lower-diagonal matrix L having the dimensions (K1+K2)×(K1+K2) into three matrices L1, L2, and M21, and a zero matrix 0_{K1×K2} having zero-valued elements. Matrices L_1^{-1}, L_1, D_1, Π_1 are of size K1×K1 and vectors m_1, s_1(Π_1), d_1 are of dimension K1×1. Note that permutation matrix Π1 has only K1 non-zero elements and is defined by the vector of size K1 specifying the permutation. Matrix D1 has K1 non-zero elements on its principal diagonal. Matrix L_1^{-1} is a lower-diagonal matrix whose inverse is matrix L1, the latter of which is also a lower-diagonal matrix consisting of the first K1 rows and columns of matrix L.


Diagonal scaling vector d1 can represent the degrees of freedom for optimization under constraints. Optimization involves use of the diagonal scaling vector d1 in conjunction with the degrees of freedom afforded by the permutation. The elements of the diagonal (power) scaling vector d (D=diag(d)) are typically selected to be real-valued and non-negative (although this is not a required restriction). It is noted that different precoder outputs may be scaled by gains of different values (not shown in FIG. 6). The corresponding degrees of freedom rendered by the corresponding diagonal components of matrix D are chosen to satisfy and ensure that power constraints are met per communication channel (e.g., line, antenna), as well as to optimize performance (e.g., to affect communication rates: mean rate, max-min (maximum-minimum) rate, or max-min rate with constraints, etc.). For example, the optimization may seek maximization of the average rate, of the max-min rate, or alternatively other multi-criterion optimizations. Typical constraints include the power spectral density (PSD) mask and the minimal bit loading value.


Following determination of diagonal scaling vector d1, vector m1 is calculated directly via equation (18*), or sequentially via equation (18). The direct path allows calculation of the components of m1 independently from each other, thus allowing for parallel processing to be employed.


The NLP scheme for the NLP group of CPE units (users) will now be described in greater detail. Reference is now made to FIG. 9, which is a schematic illustration detailing an example configuration of an internal structure of the preprocessing block in FIG. 6, generally referenced 370. FIG. 9 shows an internal structure of preprocessing block 306 in FIG. 6, for the 5-user example. Also shown is permutation block 304 of FIGS. 6 and 7, configured and operative to provide permuted information symbols (vector x) as input to preprocessing block 306. Preprocessing block 306 includes a plurality of gain blocks 3721, 3722, 3723, 3724, and 3725, a plurality of NLP/LP control mechanisms 3741, 3742, 3743, 3744, and 3745, a plurality of adders 3762, 3763, 3764, and 3765, and a plurality of modular arithmetic calculation units 3781, 3782, 3783, 3784, and 3785. Permuted information symbols x={x(1), x(2), x(3), x(4), x(5)} are respectively inputted into gain blocks 3721, 3722, 3723, 3724, and 3725 (index-wise). The gain blocks are configured and operative to multiply the permuted information symbols by respective gain components f(1), f(2), f(3), f(4), and f(5) of a gain vector f. NLP/LP control mechanisms 3741, 3742, 3743, 3744, and 3745 are each configured and operative to control application of reversible mapping 154A (FIG. 3A), for example, a modulo-Z adder (where Z ∈ ℤ, i.e., a whole number), according to the respective supportabilities of the CPE units. For the LP group of users, NLP/LP control mechanisms 3741, 3742, 3743, 3744, and 3745 are configured and operative to direct relevant input signals in(1), in(2), in(3), in(4), in(5) to the output of preprocessing block 306, thereby respectively bypassing modular arithmetic calculation units 3781, 3782, 3783, 3784, and 3785. For the NLP group of users, NLP/LP control mechanisms 3741, 3742, 3743, 3744, and 3745 are configured and operative to direct relevant input signals in(1), in(2), in(3), in(4), in(5) respectively to modular arithmetic calculation units 3781, 3782, 3783, 3784, and 3785.


The operation of adders 3762, 3763, 3764, and 3765, and that of modular arithmetic calculation units 3781, 3782, 3783, 3784, and 3785, which are relevant to the group of NLP users, will now be described in detail. Analogously to equation (18) relating to the LP group of users, for all LP and NLP groups of users we have:






Lm = DΠ^T(s+c)  (19).


Referencing FIG. 8, which shows a partition of the lower-diagonal matrix L into (sub)matrices, the upper-left lower-diagonal block is denoted L1, the rectangular block is denoted M21, and the lower-right lower-diagonal block is denoted L2. All elements above the main diagonal are zeroes. The dimensions of L1, M21 and L2 are K1×K1, K2×K1, and K2×K2, respectively. From equation (19) we obtain:






L_2 m_2 + M_{21} m_1 = D_2 Π_2^T (s_2 + c_2)  (20).


The following notations are hereby defined: D2=diag(d2), where d2=diag(L2). The objective now is to construct a unit-diagonal matrix LU:






L_U = D_L^{-1} L  (21),


where DL=diag(dL), and dL=[diag(L1)T,diag(L2)T]T. The modular arithmetic calculation units are configured and operative to facilitate construction of the unit-diagonal matrix LU in a recursive manner according to:








L_U(k,n) = L(k,n) / L(k,k).





Particularly, multiplication of equation (17) on the left by D_L^{-1} results in:






L_U m = [(f^{(1)} ⊙ s_1(Π_1))^T, s_2(Π_2)^T + c_2^T]^T  (22),


where ⊙ denotes component-wise multiplication, and f is a gain vector calculated according to:





diag(f^{(1)}) = (diag(diag(L_1)))^{-1} D_1  (23),


hence:












f^{(1)}(k) = d_1(k) / L_1(k,k) = d_1(k) / L(k,k), for k = 1, . . . , K_1.  (24)







According to equation (21), the first K_1×K_1 elements of the scaled matrix satisfy:






L_1 = diag(diag(L_1)) L_{U1}  (25).


Thus:





L_1^{-1} = L_{U1}^{-1} (diag(diag(L_1)))^{-1}  (26).


By using equation (23) we may rewrite equation (18*) as:






m_1 = L_1^{-1} D_1 s_1(Π_1) = L_{U1}^{-1} (diag(diag(L_1)))^{-1} D_1 s_1(Π_1)  (27),


then obtain an expression for m1 by means of the scaled matrix, Lu1−1, and gain vector f:






m_1 = L_{U1}^{-1} diag(f^{(1)}) s_1(Π_1)  (28),


(which is an alternative expression for m1).


We observe that a sequential solution of the system of equations LUm=b+{tilde over (c)} can be given by:








m(1)=b(1)+{tilde over (c)}(1)






m(k)=b(k)+{tilde over (c)}(k)−Σn=1k-1LU(k,n)m(n) for k=2, . . . ,K  (29),


where we denote:






b = diag(f)Π^T s = diag(f) s(Π)  (30),


and discuss the following general scheme derived from equation (19):






L_U m = diag(f) s(Π) + {tilde over (c)}  (31).


From equations (29) and (30) we arrive at:






m(1)=f(1)s(Π)(1)+{tilde over (c)}(1)






m(k)=f(k)s(Π)(k)+{tilde over (c)}(k)−Σn=1k-1LU(k,n)m(n), for k=2, . . . ,K   (32).


For any index k we may choose an auxiliary perturbation vector {tilde over (c)}(k) to be proportional to the modulus size such that:






m(k)=mod(f(k)s(Π)(k)−Σn=1k-1LU(k,n)m(n))  (33).


Reference is now made to FIG. 10, which is a schematic illustration detailing an example configuration of an internal structure of preprocessing block and permutation block in FIG. 6, shown in a particular 5-user configuration, generally referenced 400. FIG. 10 shows an example internal structure of permutation block 304 of FIG. 6, including permutation sub-blocks 334 and 336 (FIG. 7). As described in conjunction with FIG. 7, permutation sub-blocks 334 and 336 each respectively permute the two aggregate groups LP and NLP of users. Preprocessing block 306 employs equation (22) for the NLP group of users such that:

    • (1) The first K1 permuted information symbols s1(Π1)(k) (i.e., x(1), x(2), x(3)) are inputted to preprocessing block 306, and gain blocks 3721, 3722, 3723 apply (e.g., multiply by) gain vector components f(1)(k), i.e., as f(1)(k)s1(Π1)(k) (i.e., f(1), f(2), f(3)). The remaining K2 permuted information symbols, corresponding to the NLP users, enter directly as s2(Π2)(k) (i.e., gain blocks 3724 and 3725 don't apply gains, i.e., f(4)=1, f(5)=1). For the LP group of users, outputs from gain blocks 3721, 3722, 3723 correspond to respective inputs in(1), in(2), in(3) to respective modular arithmetic calculation units 3781, 3782, 3783 (which are effectively bypassed), which in turn correspond to respective outputs m(1), m(2), m(3). For the NLP group of users, outputs from gain blocks 3724, 3725 correspond to respective inputs in(4), in(5) to respective modular arithmetic calculation units 3784, 3785, which in turn correspond to respective outputs m(4) and m(5).
    • (2) NLP/LP control mechanisms 3741, 3742, 3743 control the application of modulus operations, which are not applied for the first K1 equations corresponding to the LP users. NLP/LP control mechanisms 3744 and 3745 control the application of modulus operations applied to the subsequent K2 equations corresponding to the NLP users. The perturbation vectors are implicitly calculated via modulus operations according to:






m(1)=f(1)(1)s1(Π1)(1)  (34)






m(k)=f(1)(k)s1(Π1)(k)−Σn=1k-1LU(k,n)m(n), for k=2, . . . ,K1  (35)






m(k)=mod(s2(Π2)(k)−Σn=1k-1LU(k,n)m(n)), for k=K1+1, . . . ,K   (36).


The gains, represented by vector f, are calculated according to: f(k)=d1(k)/L(k,k), where d1 represents (pre-calculated) diagonal coefficients for the linear precoder.
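The sequential computation of equations (34)-(36) may be sketched as follows (illustrative only); the lower-triangular matrix L, the scaling vector d and the modulus M are assumptions, and the modulo reduction is applied only to the outputs of the NLP group:

```python
import numpy as np

# Sketch of the hybrid sequential computation of equations (34)-(36): the
# first K1 (LP) outputs are computed without modulo, while the remaining K2
# (NLP) outputs apply a modulo-M reduction (implicit perturbation vector).
def mod_M(x, M):
    return (np.mod(x.real + M/2, M) - M/2) + 1j*(np.mod(x.imag + M/2, M) - M/2)

def hybrid_thp(s_perm, L, d, K1, M=8.0):
    K = len(s_perm)
    LU = L / np.diag(L)[:, None]              # unit-diagonal scaling, eq. (21)
    f = d / np.diag(L)                        # gains f(k) = d(k)/L(k,k)
    m = np.zeros(K, dtype=complex)
    for k in range(K):
        acc = LU[k, :k] @ m[:k]               # sum over n < k of LU(k,n)*m(n)
        if k < K1:                            # LP user: no modulo, eq. (35)
            m[k] = f[k] * s_perm[k] - acc
        else:                                 # NLP user: modulo, eq. (36)
            m[k] = mod_M(s_perm[k] - acc, M)
    return m

rng = np.random.default_rng(4)
K, K1 = 5, 3
L = np.tril(rng.standard_normal((K, K))) + np.eye(K)      # assumed L = R^H
d = np.ones(K)                                             # assumed scaling
s_perm = rng.choice([-3., -1., 1., 3.], K) + 1j * rng.choice([-3., -1., 1., 3.], K)
print(hybrid_thp(s_perm, L, d, K1))
```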


The presented scheme is a convenient way to embed the LP and NLP groups of users together into the THP structure, concomitantly with NLP/LP control mechanisms configured and operative to respectively apply modulus arithmetic operations to groups of users classified according to supportability. The implementation of permutations by permutation block 304 is achieved via equation (8) such that the permutation block matrix can be represented in the form of








\begin{pmatrix} Π_1^T & 0_{K_1×K_2} \\ 0_{K_2×K_1} & Π_2^T \end{pmatrix}





acting on [s1T,s2T]T as its input. Alternatively, the permutation block may also include a matrix Π0T (see equation (8)) acting on the natural order of information symbols, which can be represented by:







\begin{pmatrix} Π_1^T & 0_{K_1×K_2} \\ 0_{K_2×K_1} & Π_2^T \end{pmatrix} Π_0^T.





There are various implementations of preprocessing block 306 (FIG. 6), several of which are hereby given by the following examples. Reference is now further made to FIG. 11, which is a schematic illustration showing a particular implementation of NLP/LP control mechanisms in an internal structure of the preprocessing block, generally referenced 410. Particular implementation 410 of preprocessing block 306 is the same as particular implementation 400 of preprocessing block 306, apart from FIG. 11 showing another particular implementation of NLP/LP control mechanisms. Specifically, NLP/LP control mechanisms 3741, 3742, 3743, 3744, and 3745 are implemented as electronic switches configured and operative to route input signals in(1), in(2), in(3), in(4), in(5) either to respective modular arithmetic calculation units 3781, 3782, 3783, 3784, and 3785, or directly to respective outputs m(1), m(2), m(3), m(4), m(5), thereby controlling the application of the modulo operation (i.e., the reversible mapping) according to the respective supportabilities of the CPE units.


In accordance with another example implementation of preprocessing block 306, reference is now further made to FIG. 12, which is a schematic illustration showing another particular implementation of the preprocessing block of FIG. 6, generally referenced 420. FIG. 12 is similar to FIG. 10, apart from the implementation of a calculation block 422 for the LP group of users. For the LP group of users, permuted information symbols x(1), x(2), and x(3) are inputted into calculation block 422, which is configured and operative to calculate inv(Lu1)*diag(f) (i.e., using the matrix Lu, scaled to a unit diagonal, and the scaled gains f), the respective outputs of which are m(1), m(2), and m(3). For the NLP group of users, modular arithmetic calculation units 3784 and 3785 are used. FIG. 12 shows a combination of a united direct non-sequential action for the LP group of users, in conjunction with a sequential THP preprocessing action for the NLP group of users. Alternatively, according to another implementation (not shown), calculation block 422 is configured and operative to calculate: inv(L1)*D1. For both implementations, it is emphasized that only m(k) for the NLP group of users is calculated sequentially. For the LP group of users, m(k) is calculated either sequentially or directly according to equation (18*), namely: m1=L1−1D1s1(Π1), or alternatively, by opting to use the scaled matrix: m1=Lu1−1diag(f(1))s1(Π1).


In accordance with another example implementation of preprocessing block 306, reference is now further made to FIG. 13, which is a schematic illustration showing further particular implementation of the preprocessing block of FIG. 6, generally referenced 430. FIG. 13 is generally similar to FIG. 12, apart from a different configuration of modular arithmetic calculation units 3784, and 3785 and the inclusion of a calculation block 432 that includes a plurality of recursive calculation blocks and adders 434 and 436. The example implementation shown in FIG. 13 is configured and operative to reduce the real-time precoding complexity, due to preprocessing block 306 requiring the execution of fewer computations involving the sequential part of the computation process (i.e., as compared with the configuration shown in FIG. 12).


In particular, preprocessing block 306 performing the sequential calculations for determining the k-th output, m(k), given in equation (36), may be viewed as follows. Subsequent to determining m1 we may calculate:





δ1(k)=−Σn=1K1LU(k,n)m1(n)  (37).


Then, the remaining K2 sequential equations may be represented by:






m(k)=mod(s2(Π2)(k)+δ1(k)−Σn=K1+1k-1LU(k,n)m(n))  (38),


for k=K1+1, . . . , K, where δ is an offset vector, which may be represented in vector form as:





δ=−MU21m1  (39),


where matrix MU21 is based on a partition of the scaled unit-diagonal matrix LU (i.e., MU21=LU(rows: K1+1 to K, columns: 1 to K1)). Offset vector δ may also be written in terms of D2 as:





δ=−D2−1M21m1, where D2=diag(diag(L2))  (40).


Since the offset vector δ is known, it may be separately calculated (independently of the sequential calculations). The outputs of adders 434 and 436 constitute outputs of calculation block 432 represented by offset vector δ, whose two components are δ1 (inputted to adder 3764) and δ2 (inputted to adder 3765). This configuration typically reduces real-time precoding complexity as noted.
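A short sketch of this complexity reduction (illustrative; the modulus M is an assumption) precomputes the offset vector δ of equation (39) and then runs the modulo recursion of equation (38) only over the K2 NLP outputs:

```python
import numpy as np

# Sketch of equations (37)-(40): the offset delta = -M_U21 @ m1 does not
# depend on the sequential NLP recursion, so it is computed once and the
# modulo recursion then runs only over the K2 NLP outputs.
def nlp_outputs(LU, m1, s2_perm, K1, M=8.0):
    mod_M = lambda x: (np.mod(x.real + M/2, M) - M/2) \
                    + 1j * (np.mod(x.imag + M/2, M) - M/2)
    K = LU.shape[0]
    delta = -LU[K1:, :K1] @ m1          # offset vector, eq. (39)
    m2 = np.zeros(K - K1, dtype=complex)
    for i, k in enumerate(range(K1, K)):
        acc = LU[k, K1:k] @ m2[:i]      # only previously computed NLP outputs
        m2[i] = mod_M(s2_perm[i] + delta[i] - acc)   # eq. (38)
    return m2

LU = np.array([[1.0,  0.0,  0.0, 0.0],
               [0.3,  1.0,  0.0, 0.0],
               [0.2, -0.5,  1.0, 0.0],
               [0.1,  0.4, -0.2, 1.0]])
m1 = np.array([1.0 + 1.0j, -2.0 + 0.5j])
print(nlp_outputs(LU, m1, [3 + 1j, -1 - 3j], K1=2))
```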


Alternatively, according to another implementation (not shown), calculation block 422 in FIG. 13, is configured and operative to instead calculate: inv(L1)*D1.


Returning now to FIG. 4 (and FIG. 6 in the 5-user example), block 214 in FIG. 4 (and preprocessing block 306 of FIG. 6) outputs a signal represented by vector m to the input of Q block 218 (and Q block 308 of FIG. 6). Q block 218 (FIG. 4) and Q block 308 (in the 5-user example in FIG. 6) are configured to perform QR decomposition for a Hermitian conjugated and permuted channel according to equations (10) and (11), and to output an outputted signal 222 represented by vector o in FIG. 4. Scalar power gain blocks 3101, 3102, 3103, 3104, and 3105 (collectively denoted 3101-5), which are configured and operative to apply a scalar factor α common to all users, are optional and shown only in FIG. 6. Alternatively, the scalar factor may be different for different users, as will be described in greater detail in conjunction with FIGS. 17A and 17B. In the case where the system of the disclosed technique utilizes the scalar power gain blocks (FIG. 6), these in turn output the signal represented by vector o (see equations (14)).


Precoder 220 (FIG. 4) and precoder 312 (FIG. 6), represented by precoder matrix P ∈ ℂ^{N×K}, are configured and operative to perform pseudo-inversion of the communication channel matrix, and to generate outputted signal 222, represented by vector o (recapping that o=P(s+c) and P=αQR^{-H}DΠ^T (equations (14))), to the communication channel 224 (FIG. 4) and 314 (FIG. 6 in the 5-user example). P represents a generalized inverse of the communication channel matrix H, which is given by:






H = (QRΠ^T)^H = ΠR^H Q^H  (41).


If a direct substitution of the factorized channel matrix (equation (41) above) is made into an expression for the pseudoinverse pinv(R′)*D of block 214 (FIG. 4), the result yields basic elements of THP based on QR decomposition. In particular, if we denote the diagonal scaling of the communication channel pseudoinverse as {tilde over (D)} and the scaling of the THP block 220 as D then:





pinv(H){tilde over (D)} = H^H(HH^H)^{-1}{tilde over (D)} = QRΠ^T(ΠR^H Q^H QRΠ^T)^{-1}{tilde over (D)} = QRΠ^T Π(R^H R)^{-1}Π^T{tilde over (D)} = QR R^{-1}(R^H)^{-1}Π^T{tilde over (D)} = Q(R^H)^{-1}DΠ^T  (42),


where Π^T{tilde over (D)} = DΠ^T, Q ∈ ℂ^{N×K}, R ∈ ℂ^{K×K}, D is a diagonal matrix of size (dimension) K×K, and Π is the permutation matrix of size K×K (where the columns of matrix Q are orthonormal), based on the so-called thin (or reduced) QR factorization (due to K<N). The relation between the diagonal matrix which scales the channel pseudoinverse and the scaling utilized by THP block 220 is D=Π^{-1}{tilde over (D)}Π=Π^T{tilde over (D)}Π. Alternatively: {tilde over (D)}=ΠDΠ^T. The scaling is not arbitrary (e.g., as in the prior art); rather, THP block 220 is configured to scale according to D=diag(diag(R^H)). Note that the QR decomposition presented is based on THP that represents the precoder utilizing the Moore-Penrose pseudoinverse and its corresponding specifically chosen diagonal: {tilde over (D)}=Πdiag(diag(R^H))Π^T.


The transmitter side outputs outputted signal 222 (vector o), which propagates through communication channel 224, the result of which is a signal 226 (represented by a vector y) received at the receiver side. A received signal 232 at the receiver side, represented by a vector r, is the sum of signal 226 (y) with additive noise 228 (represented by a vector n), signifying that the precoded information symbols propagated through the communication channel include additive 230 noise. Received signal 232, vector r, may be represented by:












r = Ho + n = HP(s+c) + n = ΠR^H Q^H αQR^{-H}DΠ^T(s+c) + n = αΠDΠ^T(s+c) + n,  (43)







where r ∈ ℂ^{K×1} is the received signal, o ∈ ℂ^{N×1} is the output to the communication lines (i.e., the transmitted signal), and n ∈ ℂ^{K×1} is the additive noise at the receiver side. The expression ΠDΠ^T represents the permuted diagonal:






D_Π = ΠDΠ^T  (44),


therefore:






r = αD_Π(s+c) + n = αd_Π⊙(s+c) + n  (45).


The permutation matrix Π orders (i.e., permutes) the constructed diagonal:






d_Π = diag(D_Π) = Πd  (46),


where d=diag(D). In THP, the input signal is first permuted as ΠTs, then the sequence {tilde over (c)} is added via the THP scheme:





ΠTs+{tilde over (c)}=ΠT(s+Π{tilde over (c)})=ΠT(s+c)  (47),


where c=Π{tilde over (c)}.


Continuing at the receiver side, received signal 232 (FIG. 4) is inputted to a block 234 configured and operative to perform scaling by inverting the permuted diagonal DΠ, as defined in equation (44), the output of which is denoted by an output signal 236. With reference to the 5-user example of FIG. 6, at the receiver side, complex-valued scalar gain blocks 3161, 3162, 3163, 3164, 3165 (collectively denoted by 3161-5) are configured and operative to apply corresponding complex-valued scalar gain factors F(1), F(2), F(3), F(4), and F(5). (A special case of complex-valued scalar gain factors is where they are real values.) For NLP supporting CPE units, output signal 236 (FIG. 4) represents an input to a modulo operation block 238, which is configured and operative to apply a modulo arithmetic operation to output signal 236, and to output an output signal 240 of estimated symbols, represented by a vector ŝ. For those CPE units whose supportability does not include NLP (i.e., the LP group of users), output signal 236 is effectively output signal 240 (i.e., no modulo operation is applied). With reference to the 5-user example of FIG. 6, at the receiver side, modulo arithmetic blocks 3182 and 3184 are respectively employed by CPE units 1082 and 1084 (FIG. 1, for example) having supportability to decode NLP encoded data (i.e., the NLP group of users). Modulo arithmetic blocks 3182 and 3184 are configured and operative to decode the NLP encoded data by applying the modulo operation, the generated results of which are output as estimated symbols 3202 (i.e., ŝ(2)) and 3204 (i.e., ŝ(4)). CPE units not supporting the decoding of NLP encoded data (the LP group of users) don't have modulo arithmetic blocks, and consequently the outputs from complex-valued scalar gain blocks 3161, 3163, 3165 constitute estimated symbols 3201 (i.e., ŝ(1)), 3203 (i.e., ŝ(3)), and 3205 (i.e., ŝ(5)).


For non-cooperative receivers (i.e., receivers that do not share or use information they process between themselves), received signals r1 and r2 are processed: (i) independently for every user (as the users are non-cooperative); and (ii) by using scalar scaling of the received signals. These can be represented in matrix-vector form as diagonal gain corrections, diag(g(1)) and diag(g(2)), for both the LP and NLP groups of users, and additionally by modulus operations applied only to the NLP group of users, as may be represented by the following equations for the outputted estimated symbols:






ŝ_1 = G^{(1)} r_1 = diag(g^{(1)}) r_1 = g^{(1)} ⊙ r_1  (48),






ŝ_2 = mod(G^{(2)} r_2) = mod(diag(g^{(2)}) r_2) = mod(g^{(2)} ⊙ r_2)  (49),


where s1 and s2 respectively signify information symbols for the LP and NLP groups of users and ŝ1 and ŝ2 denote their corresponding estimates (which may be further inputted into error correction blocks (not shown) as known in the art).
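The per-user receiver processing of equations (48)-(49) may be sketched as follows (illustrative; the gains g and modulus M are assumptions): each user applies only its scalar gain, and NLP-supporting users additionally apply the modulo operation:

```python
import numpy as np

# Sketch of the non-cooperative receiver processing of equations (48)-(49).
def receiver_estimates(r, g, supports_nlp, M=8.0):
    mod_M = lambda x: (np.mod(x.real + M/2, M) - M/2) \
                    + 1j * (np.mod(x.imag + M/2, M) - M/2)
    s_hat = g * r                              # diag(g) r, applied per user
    s_hat[supports_nlp] = mod_M(s_hat[supports_nlp])   # modulo for NLP users only
    return s_hat

r = np.array([2.1 - 0.9j, 9.2 + 1.1j, -3.0 + 3.1j])
g = np.array([1.0, 1.0, 1.0])
print(receiver_estimates(r, g, np.array([False, True, False])))
```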


The application of the complex-valued scalar gain factors F(1), F(2), F(3), F(4), F(5) shown in FIG. 6 for the NLP supporting CPE receiver units, in the case where F(k)≠1, is generally achieved in two steps. The first step involves compensating for the communication channel attenuation gains, given by the diagonal of the L matrix (i.e., diag(L)). The second step involves compensating for gains affected by the modulus operation. In this case an estimated symbol for a k-th NLP supporting CPE receiver unit is given by:













ŝ_NLP(k) = \frac{1}{f_Π(k)} mod\left( \frac{1}{D_Π(k,k)} r(k) \right),  (50)







where D_Π=ΠD_LΠ^T; D_L=diag(diag(L)); and f_Π=diag(Πdiag(f)Π^T), as the received symbol at the receiver side (CPE units), prior to frequency equalization and prior to the addition of noise, is given by:






y = ΠD_L(diag(f)Π^T s + {tilde over (c)}) = ΠD_LΠ^T(Πdiag(f)Π^T s + Π{tilde over (c)})  (51).


Recalling that the diagonal matrix D (after factorization) in FIG. 4 is D=DLdiag(f), vector r is scaled by components of the diagonal matrix ΠDLΠT thus obtaining:





(ΠD_LΠ^T)^{-1} y = Πdiag(f)Π^T s + Π{tilde over (c)}  (52).


Subsequently, the modulo operation is performed, removing the permuted auxiliary perturbation Π{tilde over (c)} (where {tilde over (c)}=Πc), after which the second scaling step is performed, in accordance with equation (50), thereby obtaining:





(Πdiag(f)Π^T)^{-1} mod((ΠD_LΠ^T)^{-1} y) = s  (53).


In practice, this procedure is applied to received vector r, thereby arriving at equation (50).


Note that for the standard THP case where F(k)≡1, the k-th estimated symbol reduces to:












ŝ_NLP(k) = mod\left( \frac{1}{D_Π(k,k)} r(k) \right).  (54)







The estimated symbol ŝNLP(k) corresponds to the NLP group of users at the receiver side (CPE units). At the transmitter side (DPU), the processor applies the modulus operation, after the permutation block, to symbols whose indices correspond to the NLP group of users (while it is not applied to symbols whose indices correspond to the LP group of users). For the LP group of CPE receiver units, the estimated symbol for an l-th CPE receiver unit is:














ŝ_LP(l) = \frac{1}{f_Π(l)} \cdot \frac{1}{D_Π(l,l)} r(l),  (55)







which is performed in one step (while two steps are also possible). Equation (31) may be rewritten to account for the THP scheme in the case where f(k)=1 for index k corresponding to the permuted NLP group of users as: LUm=diag(f)ΠT(s+c), where c=ΠT{tilde over (c)}.


It is noted that the precoder given by P=QR^{-H}, where Q ∈ ℂ^{N×K} and R ∈ ℂ^{K×K} (where the columns of matrix Q are orthonormal), based on the so-called thin (or reduced) QR factorization (due to K<N) performed for H^H as H^H=QR (and hence H=R^H Q^H), leads to the Moore-Penrose pseudoinverse. We straightforwardly observe that:












P = H^H(HH^H)^{-1} = QR(R^H Q^H QR)^{-1} = QR(R^H R)^{-1} = QR(R^{-1} R^{-H}) = QR^{-H}.  (56)







Therefore, the precoder based on the QR decomposition represents a particular pseudoinverse. One may readily observe that the same is true for QR decomposition with permutations, as described hereinabove.


It is assumed that the number of transmitters is N, and that K≤N. In a wireless system, N represents the number of transmitting antennas, and in a wire-line system N is the number of CPE units (e.g., modems) communicating through binder 110 (FIG. 1). The K CPE units which receive information (at a particular frequency) are regarded as active CPE units ("active users") at that particular frequency. The same CPE unit (user) may be active at some frequency tones and non-active at other frequency tones. To further explain the meaning of an activity level of a particular CPE unit at a particular subcarrier frequency, reference is further made to FIGS. 14A and 14B. FIG. 14A is a table showing a database of supportabilities and activity levels of each CPE unit at a particular point in time, generally referenced 450, constructed and operative in accordance with the disclosed technique. FIG. 14B is a schematic diagram showing a graph of a particular example of activity levels of CPE units, ordered according to (relative) communication link quality, as a function of subcarrier frequency at a particular point in time, generally referenced 480, in accordance with the disclosed technique.


Database 450, shown in tabulated form, includes a CPE number field 452, a supportability field 454, and an activity level field 456. CPE number field 452 includes a numbered list of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N (FIG. 1). Supportability field 454 includes an ordered list of supportabilities 1201, 1202, 1203, . . . , 120N-1, 120N (FIG. 1), each associated (index-wise) with a particular CPE unit. Activity level field 456 includes an ordered list of activity levels 1221, 1222, 1223, . . . , 122N-1, 122N (FIG. 1) (per tone), each associated (index-wise) with a particular CPE unit. Activity level field 456 includes a plurality of subfields: an ON/OFF subfield 458, a synchronization subfield 460, a frequency response subfield 462, and an activity level per tone (subcarrier frequency) subfield 464. Supportability field 454 includes a list of supportabilities 1201, 1202, 1203, . . . , 120N-1, 120N defining for each CPE unit its ability to decode at least one of NLP data and LP data (i.e., with respect to hardware, software, firmware, etc.). ON/OFF subfield 458 lists for each CPE unit whether it is switched 'on' or otherwise not (e.g., 'off', non-existent, etc.). Synchronization subfield 460 lists for each CPE unit whether it is synchronized for transceiving data with corresponding transceivers 1161, 1162, 1163, . . . , 116N-1, 116N of DPU 104 (FIG. 1) (e.g., in the "showtime" state). Frequency response subfield 462 lists for each CPE unit its corresponding frequency response (i.e., a measure of the magnitude (and phase) of a signal propagating via the communication channels as a function of subcarrier frequency). Activity level per tone subfield 464 lists for each CPE unit whether it is considered active or inactive according to a determination made by DPU 104. Specifically, at least one of controller 112 (FIG. 1) and processor 114 of DPU 104 is configured and operative to employ a decision rule (e.g., an algorithm) for determining active CPE units (K) and inactive CPE units from the total number of CPE units (N) according to various criteria, including data from ON/OFF subfield 458, synchronization subfield 460, and frequency response subfield 462 (e.g., taking account of SNR), and according to various constraints including optimization criteria such as the maximization of average rate, maximization of max-min rate, and the like. It is noted that the data stored by database 450 are time-dependent (i.e., subject to change over time). Consequently, the activity level for each CPE unit is both subcarrier frequency-dependent and time-dependent.
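One possible in-memory sketch of database 450 is shown below (illustrative only; the key names and layout are assumptions, not the stored format):

```python
# Sketch of one possible in-memory representation of database 450 (field and
# key names are illustrative assumptions).
database_450 = {
    "cpe_units": [
        {
            "cpe_number": 1,                                  # field 452
            "supportability": {"nlp": False, "lp": True},     # field 454
            "activity": {                                     # field 456 subfields
                "on": True,                                   # ON/OFF subfield 458
                "synchronized": True,                         # subfield 460
                "frequency_response": None,                   # subfield 462
                "active_per_tone": {101: True, 2048: False},  # subfield 464
            },
        },
        # ... one entry per CPE unit 106_2 .. 106_N
    ]
}
```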


Further reference is made to FIG. 14B showing a graph 480 of a particular example of activity levels of CPE units ordered according to (relative) communication link quality (performance, e.g., channel capacity) as a function of subcarrier frequency at a particular point in time. Graph 480 includes a vertical axis 482 and a horizontal axis 484. The CPE units are ordered along vertical axis 482 according to relative communication link quality, i.e., CPE units exhibiting a relatively high communication link quality (e.g., SNR) are positioned at a relatively higher position along vertical axis 482 in comparison to CPE units exhibiting relatively lower communication link quality. For demonstration purposes, the CPE unit exhibiting the highest communication link quality (in relative terms) is referenced 488 in FIG. 14B, whereas the CPE unit exhibiting the lowest link quality is referenced 486. Horizontal axis 484 represents frequency (i.e., a range of subcarrier frequencies); the higher the frequency, the farther along this axis it lies. A point in graph 480 is defined by a (horizontal, vertical) coordinate that corresponds to a particular CPE unit at a particular subcarrier frequency. A point located on curve 490 or within shaded area 492 represents that the corresponding CPE unit is inactive at that subcarrier frequency, whereas a point located outside shaded area 492, i.e., within non-shaded area 494, represents that it is active. Shaded area 492 is defined by curve 490, which is characteristic of communication system 100. Curve 490 is a typical representation of a characteristic tendency of SNR decline at higher frequencies, but is given only as an example for explicating the disclosed technique, for it can assume other forms (e.g., non-monotonic, continuous, discontinuous, etc.). FIG. 14B illustrates that a particular CPE unit may be active at some subcarrier frequencies and inactive at others. In particular, for the example given, CPE unit 488, exhibiting the relatively highest communication link quality, is active at all subcarrier frequencies, whereas CPE unit 486, exhibiting the lowest link quality, is inactive from a threshold subcarrier frequency fγ. Each CPE unit may have its respective threshold subcarrier frequency. In the wireless case, a drop in the SNR at a specific frequency may be due to a property of the communication channel (including the environment). Alternatively, inactive CPE units may be determined according to an optimization algorithm (e.g., one that maximizes the average rate for all CPE units).


A memory (not shown) is configured and operative to store database 450. The memory is typically configured to be coupled with at least one of controller 112 and processor 114 of DPU 104 (FIG. 1). In one implementation, the memory is intrinsic to DPU 104. In another implementation, the memory is distinct from and external to DPU 104, accessible via known communication methods (e.g., wire-line, wireless, internet, intranet communication techniques, etc.).


The precoder constructed according to the disclosed technique to solve the interoperability problem of CPE units having different supportabilities relates to the general case where K≤N, but more specifically relates to the more typical and tougher case where K<N at a particular subcarrier frequency. As aforementioned, a solution to the special simple case where K=N has already been proposed by the prior art. The case where K<N may arise, for example, in situations where some CPE units do not have a large enough SNR (e.g., due to significant channel signal attenuation in a high-frequency portion of the transmitted spectrum of frequencies (e.g., corresponding to moderate and lengthy communication lines)). This case can be expressed, for example, when there is insufficient SNR for transmitting a certain number of bits in a constellation at a particular frequency (e.g., calculated when the bit-loading is less than a specific value). The communication lines corresponding to these CPE units may be reused to transmit information intended for the benefit of the remaining CPE units (or a subset thereof), instead of for themselves (i.e., precoding them with information symbols for other users). For example, if the ZF precoder is utilized, it can be based on the communication channel pseudoinverse. In an extreme case (i.e., beamforming) where all but one CPE unit are inactive (i.e., only a single CPE unit is active or operational), all communication lines are utilized to transmit to that CPE unit. Note that a particular CPE unit may be active at some subcarrier frequencies and inactive at other subcarrier frequencies. The solution of the disclosed technique relates to, and is attained on, a subcarrier frequency basis.
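As a minimal numerical illustration of the K<N case (assuming a random channel; this is a sketch of a pseudoinverse-based ZF precoder only, not the full precoder construction of the disclosed technique):

```python
import numpy as np

# Sketch: K = 2 active receivers, N = 4 transmit lines; a ZF precoder built from the
# Moore-Penrose pseudoinverse of the K x N channel cancels the crosstalk while reusing
# all N lines for the K active users. Sizes and the channel are assumed example values.
rng = np.random.default_rng(4)
K, N = 2, 4
H = rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))   # rows of H for the active users
P = np.linalg.pinv(H)                                                # N x K right pseudoinverse precoder
print(np.allclose(H @ P, np.eye(K)))                                 # True: effective channel is diagonal
```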


The disclosed technique is configured and operative to employ an algorithm for determining active users from a total number of users according to various criteria, e.g., depending on optimization criteria such as the maximization of average rate, maximization of max-min rate, etc. In that regard, let us recall equation (19), which may be rewritten as:






$L_2 m_2 = D_2 \Pi_2^T (s_2 + c_2) - M_{21} m_1 \qquad (57).$


Noting that m1 is already known, we introduce:





$\delta_1 = -D_2^{-1} \Pi_2 M_{21} m_1 \qquad (58).$


For a general vector precoding scheme there is no assumption that D2=diag(diag(L2)). The diagonal D2 represents additional degrees of freedom. Then we have:






$m_2 = L_2^{-1} D_2 \Pi_2^T (s_2 + \tilde{\delta}_1 + c_2) \qquad (59),$


where the perturbation vector c2 is determined by applying optimization criteria which, for a given D2, minimize a power criterion applied to vector m2. For example, the optimization involves minimization of a q-dimensional (absolute-value) norm ∥m2∥q, where the following standard notation is used:









$\|z\|_q = \left( \sum_{l=1}^{L} |z_l|^q \right)^{1/q}$
for a vector z of length L. Such minimization may be applied for every realization of s2 and s1 (where s1 governs m1 and therefore {tilde over (δ)}1). Alternatively, minimization is applied to the statistical power averages. Different types of norms of different dimension q may be used. For example, the norm ∥m2∥2 is related to the total power of all m2 components, while minimization of ∥m2∥∞ minimizes the maximal power per component. The components of c2 are c2(k)=τkpk. The modulus τk depends on the constellation size of s2(k); alternatively, c2(k)=τpk, where the same modulus value is applied to all constellations (i.e., determined by the maximal allowed constellation size as described below). The complex number pk is what is sought via the optimization process (typically in the form of signed integer components, i.e., its real and imaginary components are signed integers). For a constant modulus, the disclosed technique may minimize the instantaneous power value related to the currently transmitted symbols or, alternatively, the average power value over a given time period.
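For illustration, the following sketch performs a brute-force search for the perturbation vector under the q-norm criterion just described; the matrix sizes, the choice of D2 and Π2, the modulus τ, and the search radius are assumed example values, and a practical system would use a more efficient search (e.g., sphere decoding, as noted later in this description).

```python
import itertools
import numpy as np

# Illustrative brute-force perturbation search (not the patented method itself): find
# c2 = tau * p, with p having signed-integer real/imaginary parts, that minimizes the
# q-norm of m2 = inv(L2) @ D2 @ Pi2.T @ (s2 + delta1_tilde + c2), cf. equation (59).
# Sizes, D2, Pi2, tau and the search radius are assumed example values.
rng = np.random.default_rng(0)
K2 = 2
L2 = np.tril(rng.standard_normal((K2, K2)) + 1j * rng.standard_normal((K2, K2)))
np.fill_diagonal(L2, np.abs(np.diag(L2)) + 1.0)        # keep L2 well conditioned
D2 = np.diag(np.abs(np.diag(L2)))                      # one possible diagonal scaling
Pi2 = np.eye(K2)                                       # identity permutation for simplicity
s2 = rng.choice([-3, -1, 1, 3], K2) + 1j * rng.choice([-3, -1, 1, 3], K2)   # 16-QAM symbols
delta1_tilde = 0.1 * (rng.standard_normal(K2) + 1j * rng.standard_normal(K2))
tau = 8.0                                              # modulus (example value)

def m2_norm(p, q=2):
    """Power criterion ||m2||_q for a candidate signed-integer perturbation p."""
    m2 = np.linalg.solve(L2, D2 @ Pi2.T @ (s2 + delta1_tilde + tau * p))
    return np.linalg.norm(m2, q)

# Exhaustive search over a small signed-integer neighbourhood (radius 1 per component).
grid = [a + 1j * b for a in (-1, 0, 1) for b in (-1, 0, 1)]
best_p = min((np.array(c) for c in itertools.product(grid, repeat=K2)), key=m2_norm)
print("best perturbation p:", best_p, "-> ||m2||_2 =", round(m2_norm(best_p), 3))
```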


To further elucidate the disclosed technique, reference is now made to FIG. 15, which is a schematic block diagram illustrating a specific implementation of the general hybrid-interoperability precoding scheme, specifically showing delineation into two paths, generally referenced 500, constructed and operative in accordance with the disclosed technique. FIG. 15 essentially shows a hybridization arrangement of two delineated data paths, namely, an LP path 502 corresponding to the LP group of CPE receiver units (users) implementing LP, and an NLP path 504 corresponding to the NLP group of CPE receiver units implementing NLP. The principles of general hybrid-interoperability precoding scheme 500 conform to the principles of the disclosed technique heretofore described.


Initially, it is noted that vectors s1, m1, δ1, d1, y1, n1, r1, and ŝ1 are of dimension K1, and vectors s2, m2, d2, y2, n2, r2, and ŝ2 are of dimension K2. Starting from LP path 502, information symbols, represented by a vector s1 (intended to be linearly precoded), are inputted to a permutation block 506 configured and operative to perform permutation of the vector elements of s1 according to a permutation matrix Π1′ (where Π1′ is a K1×K1 permutation matrix) (similarly to the permutation matrix of block 334 in FIG. 7). The permuted information symbols from permutation block 506 enter a block 508 configured and operative to perform power scaling according to a matrix D1 (D1=diag(d1)), the output of which is directed to a block 510, which in turn is configured and operative to perform inversion of matrix L1 (i.e., to calculate inv(L1), where L1 is a lower diagonal matrix of dimensions K1×K1) (analogous to block 422 in FIG. 13). The system and method of the disclosed technique is configured and operative to optimize the linear precoder by optimized selection of the diagonal scaling values of D1 and of permutation matrix Π1. Block 510 outputs linearly precoded symbols, denoted by a vector m1, which constitutes an input to a concatenation block 526, as well as to a block 512, which in turn is utilized in NLP path 504. Block 512 is configured and operative to multiply vector m1 by a matrix M21 (FIG. 8) of dimensions K2×K1 (where L1, L2 and M21 are partitions of a lower diagonal matrix L of dimensions K×K, and K=K1+K2) and to output an offset vector δ1, which in turn is input to a block 514. Block 514 is configured and operative to multiply offset vector δ1 by (−1), which yields a result in the form of equation (39).


Referring now to NLP path 504, information symbols represented by a vector s2 (intended to be nonlinearly precoded) are inputted to a permutation block 516 configured and operative to perform permutation of the vector elements of s2 according to a permutation matrix Π2′ (where Π2′ is a K2×K2 permutation matrix) (similarly to the permutation matrix of block 336 in FIG. 7). Adder 518 combines the permuted information symbols with a perturbation vector c2 and outputs the result to a block 520 configured and operative to perform power scaling according to a matrix D2 (D2=diag(d2)). Similarly to LP, the system and method of the disclosed technique is configured and operative to optimize the nonlinear precoder by optimized selection of the diagonal scaling values of D2 and of permutation matrix Π2, as well as by optimized selection of the perturbation vector c2. Adder 522 is configured and operative to combine the outputs of block 520 and block 514 and to produce a result which is directed to a block 524, which in turn is configured and operative to perform inversion of matrix L2 (i.e., to calculate inv(L2), where L2 is a lower diagonal matrix of dimensions K2×K2). It is noted that permutation blocks 506 and 516 may be regarded as a second permutation stage (e.g., stage 2, referenced 332 in FIG. 7) that follows a first permutation stage (e.g., stage 1, referenced 330 in FIG. 7). Block 524 outputs nonlinearly precoded symbols, denoted by a vector m2, which constitutes another input to concatenation block 526.


Concatenation block 526 is constructed and operative to perform concatenation of the K1 vector components of m1 and the K2 vector components of m2 into a concatenated vector m of K vector components. Essentially, vector 256, m (FIG. 5), consists of two sub-vectors 2561, m1, and 2562, m2, corresponding to two aggregate groups of vector elements respectively associated with the LP group and the NLP group of CPE units. The outputted concatenated vector m is fed into a Q block 528 configured and operative to perform the QR decomposition H^HΠ=QR (where the channel matrix is H and Π represents a block-diagonal permutation matrix constructed from Π1 and Π2). A matrix L is defined by L=R^H. Q block 528 outputs an output signal represented by an output vector o, which also constitutes the transmitted output signal propagating from the transmitter side via a communication channel 530 for reception at the receiver side.


At the receiver side, LP path 502 corresponds to LP CPE units configured and operative to receive a signal, represented by a received vector r1, which is the sum of a vector y1 and noise n1, denoting that the received information symbols propagated through communication channel 530 include additive noise 532. The received signal (vector r1) is inputted to a block 534, which in turn is configured and operative to perform scaling of the received signal per component according to inv(Π1D1Π1T), the result of which yields outputted estimated symbols ŝ1. NLP path 504 corresponds to NLP-supporting CPE units configured and operative to receive a signal, represented by a received vector r2, which is the sum of a vector y2 and noise n2, denoting that the received information symbols propagated through communication channel 530 include additive noise 536. The received signal (vector r2) is inputted to a block 538, which in turn is configured and operative to perform scaling of the received signal per component according to inv(Π2D2Π2T); the resulting signal is fed into modulo operation block 540, which in turn is configured and operative to apply a modulo operation to the signal outputted by block 538, the result of which yields outputted estimated symbols ŝ2.
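The signal flow of FIG. 15 may be illustrated end to end by the following simplified numerical sketch; it assumes the square case N=K, identity permutations, a random channel, a noiseless link, and an example perturbation vector, and is an illustration of the described data paths rather than the complete scheme.

```python
import numpy as np

# Minimal numerical sketch (assumed sizes, random channel, no noise) of the FIG. 15
# flow: QR-based hybrid precoding with K1 linearly precoded and K2 nonlinearly
# precoded users, for the simplified square case N = K = K1 + K2.
rng = np.random.default_rng(1)
K1, K2 = 2, 2
K = K1 + K2
tau = 8.0

H = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))   # channel matrix
Pi = np.eye(K)                                                        # block-diagonal permutation (identity here)
Q, R = np.linalg.qr(H.conj().T @ Pi)                                  # H^H * Pi = Q R
L = R.conj().T                                                        # lower-triangular L = R^H
L1, M21, L2 = L[:K1, :K1], L[K1:, :K1], L[K1:, K1:]                   # partitions of L

D1 = np.diag(np.abs(np.diag(L1)))                                     # diagonal scaling (one possible choice)
D2 = np.diag(np.abs(np.diag(L2)))
Pi1, Pi2 = np.eye(K1), np.eye(K2)

s1 = rng.choice([-1, 1], K1) + 1j * rng.choice([-1, 1], K1)           # LP-group symbols (4-QAM)
s2 = rng.choice([-1, 1], K2) + 1j * rng.choice([-1, 1], K2)           # NLP-group symbols
c2 = tau * np.array([1 + 0j, 0 - 1j])                                 # example perturbation (multiples of tau)

m1 = np.linalg.solve(L1, D1 @ Pi1.T @ s1)                             # linear precoding path
m2 = np.linalg.solve(L2, D2 @ Pi2.T @ (s2 + c2) - M21 @ m1)           # nonlinear path, cf. equation (57)
o = Q @ np.concatenate([m1, m2])                                      # transmitted signal

y = H @ o                                                             # noiseless channel output
r1, r2 = y[:K1], y[K1:]
mod = lambda x: x - tau * (np.floor(x.real / tau + 0.5) + 1j * np.floor(x.imag / tau + 0.5))
s1_hat = np.linalg.inv(Pi1 @ D1 @ Pi1.T) @ r1                         # LP receivers: per-component scaling only
s2_hat = mod(np.linalg.inv(Pi2 @ D2 @ Pi2.T) @ r2)                    # NLP receivers: scaling, then modulo
print(np.round(s1_hat, 3), np.round(s2_hat, 3))                       # recovers s1 and s2 (up to numerics)
```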


The optimized selection of the scaling values of D1 and D2, the permutation matrices Π1 and Π2, as well as the perturbation vector c2, may be determined, for example, according to various criteria such as the minimization of the power involving m2 as discussed hereinabove in conjunction with equation (59). It is noted that the dimension of the perturbation vector is K2, which corresponds to the dimension of the input information symbol vector s2. The functional split into linear and nonlinear precoders solves the interoperability problem, improves performance of the nonlinear precoder for small constellations, and reduces the dimensionality of the nonlinear precoder. Regarding NLP dimensionality reduction, it is known that determining the perturbation vector may involve large computational complexity (increasing significantly with vector dimension); therefore it may be beneficial to reduce it by reducing the number of NLP users, and the system and method of the disclosed technique allows this by allocating part of the NLP users to the LP group of users (i.e., all users remain precoded).


Alternatively, optional scaling of the output from the transmitter side by use of a scaling gain factor α is also viable. To further elucidate this implementation, reference is now made to FIG. 16, which is a schematic block diagram illustrating another specific implementation of the general hybrid-interoperability precoding scheme, specifically showing delineation into two paths, generally referenced 550, constructed and operative in accordance with the disclosed technique. FIG. 16 essentially shows a hybridization arrangement of two delineated data paths similar to that of FIG. 15, apart from several modifications described hereinbelow. FIG. 16 shows two paths, namely, an LP path 552 corresponding to the LP group of CPE receiver units (users) implementing LP, and an NLP path 554 corresponding to the NLP group of CPE receiver units implementing NLP. The principles of general hybrid-interoperability precoding scheme 550 conform to the principles of the disclosed technique heretofore described. General hybrid-interoperability precoding scheme 550 hybridizes between linear precoding and Tomlinson-Harashima vector precoding.


Initially, and similarly to the description heretofore provided in conjunction with FIG. 15, it is noted that vectors s1, m1, δ1, d1, y1, n1, r1, and ŝ1 are of dimension K1, and vectors s2, m2, d2, y2, n2, r2, and ŝ2 are of dimension K2, with D1=diag(d1) and D2=diag(d2). α is a scalar used for (optional) power scaling. Π1 is a K1×K1 permutation matrix, and Π2 is a K2×K2 permutation matrix. L1 is a lower diagonal matrix of dimensions K1×K1. L2 is a lower diagonal matrix of dimensions K2×K2. M21 is of dimensions K2×K1. L1, L2 and M21 are partitions of the lower diagonal matrix L of dimensions K×K, and Π represents a block-diagonal permutation matrix constructed from Π1 and Π2.


Starting from LP path 552, information symbols represented by a vector s1 are inputted to the following blocks 556, 558, 560, and 562, which are all respectively configured and operative identically to blocks 506, 508, 510, and 512 in FIG. 15. Owing to the lower triangular structure of the L1 matrix, its optimization is not influenced by, and is independent of, the nonlinear precoder. Block 562 outputs an offset vector δ1, which in turn is input to a block 564, which in turn is configured and operative to apply inversion to the diagonal of matrix L2 (i.e., inv(diag(L2))) and to output a signal to a block 566 configured and operative identically to block 514 in FIG. 15. An output of LP path 552 is a vector m1.


Turning now to NLP path 554, information symbols represented by a vector s2 are inputted to a permutation block (configured and operative identically to permutation block 516 in FIG. 15). Adder 570 combines the permuted information symbols generated by this permutation block with an output of block 566, and the result is fed into a THP block 572, which in turn is configured and operative to perform THP with respect to L2 (i.e., inversion of matrix L2 according to the principles of THP). THP is performed on the permuted information symbols s2 (permuted via permutation matrix Π2), from which a component-wise scaled vector δ1 (i.e., δ1 multiplied by the diagonal matrix inv(D2)) is subtracted (corresponding to the subtraction of the linearly precoded users from the nonlinearly precoded users). In this scheme, THP block 572 determines D2=diag(diag(L2)) (i.e., D2 is set to the diagonal of L2), thereby yielding an output signal represented by a vector m2.


Vectors m1 and m2 are inputted to a concatenation block 574 and then to a Q block 576, both of which are respectively identical in construction and operation to blocks 526 and 528 in FIG. 15. An output from Q block 576 is inputted into a scalar gain block 578, which in turn is configured and operative to apply the scalar gain factor α to produce an output signal, represented by a vector o, which constitutes the output from the transmitter side communicated via a communication channel 580 to the receiver side.


At the receiver side, LP path 552 corresponds to LP CPE units configured and operative to receive a signal, represented by a received vector r1, which is the sum of a vector y1 and additive noise 582, n1. The received signal (vector r1) is inputted to a block 584, which in turn is configured and operative to perform scaling of the received signal per component according to







$(1/\alpha) \cdot \mathrm{inv}(\Pi_1 D_1 \Pi_1^T),$






the result of which yields outputted estimated symbols ŝ1. NLP path 554 corresponds to NLP-supporting CPE units configured and operative to receive a signal, represented by a received vector r2, which is the sum of a vector y2 and additive noise 586, n2. The received signal (vector r2) is inputted to a block 588, which in turn is configured and operative to perform scaling of the received signal per component according to







$(1/\alpha) \cdot \mathrm{inv}(\Pi_2 D_2 \Pi_2^T);$






the resulting signal is fed into a modulo operation block 590, which in turn is configured and operative to apply a modulo operation to the signal outputted by block 588, the result of which yields outputted estimated symbols ŝ2.


Alternatively, according to another implementation of system 102, the scalar factor may be distinctive (e.g., not common, i.e., different) for different CPE units (users). In accordance with this alternative implementation, reference is now made to FIG. 17A, which is a schematic diagram illustrating an example of a specific implementation of the general hybrid-interoperability precoding scheme, utilizing different scalar factors, generally referenced 600, configured and operative in accordance with the disclosed technique. Implementation 600 generally provides a system and method for nonlinear precoding of an information symbol conveying data for transmission (over a particular subcarrier frequency) between a plurality of transmitters 1161, . . . , 116N (FIG. 1) and a plurality of receivers 1181, . . . , 118N (FIG. 1) via a plurality of communication channels 1081, . . . , 108N (FIG. 1), where the communication channels define a channel matrix H. A processor (e.g., 114, FIG. 1) is configured to determine a weighting matrix G, whose number of rows is equal to the number of transmitters, and to further determine a modified channel matrix equal to HG, so as to construct a nonlinear precoder for performing nonlinear precoding of the modified channel matrix. The weighting matrix G can be square (i.e., where the number of rows equals the number of columns).


The weighting matrix G is diagonal, having diagonal elements representing individual gains each associated with a particular one of the transmitters. In this case we denote the individual scalar factors by means of a vector α, where each vector element represents an individual scalar factor, each of which is associated with a particular CPE unit (i.e., the elements in vector α represent specific power scaling factors that are distinctive for each one of the CPE units). Permutation of inputted information symbols 602 may optionally be performed. In this case, information symbols 602, s, are inputted into a permutation block 604, which is configured and operative to permute the information symbol elements in vector s according to a permutation matrix Π′. Each information symbol element in vector s is associated with a particular CPE unit. The permutation takes into consideration the partition of the CPE units into two groups (those that employ NLP and those that do not). Adder 610 combines an output from permutation block 604 with a perturbation vector 608, denoted by c, the result of which is a signal 612 that is inputted to a block 614, which in turn is configured and operative to perform the operation inv(R′)*D and to output a signal 616 represented by a vector m. Signal 616 is inputted into a Q block 618 configured to perform QR decomposition, the result of which is inputted into a gain block 619, denoted by α*G. Gain block 619 is configured and operative to construct a diagonal square weighting matrix {tilde over (G)} according to {tilde over (G)}=diag(α), whose number of rows is equal to the number of transmitters, and to output an output signal 622 that is communicated over a communication channel, denoted by a block 624. Block 614, in conjunction with Q block 618, permutation block 604, gain block 619, and perturbation vector 608, collectively constitutes a precoder 620, denoted by P, which in turn is configured and operative to perform pseudo-inversion of a communication channel matrix H. In the general case, implementation 600 of the disclosed technique uses QR decomposition to determine a modified channel matrix equal to HG. In a particular case involving permutations (i.e., permutation block 604, Π′), the processor is further configured for permuting the information symbol elements in the information symbol vector, per subcarrier frequency, where each information symbol element is associated with a particular one of the CPE units. The disclosed technique utilizes an inverse of matrix G for precoding, and in particular, for factorizing the communication channel matrix H according to (in the specific case utilizing permutations):






$H = \Pi R^H Q^H \tilde{G}^{-1} \qquad (60).$


A signal 626 results from propagation through communication channel 624 and is received at the receiver side. A received signal 632 at the receiver side, represented by a vector r, is the sum of signal 626 (y) and additive noise 628 (represented by a vector n), signifying that the precoded information symbols propagated through the communication channel include additive noise (combined at adder 630). Received signal 632 is inputted to a block 634 configured and operative to perform scaling by inverting the permuted diagonal vector custom-character, the output of which is denoted by an output signal 636. For NLP-supporting CPE units, output signal 636 represents an input to a modulo operation block 638, which is configured and operative to apply a modulo arithmetic operation to signal 636 and to output an output signal 640 of estimated symbols, represented by a vector ŝ. For those CPE units whose supportability does not include NLP (i.e., the LP group of users), output signal 636 is effectively output signal 640 (i.e., no modulo operation is applied).


Alternatively, {tilde over (G)} consists of a combination of a variable gain factor with a common gain factor (to all CPE units). This may be expressed by: δ=α*η, where a scalar η represents a common gain factor and the vector α represents the variable gain factor (individual to each CPE unit). Thus, we may choose:






$\tilde{G} = \mathrm{diag}(\delta) \qquad (61).$


The QR decomposition in such case is calculated via the modified channel as follows:






$H \tilde{G} = \Pi R^H Q^H \qquad (62),$


i.e., $\Pi^H H \tilde{G} = R^H Q^H$, hence $(H \tilde{G})^H \Pi = Q R$ and thus $\tilde{G}^H H^H \Pi = Q R$. The introduced gains do not affect the channel diagonalization. They represent additional degrees of freedom, which may be used for performance optimization.
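A minimal numerical check of the factorization in equations (60)-(62), assuming a random square channel, an identity permutation, and example per-user gains, may be sketched as follows:

```python
import numpy as np

# Sketch (assumed sizes, random channel) of the per-user gain idea around equations
# (60)-(62): fold a diagonal weighting matrix G into the channel, take the QR
# decomposition of the modified channel, and check that H = Pi R^H Q^H G^{-1}.
rng = np.random.default_rng(2)
N = 4
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
alpha = np.array([1.0, 0.8, 1.2, 0.9])          # per-user gain factors (example values)
G = np.diag(alpha)                               # diagonal weighting matrix
Pi = np.eye(N)                                   # permutation (identity for simplicity)

Q, R = np.linalg.qr((H @ G).conj().T @ Pi)       # (H G)^H Pi = Q R, cf. equation (62)
H_reconstructed = Pi @ R.conj().T @ Q.conj().T @ np.linalg.inv(G)   # cf. equation (60)
print(np.allclose(H, H_reconstructed))           # True: the gains do not affect diagonalization
```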


Reference is now further made to FIG. 17B, which is a schematic block diagram of a method for a specific implementation of nonlinear precoding utilizing different scalar factors, generally referenced 650, configured and operative in accordance with the disclosed technique. Method 650 initiates with procedure 652. In procedure 652 a weighting matrix G, whose number of rows is equal to the number of transmitters is determined, where nonlinear precoding of an information symbol is employed for conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels defining a channel matrix H, over a particular subcarrier frequency. With reference to FIGS. 1 and 17A, processor 114 (FIG. 1) executes block 620 (FIG. 17A) by determining a weighting matrix G via block 619 (FIG. 17A).


In procedure 654, a modified channel matrix equal to HG is determined. With reference to FIGS. 1 and 17A, processor 114 (FIG. 1) executes block 620, specifically uses QR decomposition (blocks 614, Q block 618, and block 619) to determine a modified channel matrix equal to HG.


In procedure 656, a nonlinear precoder is constructed for performing nonlinear precoding of the modified channel matrix. With reference to FIGS. 1 and 17A, processor 114 (FIG. 1) executes block 620 by constructing a precoder via blocks 614, 618 and 619 (FIG. 17A).


Reference is now made to FIG. 18, which is a schematic block diagram of a method for a hybrid-interoperability precoding scheme supporting both linear and nonlinear precoding, generally referenced 680, constructed and operative according to the embodiment of the disclosed technique. Method 680 initiates with procedure 682. In procedure 682, information pertaining to supportabilities of a plurality of receivers to decode nonlinearly precoded data is received by a plurality of transmitters. The transmitters are communicatively enabled to transmit an information symbol conveying data to the receivers via a plurality of communication channels over a subcarrier frequency, where the number of transmitters (N) is different than the number of active receivers (K) for that subcarrier frequency. With reference to FIGS. 1 and 3A, DPU 104 (FIG. 1) (includes transceivers 1161, 1162, 1163, . . . , 116N-1; 116N) acquires supportabilities 1201, 1202, 1203, . . . , 120N-1, 120N (to decode at least one type of precoded data: LP data, NLP data, or both) respectively from receivers 1061, 1062, 1063, . . . , 106N-1, 106N during initialization. DPU 104 (FIG. 1) including its transceivers 1161, 1162, 1163, . . . , 116N-1, 116N are communicatively enabled to transmit information symbol 160A (represented by vector o, FIG. 3A) conveying data to receivers 1061, 1062, 1063, . . . , 106N-1, 106N (FIG. 1) via respective communication channels 1081, 1082, 1083, . . . , 108N-1, 108N over a subcarrier frequency, where the number of transmitters is different than the number of active receivers for that subcarrier frequency. DPU 104 (FIG. 1) continually determines (e.g.; via pings; via bi-directional messages, etc.) activity levels 1221, 1222, 1223, . . . , 122N-1, 122N (per tone) of each of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N.


In procedure 684, a precoding scheme defining for which of the receivers the data to be transmitted by the transmitters shall be precoded using at least one of linear precoding and non-linear precoding, according to the supportabilities is determined. With reference to FIGS. 1, 3A, 4, 5, and 15, processor 114 (FIG. 1) determines a precoding scheme that associates each information symbol element of an inputted symbol vector 152A (FIG. 3A), s, to either LP or NLP, according to supportabilities 1201, 1202, 1203, . . . , 120N-1, 120N (FIG. 1). In a particular implementation, permutation block 204 (FIG. 4) permutes information symbol elements in vector s into two successive aggregate groups (represented respectively by sub-vectors 2541 and 2542 in FIG. 5), namely, a first aggregate group of symbol elements s1 (LP path 502 in FIG. 15) corresponding to the LP group of CPE units implementing LP, successively followed by a second aggregate group of symbol elements s2 (NLP path 504, FIG. 15) corresponding to the NLP group of CPE units implementing NLP.


In procedure 686, a signal is constructed by applying a reversible mapping to the information symbol, where the reversible mapping includes elements each respectively associated with a particular one of the receivers, such that those receivers supporting the decoding of nonlinearly precoded data are capable of reversing the reversible mapping to the information symbol, while for those receivers not supporting the decoding of nonlinearly precoded data the information symbol is unaffected by the reversible mapping. With reference to FIGS. 1, 3A, 4, 5, 6, and 10, processor 114 (FIG. 1) constructs a signal 156A (FIG. 3A) by applying a reversible mapping 154A (FIG. 3A), e.g., perturbation vector 208, c (FIG. 4) and 252 (FIG. 5), whose vector elements are defined by sub-vector 2521, c1: c1(1), c1(2), . . . , c1(K1), and further by sub-vector 2522, c2: c2(1), c2(2), . . . , c2(K2), each respectively associated with a particular one of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N (FIG. 1), such that those receivers supporting the decoding of NLP data are capable of reversing the effect of the perturbation vector, while for those receivers not supporting the decoding of NLP data, the information symbol is unaffected by the reversible mapping, e.g., the perturbation sub-vector for that case, c1=0, does not affect output signal 212 outputted by adder 210 (FIG. 4). Preprocessing block 306 (FIGS. 6, 10) and particularly NLP/LP control mechanisms 3741, 3742, 3743, 3744, and 3745 (FIG. 10) control the application of reversible mapping 154A (FIG. 3A) (e.g., a modulo-Z adder, a perturbation vector) according to the respective supportabilities 1201, 1202, 1203, . . . , 120N-1, 120N (FIG. 1) of CPE units 1061, 1062, 1063, . . . , 106N-1, 106N.


In procedure 688, a precoder characterized by N≠K is constructed, such that the precoder is configured to perform regularized generalized inversion of a communication channel matrix. With reference to FIGS. 1, 3A, and 4, processor 114 constructs a precoder 158A (FIG. 3), e.g., precoder 220 (FIG. 4) such that the precoder is configured to perform regularized inversion (e.g., equation (3B)) of a communication channel matrix 162A, H, (FIG. 3A) and 224 (FIG. 4).


The system and method of the disclosed technique combine linear and nonlinear precoding (precoders) and have the following advantages:

    • 1. Providing a general solution to the interoperability problem between a data providing entity (e.g., a DPU having multiple transceivers) communicatively coupled with multiple data subscriber entities (e.g., CPE units) in a communication network, where part of the data subscriber entities do not support nonlinear precoding (NLP) while another part does. The general case is where the number of transmitters (N) (i.e., transceivers operating as transmitters in the DL direction) is not necessarily equal to the number of active receivers (K) (per tone) (i.e., transceivers operating as receivers in the DL direction). The general solution also includes the special case where N is equal to K. This enhances the interoperability of the communication system as a whole, imparting to it the ability to operate with receivers having different types of supportabilities.
    • 2. Using LP for raising the bitrates of inferior communication channels (i.e., those enabling a relatively low number of bits per constellation, compared with superior communication channels enabling a relatively high number of bits per constellation). It is known in the art that the standard THP scheme may exhibit large power gain loss as well as coding gain loss for small constellations (e.g., 1 or 2 bits, and moderate losses for 3 or 4 bits). For these constellations, the disclosed technique optionally employs optimized LP to avert the aforesaid losses. The optimization of LP is essential and is controlled via the diagonal gains D1 as well as by the degrees of freedom conferred by the permutations Π1, since non-optimized LP introduces losses due to the use of power scaling per output. Optimization may generally achieve greater performance compared with simple scalar-scaled ZF.
    • 3. Facilitating a reduction in the size of the NLP system (i.e., which effectively translates to the dimension of the input vector s2, defined to equal the number of users undergoing or employing NLP), and in so doing, reducing the computational complexity required for determining the perturbation vector c2. While size reduction may not pose a problem for systems employing the THP scheme, it may be an issue for schemes employing vector precoding. This may be of importance and interest since the determination of the perturbation vector may be computationally expensive. The exact complexity is difficult to quantify and has been estimated to be between polynomial and exponential in the size of the search space. Hence, the reduction in the size of the NLP system may be very desirable. To determine the perturbation vector, the disclosed technique may typically employ algorithms which include, for example, sphere decoding, the Blockwise Korkine-Zolotarev (BKZ) algorithm, and the like.


According to another embodiment of the disclosed technique there is thus provided a method and a system for NLP where the modulus value is constellation-independent. In the prior art, NLP techniques utilize a modulus value that is constellation-dependent. The disclosed technique proposes a method and system for nonlinear precoding of an information symbol at a given precoder input, where the information symbol is in a symbol space (i.e., a construct for representing different symbols, e.g., a constellation) having a given symbol space size (e.g., a constellation size). The nonlinear precoding involves modulo arithmetic and has a plurality of inputs. The method initially includes the step of determining a reference symbol space size, which is common to all inputs. Then, the method determines a modulus value according to the reference symbol space size. The method adapts the given symbol space size according to the reference symbol space size. The step of adapting involves scaling the given symbol space size to the boundaries of the reference symbol space size. Subsequently, the method nonlinearly precodes the information symbol according to the determined modulus value, common to all of the inputs. The disclosed technique further provides a system for nonlinear precoding of an information symbol at a given precoder input, where the information symbol is in a symbol space having a given symbol space size. The nonlinear precoding involves modulo arithmetic and has a plurality of inputs. The system includes a controller and a processor. The controller is configured for determining a reference symbol space size that is common to all of the inputs, and for determining a modulus value according to the reference symbol space size. The processor is configured for adapting the given symbol space size according to the reference symbol space size, and for nonlinearly precoding the information symbol according to the determined modulus value common to all of the inputs.


The disclosed technique is implementable for various NLP techniques such as THP, including inflated lattice precoding techniques, which work with non-Gaussian interference and typically outperform THP for all SNR levels, including low SNRs.


In the traditional Tomlinson-Harashima precoding design, the modulo operation is defined per symbol space size (e.g., constellation size), and it is different for different symbol spaces (e.g., constellations). The modulo operation for user number k is denoted as Γτk[x], where Γτ[x] is defined (see, e.g., G. Ginis and J. M. Cioffi, “A multi-user precoding scheme achieving crosstalk cancellation with applications to DSL systems,” in Proc. 34th Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, Calif., October 2000, pp. 1627-1631) by:












$\Gamma_{\tau}[x] = x - \tau \left\lfloor \frac{x}{\tau} + \frac{1}{2} \right\rfloor, \qquad (63)$







where ⌊ . . . ⌋ is the floor function and x is real-valued, whereas for complex values the modulo operation is performed separately for the real, Re(x), and imaginary, Im(x), parts (we denote the imaginary unit as j):











$\Gamma_{\tau}[x] = x - \tau \left\lfloor \frac{\mathrm{Re}(x)}{\tau} + \frac{1}{2} \right\rfloor - j \tau \left\lfloor \frac{\mathrm{Im}(x)}{\tau} + \frac{1}{2} \right\rfloor. \qquad (64)$
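A direct transcription of equations (63) and (64) into code could look as follows (illustrative only; the function name gamma is an assumption):

```python
import math

# Illustrative transcription of the per-symbol modulo of equations (63) and (64):
# Gamma_tau[x] for real x, applied separately to real and imaginary parts for complex x.
def gamma(tau: float, x) -> complex:
    """Modulo operation used by Tomlinson-Harashima precoding."""
    if isinstance(x, complex):
        return complex(x.real - tau * math.floor(x.real / tau + 0.5),
                       x.imag - tau * math.floor(x.imag / tau + 0.5))
    return x - tau * math.floor(x / tau + 0.5)

# Example: with tau = 8, a value pushed outside the constellation square folds back in.
print(gamma(8.0, 9.0))          # -> 1.0
print(gamma(8.0, 7.0 - 9.0j))   # -> (-1-1j)
```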







Reference is now made to FIGS. 19, 20, and 21. FIG. 19 is a schematic block diagram of a system for nonlinear precoding exhibiting a modulus size that is constellation-independent, generally referenced 700, constructed and operative in accordance with another embodiment of the disclosed technique. FIG. 20 is a schematic illustration detailing an example configuration of an internal structure of a vectoring processor in the system of FIG. 19, generally referenced 720, constructed and operative in accordance with the embodiment of FIG. 19 of the disclosed technique. FIG. 21 is a schematic diagram of Tomlinson-Harashima precoding used per subcarrier frequency being applied to chosen users only, generally referenced 730, constructed and operative in accordance with the embodiment of FIG. 19 of the disclosed technique. System 700 (FIG. 19) includes a vectoring processor 702 (herein denoted interchangeably “processor”) and a controller 704. Vectoring processor 702 includes a scaler 706 and a nonlinear precoder (NLP) 708. Controller 704 is typically implemented by a physical layer (PHY) controller of a DPU (e.g., DPU 104 of FIG. 1).



FIG. 20 shows an internal structure of vectoring processor 702 of system 700, including a mapper 722 coupled with vectoring processor 702. Vectoring processor 702 includes a preprocessing subblock 710, which in turn includes scaler 706, NLP 708, a plurality of adders 7122, 7123, 7124, and 7125, a Q block 714, and a G block 716. Without loss of generality and for purposes of simplicity, the internal structure of vectoring processor 702 in FIG. 20 shows an implementation for the 5-user example, although the principles analogously extend and apply in the general case of N users. Scaler 706 includes a plurality of gain blocks 7061, 7062, 7063, 7064, and 7065. NLP 708 includes a plurality of modular arithmetic calculation units 7081, 7082, 7083, 7084, and 7085.


Information bits B={b(1), b(2), b(3), b(4), b(5)} (interchangeably denoted “data bits”) are inputted into mapper 722, which is configured and operative to map the information bits (bit streams) into respective information symbols x=(x(1), x(2), x(3), x(4), x(5)), which in turn are inputted into vectoring processor 702. Specifically, the information symbols are correspondingly inputted into individual gain blocks 7061, 7062, 7063, 7064, and 7065 of preprocessing subblock 710 of vectoring processor 702. The gain blocks are configured and operative to apply respective gain components f(1), f(2), f(3), f(4), and f(5) of a gain vector f to the corresponding information symbols (i.e., index-wise), the outputs of which are inputted to adders 7122, 7123, 7124, and 7125 (except for the output of gain block 7061, which is directly provided to modular arithmetic calculation unit 7081). Each adder 7122, 7123, 7124, and 7125 can be respectively viewed as a part of a respective modular arithmetic calculation unit 7082, 7083, 7084, and 7085. Modular arithmetic calculation units 7081, 7082, 7083, 7084, and 7085 are generally configured and operative to facilitate construction of the unit-diagonal matrix LU in a recursive manner according to:








$L_U(k, n) = \frac{L(k, n)}{L(k, k)}.$






FIG. 21 illustrates a schematic diagram of Tomlinson-Harashima precoding (THP), employing recursive calculations (of matrix LU) used per subcarrier frequency being applied to chosen users only, constructed and operative in accordance with an embodiment of FIG. 19 of the disclosed technique.


The disclosed technique formulates a way to set the modulo value that is employed in THP to be independent of the symbol space size (constellation size). In the prior art, and in particular with the Tomlinson-Harashima precoder, the modulo value is dependent on a symbol space size (e.g., the constellation size). FIG. 21 shows that information symbols x(1), x(2), . . . , x(K) (or permuted information symbols) are inputted to the precoder having a plurality of inputs. K is the number of the active users. FIG. 21 shows the recursive NLP procedure for the more general case of K active users (in comparison with the more specific 5-user example shown in FIG. 20). The modulo sizes, which are denoted in FIG. 21 as M2, . . . , MK (i.e., M2≡τ2, . . . , MK≡τK), depend in the prior art on the constellation size of the input symbols. L is a lower-diagonal matrix (equal to R^H, where R is the upper-diagonal matrix obtained by QR decomposition of the permuted channel, i.e., H^HΠ=QR (see description hereinabove)). The number of the NLP users is designated K2. The modulus value may be selected to be symbol space (constellation) dependent (as known in the prior art), or alternatively, symbol space (constellation) independent, as proposed by the disclosed technique, the particulars of which follow. For a mixture of linear and nonlinear precoders acting on the same subcarrier frequency, the inputs x(k) represent an “impure” information symbol (optionally permuted) having an offset that takes into account the influence of the other users processed in previous step(s) via the linear precoder. This offset, as disclosed, need not be processed sequentially by processor 702; it is determined at a processing stage of NLP 708. This facilitates a reduction in the real-time processing burden, since fewer operations need to be executed in real-time implementations.
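A compact sketch of the recursive THP step of FIG. 21 follows; the matrix values, the use of a single common modulus τ for all users, and the helper names are assumed example choices rather than the exact implementation.

```python
import numpy as np

# Sketch (assumed values) of the recursive THP step of FIG. 21: each user k subtracts
# the interference of previously processed users through the normalized matrix
# LU(k, n) = L(k, n) / L(k, k) and then applies the modulo (here a common tau).
def thp_modulo(x, tau):
    fold = lambda v: v - tau * np.floor(v / tau + 0.5)
    return fold(x.real) + 1j * fold(x.imag)

def thp_precode(L, x, tau):
    """Recursive THP: user k subtracts interference of users 1..k-1, then folds by tau."""
    K = len(x)
    LU = L / np.diag(L)[:, None]              # LU(k, n) = L(k, n) / L(k, k), unit diagonal
    m = np.zeros(K, dtype=complex)
    for k in range(K):
        m[k] = thp_modulo(x[k] - LU[k, :k] @ m[:k], tau)
    return m

rng = np.random.default_rng(3)
K, tau = 4, 8.0
L = np.tril(rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K)))
np.fill_diagonal(L, np.abs(np.diag(L)) + 1.0)                       # well-conditioned lower-triangular L
x = rng.choice([-3, -1, 1, 3], K) + 1j * rng.choice([-3, -1, 1, 3], K)
print(np.round(thp_precode(L, x, tau), 3))
```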


A customary modulus value is τ=M·dmin for pulse amplitude modulation (PAM) and τ=√M·dmin for a two-dimensional (2-D) square symbol space (constellation) of digital quadrature amplitude modulation (QAM), where dmin is the quantized symbol position (constellation point) spacing. See, for example, G. Ginis and J. M. Cioffi, “A multi-user precoding scheme achieving crosstalk cancellation with applications to DSL systems,” in Proc. 34th Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, Calif., October 2000, pp. 1627-1631, and the discussion therein for square constellations. For 4-QAM the modulus value is τ=2dmin (due to √4=2), and for 16-QAM the modulus value is τ=4dmin (while the symbol positions (constellation points) in the symbol space are traditionally located at integer positions (n1, n2) taking independently the values {−3, −1, 1, 3}). The modulus values are thus different for different symbol space sizes (constellation sizes), and depend on dmin (i.e., the symbol position spacing of the symbol space).


The average energy, Ē (or ⟨E⟩), in M-QAM, where M is the number of symbol positions (e.g., constellation points) in the symbol space (e.g., constellation) and the symbol space is square (i.e., for square symbol spaces (constellations)), is given by Ē=(M−1)·dmin²/6; thus the average energy for 4-QAM with constellation points ±1±j, where M=4 (points) and dmin=2, is:








$\bar{E} = \frac{(4 - 1) \cdot 2^2}{6} = \frac{3 \cdot 4}{6} = 2,$





and the constellation square boundary is at 2 (and hence the initial modulo value is 4). The average energy for a symbol space size (constellation) having M-symbol positions (constellation points) is given by:











$\bar{E} = \frac{1}{M} \sum_{m=1}^{M} |z_m|^2 = \frac{1}{M} \sum_{m=1}^{M} (x_m^2 + y_m^2), \qquad (65)$







where the m-th constellation point is expressed as zm=xm+jym, and xm and ym are respectively the real and imaginary coordinate values in the complex plane. For 16-QAM, the average energy we obtain is:






$\bar{E} = \frac{4}{16} \sum_{m=1}^{4} (x_m^2 + y_m^2) = \frac{1}{4} \{2 + 10 + 10 + 18\} = 10 \qquad (66).$
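The closed-form average energy Ē=(M−1)·dmin²/6 and the enumerated values above (2 for 4-QAM, 10 for 16-QAM) can be cross-checked with a short sketch (the helper qam_points is hypothetical):

```python
import numpy as np

# Cross-check (illustration only) of the average-energy expressions: enumerate a square
# M-QAM constellation with point spacing d_min = 2 and compare the enumerated average
# energy with the closed form E_bar = (M - 1) * d_min**2 / 6.
def qam_points(M):
    side = int(np.sqrt(M))
    levels = np.arange(-(side - 1), side, 2)         # e.g. {-3, -1, 1, 3} for 16-QAM
    return np.array([x + 1j * y for x in levels for y in levels])

for M in (4, 16, 64):
    pts = qam_points(M)
    enumerated = np.mean(np.abs(pts) ** 2)
    closed_form = (M - 1) * 2 ** 2 / 6
    print(M, enumerated, closed_form)                # 4: 2.0, 16: 10.0, 64: 42.0
```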


Reference is now further made to FIG. 22A, which is a schematic diagram showing an example of a non-scaled 4-QAM constellation diagram, generally referenced 740, constructed and operative in accordance with the disclosed technique. Particularly, FIG. 22A (as well as FIGS. 22C and 22D) illustrates a scatter diagram of symbols (dots) distributed in quantized symbol positions (constellation points) in a symbol space (constellation) that is represented on a complex plane having a real axis, the I-axis (“in phase”), and an imaginary axis, the Q-axis (“quadrature”). The square-shaped dotted lines represent a boundary (or frame) of the symbol space. A symbol space size is defined by its boundary. The symbol space size (constellation size) is scaled by the root of the average energy, which yields:







$\tau_{4\text{-QAM}} = \frac{4}{\sqrt{2}} = 2\sqrt{2}.$







For 16-QAM it is readily observed that the average energy is equal to 10 and the initial modulo size is 8. Scaling of the initial modulus value by the root of the average energy yields







$\tau_{16\text{-QAM}} = \frac{8}{\sqrt{10}}.$





Thus, unit-energy scaling produces close but different modulus values for every symbol space (constellation), which are mutually incommensurate (i.e., their ratio may be an irrational number, such as:










$\left( 2\sqrt{2} \right) \big/ \left( 8/\sqrt{10} \right) = \frac{\sqrt{5}}{2}$).




It is noted that if one approximates the symbol space (constellation) square with a large number of uniformly distributed integer symbol positions (constellation points), then the modulo value of the symbol space (constellation) with unit average energy would equal τ=√6 (i.e., for M-QAM the un-scaled modulus is τunscaled=√M·dmin, and after rescaling τ with







$\sqrt{\bar{E}}$





we obtain:






$\tau = \frac{\tau_{\mathrm{unscaled}}}{\sqrt{\bar{E}}} = \frac{\sqrt{M}\, d_{\min}}{\sqrt{(M-1)\, d_{\min}^2 / 6}} = \sqrt{\frac{M}{M-1}} \cdot \sqrt{6}.$








For large M we obtain τ→√6). For finite and especially small values of M, the scaled modulus value is symbol space size (constellation size) dependent (i.e., consequently depending on dmin). Applying such moduli is a costly operation, and they may be made much less costly as discussed hereinbelow. Reference is now further made to FIGS. 22B and 22C. FIG. 22B is a schematic diagram showing an example of a non-scaled 16-QAM symbol space (constellation), generally referenced 742. FIG. 22C is a schematic illustration showing a boundary or frame of a modulo operation representing a square of size τ, generally referenced 744. This frame corresponds to a reference symbol space size. Controller 704 (FIG. 19) is configured and operative to determine this reference symbol space size, which is common to all inputs x(1), x(2), . . . , x(K) of the NLP 720 (FIGS. 20 and 21). The initial modulus value equals 8. Controller 704 is configured and operative to further determine a modulus value according to the determined reference symbol space size.


According to a typical implementation, controller 704 determines the reference symbol space size as a maximum symbol space size, thereby enabling the transmission of a maximum number of bits per information symbol. Vectoring processor 702, and particularly scaler 706 (FIG. 19) having individual gain blocks 7061, 7062, 7063, 7064, and 7065, is configured and operative to adapt (e.g., scale) the given symbol space size according to the determined modulus value. The process of adapting involves scaling of the symbol positions (constellation points) in the symbol space according to the reference symbol space size. The reference symbol space size can be selected to be numerically equal to 2√M, where M is the maximal number of symbol positions (constellation points) in the symbol space. The modulus value can be selected to be numerically equal to said reference symbol space size.


NLP 708 (FIG. 19), having individual modular arithmetic calculation units 7081, 7082, 7083, 7084, and 7085 (FIG. 20) is configured and operative to nonlinearly precode inputted information symbol x(1), x(2), . . . , x(K) (FIG. 21) according to the determined modulus value that is common to all of the inputs of NLP 708.


Q block 714 is configured to perform QR decomposition, the result of which is inputted into G block 716. G block 716 is a gain scaling block configured and operative to apply gain scaling and to output signals (not shown) that are communicated over a communication channel (e.g., 1081, . . . , 108N, FIG. 1). G block 716 is typically used for gain scaling (e.g., power normalization) so as to apply the same gain to all outputs. Alternatively, G block 716, which can be represented by a vector G, has vector elements that can be different for each output, for applying different gain scaling to different outputs. G block 716 is dependent on the number of lines L.


The system and method of the disclosed technique use the same modulus value for any symbol space size (constellation size). This serves several purposes:

    • 1. It scales all constellations automatically to the same power. This follows from known observations that THP transmission results in a homogeneous distribution of points over the whole constellation area (see the short discussion of this in, e.g., G. Ginis and J. M. Cioffi, “A multi-user precoding scheme achieving crosstalk cancellation with applications to DSL systems,” in Proc. 34th Asilomar Conf. Signals, Systems, and Computers, Pacific Grove, Calif., October 2000, pp. 1627-1631). It is stressed that the constraint of the average energy of the constellation points equaling one is a mathematical statement and is not directly related to the actual transmitted energy of the THP per output. This is a typical scaling employed in digital communications. The constellation square size (for QAM modulation) determines the average energy per THP communication channel (e.g., line, antenna) output. If all squares are designed to be of the same size, the average power consumption will be the same. The constellation points inside this square are determined from the standard requirement that the minimal distance from a constellation point to the square border equals half of the minimal distance between constellation points. For example, if the modulo value is chosen to be 2^7=2·64=128, then for the traditional points of 16-QAM, which are n1+jn2 with n1, n2∈{−3, −1, 1, 3} and the square boundary placed at coordinates ±4 (so that the modulus is 2·4=8 along the real and imaginary axes), the scaling factor is 128/8=16. Hence, the new locations of the 16-QAM constellation points are at n1+jn2 with n1, n2∈{−3·16, −1·16, 1·16, 3·16}={−48, −16, 16, 48}. Note that multiplication by 16 may be achieved efficiently simply as a shift by 4 binary positions. The above chosen modulo example, namely τ=128, is convenient for constellations up to 4096-QAM (i.e., representing 12 bits, since 2^12=4096), which can be validated via the above-mentioned general relation τ=2√M=2√(2^12)=2·2^6=128; the points of this constellation, denoted as n1+jn2, are odd integers n1, n2∈{−63, −61, . . . , −1, 1, . . . , 61, 63}. If a 14-bit constellation is to be included, a modulus value of 2·√(2^14)=2^8=256 is proposed. The 12-bit and 14-bit constellations mentioned above are given as examples; these are the current maximum constellation sizes in the current G.fast standard. In another example, for a modulo value of 8, the symbol positions (constellation points) of 16-QAM need not be scaled, whereas those of 4-QAM need to be scaled by a factor of 2 (again effectively implemented via a shift), with reference now being made to FIG. 22D, which is a schematic diagram showing an example of τ=8=2^3 chosen as a constant modulo for all constellations, generally referenced 746. Such constellations are 16-QAM, designated by black points in FIG. 22D, which is not scaled since it is at its natural size, and 4-QAM, designated by white points/rings. The 4-QAM is scaled by a factor of 2 (where the scaling is applied via a shift, so there is no need for multiplication). Specifically, for modulo values that are powers of 2, processor 702 performs the modulus operation on binary-represented values in a very efficient form (e.g., shift operations that do not require multiplication operations or associated hardware). The modulus operation performed on complex numbers (representing constellation points) which fall outside of the constellation square maps these constellation points into the constellation square (a scaling sketch is given after this list).
    The scaling of the constellations to unit energy is performed by using a scalar gain factor α. For an illustrative example, if τ=128, which corresponds to a maximum number of bits in the constellation of 12 bits, then α=√6/128 performs the desired scaling to unit energy. Similarly, when the maximum number of bits in the constellation is 14, we obtain τ=2·√(2^14)=256 and then α=√6/256. As an option, the precoder matrix L may incorporate α, in part or in whole.
    • 2. It simplifies the system since there is no need to take into account the symbol space size (constellation size) for performing the modulo operation in decoding and encoding operations. Having the same modulus value for all symbol spaces makes these operations independent of applied permutations, which would otherwise have to be taken into account.
    • 3. It simplifies hardware implementations. For example, if the modulo is a power of 2, τ=2^p (where the exponent p is a positive integer), the divisions in the above modulo relations may be substituted with hardware-efficient shift operations. Further hardware-specific simplifications may involve mask techniques.
    • 4. The proposed modulo operation is applied both at the centralized transmission unit (DPU 104, FIG. 1) performing the broadcast (i.e., performing the downlink transmission) and at the receivers (CPE units 1061, 1062, 1063, . . . , 106N-1, 106N, FIG. 1). The constellations for the NLP group of users, and optionally the LP group of users, are scaled to the same size, and then the same modulo (e.g., τ=256) is applied at the transmitter side (DPU unit) for the NLP group of users. At the receiver side, the NLP group of users employs the same modulo size for any constellation.
    • 5. This implementation is more general than transmission to decentralized receivers, and it may be used in any uplink or downlink (or both) of virtually any MIMO system that employs THP (or vector precoding) and performs modulo operations at both the receiver and transmitter sides.
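The scaling sketch referenced in item 1 of the list above follows; rescale_to_common_modulo is a hypothetical helper, and the native and common modulo values are the example values used in that item (the shift-based scaling assumes their ratio is a power of two):

```python
# Sketch of the constant-modulo scaling described in item 1 above: with tau = 128
# (12-bit maximum constellation), a 16-QAM point is rescaled into the common square by
# a factor of 128 / 8 = 16, i.e. a shift by 4 binary positions. Helper name is assumed.
def rescale_to_common_modulo(n1: int, n2: int, native_tau: int, common_tau: int) -> complex:
    shift = (common_tau // native_tau).bit_length() - 1     # e.g. ratio 16 -> shift of 4 (ratio assumed power of two)
    return complex(n1 << shift, n2 << shift)

# 16-QAM points n1 + j*n2 with n1, n2 in {-3, -1, 1, 3} and native modulo 8:
points_16qam = [(n1, n2) for n1 in (-3, -1, 1, 3) for n2 in (-3, -1, 1, 3)]
scaled = [rescale_to_common_modulo(n1, n2, native_tau=8, common_tau=128) for n1, n2 in points_16qam]
print(scaled[:4])    # real/imaginary parts now lie in {-48, -16, 16, 48}
```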


Note that while the “conventional” modulo arithmetic operation may be performed according to dmin·√M, the system and method of the disclosed technique typically select to perform this operation according to 2√(Mmax), where Mmax represents the maximal number of symbol positions M for a modulation scheme (e.g., M-ary QAM) of a communication standard (e.g., corresponding to 12 or 14 bits for G.fast).
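As a small illustration of this choice (the helper name is an assumption), the common modulus follows directly from the maximum number of bits per symbol:

```python
import math

# Tiny illustration of the constellation-independent modulus choice described above:
# tau = 2 * sqrt(M_max), with M_max taken from the maximum supported constellation.
def common_modulus(max_bits_per_symbol: int) -> int:
    m_max = 2 ** max_bits_per_symbol
    return int(2 * math.sqrt(m_max))

print(common_modulus(12), common_modulus(14))   # 128, 256
```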


Reference is now made to FIG. 23, which is a schematic block diagram of a method for nonlinear precoding where the modulus size is constellation-independent, generally referenced 800, constructed and operative in accordance with the embodiment of FIG. 19 of the disclosed technique. Method 800 initiates in procedure 802. In procedure 802 a reference symbol space size that is common to all inputs of nonlinear precoding involving modulo arithmetic is determined. The nonlinear precoding is of an information symbol at a given precoder input, the information symbol is in a symbol space having a given symbol space size. With reference to FIGS. 19, 20, 21, and 22C, controller 704 (FIG. 19) determines a reference symbol space size 744 that is common to all inputs x(1), x(2), . . . , x(K) (FIGS. 20, and 21) of NLP 708 (FIGS. 19 and 20) and NLP 720 (FIGS. 20 and 21) involving modulo arithmetic (shown by recursive calculation blocks in FIGS. 20 and 21). The information symbol is in a symbol space (e.g., FIGS. 22A, 22B, 22D) having a given symbol space size 744 (FIG. 22C) (constellation size T).


In procedure 804, a modulus value is determined according to the reference symbol space size. With reference to FIGS. 19 and 22B, controller 704 (FIG. 19) determines a modulus value according to reference symbol space size T 744 (FIG. 22C).


In procedure 806, the given symbol space size is adapted according to the reference symbol space size. With reference to FIGS. 19, 20, 21C, and 22D, scaler 706 (FIGS. 19 and 20) adapts (e.g., scales) the given symbol space size according to the reference symbol space size determined in procedure 802. The determined modulus value, common to all symbol spaces (constellations), is depicted by 746 in FIG. 22D.


In procedure 808, the information symbol is nonlinearly precoded according to the modulus value that is common to all of the inputs. With reference to FIGS. 19, 20, and 21, NLP 708 (FIGS. 19 and 20) and NLP 720 (FIGS. 20 and 21) nonlinearly precode the information symbol (i.e., each of the input information symbols x(1), x(2), . . . , x(K)) according to the determined modulus value that is common to all of the inputs.


System 700 and method 800 are configured and operative for utilization in a multiple-user multiple-input-multiple-output (MIMO) matrix channel environment, employing NLP, for normalizing the given symbol space size according to the symbol space size corresponding to the determined modulus value (i.e., fundamentally, based on the constellation boundary and not on energy considerations).
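As a concrete illustration of procedures 802 through 808, the following is a minimal sketch in Python with NumPy. The choice of the largest constellation as the reference symbol space size, the per-user scaling rule, the feedback matrix B, and the helper names determine_reference_size and thp_precode_common_modulus are illustrative assumptions; the sketch models a simplified THP-style feedback loop rather than the full precoder structure of FIGS. 19-21.

```python
import numpy as np

def common_modulo(x, tau):
    """Fold the real and imaginary parts of x into [-tau/2, tau/2)."""
    return (np.mod(x.real + tau / 2, tau) - tau / 2) \
        + 1j * (np.mod(x.imag + tau / 2, tau) - tau / 2)

def determine_reference_size(bits_per_user):
    """Procedure 802 (sketch): take the largest constellation among all
    precoder inputs as the common reference symbol space size (assumption)."""
    return 2 ** max(bits_per_user)

def thp_precode_common_modulus(x, B, tau):
    """Procedure 808 (sketch): successive THP-style interference presubtraction
    using one common modulus tau for every input; B is a strictly
    lower-triangular feedback matrix assumed known from the channel."""
    K = len(x)
    v = np.zeros(K, dtype=complex)
    for k in range(K):
        u = x[k] - B[k, :k] @ v[:k]    # subtract already-precoded interference
        v[k] = common_modulo(u, tau)   # fold back into the common boundary
    return v

# Procedures 802-806 (sketch): reference size, common modulus, per-user scaling.
bits_per_user = [8, 10, 14]                      # illustrative bit loadings
M_ref = determine_reference_size(bits_per_user)  # procedure 802
tau = 2 * np.sqrt(M_ref)                         # procedure 804: tau = 2*sqrt(M_ref)
scales = np.array([tau / (2 * np.sqrt(2.0 ** b)) for b in bits_per_user])  # 806

# Procedure 808: precode the scaled symbols with the single common modulus.
x = np.array([1 + 1j, 3 - 1j, 5 + 7j]) * scales  # scaled QAM lattice points
B = np.tril(0.1 * np.ones((3, 3)), k=-1)         # illustrative feedback matrix
v = thp_precode_common_modulus(x, B, tau)
alpha = np.sqrt(6) / tau                         # unit-energy gain, as above
transmit = alpha * v
```

Because tau is derived once from the reference symbol space size, the same folding boundary applies to every user in the loop, regardless of that user's own constellation size.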


It will be appreciated by persons skilled in the art that the disclosed technique is not limited to what has been particularly shown and described hereinabove. Rather the scope of the disclosed technique is defined only by the claims, which follow.

Claims
  • 1. A method for precoding an information symbol conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels over a subcarrier frequency, the number of transmitters (N) is different than the number of active receivers (K) for that subcarrier frequency, the method comprising: receiving by said transmitters information pertaining to supportabilities of said receivers to decode non-linearly precoded data; determining a precoding scheme defining for which of said receivers said data to be transmitted by said transmitters shall be precoded using at least one of linear precoding and non-linear precoding, according to said supportabilities; constructing a signal by applying a reversible mapping to said information symbol, said reversible mapping includes elements each respectively associated with a particular one of said receivers, such that those said receivers supporting the decoding of said non-linearly precoded data are capable of reversing said reversible mapping to said information symbol, while for those said receivers not supporting the decoding of said non-linearly precoded data said information symbol is unaffected by said reversible mapping; constructing a precoder characterized by N≠K such that said precoder is configured to perform regularized generalized inversion of a communication channel matrix.
  • 2. The method according to claim 1, further comprising determining which of said receivers constitutes an active receiver per said subcarrier frequency wherein each said active receiver is characterized by an activity level defined by one of: (a) is switched on and ready to receive said data at a particular said subcarrier frequency; (b) is switched on and is in a process of receiving said data at a particular said subcarrier frequency; (c) is either one of (a) and (b) but not for other said subcarrier frequency; and (d) is either one of (a), (b), and (c) stipulated by a decision rule determined by at least one criterion related to communication performance.
  • 3. (canceled)
  • 4. The method according to claim 1, wherein said communication channels facilitate propagation of said signal, and are selected from a list consisting of: wire-lines; and antennas in wireless techniques.
  • 5. The method according to claim 1, wherein said supportabilities further include capability for decoding linearly precoded data.
  • 6. (canceled)
  • 7. (canceled)
  • 8. (canceled)
  • 9. The method according to claim 1, wherein said reversible mapping is a perturbation vector, where said elements in said perturbation vector associated with said receivers not supporting the decoding of said nonlinearly precoded data are zero, and said perturbation vector includes at least one nonzero element associated with respective at least one said receivers supporting the decoding of said nonlinearly precoded data.
  • 10. (canceled)
  • 11. The method according to claim 1, further comprising precoding according to constructed said precoder; and transmitting said signal by said transmitters via said communication channels.
  • 12. The method according to claim 1, further comprising reversing said reversible mapping by said receivers supporting the decoding of said nonlinearly precoded data.
  • 13. The method according to claim 12, wherein said reversing of said reversible mapping is performed by modulo arithmetic.
  • 14. The method according to claim 1, further comprising permuting information symbol elements in said information symbol per said subcarrier frequency, each information symbol element is associated with a particular one of said receivers.
  • 15. The method according to claim 14, wherein said permuting involves grouping said information symbol elements into distinct and successive aggregate groups according to said supportabilities of respective said receivers, wherein said distinct and successive aggregate groups include a linear precoding (LP) group not supporting the decoding of said nonlinearly precoded data, and a nonlinear precoding (NLP) group supporting the decoding of said nonlinearly precoded data.
  • 16.-26. (canceled)
  • 27. A hybrid precoder system for precoding an information symbol conveying data for transmission between a plurality of transmitters and a plurality of receivers via a plurality of communication channels over a subcarrier frequency, the number of transmitters (N) is different than the number of active receivers (K) for that subcarrier frequency, the hybrid precoder system comprising: a controller configured for receiving information pertaining to supportabilities of said receivers to decode non-linearly precoded data, and for determining a precoding scheme defining for which of said receivers said data to be transmitted by said transmitters shall be precoded using at least one of linear precoding and non-linear precoding, according to said supportabilities; and a processor configured for constructing a signal for transmission, according to determined said precoding scheme, by applying a reversible mapping to said information symbol, said reversible mapping includes elements each respectively associated with a particular one of said receivers, such that those said receivers supporting the decoding of said non-linearly precoded data are capable of reversing said reversible mapping to said information symbol, while for those said receivers not supporting the decoding of said non-linearly precoded data said information symbol is unaffected by said reversible mapping, said processor constructing a precoder characterized by N≠K such that said precoder is configured to perform regularized generalized inversion of a communication channel matrix.
  • 28. The system according to claim 27, wherein said controller is further configured for determining which of said receivers constitutes an active receiver per said subcarrier frequency, wherein each said active receiver is characterized by an activity level defined by one of: (a) is switched on and ready to receive said data at a particular said subcarrier frequency; (b) is switched on and is in a process of receiving said data at a particular said subcarrier frequency; (c) is either one of (a) and (b) but not for other said subcarrier frequency; and (d) is either one of (a), (b), and (c) stipulated by a decision rule determined by at least one criterion related to communication performance.
  • 29. (canceled)
  • 30. The system according to claim 27, wherein said communication channels are configured to facilitate propagation of said signal, and are selected from a list consisting of: wire-lines; and antennas in wireless techniques.
  • 31. The system according to claim 27, wherein said supportabilities further include capability for decoding linearly precoded data.
  • 32. (canceled)
  • 33. (canceled)
  • 34. (canceled)
  • 35. The system according to claim 27, wherein said reversible mapping is a perturbation vector, where said elements in said perturbation vector associated with said receivers not supporting the decoding of said nonlinearly precoded data are zero, and said perturbation vector includes at least one nonzero element associated with respective at least one said receivers supporting the decoding of said nonlinearly precoded data.
  • 36. (canceled)
  • 37. The system according to claim 27, wherein said system is configured to precode according to constructed said precoder; and to transmit said signal to said receivers via said communication channels.
  • 38. The system according to claim 27, wherein said receivers supporting the decoding of said nonlinearly precoded data are configured for reversing said reversible mapping.
  • 39. The system according to claim 38, wherein said receivers are configured to employ modulo arithmetic for said reversing said reversible mapping.
  • 40. The system according to claim 27, wherein said processor is further configured for permuting information symbol elements in said information symbol per said subcarrier frequency, each information symbol element is associated with a particular one of said receivers.
  • 41. The system according to claim 40, wherein said permuting involves grouping said information symbol elements into distinct and successive aggregate groups according to said supportabilities of respective said receivers, wherein said distinct and successive aggregate groups include a linear precoding (LP) group not supporting the decoding of said nonlinearly precoded data, and a nonlinear precoding (NLP) group supporting the decoding of said nonlinearly precoded data.
  • 42.-68. (canceled)
PCT Information
Filing Document Filing Date Country Kind
PCT/IL2017/051402 12/28/2017 WO 00
Provisional Applications (2)
Number Date Country
62610751 Dec 2017 US
62439501 Dec 2016 US