PRECODER FOR JOINT COMMUNICATION AND SENSING

Information

  • Patent Application
  • Publication Number: 20250240125
  • Date Filed: January 18, 2024
  • Date Published: July 24, 2025
Abstract
In some implementations, a device may obtain information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter. The device may determine a sensing beampattern based at least in part on the information associated with the one or more targets. The device may determine a target sensing autocorrelation matrix for the sensing beampattern. The device may identify a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix. The device may determine a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix. The device may generate a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter.
Description
BACKGROUND

Joint communication and sensing (JCAS) is a technology that integrates communication and sensing into a single system. A JCAS system may leverage hardware and signal processing techniques of communication systems to perform sensing tasks while maintaining communication functionalities. This integration may allow for efficient use of the electromagnetic spectrum, reducing the need for separate systems for performing communication and sensing. JCAS may enhance capabilities of systems in complex scenarios, such as urban environments and crowded spaces, where traditional sensing and communication systems may encounter challenges.


SUMMARY

In some implementations, a method includes obtaining information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter; determining a sensing beampattern based at least in part on the information associated with the one or more targets; determining a target sensing autocorrelation matrix for the sensing beampattern; identifying a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix; determining a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix; and generating a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter.


In some implementations, a device includes one or more memories; and one or more processors, coupled to the one or more memories, configured to: obtain information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter; determine a sensing beampattern based at least in part on the information associated with the one or more targets; determine a target sensing autocorrelation matrix for the sensing beampattern; identify a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix; determine a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix; and generate a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter.


In some implementations, a non-transitory computer-readable medium storing a set of instructions includes one or more instructions that, when executed by one or more processors of a device, cause the device to: obtain information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter; determine a sensing beampattern based at least in part on the information associated with the one or more targets; determine a target sensing autocorrelation matrix for the sensing beampattern; identify a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix; determine a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix; and generate a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1A-1H are diagrams illustrating examples of precoders for joint communication and sensing.



FIG. 2 is a diagram of an example environment in which systems and/or methods described herein may be implemented.



FIG. 3 is a diagram of example components of a device associated with precoders for joint communication and sensing.



FIG. 4 is a flowchart of an example process associated with precoders for joint communication and sensing.





DETAILED DESCRIPTION

The following detailed description of example implementations refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.


Joint communications and sensing (JCAS) is a technology where sensing functionality is added to communication networks to enable new use-cases such as traffic monitoring, identification of parking spots in busy city streets, around the corner vehicle detection, detecting pedestrians crossing streets, counting the number of people within a local area, detection of unidentified drones or other flying objects, detection of humidity levels for agricultural applications, detecting the presence of people within geo-fenced areas of a factory, accurate localization and tracking of large passive objects in a factory, collision avoidance between autonomous guided vehicles or other mobile robots and people, and estimating the height of stacked pallets or containers in a warehouse, among other examples.


A JCAS transmitter (Tx) and sensing receiver (Rx) may be at a same location in a mono-static mode, which may allow for rapid and easy information exchange between the JCAS Tx and sensing Rx. However, there may be self-interference (SI) between the two nodes due to signal coupling from the JCAS Tx to the sensing Rx as they operate using the same time and frequency resources. In contrast, a bi-static mode may not suffer from self-interference, but instantaneous information exchange may not be possible between the JCAS Tx and sensing Rx, for example, since they are located separately. To use the available resources efficiently, a precoding design may be used to manage the communications and sensing performance jointly. In some cases, a trade-off parameter may be used in the precoding design to adjust the communication and sensing performance flexibly. However, to find a balance between the communications and sensing performance, it may be difficult to provide a guaranteed communications or sensing performance as the trade-off parameter used may have a complex dependence on other network parameters. Various aspects are described herein for a novel, low-complexity precoding technique with a closed-form solution that provides a fully controllable communication performance. The aspects described herein may allow some signal-to-interference-and-noise ratio (SINR) loss for each communication user equipment (UE), and may optimize the sensing performance accordingly.


In some examples, a JCAS Tx may have ideal channel state information (CSI). However, there may be some estimation errors caused by thermal noise, external interference, or hardware impairments at the receiver, among other examples. These may be referred to as Type-1 channel estimation errors. Additionally, or alternatively, there may be some estimation errors caused by quantization errors in practical systems. These may be referred to as Type-2 channel estimation errors. There are two cases of channel estimation modes in current communications systems. Case 1 refers to reciprocity-based operation in a time-division duplex (TDD) mode where the downlink channels are estimated via uplink pilot symbols. Case 2 refers to channel estimation operation in the UE side in TDD or frequency-division-duplex (FDD) mode where the estimated coefficients are quantized and fed back to the base station. In general, Case 2 mode includes both type-1 errors and type-2 errors, whereas Case 1 includes only type-1 errors. Type-1 errors can be statistically modelled using additive white Gaussian noise errors, whereas type-2 errors are modelled as norm-bounded errors where the maximum error amount depends on the selected quantization points. Due to channel estimation errors, the communication performance may decrease, and robust algorithms may be required to decrease the effects of the channel estimation errors. Various aspects are described herein for a precoder design using sensing-aware zero forcing (SAZF) that provides robustness against both type-1 and type-2 channel estimation errors in both case 1 and case 2 and under both sensing methods (mono-static and bi-static).


In some examples, a direct signal path between JCAS Tx and sensing Rx may be eliminated either by Tx or Rx operations, for example, since the direct signal path does not include any useful information about the targets to be sensed. This may be true in both gNB mono-static and gNB bi-static cases. In the first scenario, the direct path may cause strong self-interference as both JCAS Tx and sensing Rx may be closely located and may be operating using the same time and frequency resource block. The received signal power of the reflected signal coming from the nearby objects may be significantly lower than that of the self-interference power. This may also result in saturation of analog-to-digital converters of sensing Rx. To realize the gNB mono-static JCAS, a self-interference may need to be suppressed. For a gNB bi-static scenario, by means of sufficient spatial separation between JCAS Tx and sensing Rx, a strong direct signal component may not be expected. Nevertheless, this line-of-sight component may have more power than the signal components reflecting from the targets. When there is not any direct connection via another dedicated link (wired or wireless) between JCAS Tx and sensing Rx nodes, the wireless direct link between JCAS Tx and sensing Rx can be used to extract some information. On the other hand, considering current network abilities, it may be possible to connect JCAS Tx and sensing Rx to a higher-level network entity (such as a core network) so that some information exchange becomes possible. When there is such a connection, as the direct path does not include any information about the surrounding targets, it may be eliminated again, either at the sensing Rx or JCAS Tx, to improve performance.


For gNB mono-static scenarios, a common assumption is that by means of analog or digital cancellation techniques, it may be possible to suppress the self-interference sufficiently so that it does not affect the sensing performance. On the other hand, analog or digital cancellation without any multiple-input multiple-output (MIMO) processing may not be sufficient for JCAS systems, as these systems may have potentially larger transmit power compared to traditional communication transmitters to sense passive objects. In some cases, to suppress the self-interference to a noise level (−90 dBm) in a full-duplex communication system with 23 dBm Tx power, 113 dB suppression is required. Considering that a mono-static JCAS system may experience two-way path-loss (for example, the signal transmitted by JCAS Tx may first reach to a target and may be reflected back to sensing Rx), higher Tx power may be required for sensing, resulting in a higher SI suppression necessity. For gNB bi-static scenarios, it may be possible to operate sensing algorithms at the sensing Rx node when the direct path between JCAS Tx and sensing Rx exists. When there is a dedicated connection between JCAS Tx and sensing Rx, the direct path can be calculated (for example, an angle-of-arrival (AoA) as well as delay and Doppler values) and, thus, it may be possible to either cancel the related path at the sensing Rx or discard the information (AoA, delay, Doppler) for that path. However, the performance may degrade due to unnecessary power leakage toward the direct path between the JCAS Tx and sensing Rx, a decreased degree of freedom at the sensing Rx, and/or decreased resolution around the direct path angles. As a result, in gNB bi-static scenarios with some level of information exchange between JCAS Tx and sensing Rx, the wireless direct link between JCAS Tx and sensing Rx may need to be cancelled to improve performance. Various aspects are described herein for an array signal processing algorithm for increasing SI suppression capability in a gNB mono-static system and/or suppressing the useless wireless direct path between JCAS Tx and sensing Rx in a gNB bi-static system. The aspects may allow some level of suppression by modifying JCAS Tx precoder with the direct path channel information. This suppression capability may help the JCAS system to have an SI signal level that is close to the noise power level in gNB mono-static mode and/or to have a less dominant direct path between the JCAS Tx and sensing Rx in gNB bi-static mode.



FIGS. 1A-1H are diagrams illustrating examples of precoders for joint communication and sensing.


As shown in FIG. 1A and example 100, in a mono-static mode, a JCAS Tx and sensing Rx are at the same location, which may allow for rapid and easy information exchange between the JCAS Tx and sensing Rx. For example, a JCAS Tx and sensing Rx may communicate with UE 1 and UE 2 using communication beams and may sense Target 1 and Target 2 using sensing beams. In some cases, this may result in self-interference between the two nodes, for example, due to signal coupling from JCAS Tx to sensing Rx as they operate using the same time and frequency resources. In contrast, as shown in FIG. 1A and example 102, a JCAS Tx may communicate with UE 1 and UE 2 using communication beams and may sense Target 1 and Target 2 using sensing beams. Additionally, a sensing Rx may sense Target 1 and Target 2 using sensing beams. The bi-static mode may not suffer from self-interference. However, instantaneous information exchange may not be possible between JCAS Tx and sensing Rx, for example, since the JCAS Tx and sensing Rx are located separately.


As shown in FIG. 1B and example 104, a JCAS precoder may be designed. As shown by reference number 106, a JCAS device may obtain prior sensing knowledge and may obtain a communication performance control parameter. As shown by reference number 108, the JCAS device may determine a sensing beampattern according to the prior information about the targets. As shown by reference number 110, the JCAS device may determine a desired sensing autocorrelation matrix leading to the desired beampattern. As shown by reference number 112, the JCAS device may find a feasible sensing autocorrelation matrix for the JCAS transmission. As shown by reference number 114, the JCAS device may determine the optimal sensing precoder. As shown by reference number 116, the JCAS device may design a JCAS precoder using the optimal sensing precoder and the communication performance control parameter. Additional details are described below.


In some aspects, a JCAS system (mono-static or bi-static) may be designed where the downlink communication data signal is used also for sensing purposes. A dedicated sensing Rx that is separate from the JCAS Tx may be used to perform sensing operations (such as object detection, angle-of-arrival estimation, or distance/velocity estimation of targets, among other examples). The location of the sensing Rx can be near (mono-static) or far away (bi-static) from the JCAS Tx. In some cases, there may be a dedicated connection (either cabled or wireless) between the JCAS Tx and sensing Rx for information exchange. In some cases, the JCAS Tx and sensing Rx may have the same number of antennas (N) with the same array geometry. Matrices and vectors described herein are denoted by uppercase and lowercase letters, respectively; tr(⋅), 𝔼[⋅], ∥⋅∥, and (⋅)† are the trace, expectation, Frobenius norm, and pseudoinverse operators, respectively. ℂ and ℝ denote the sets of complex and real numbers, respectively. [A]_{i,j} denotes the entry at the i-th row and the j-th column of the matrix A, and exp(A) denotes the matrix satisfying [exp(A)]_{i,j} = e^{[A]_{i,j}} for all entries. A ⪰ 0 indicates that the matrix A is Hermitian and positive semi-definite. I_n denotes the n×n identity matrix. In some aspects, there may be K ≤ N single-antenna UEs that simultaneously receive data from the JCAS Tx on the same time/frequency resource block in multi-user mode. Each UE may receive L data samples in a resource block over which the communication channel is assumed to be constant. In this case, the received signal at the k-th UE can be written as








\[ y_k = h_k \sum_{\ell=1}^{K} w_\ell s_\ell + z_k, \]




where

    • y_k ∈ ℂ^{1×L} is the received signal vector,
    • h_k ∈ ℂ^{1×N} is the channel vector between the JCAS Tx and the k-th UE,
    • w_k ∈ ℂ^{N×1} is the precoder vector designed by the JCAS Tx for the k-th UE,
    • s_k ∈ ℂ^{1×L} are the intended information samples for the k-th UE, and
    • z_k ∈ ℂ^{1×L} are the noise samples at the k-th UE receiver.


In some aspects, z_k ~ 𝒞𝒩(0, σ_k² I), where σ_k² is the average noise power at the k-th UE receiver. A transmission of independent streams to UEs may be performed with the model:

\[ \mathbb{E}\!\left[ s_k s_\ell^H \right] = \begin{cases} \dfrac{1}{L}, & \text{if } k = \ell, \\[4pt] 0, & \text{if } k \neq \ell. \end{cases} \]












The term 1/L may be used to have unity power for each block with L samples. After defining augmented matrices, the received signal of all UEs may be represented as Y = HX + Z = HWS + Z, where the matrices Y ∈ ℂ^{K×L}, H ∈ ℂ^{K×N}, W ∈ ℂ^{N×K}, S ∈ ℂ^{K×L}, and Z ∈ ℂ^{K×L} are formed by concatenating y_k, h_k, w_k, s_k, and z_k for all k in the suitable dimension. For the perfect (ideal) CSI case, it may be assumed that the channel matrix H is perfectly known by the JCAS Tx. In this case, the signal model may be represented as:







\[ y_k = h_k \sum_{\ell=1}^{K} w_\ell s_\ell + z_k = \underbrace{h_k w_k s_k}_{\text{desired}} + \underbrace{h_k \sum_{\ell \neq k} w_\ell s_\ell}_{\text{interference}} + \underbrace{z_k}_{\text{noise}}. \]







The signal-to-interference-and-noise ratio (SINR) for each user can be expressed as








\[ \mathrm{SINR}_k = \frac{\left| h_k w_k \right|^2}{\mathbb{E}\!\left[ \left| h_k \sum_{\ell \neq k} w_\ell s_\ell + z_k \right|^2 \right]} = \frac{\left| h_k w_k \right|^2}{\sum_{\ell \neq k} \left| h_k w_\ell \right|^2 + \sigma_k^2}, \quad \forall k. \]
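For illustration, a minimal NumPy sketch of this per-UE SINR computation is given below; the function name and the random test data are assumptions of this illustration and are not part of the disclosure:

```python
import numpy as np

def per_user_sinr(H, W, sigma2):
    """SINR_k = |h_k w_k|^2 / (sum_{l != k} |h_k w_l|^2 + sigma_k^2)."""
    G = np.abs(H @ W) ** 2                     # |h_k w_l|^2, shape (K, K)
    desired = np.diag(G)
    interference = G.sum(axis=1) - desired
    return desired / (interference + sigma2)

# Illustrative example: K = 2 single-antenna UEs, N = 4 Tx antennas.
rng = np.random.default_rng(0)
K, N = 2, 4
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
W = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)
print(per_user_sinr(H, W, sigma2=np.ones(K)))
```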






To obtain high SINR values, a commonly used approach is zero-forcing (ZF) precoding, which eliminates the multi-user interference by setting h_k w_ℓ = 0, ∀k ≠ ℓ. To maintain fairness between the users, the condition h_k w_k = μσ_k, ∀k may be used for some non-negative real number μ, resulting in SINR_k = μ², ∀k. The related conditions can be written in the matrix form as HW = μΣ, where Σ = diag(σ_1, σ_2, . . . , σ_K). The precoder matrix W may also satisfy the transmit power constraint tr(WW^H) = P_t, where P_t is the total transmit power of the JCAS Tx for the whole resource block with L samples. The ZF solution aims at maximizing UE SINRs (and hence μ) under the interference cancellation and transmit power constraints. To find the ZF solution, solve (P0):













\[ \max_{W} \; \mu \quad \text{such that} \quad HW = \mu\Sigma, \quad \operatorname{tr}\!\left( W W^H \right) = P_t. \tag{P0} \]







The optimal solution of (P0) can be expressed by:







\[ W_{\mathrm{ZF}} = \frac{\sqrt{P_t}}{\left\| H^{\dagger}\Sigma \right\|} \, H^{\dagger}\Sigma. \]






Example proof: all solutions of the linear matrix equation HW = μΣ can be formulated as W = μH^†Σ + H_0H_1, where H_0 is a matrix whose columns form a basis for the null space of H and H_1 is an arbitrary matrix with suitable dimensions. It is known that H_0 can be found by singular value decomposition, and the columns of H_0 can be chosen as the right singular vectors of H corresponding to zero singular values. In other words, the singular value decomposition of H can be written as:







\[ H = \begin{pmatrix} U_1 & U_2 \end{pmatrix} \begin{pmatrix} S_1 & 0_{r \times (N-r)} \\ 0_{(K-r) \times r} & 0_{(K-r) \times (N-r)} \end{pmatrix} \begin{pmatrix} V_1^H \\ V_2^H \end{pmatrix}, \]




where

    • r = rank(H) is the number of non-zero singular values,
    • S_1 ∈ ℝ^{r×r} is a diagonal matrix with the non-zero singular values on its diagonal,
    • the columns of U_1 ∈ ℂ^{K×r} and U_2 ∈ ℂ^{K×(K−r)} are left singular vectors, and
    • the columns of V_1 ∈ ℂ^{N×r} and V_2 ∈ ℂ^{N×(N−r)} are right singular vectors of H.


In this case, H^† and H_0 can be expressed as:








\[ H^{\dagger} = V_1 S_1^{-1} U_1^H, \qquad H_0 = V_2. \]






As the right singular vectors are mutually orthogonal (e.g., V_2^H V_1 = 0), H_0^H H^† = 0 and:







\[ P_t = \operatorname{tr}\!\left( W W^H \right) = \left\| \mu H^{\dagger}\Sigma + H_0 H_1 \right\|^2 = \mu^2 \left\| H^{\dagger}\Sigma \right\|^2 + \left\| H_0 H_1 \right\|^2 \ge \mu^2 \left\| H^{\dagger}\Sigma \right\|^2. \]









Additionally, it follows that:






\[ \mu \le \frac{\sqrt{P_t}}{\left\| H^{\dagger}\Sigma \right\|}. \]





The equality holds when:







\[ W_{\mathrm{ZF}} = \frac{\sqrt{P_t}}{\left\| H^{\dagger}\Sigma \right\|} \, H^{\dagger}\Sigma. \]






Therefore, the ZF precoder provides SINR levels:








\[ \mathrm{SINR}_k = \frac{P_t}{\left\| H^{\dagger}\Sigma \right\|^2}, \quad \forall k. \]
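A minimal NumPy sketch of this closed-form ZF precoder follows; the function and variable names and the random test channel are illustrative assumptions, not part of the disclosure. It can be checked that HW = μΣ and tr(WW^H) = P_t hold:

```python
import numpy as np

def zf_precoder(H, sigma, Pt):
    """W_ZF = sqrt(Pt) * H^+ Sigma / ||H^+ Sigma||_F, so that H W = mu * Sigma."""
    Sigma = np.diag(sigma)                        # per-UE noise standard deviations
    HpS = np.linalg.pinv(H) @ Sigma               # H^+ Sigma
    mu = np.sqrt(Pt) / np.linalg.norm(HpS, 'fro')
    return mu * HpS, mu

rng = np.random.default_rng(1)
K, N, Pt = 2, 8, 1.0
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
W_zf, mu = zf_precoder(H, np.ones(K), Pt)
print(np.allclose(H @ W_zf, mu * np.eye(K)))      # HW = mu * Sigma (Sigma = I here)
print(np.trace(W_zf @ W_zf.conj().T).real)        # ~ Pt
```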




The sensing performance depends on the beampattern created by the JCAS Tx. The beampattern for the transmitted signal may be given by:








\[ P(\varphi, \theta) = a^H(\varphi, \theta) \, \mathbb{E}\!\left[ X X^H \right] a(\varphi, \theta), \]




where:

\[ a(\varphi, \theta) = \exp\!\left( j \frac{2\pi}{\lambda} B u(\varphi, \theta) \right) \]

is the array steering vector (for both JCAS Tx and sensing Rx antenna arrays),

    • B ∈ ℝ^{N×3} is the matrix including the antenna positions of the Tx/Rx antenna array, and
    • u(φ,θ) = [cos θ sin φ  cos θ cos φ  sin θ]^T is the unit vector for the azimuth angle φ and the elevation angle θ.


Here, an array (such as a uniform linear array (ULA)) with half-wavelength spacing may be placed onto the x-axis and, therefore, the array steering vector can be expressed as a(φ,θ) = [1, e^{jπ sin φ}, e^{j2π sin φ}, . . . , e^{j(N−1)π sin φ}]^T, which is only a function of the azimuth angle by the symmetry of the ULA in elevation angles.
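For illustration, the ULA steering vector and the resulting transmit beampattern P(φ) = a^H(φ) R a(φ) can be evaluated as in the sketch below, where R stands for the transmit autocorrelation matrix (for example, R = WW^H under the unit-power data model above); all names are illustrative:

```python
import numpy as np

def ula_steering(phi_rad, N):
    """a(phi) = [1, e^{j*pi*sin(phi)}, ..., e^{j*(N-1)*pi*sin(phi)}]^T for a half-wavelength ULA."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(phi_rad))

def beampattern(R, phi_grid_deg):
    """P(phi) = a^H(phi) R a(phi), evaluated over a grid of azimuth angles."""
    N = R.shape[0]
    phis = np.deg2rad(phi_grid_deg)
    return np.array([np.real(ula_steering(p, N).conj() @ R @ ula_steering(p, N)) for p in phis])

# Example: beampattern of a precoder W with R = W W^H.
rng = np.random.default_rng(2)
N, K = 16, 2
W = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
P = beampattern(W @ W.conj().T, np.arange(-90, 91))
print(P.shape, P.max())
```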


An optimal sensing precoder W0 can be designed according to the prior information about nearby sensing targets. The procedure includes 4 steps.


Step-1: Determine a sensing beampattern according to the prior information about the targets.


In this step, a related sensing beampattern P(φ,θ) is determined. If there is no prior information about the targets, an omnidirectional pattern can be used where P(φ,θ) = 1 for all φ, θ. If the target angular locations are limited to an interval, a directional beampattern can be aimed for, where P(φ,θ) = 1 for φ ∈ [φ_i, φ_{i+1}], θ ∈ [θ_i, θ_{i+1}] for some i and zero elsewhere. Here, [φ_i, φ_{i+1}] and [θ_i, θ_{i+1}] are the azimuth and elevation intervals for the i-th target, respectively.


Step-2: Determine a desired sensing autocorrelation matrix Rd leading to the desired beampattern.


We solve the problem (P1) in this step:












\[ \min_{R_d} \; \left\| P(\varphi) - a^H(\varphi) R_d \, a(\varphi) \right\|_p \quad \text{such that} \quad R_d \succeq 0, \quad \operatorname{tr}\!\left( R_d \right) = P_t, \tag{P1} \]







where ∥⋅∥_p denotes any valid norm.
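One way to solve (P1) numerically is to discretize the azimuth grid and pose a semidefinite least-squares problem. The sketch below uses CVXPY with the ℓ2 norm; the library choice, the grid, and the function name are assumptions of this illustration rather than part of the disclosure:

```python
import numpy as np
import cvxpy as cp

def desired_autocorrelation(N, Pt, phi_grid_deg, desired_gain):
    """Solve (P1): min ||P(phi) - a^H(phi) Rd a(phi)||_2  s.t.  Rd >= 0, tr(Rd) = Pt."""
    phis = np.deg2rad(np.asarray(phi_grid_deg))
    steering = [np.exp(1j * np.pi * np.arange(N) * np.sin(p)) for p in phis]
    outers = [np.outer(a, a.conj()) for a in steering]              # a(phi) a(phi)^H (constants)

    Rd = cp.Variable((N, N), hermitian=True)
    # a^H(phi) Rd a(phi) = tr(Rd a a^H), which is affine in Rd.
    gains = cp.hstack([cp.real(cp.trace(Rd @ O)) for O in outers])
    problem = cp.Problem(
        cp.Minimize(cp.norm(np.asarray(desired_gain, dtype=float) - gains, 2)),
        [Rd >> 0, cp.real(cp.trace(Rd)) == Pt],
    )
    problem.solve()
    return Rd.value

# Example: single target interval [-5 deg, 5 deg], N = 16, Pt = 1.
grid = np.arange(-90, 91)
target = ((grid >= -5) & (grid <= 5)).astype(float)
Rd = desired_autocorrelation(16, 1.0, grid, target)
print(np.trace(Rd).real)   # ~ 1.0
```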


Step-3: Find a feasible sensing autocorrelation matrix for the JCAS transmission.


As the precoder matrix W has dimensions N×K, the equation WWH=Rd does not have a solution if rank(Rd)>K. To find a feasible solution, solve (P2) to find a feasible autocorrelation matrix Rd,K.












\[ \min_{R_{d,K}} \; \left\| R_d - R_{d,K} \right\| \quad \text{such that} \quad R_{d,K} \succeq 0, \quad \operatorname{rank}\!\left( R_{d,K} \right) \le K, \quad \operatorname{tr}\!\left( R_{d,K} \right) = \operatorname{tr}\!\left( R_d \right). \tag{P2} \]







The optimal solution of (P2) is given by:








\[ R_{d,K} = \sum_{k=1}^{K} \left( \lambda_k + \frac{1}{K} \sum_{i=K+1}^{N} \lambda_i \right) u_k u_k^H, \]






    • where λ1≥λ2≥ . . . ≥λN are the eigenvalues and u1, u2, . . . , uN are the corresponding eigenvectors of Rd.
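A NumPy sketch of this closed-form Step-3 solution is given below: the K dominant eigen-directions of R_d are kept and the discarded trace is redistributed uniformly, following the expression above (the function name and random test matrix are illustrative):

```python
import numpy as np

def feasible_autocorrelation(Rd, K):
    """Closed-form solution of (P2): keep the K dominant eigenvectors of Rd and
    add the discarded trace mass uniformly so that tr(Rd_K) = tr(Rd)."""
    lam, U = np.linalg.eigh(Rd)                # ascending eigenvalues
    lam, U = lam[::-1], U[:, ::-1]             # descending: lam_1 >= ... >= lam_N
    shift = lam[K:].sum() / K                  # (1/K) * sum_{i=K+1}^{N} lam_i
    e = lam[:K] + shift
    return (U[:, :K] * e) @ U[:, :K].conj().T

# Example: rank-K approximation of a random PSD matrix.
rng = np.random.default_rng(3)
N, K = 16, 2
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Rd = A @ A.conj().T
Rd_K = feasible_autocorrelation(Rd, K)
print(np.allclose(np.trace(Rd_K), np.trace(Rd)), np.linalg.matrix_rank(Rd_K))
```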





Example Proof: Let the eigenvalues of Rd,K be e1≥e2≥ . . . ≥eK≥eK+1=eK+2= . . . =eN=0. Using von Neumann's trace inequality:








\[ \operatorname{tr}\!\left( R_d R_{d,K} \right) \le \sum_{i=1}^{N} \lambda_i e_i = \sum_{i=1}^{K} \lambda_i e_i. \]







Therefore:











\[ \left\| R_d - R_{d,K} \right\|^2 = \operatorname{tr}\!\left( R_d^2 \right) + \operatorname{tr}\!\left( R_{d,K}^2 \right) - 2\operatorname{tr}\!\left( R_d R_{d,K} \right) \ge \sum_{i=1}^{N} \lambda_i^2 + \sum_{i=1}^{K} e_i^2 - 2\sum_{i=1}^{K} \lambda_i e_i = \sum_{i=K+1}^{N} \lambda_i^2 + \sum_{i=1}^{K} \left( e_i - \lambda_i \right)^2. \]








Using the Arithmetic Geometric Mean inequality:












\[ \sum_{i=1}^{K} \left( e_i - \lambda_i \right)^2 \ge \frac{1}{K} \left( \sum_{i=1}^{K} \left| e_i - \lambda_i \right| \right)^2 \ge \frac{1}{K} \left( \sum_{i=1}^{K} \left( e_i - \lambda_i \right) \right)^2 = \frac{1}{K} \left( \sum_{i=K+1}^{N} \lambda_i \right)^2, \]




where the last equality is derived from:










\[ \sum_{i=1}^{N} \lambda_i = \operatorname{tr}\!\left( R_d \right) = \operatorname{tr}\!\left( R_{d,K} \right) = \sum_{i=1}^{N} e_i = \sum_{i=1}^{K} e_i. \]









Therefore:











\[ \left\| R_d - R_{d,K} \right\|^2 \ge \sum_{i=K+1}^{N} \lambda_i^2 + \frac{1}{K} \left( \sum_{i=K+1}^{N} \lambda_i \right)^2. \]







The equality holds when:









\[ e_k - \lambda_k = \frac{1}{K} \left( \sum_{i=K+1}^{N} \lambda_i \right), \quad k = 1, 2, \ldots, K, \]




and the eigenvectors of Rd,K corresponding to e1, e2, . . . , eK are u1, u2, . . . , uK, respectively.


Step-4: As a final step, determine the optimal sensing precoder by solving (P3).











\[ \min_{W} \; \left\| HW - \frac{\sqrt{P_t}}{\left\| H^{\dagger}\Sigma \right\|} \Sigma \right\| \quad \text{such that} \quad W W^H = R_{d,K}. \tag{P3} \]







The solution of (P3) provides a precoder W0 with a desired sensing beampattern (W0W0H=Rd,K) and the solution is chosen to have a performance close to that of the ZF solution.


The optimal solution of (P3) is given by W_0 = FUV^H, where F ∈ ℂ^{N×K} is a matrix satisfying FF^H = R_{d,K} (such a matrix exists as rank(R_{d,K}) ≤ K ≤ N), and U and V are the matrices whose columns are the left and right singular vectors of F^HH^HΣ, respectively.
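A NumPy sketch of this Step-4 construction follows. Here F is taken from the eigendecomposition of R_{d,K}, which is one valid choice of a factor with FF^H = R_{d,K}; the function name and the random test data are illustrative assumptions:

```python
import numpy as np

def sensing_precoder(Rd_K, H, sigma, K):
    """Step-4: W0 = F U V^H with F F^H = Rd_K and U, V from the SVD of F^H H^H Sigma."""
    lam, Uvec = np.linalg.eigh(Rd_K)
    lam, Uvec = np.clip(lam[::-1], 0.0, None), Uvec[:, ::-1]
    F = Uvec[:, :K] * np.sqrt(lam[:K])                  # N x K factor with F F^H = Rd_K
    Sigma = np.diag(sigma)
    U, _, Vh = np.linalg.svd(F.conj().T @ H.conj().T @ Sigma)
    return F @ U @ Vh                                   # W0, with W0 W0^H = Rd_K

# Example: a rank-K PSD target autocorrelation and a random channel.
rng = np.random.default_rng(4)
N, K = 16, 2
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
B = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2 * N)
Rd_K = B @ B.conj().T
W0 = sensing_precoder(Rd_K, H, np.ones(K), K)
print(np.allclose(W0 @ W0.conj().T, Rd_K))              # True: sensing beampattern preserved
```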


Example Proof: Using several algebraic manipulations, we can write:










\[ \left\| HW - \frac{\sqrt{P_t}}{\left\| H^{\dagger}\Sigma \right\|} \Sigma \right\|^2 = \operatorname{tr}\!\left( H W W^H H^H \right) + \frac{P_t}{\left\| H^{\dagger}\Sigma \right\|^2} \operatorname{tr}\!\left( \Sigma^2 \right) - 2 \cdot \frac{\sqrt{P_t}}{\left\| H^{\dagger}\Sigma \right\|} \cdot \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( \Sigma H W \right) \right\}. \]







As tr(HWW^HH^H) = tr(HR_{d,K}H^H) is fixed, the term Re{tr(ΣHW)} may be maximized. By von Neumann's trace inequality:








\[ \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( \Sigma H W \right) \right\} \le \left| \operatorname{tr}\!\left( \Sigma H W \right) \right| \le \sum_{i=1}^{K} d_i, \]






    • where the d_i's are the singular values of the matrix ΣHW.





We know that the eigenvalues of the matrix ΣHWW^HH^HΣ are d_i². Hence, Re{tr(ΣHW)} is upper bounded by the sum of the square roots of the eigenvalues of the matrix ΣHWW^HH^HΣ = ΣHR_{d,K}H^HΣ.


When W = FUV^H,

\[ W W^H = F U V^H V U^H F^H = R_{d,K}. \]






Let the singular value decomposition of FHHHΣ be UDVH. Then:








\[ \Sigma H W = \Sigma H F U V^H = V D U^H U V^H = V D V^H. \]








As the singular values of ΣHW are di's, D=diag(d1, d2, . . . , dK).


Finally:








\[ \operatorname{tr}\!\left( \Sigma H W \right) = \operatorname{tr}\!\left( D \right) = \sum_{i=1}^{K} d_i, \]






    • which gives the optimal objective value.





Therefore:

    • W0=FUVH is the optimal precoding for sensing.


After finding the optimal sensing precoder, the proposed SAZF technique can be used to modify the ZF solution so that an efficient JCAS precoder can be designed.


The resulting JCAS precoder can be found by solving (P4).












\[ \min_{W,\mu} \; \left\| W - W_0 \right\| \quad \text{such that} \quad HW = \mu\Sigma, \quad \mu \ge \mu_0, \quad \operatorname{tr}\!\left( W W^H \right) = P_t, \tag{P4} \]

where

\[ \mu_0 = \frac{\sqrt{P_t}}{\left\| H^{\dagger}\Sigma \right\|} \cdot 10^{-c_{\mathrm{dB}}/20} \]

and









    • cdB is an input parameter that controls how much SINR loss is allowed for each UE compared to that of the ZF solution.





The optimal solution of (P4) is given by:








\[ W_{\mathrm{SAZF}} = c_1 H^{\dagger}\Sigma + c_2 V_2 V_2^H W_0, \qquad \mu_{\mathrm{SAZF}} = c_1, \]




where

    • V2 is a matrix whose columns form a basis for null space of H and








\[ c_1 = \max\!\left\{ a_1 \sqrt{\frac{a_2}{a_1^2 a_3 + a_3^2}}, \; \mu_0 \right\}, \qquad c_2 = \frac{\sqrt{P_t - c_1^2 \left\| H^{\dagger}\Sigma \right\|^2}}{\left\| W_0^H V_2 \right\|}, \]
\[ a_1 = \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( W_0^H H^{\dagger}\Sigma \right) \right\}, \qquad a_2 = P_t \left\| W_0^H V_2 \right\|^2, \qquad a_3 = \left\| H^{\dagger}\Sigma \right\|^2 \cdot \left\| W_0^H V_2 \right\|^2. \]
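A NumPy sketch of this closed-form SAZF solution is given below, combining the ZF direction H^†Σ with the null-space-projected sensing precoder W_0. The function name, the random test data, and the convention 10^(−c_dB/20) for the allowed SINR loss are assumptions of this illustration:

```python
import numpy as np

def sazf_precoder(H, sigma, Pt, W0, c_dB):
    """Closed-form solution of (P4): W = c1 * H^+ Sigma + c2 * V2 V2^H W0, mu = c1."""
    Sigma = np.diag(sigma)
    HpS = np.linalg.pinv(H) @ Sigma
    nHpS = np.linalg.norm(HpS, 'fro')

    # V2: orthonormal basis of the null space of H (right singular vectors of zero singular values).
    _, _, Vh = np.linalg.svd(H)
    r = np.linalg.matrix_rank(H)
    V2 = Vh[r:, :].conj().T

    mu0 = np.sqrt(Pt) / nHpS * 10 ** (-c_dB / 20)        # allowed SINR loss of c_dB
    nW0V2 = np.linalg.norm(W0.conj().T @ V2, 'fro')
    a1 = np.real(np.trace(W0.conj().T @ HpS))
    a2 = Pt * nW0V2 ** 2
    a3 = nHpS ** 2 * nW0V2 ** 2
    c1 = max(a1 * np.sqrt(a2 / (a1 ** 2 * a3 + a3 ** 2)), mu0)
    c2 = np.sqrt(Pt - c1 ** 2 * nHpS ** 2) / nW0V2
    return c1 * HpS + c2 * (V2 @ V2.conj().T @ W0), c1

# Example: c_dB = 3 dB loss allowance, random channel and sensing precoder.
rng = np.random.default_rng(5)
K, N, Pt = 2, 16, 1.0
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)
W0 = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))
W0 *= np.sqrt(Pt / np.trace(W0 @ W0.conj().T).real)       # tr(W0 W0^H) = Pt
W, mu = sazf_precoder(H, np.ones(K), Pt, W0, c_dB=3.0)
print(np.allclose(H @ W, mu * np.eye(K)), np.trace(W @ W.conj().T).real)
```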







Example Proof: first consider the problem for a fixed μ. As tr(WWH)=tr(W0W0H)=Pt, Re{tr(W0H W)} may be maximized as:










\[ \left\| W - W_0 \right\|^2 = \operatorname{tr}\!\left( W W^H \right) + \operatorname{tr}\!\left( W_0 W_0^H \right) - 2 \cdot \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( W_0^H W \right) \right\}. \]







All solutions of the linear equation HW = μΣ can be expressed in the form W = μH^†Σ + V_2W_1, where V_2 is a matrix whose columns form a basis for the null space of H and W_1 is any matrix with suitable dimensions.


Specifically, V2 can be selected using the singular value decomposition of H:







\[ H = \begin{pmatrix} U_1 & U_2 \end{pmatrix} \begin{pmatrix} S_1 & 0_{r \times (N-r)} \\ 0_{(K-r) \times r} & 0_{(K-r) \times (N-r)} \end{pmatrix} \begin{pmatrix} V_1^H \\ V_2^H \end{pmatrix}, \]




where

    • r = rank(H) (e.g., the number of non-zero singular values),
    • S_1 ∈ ℝ^{r×r} is a diagonal matrix with the non-zero singular values on its diagonal,
    • the columns of U_1 ∈ ℂ^{K×r} and U_2 ∈ ℂ^{K×(K−r)} are left singular vectors, and
    • the columns of V_1 ∈ ℂ^{N×r} and V_2 ∈ ℂ^{N×(N−r)} are right singular vectors of H.


In this case, the pseudoinverse of H can be calculated as:







\[ H^{\dagger} = V_1 S_1^{-1} U_1^H. \]






As the right singular vectors are mutually orthogonal, i.e., V_2^HV_1 = 0, we have V_2^HH^† = 0. In this case:







\[ \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( W_0^H W \right) \right\} = \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( W_0^H \left( \mu H^{\dagger}\Sigma + V_2 W_1 \right) \right) \right\} = \operatorname{Re}\!\left\{ \mu \operatorname{tr}\!\left( W_0^H H^{\dagger}\Sigma \right) + \operatorname{tr}\!\left( W_0^H V_2 W_1 \right) \right\}. \]







Hence, Re{tr(W_0^HV_2W_1)} may be maximized, as the term Re{μ tr(W_0^HH^†Σ)} is fixed. By the transmit power constraint and the fact that V_2^HH^† = 0:







\[ P_t = \operatorname{tr}\!\left( W W^H \right) = \left\| \mu H^{\dagger}\Sigma + V_2 W_1 \right\|^2 = \mu^2 \left\| H^{\dagger}\Sigma \right\|^2 + \left\| V_2 W_1 \right\|^2. \]








As V2HV2=I:











\[ \left\| W_1 \right\|^2 + \mu^2 \left\| H^{\dagger}\Sigma \right\|^2 = P_t. \]





Hence, an equivalent problem becomes:












\[ \max_{W_1} \; \operatorname{tr}\!\left( W_0^H V_2 W_1 \right) + \operatorname{tr}\!\left( W_1^H V_2^H W_0 \right) \quad \text{such that} \quad \left\| W_1 \right\|^2 + \mu^2 \left\| H^{\dagger}\Sigma \right\|^2 = P_t. \tag{P4.1} \]







(P4.1) is a quadratic programming problem with a single quadratic constraint and a linear objective function. It can be shown that strong duality holds for this problem, and hence a Lagrange multipliers method provides the globally optimal solution. The Lagrangian can be written as:










\[ \mathcal{L}\!\left( W_1 \right) = \operatorname{tr}\!\left( W_0^H V_2 W_1 \right) + \operatorname{tr}\!\left( W_1^H V_2^H W_0 \right) + \lambda \left( \left\| W_1 \right\|^2 + \mu^2 \left\| H^{\dagger}\Sigma \right\|^2 - P_t \right), \]




and when:











\[ \frac{\partial \mathcal{L}}{\partial W_1^{*}} = V_2^H W_0 + \lambda W_1 = 0, \]




The result is:







\[ W_1 = -\frac{1}{\lambda} V_2^H W_0. \]






In this case:






\[ W = \mu H^{\dagger}\Sigma + V_2 W_1 = \mu H^{\dagger}\Sigma - \frac{1}{\lambda} V_2 V_2^H W_0. \]









In other words, the solution of (P4) with c_1 = μ and c_2 = −1/λ is obtained. By the condition tr(WW^H) = P_t:









\[ \frac{1}{\lambda^2} \left\| V_2^H W_0 \right\|^2 + \mu^2 \left\| H^{\dagger}\Sigma \right\|^2 = P_t, \]





yielding:









"\[LeftBracketingBar]"


1
λ



"\[RightBracketingBar]"


=





P
t

-


μ
2








H







2









W
0
H



V
2





.





W0HV2≠0 can be assumed because otherwise, the objective function in (P4.1) becomes zero for any W1 showing that it is not possible to enhance sensing performance and W1=0 can be selected to obtain the standard ZF solution.


To maximize








\[ \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( W_0^H V_2 W_1 \right) \right\} = -\frac{1}{\lambda} \left\| W_0^H V_2 \right\|^2, \]




λ<0 and hence:








\[ c_2 = -\frac{1}{\lambda} = \frac{\sqrt{P_t - c_1^2 \left\| H^{\dagger}\Sigma \right\|^2}}{\left\| W_0^H V_2 \right\|}, \]




as desired.


Having solved (P4.1) for a fixed μ, to find the optimal solution of (P4), the objective function Re{tr(W_0^HW)} can be maximized with respect to μ. So, Re{μ tr(W_0^HH^†Σ) + tr(W_0^HV_2W_1)} can be maximized. Using the fact:







\[ \operatorname{tr}\!\left( W_0^H V_2 W_1 \right) = -\frac{1}{\lambda} \left\| W_0^H V_2 \right\|^2 = \left\| W_0^H V_2 \right\| \sqrt{P_t - \mu^2 \left\| H^{\dagger}\Sigma \right\|^2}, \]










The following is to be solved:












\[ \max_{\mu} \; \mu \operatorname{Re}\!\left\{ \operatorname{tr}\!\left( W_0^H H^{\dagger}\Sigma \right) \right\} + \left\| W_0^H V_2 \right\| \sqrt{P_t - \mu^2 \left\| H^{\dagger}\Sigma \right\|^2} \quad \text{such that} \quad \mu \ge \mu_0. \tag{P4.2} \]







The objective function in (P4.2) can be expressed as ƒ(μ) = a_1μ + √(a_2 − a_3μ²), where a_1, a_2, a_3 are defined as given in the optimal solution of (P4). The following is known:











\[ \frac{\partial^2 f(\mu)}{\partial \mu^2} = -\frac{a_2 a_3}{\left( a_2 - a_3 \mu^2 \right)^{3/2}} \le 0, \]




and hence ƒ(μ) is concave for






\[ \mu \le \sqrt{\frac{a_2}{a_3}}. \]





Therefore, the optimal solution of (P4.2) is obtained at μ=max{μopt, μ0} where μopt is the solution of










\[ \frac{\partial f(\mu)}{\partial \mu} = 0. \]





μopt can be obtained by solving:










\[ \frac{\partial f(\mu)}{\partial \mu} = a_1 - \frac{a_3 \mu}{\sqrt{a_2 - a_3 \mu^2}} = 0 \quad \Longrightarrow \quad \mu_{\mathrm{opt}} = a_1 \sqrt{\frac{a_2}{a_1^2 a_3 + a_3^2}}. \]








Finally, as the optimal solution of (P4.2), the following is obtained:







\[ \mu = \max\!\left\{ a_1 \sqrt{\frac{a_2}{a_1^2 a_3 + a_3^2}}, \; \mu_0 \right\}, \]




as desired. The proof is completed by solutions of (P4.1) and (P4.2).


Using the solution of (P4), a JCAS precoder can be designed as described in Step-5.


Step-5: Design a JCAS precoder using W0 (output of Step-4), the communication channel matrix H, and the input control parameter cdB by solving (P4).


The condition in (P4) HW=μΣ is a linear equation with NK unknowns (the entries of W) with K2 equations for a fixed μ. Hence, for K<N, there are infinitely many solutions with the degrees of freedom (N−K)K. The method SAZF tries to use all degrees of freedom to optimize sensing performance by also optimizing μ under the given communication SINR loss allowance.


As shown in FIG. 1C and example 110, example beampatterns are shown for SAZF for c_dB = 0 dB (solid line) and c_dB = 3 dB (dotted line) and for two UEs with 3 dominant angular paths with azimuth angles φ = −15°, −25°, −35° for UE 1 and φ = 40°, 20°, 50° for UE 2, and a single target in [−5°, 5°]. For both UEs, the normalized channel path gains are 0, −5, −10 dB, respectively. There is an N = 16 element ULA with half-wavelength inter-element spacing at the JCAS Tx.


In some cases, due to thermal noise, inter/intra-cell interference, or hardware impairments, among other examples, the channel coefficients might not be perfectly estimated. Furthermore, if the estimated channel coefficients are fed back to the transmitter, to decrease the feedback overhead, a quantization is applied introducing additional errors. Two channel estimation error types under two cases can be considered:


Type-1 channel estimation error: The error obtained in the channel estimation algorithm due to thermal noise, other interference signals, and/or hardware impairments. This is modelled as an additive complex Gaussian noise with known second order statistics.


Type-2 channel estimation error: The error obtained due to quantization of the estimated channel coefficients. This is modelled as an additive norm-bounded error where the maximum error amount is determined by the selected quantization points and assumed to be known.


Case 1: Reciprocity-based operation in TDD mode where the downlink channels are estimated via uplink pilot symbols. In this case, only the type-1 channel estimation error is effective.


Case 2: Downlink channel estimation operation in the UE side in TDD or FDD mode where the estimated coefficients are quantized and fed back to the JCAS Tx. In this case, type-1 and type-2 channel estimation errors are effective.


To cover all cases and error types, the channel matrix H is assumed to be known by the JCAS Tx with type-1 and type-2 errors:








\[ h_k = \hat{h}_k + \Delta h_k + \Delta q_k, \]




where Δh_k ~ 𝒞𝒩(0, ΔR_{h,k}) is the type-1 error (ΔR_{h,k} is the covariance matrix of Δh_k, e.g., 𝔼[Δh_k^HΔh_k] = ΔR_{h,k}) and Δq_k is the type-2 error satisfying ∥Δq_k∥ ≤ e_k for all k, where e_k is determined by the inter-distances of the quantized channel vectors. This model is suitable for both Case 1 and Case 2 (in Case 1, e_k = 0, ∀k). As the transmitter has only the estimated channel information, the signal model can be modified as:







\[ y_k = \sum_{\ell=1}^{K} h_k w_\ell s_\ell + z_k = \underbrace{\hat{h}_k w_k s_k}_{\text{desired}} + \underbrace{\Delta h_k w_k s_k}_{\text{mismatch},1} + \underbrace{\Delta q_k w_k s_k}_{\text{mismatch},2} + \underbrace{h_k \sum_{\ell \neq k} w_\ell s_\ell}_{\text{interference}} + \underbrace{z_k}_{\text{noise}}. \]







The SINR for each user can be expressed as:








\[ \mathrm{SINR}_k = \frac{\left| \hat{h}_k w_k \right|^2}{\mathbb{E}\!\left[ \left| \Delta h_k w_k s_k + \Delta q_k w_k s_k + h_k \sum_{\ell \neq k} w_\ell s_\ell + z_k \right|^2 \right]}, \quad \forall k, \]




where the expectation is over Δh_k, s_ℓ (for all ℓ), and z_k.


Due to channel estimation errors, the modified SINR expression has extra terms in the denominator (compared to a perfect CSI case), decreasing the communication performance. Robust algorithms may be used to decrease the effects of the channel estimation errors.


As shown in FIG. 1D and example 120, an SAZF precoder may be designed for imperfect CSI. As shown by reference number 122, a JCAS device may evaluate λmax(ΔRh,k) and σk2 using the existing reference symbols for each UE. As shown by reference number 124, the JCAS device may evaluate ek using the quantization points for channel vectors for each UE. As shown by reference number 126, the JCAS device may calculate a modified error matrix Σ′. The JCAS device may obtain a modified JCAS precoder by running a robust SAZF algorithm via Step 1-5 by replacing Σ with Σ′ in the solution of (P4) in Step 5. Additional details regarding these features are described below.
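A small sketch of the modification described above follows: the per-UE terms δ_k are formed, Σ′ = diag(δ_1, . . . , δ_K) is built, and the SAZF routine is rerun with Σ′ in place of Σ. The sazf_precoder helper refers to the illustrative function sketched earlier, and all inputs are assumed to be estimated elsewhere:

```python
import numpy as np

def robust_sigma(Pt, lambda_max, e, sigma2):
    """delta_k = sqrt(Pt * lambda_max(dR_h,k) + Pt * e_k^2 + sigma_k^2) for each UE."""
    lambda_max, e, sigma2 = map(np.asarray, (lambda_max, e, sigma2))
    return np.sqrt(Pt * lambda_max + Pt * e ** 2 + sigma2)

# Example: UE 1 with a large type-1 error and UE 2 with a small one, as in the figure discussion.
Pt = 1.0
delta = robust_sigma(Pt, lambda_max=[1.0, 0.01], e=[0.4, 0.4], sigma2=[1.0, 1.0])
# W_robust, mu = sazf_precoder(H_est, delta, Pt, W0, c_dB=3.0)   # Sigma replaced by Sigma'
print(delta)
```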


In some aspects, to increase the robustness against channel estimation errors, a modification to the SAZF algorithm may be used.


We can calculate the denominator in the modified SINR as:







\[ \mathbb{E}\!\left[ \left| \Delta h_k w_k s_k + \Delta q_k w_k s_k + h_k \sum_{\ell \neq k} w_\ell s_\ell + z_k \right|^2 \right] = \mathbb{E}\!\left[ \left| \Delta h_k w_k + \Delta q_k w_k \right|^2 \right] + \mathbb{E}\!\left[ \left| h_k \sum_{\ell \neq k} w_\ell s_\ell \right|^2 \right] + \sigma_k^2. \]






The first term is equal to:







\[ f_1 = \mathbb{E}\!\left[ \left| \Delta h_k w_k + \Delta q_k w_k \right|^2 \right] = w_k^H \Delta R_{h,k} w_k + \Delta q_k w_k w_k^H \Delta q_k^H. \]








The second term is equal to:







\[ f_2 = \mathbb{E}\!\left[ \left| h_k \sum_{\ell \neq k} w_\ell s_\ell \right|^2 \right] = \sum_{\ell \neq k} \left( \left( \hat{h}_k + \Delta q_k \right) w_\ell w_\ell^H \left( \hat{h}_k + \Delta q_k \right)^H + w_\ell^H \Delta R_{h,k} w_\ell \right). \]







Hence:








\[ f_1 + f_2 = \sum_{\ell \neq k} \left( \hat{h}_k + \Delta q_k \right) w_\ell w_\ell^H \left( \hat{h}_k + \Delta q_k \right)^H + \sum_{\ell=1}^{K} w_\ell^H \Delta R_{h,k} w_\ell + \Delta q_k w_k w_k^H \Delta q_k^H. \]







By Von-Neumann's theorem:









\[ w_\ell^H \Delta R_{h,k} w_\ell \le \lambda_{\max}\!\left( \Delta R_{h,k} \right) w_\ell^H w_\ell, \]




where λmax(⋅) shows the maximum eigenvalue of the Hermitian and positive semidefinite matrix in its argument. Using the last observation and the transmit power condition:








\[ P_t = \sum_{\ell=1}^{K} w_\ell^H w_\ell, \]




we obtain that:








\[ f_1 + f_2 \le \sum_{\ell \neq k} \left( \hat{h}_k + \Delta q_k \right) w_\ell w_\ell^H \left( \hat{h}_k + \Delta q_k \right)^H + P_t \cdot \lambda_{\max}\!\left( \Delta R_{h,k} \right) + \Delta q_k w_k w_k^H \Delta q_k^H. \]







Here, Pt is the total transmit power of the JCAS Tx for the whole resource block with L samples.


Since:









\[ \left( \hat{h}_k + \Delta q_k \right) w_\ell w_\ell^H \left( \hat{h}_k + \Delta q_k \right)^H = \hat{h}_k w_\ell w_\ell^H \hat{h}_k^H + \Delta q_k w_\ell w_\ell^H \Delta q_k^H + \hat{h}_k w_\ell w_\ell^H \Delta q_k^H + \Delta q_k w_\ell w_\ell^H \hat{h}_k^H, \]




we obtain that:









\[ f_1 + f_2 \le \Delta q_k R_w \Delta q_k^H + \Delta q_k R_{w,k} \hat{h}_k^H + \hat{h}_k R_{w,k} \Delta q_k^H + \sum_{\ell \neq k} \hat{h}_k w_\ell w_\ell^H \hat{h}_k^H + P_t \cdot \lambda_{\max}\!\left( \Delta R_{h,k} \right), \]





where:







\[ R_w = \sum_{\ell=1}^{K} w_\ell w_\ell^H, \qquad R_{w,k} = \sum_{\ell \neq k} w_\ell w_\ell^H. \]








Using von-Neumann's inequality once more:








\[ \Delta q_k R_w \Delta q_k^H \le \lambda_{\max}\!\left( R_w \right) \Delta q_k \Delta q_k^H \le P_t \, e_k^2, \]




as the trace of the matrix Rw is equal to Pt and the quantization error is norm bounded by ek.


As a result:







\[ \mathrm{SINR}_k \ge \frac{\left| \hat{h}_k w_k \right|^2}{\sum_{\ell \neq k} \left| \hat{h}_k w_\ell \right|^2 + P_t \cdot \lambda_{\max}\!\left( \Delta R_{h,k} \right) + \Delta q_k R_{w,k} \hat{h}_k^H + \hat{h}_k R_{w,k} \Delta q_k^H + P_t \, e_k^2 + \sigma_k^2}. \]





To obtain high SINR values, a commonly used approach in communication theory is zero-forcing (ZF) precoding, which eliminates the multi-user interference by setting ĥ_k w_ℓ = 0, ∀k ≠ ℓ. When ZF precoding is used:








\[ R_{w,k} \hat{h}_k^H = \sum_{\ell \neq k} w_\ell w_\ell^H \hat{h}_k^H = 0. \]






Therefore, the SINR of the k-th UE is lower bounded by:








\[ \mathrm{SINR}_k \ge \frac{\left| \hat{h}_k w_k \right|^2}{P_t \cdot \lambda_{\max}\!\left( \Delta R_{h,k} \right) + P_t \, e_k^2 + \sigma_k^2}, \quad \forall k. \]






To maintain fairness between the users, the following condition is obtained:










\[ \hat{h}_k w_k = \mu \sqrt{P_t \cdot \lambda_{\max}\!\left( \Delta R_{h,k} \right) + P_t \, e_k^2 + \sigma_k^2}, \quad \forall k, \]




for some non-negative real number μ to have SINRk≥μ2, ∀k.


In the perfect CSI case, ΔRh,k=0 and ek=0 and hence ĥkwk=μσk, ∀k.


The related conditions can be written in the matrix form as:







\[ \hat{H} W = \mu \Sigma', \]






    • where Σ′ = diag(δ_1, δ_2, . . . , δ_K), with δ_k = √(P_t·λ_max(ΔR_{h,k}) + P_t e_k² + σ_k²), ∀k, and Ĥ is the matrix formed by concatenating the estimated channel vectors ĥ_k for all k.





The robust version of the SAZF provides the same SINR of all communication UEs and may be beneficial when there are UEs with different levels of channel estimation errors.


As shown in FIG. 1E and example 128, beampatterns obtained by SAZF and robust SAZF may be compared. In Case 1, there is no channel estimation error and SAZF is used. In Case 2, there are channel estimation errors, but non-robust SAZF is applied as if there is perfect CSI. Case 3 presents the results for robust SAZF when there is no perfect CSI. Two UEs with different channel estimation error levels may be considered. The robust SAZF tries to maintain a similar performance to both UEs and hence the gain values at UE 1 (with larger channel estimation errors) angles are higher in robust SAZF design. On the other hand, for UE 2, the gains are lower. In general, robust SAZF aims at enhancing performance of UEs with larger channel estimation errors while maintaining similar median SINRs for all UEs. SAZF and its robust version with cdB=3 is shown for two UEs with 3 dominant angular paths with azimuth angles φ=−15°, −25°, −35° for UE 1 and φ=40°, 20°, 50° for UE 2, and a single target in [−5°, 5°]. For both UEs, the normalized channel path gains are 0, −5, −10 dB, respectively. The channel estimation error for UE 1 is high (λmax(ΔRh,1)=1, e1=0.4, σ12=1) and the channel estimation error for UE 2 is low (λmax(ΔRh,2)=0.01, e2=0.4, σ22=1), where Pt=1. There is an N=16 element ULA with half-wavelength inter-element spacing at the JCAS Tx.


In some aspects, robust SAZF can provide equal and optimized SINRs under channel estimation errors. To show the level of type-1 and type-2 channel estimation errors, two metrics can be defined as below:








\[ \mathrm{SNR}_{\mathrm{ch},1,k} = \frac{\hat{h}_k \hat{h}_k^H}{\operatorname{tr}\!\left( \Delta R_{h,k} \right)}, \qquad \mathrm{SNR}_{\mathrm{ch},2,k} = \frac{\hat{h}_k \hat{h}_k^H}{e_k^2}, \quad \forall k. \]





Here, SNRch,1,k shows the ratio of the estimated channel power to type-1 error power, and SNRch,2,k shows the ratio of the estimated channel power to maximum type-2 error power for the k-th UE. The following may be defined:








\[ \mathrm{SNR}_k = \frac{P_t}{\sigma_k^2} \, \hat{h}_k \hat{h}_k^H, \]




which denotes the signal-to-noise ratio (SNR) value of the k-th UE in the case where it is the only UE (no interference), there is no channel estimation error, and the optimal communications precoder (maximal ratio transmission) is applied at the JCAS Tx side:







\[ w_k = \sqrt{\frac{P_t}{\hat{h}_k \hat{h}_k^H}} \, \hat{h}_k^H. \]






In other words, SNRk is the maximum SNR for the k-th UE for any precoder which can be attained in the ideal single user MIMO case.


We know that the lower bound provided by robust SAZF is the same for all UEs, e.g.:












"\[LeftBracketingBar]"




h
ˆ

k



w
k




"\[RightBracketingBar]"


2




P
t

·


λ
max

(

Δ


R

h
,
k



)


+


P
t



e
k
2


+

σ
k
2



=

SINR
0


,



k
.






Using the Cauchy-Schwarz inequality:










"\[LeftBracketingBar]"




h
ˆ

k



w
k




"\[RightBracketingBar]"


2





h
ˆ

k




h
ˆ

k
H



w
k
H




w
k

.






Using the fact N·λmax(ΔRh,k)≥tr(ΔRh,k):








\[ \mathrm{SINR}_0 \le \frac{w_k^H w_k}{P_t} \cdot \frac{1}{\dfrac{1}{N \cdot \mathrm{SNR}_{\mathrm{ch},1,k}} + \dfrac{1}{\mathrm{SNR}_{\mathrm{ch},2,k}} + \dfrac{1}{\mathrm{SNR}_k}}, \quad \forall k. \]






Therefore:









\[ \left( \frac{1}{N \cdot \mathrm{SNR}_{\mathrm{ch},1,k}} + \frac{1}{\mathrm{SNR}_{\mathrm{ch},2,k}} + \frac{1}{\mathrm{SNR}_k} \right) \mathrm{SINR}_0 \le \frac{w_k^H w_k}{P_t}, \quad \forall k. \]






If the above inequality is summed over all k and Σ_{k=1}^{K} w_k^H w_k = P_t is used:







\[ \mathrm{SINR}_0 \le \left[ \sum_{k=1}^{K} \left( \frac{1}{N \cdot \mathrm{SNR}_{\mathrm{ch},1,k}} + \frac{1}{\mathrm{SNR}_{\mathrm{ch},2,k}} + \frac{1}{\mathrm{SNR}_k} \right) \right]^{-1}. \]





The last inequality shows that under channel estimation errors, even if the transmit power is large (resulting in large SNRk values), the SINR values of the UEs may not be large due to small SNRch,1,k and/or small SNRch,2,k values. In one example, assume that SNRch,1,k=SNRch,2,k=10 dB for all K=4 UEs for N=16. Let Pt→∞. In this case, SINR0≤3.72 dB, which is low considering infinite transmission power. When the robust version of SAZF is not used, some UEs may have even lower SINRs.
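The closing numerical example can be reproduced directly from the last inequality (a quick arithmetic check with the values stated above):

```python
import numpy as np

K, N = 4, 16
snr_ch1 = snr_ch2 = 10 ** (10 / 10)          # 10 dB in linear scale
inv_snr_k = 0.0                              # Pt -> infinity, so 1/SNR_k -> 0
bound = 1.0 / (K * (1.0 / (N * snr_ch1) + 1.0 / snr_ch2 + inv_snr_k))
print(10 * np.log10(bound))                  # ~ 3.72 dB
```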


In some cases, the received signal at the sensing Rx can be represented as follows:

    • Y_s(t) = H_LoS X(t − t_LoS) e^{j2πf_LoS t} + Σ_{j=1}^{T} H_{s,j} X(t − t_{s,j}) e^{j2πf_{s,j} t} + Z_s(t), where
    • Y_s(t) ∈ ℂ^{N×L} is the received signal matrix at the sensing Rx at the time instant t,
    • H_LoS ∈ ℂ^{N×N} is the direct path channel matrix between the JCAS Tx and the sensing Rx,
    • H_{s,j} ∈ ℂ^{N×N} is the sensing channel matrix for the j-th target, including the two-way path-loss and the power drop due to radar cross section,
    • X(t − t_LoS) = WS(t − t_LoS) and X(t − t_{s,j}) = WS(t − t_{s,j}) are the precoded signal matrices at the time instants t − t_LoS and t − t_{s,j}, respectively,
    • f_LoS and f_{s,j} are the Doppler frequency shifts for the direct path and the j-th target, respectively, and
    • Z_s(t) ∈ ℂ^{N×L} is the noise matrix related to thermal noise at the sensing Rx at the time instant t.


Here tLoS=dLoS/c shows the time delay for the direct path, T is the total number of targets, and ts,j is the time delay for the j-th target.


The path-loss for reflected signals from targets may be much higher than that of a direct path, and hence the estimation of H_LoS can be performed by minimizing the mean square error, so that an accurate estimate of H_LoS can be obtained. The JCAS Tx and sensing Rx can perform some information exchange about the transmitted signal X(t). For example, in mono-static cases, they are located near each other and a direct connection is possible. In bi-static cases, the information exchange can be performed via dedicated links such as wired or wireless connections over the core network. As the locations and velocities (in case of mobile nodes) of the JCAS Tx and sensing Rx are known by both ends, the initial estimate of the delay t_LoS (via d_LoS) and f_LoS can be calculated, and fine-tuning can be done by cross-correlation of the received samples with the signal X(t).


As shown in FIG. 1F and example 130, a technique is proposed to suppress the direct path between JCAS Tx and sensing Rx. As shown by reference number 132, a JCAS device may find a singular value decomposition of a direct path channel matrix. As shown by reference number 134, the JCAS device may calculate a null-space channel matrix. As shown by reference number 136, the JCAS device may evaluate the eigenvalue decomposition of a transformed sensing autocorrelation matrix. As shown by reference number 138, the JCAS device may calculate a modified feasible sensing autocorrelation matrix. Additional details are described below.


In some aspects, the optimal sensing precoder W0 obtained at the output of Step-4 may be modified so that the Tx beampattern nulls the direct path out. The HLoS matrix may be used as the input. Problem (P2) may be modified as:










\[ \min_{R_{d,K}} \; \left\| R_d - R_{d,K} \right\| \quad \text{such that} \quad R_{d,K} \succeq 0, \quad \operatorname{rank}\!\left( R_{d,K} \right) \le K, \quad \operatorname{tr}\!\left( R_{d,K} \right) = \operatorname{tr}\!\left( R_d \right), \quad H_{\mathrm{LoS}} R_{d,K} H_{\mathrm{LoS}}^H = 0. \tag{P2.1} \]







The optimal solution to this problem is given by Proposition 1.


Proposition 1: The optimal solution of (P2.1) is given by








\[ R_{d,K} = \sum_{i=1}^{q} \left( \lambda_i + \frac{1}{q} \left( \operatorname{tr}\!\left( R_d \right) - \sum_{j=1}^{q} \lambda_j \right) \right) V_2 u_i u_i^H V_2^H, \]






    • where q = min{N − r, K} and r = rank(H_LoS),
    • λ_1 ≥ λ_2 ≥ . . . ≥ λ_{N−r} are the eigenvalues of V_2^H R_d V_2 and
    • u_1, u_2, . . . , u_{N−r} are the corresponding eigenvectors, and
    • V_2 is defined using the singular value decomposition of H_LoS as










\[ H_{\mathrm{LoS}} = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^H \\ V_2^H \end{bmatrix}, \]







    • where Σ_1 ∈ ℝ^{r×r} includes the non-zero singular values in descending order,
    • U_1 ∈ ℂ^{N×r} and U_2 ∈ ℂ^{N×(N−r)} include the left singular vectors, and
    • V_1 ∈ ℂ^{N×r} and V_2 ∈ ℂ^{N×(N−r)} include the right singular vectors.





Example proof: The proof can be completed via the following steps:


Step 1: Let the eigenvalue decomposition of Rd,K be:







\[ R_{d,K} = \begin{bmatrix} U_3 & U_4 \end{bmatrix} \begin{bmatrix} \Sigma_2 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} U_3^H \\ U_4^H \end{bmatrix} = U_3 \Sigma_2 U_3^H, \]









    • where Σ_2 includes all the non-zero eigenvalues on its diagonal.





The condition H_LoS R_{d,K} H_LoS^H = 0 is equivalent to U_3 = V_2V_3 for some matrix V_3 with orthonormal columns and suitable dimensions.


Step 2: As a result of step 1:







\[ R_{d,K} = V_2 V_3 \Sigma_2 V_3^H V_2^H = V_2 R V_2^H, \]







where

    • R ∈ ℂ^{(N−r)×(N−r)} is a matrix satisfying R = V_3 Σ_2 V_3^H ⪰ 0.


As V2HV2=I, tr(Rd,K)=tr(R).


Since all columns of V2 are independent, rank(Rd,K)=rank(R).


Step 3: As R ∈ ℂ^{(N−r)×(N−r)}, rank(R) ≤ N − r. Using rank(R) = rank(R_{d,K}) ≤ K, rank(R) ≤ min{N − r, K} = q. Let the eigenvalues of R be e_1 ≥ e_2 ≥ . . . ≥ e_q ≥ e_{q+1} = e_{q+2} = . . . = e_{N−r} = 0. Using von Neumann's inequality:








\[ \operatorname{tr}\!\left( V_2^H R_d V_2 R \right) \le \sum_{i=1}^{N-r} \lambda_i e_i = \sum_{i=1}^{q} \lambda_i e_i. \]







Therefore:














\[ \begin{aligned} \left\| R_d - R_{d,K} \right\|^2 &= \operatorname{tr}\!\left( R_d^2 \right) + \operatorname{tr}\!\left( R_{d,K}^2 \right) - 2 \cdot \operatorname{tr}\!\left( R_d R_{d,K} \right) \\ &= \operatorname{tr}\!\left( R_d^2 \right) + \operatorname{tr}\!\left( V_2 R V_2^H V_2 R V_2^H \right) - 2 \cdot \operatorname{tr}\!\left( R_d V_2 R V_2^H \right) \\ &= \operatorname{tr}\!\left( R_d^2 \right) + \operatorname{tr}\!\left( R^2 \right) - 2 \cdot \operatorname{tr}\!\left( V_2^H R_d V_2 R \right) \\ &\ge \operatorname{tr}\!\left( R_d^2 \right) + \sum_{i=1}^{q} e_i^2 - 2 \sum_{i=1}^{q} \lambda_i e_i \\ &= \operatorname{tr}\!\left( R_d^2 \right) - \sum_{i=1}^{q} \lambda_i^2 + \sum_{i=1}^{q} \left( e_i - \lambda_i \right)^2. \end{aligned} \]










Using the Arithmetic-Geometric Mean inequality:












\[ \sum_{i=1}^{q} \left( e_i - \lambda_i \right)^2 \ge \frac{1}{q} \left( \sum_{i=1}^{q} \left| e_i - \lambda_i \right| \right)^2 \ge \frac{1}{q} \left( \sum_{i=1}^{q} \left( e_i - \lambda_i \right) \right)^2 = \frac{1}{q} \left( \operatorname{tr}\!\left( R_d \right) - \sum_{i=1}^{q} \lambda_i \right)^2, \]




where the last equality is obtained from:










\[ \sum_{i=1}^{q} e_i = \operatorname{tr}\!\left( R \right) = \operatorname{tr}\!\left( R_{d,K} \right) = \operatorname{tr}\!\left( R_d \right). \]








Hence:











\[ \left\| R_d - R_{d,K} \right\|^2 \ge \operatorname{tr}\!\left( R_d^2 \right) - \sum_{i=1}^{q} \lambda_i^2 + \frac{1}{q} \left( \operatorname{tr}\!\left( R_d \right) - \sum_{i=1}^{q} \lambda_i \right)^2. \]







The equality holds when:









\[ e_i - \lambda_i = \frac{1}{q} \left( \operatorname{tr}\!\left( R_d \right) - \sum_{j=1}^{q} \lambda_j \right), \quad i = 1, \ldots, q, \]




and eigenvectors of R are u1, u2, . . . , uN-r yielding:







\[ R_{d,K} = \sum_{i=1}^{q} \left( \lambda_i + \frac{1}{q} \left( \operatorname{tr}\!\left( R_d \right) - \sum_{j=1}^{q} \lambda_j \right) \right) V_2 u_i u_i^H V_2^H. \]







Solution of (P2.1) provides cancellation of the direct path when the beampattern is formed using the autocorrelation matrix Rd,K. Hence, if the modified optimal sensing precoder W0 is generated following Steps 1-4 where (P2.1) is solved instead of P2 in Step-3, then the direct path can be cancelled.
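A NumPy sketch of Proposition 1 follows: R_d is projected onto the null space of the direct-path channel, the q dominant eigen-directions there are kept, and the remaining trace is redistributed. The function names, the rank-1 direct-path example, and the angle φ_LoS = 12° are illustrative assumptions:

```python
import numpy as np

def nulled_feasible_autocorrelation(Rd, H_los, K):
    """Solution of (P2.1): R_{d,K} with H_LoS R_{d,K} H_LoS^H = 0 and tr(R_{d,K}) = tr(Rd)."""
    N = Rd.shape[0]
    r = np.linalg.matrix_rank(H_los)
    _, _, Vh = np.linalg.svd(H_los)
    V2 = Vh[r:, :].conj().T                         # N x (N - r) null-space basis of H_LoS
    q = min(N - r, K)

    lam, U = np.linalg.eigh(V2.conj().T @ Rd @ V2)  # eigen-pairs of V2^H Rd V2
    lam, U = lam[::-1], U[:, ::-1]                  # descending
    shift = (np.trace(Rd).real - lam[:q].sum()) / q
    e = lam[:q] + shift
    cols = V2 @ U[:, :q]                            # V2 u_i
    return (cols * e) @ cols.conj().T

# Example: rank-1 direct path at phi_LoS = 12 deg and a random desired autocorrelation.
rng = np.random.default_rng(6)
N, K = 16, 2
a = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(12.0)))
H_los = np.outer(a, a.conj())                       # rank-1 N x N direct-path channel
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Rd = A @ A.conj().T
Rdk = nulled_feasible_autocorrelation(Rd, H_los, K)
print(np.allclose(H_los @ Rdk @ H_los.conj().T, 0))           # direct path nulled
print(np.isclose(np.trace(Rdk).real, np.trace(Rd).real))      # trace preserved
```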


An example sensing beampattern with and without direct path cancellation is shown in FIG. 1G and example 140. Here, there is a uniform linear array with N=16 antennas at the JCAS Tx, the target is inside [−5°, 5° ] and φLoS=12°. When the direct path cancellation is active, the beampattern has a null at the desired azimuth angle φLoS at the direction of sensing Rx. In this example, the direct path is line-of-sight with a single dominant angle. In general, the direct path between JCAS Tx and sensing Rx can have multiple dominant directions with some non-line-of-sight components. As long as HLoS is accurately estimated, the corresponding direct path signal can be cancelled.


A beampattern obtained by the solution of SAZF with direct path cancellation is shown in FIG. 1H and example 142. The beampattern for the JCAS precoder is nearly the same except the gain at direct path angle. The direct path signal suppression is about 8 dB in this example. A significant direct path suppression can be obtained without changing the communication and sensing performances. The resulting beampatterns are shown for SAZF with cdB=3 for two UEs with 3 dominant angular paths with azimuth angles φ=−15°, −25°, −35° for UE 1 and φ=40°, 20°, 50° for UE 2, and a single target in [−5°, 5° ]. For both UEs, the normalized channel path gains are 0, −5, −10 dB, respectively. There is an N=16 element ULA with half-wavelength inter-element spacing at the JCAS Tx and φLoS=12°.


As indicated above, FIGS. 1A-1H are provided as examples. Other examples may differ from what is described with regard to FIGS. 1A-1H.



FIG. 2 is a diagram of an example environment 200 in which systems and/or methods described herein may be implemented. As shown in FIG. 2, environment 200 may include a testing system 210, a base station 220, a UE 230, and a network 240. Devices of environment 200 may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections.


Testing system 210 includes one or more devices capable of communicating with base station 220 and/or a network (e.g., network 240), such as to perform processing of a signal produced by base station 220. Testing system 210 may communicate with base station 220 by a wired connection, as described elsewhere herein. In some implementations, testing system 210 may wirelessly communicate with base station 220.


Testing system 210 may include a beamforming network, a feedback component, and/or a test component as described elsewhere herein. The beamforming network may include an analog beamforming network that outputs a signal associated with a beam direction, as described elsewhere herein. The feedback component may include a passive radio frequency (RF) component, such as an RF coupler, that outputs a feedback signal based on an output signal of the beamforming network or a calibration signal of a calibration component of the base station 220, as described elsewhere herein. The test component may include one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with a signal, such as an RF signal (e.g., an output signal of the beamforming network). For example, the test component may include a communication and/or computing device, such as a mobile phone (e.g., a smart phone, a radiotelephone, etc.), a laptop computer, a tablet computer, a handheld computer, a desktop computer, a gaming device, a wearable communication device (e.g., a smart wristwatch, a pair of smart eyeglasses, etc.), or a similar type of device.


Base station 220 includes one or more devices capable of communicating with a UE using a cellular radio access technology (RAT). For example, base station 220 may include a base transceiver station, a radio base station, a node B, an evolved node B (eNB), a gNB, a base station subsystem, a cellular site, a cellular tower (e.g., a cell phone tower or a mobile phone tower), an access point, a transmit receive point (TRP), a radio access node, a macro-cell base station, a microcell base station, a picocell base station, a femtocell base station, or a similar type of device. Base station 220 may transfer traffic between a UE (e.g., using a cellular RAT), other base stations 220 (e.g., using a wireless interface or a backhaul interface, such as a wired backhaul interface), and/or network 240. Base station 220 may provide one or more cells that cover geographic areas. Some base stations 220 may be mobile base stations. Some base stations 220 may be capable of communicating using multiple RATs.


In some implementations, base station 220 may perform scheduling and/or resource management for UEs covered by base station 220 (e.g., UEs covered by a cell provided by base station 220). In some implementations, base stations 220 may be controlled or coordinated by a network controller, which may perform load balancing and/or network-level configuration. The network controller may communicate with base stations 220 via a wireless or wireline backhaul. In some implementations, base station 220 may include a network controller, a self-organizing network (SON) module or component, or a similar module or component. In other words, a base station 220 may perform network control, scheduling, and/or network management functions (e.g., for other base stations 220 and/or for uplink, downlink, and/or sidelink communications of UEs covered by the base station 220). In some implementations, base station 220 may include a central unit and multiple distributed units. The central unit may coordinate access control and communication with regard to the multiple distributed units. The multiple distributed units may provide UEs and/or other base stations 220 with access to network 240.


In some implementations, base station 220 may be capable of MIMO communication (e.g., beamformed communication). In some implementations, base station 220 may include a calibration component for phase calibration of signals produced or received by base station 220, as described elsewhere herein. In a testing scenario, one or more antenna elements (e.g., an antenna array) of base station 220 may be disconnected, and base station 220 may be connected to a test panel, as described elsewhere herein.


UE 230 may include one or more devices capable of communicating with base station 220 and/or a network (e.g., network 240). For example, UE 230 may include a wireless communication device, a radiotelephone, a personal communications system (PCS) terminal (e.g., that may combine a cellular radiotelephone with data processing and data communications capabilities), a smart phone, a laptop computer, a tablet computer, a personal gaming system, user equipment, and/or a similar device. UE 230 may be capable of communicating using uplink (e.g., UE to base station) communications, downlink (e.g., base station to UE) communications, and/or sidelink (e.g., UE-to-UE) communications. In some implementations, UE 230 may include a machine-type communication (MTC) UE, such as an evolved or enhanced MTC (eMTC) UE. In some implementations, UE 230 may include an Internet of Things (IoT) UE, such as a narrowband IoT (NB-IoT) UE.


Network 240 includes one or more wired and/or wireless networks. For example, network 240 may include a cellular network (e.g., a long-term evolution (LTE) network, a code division multiple access (CDMA) network, a 3G network, a 4G network, a 5G network, or another type of next generation network), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the Public Switched Telephone Network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or a combination of these or other types of networks.


The quantity and arrangement of devices and networks shown in FIG. 2 are provided as one or more examples. In practice, there may be additional devices and/or networks, fewer devices and/or networks, different devices and/or networks, or differently arranged devices and/or networks than those shown in FIG. 2. Furthermore, two or more devices shown in FIG. 2 may be implemented within a single device, or a single device shown in FIG. 2 may be implemented as multiple, distributed devices. Additionally, or alternatively, a set of devices (e.g., one or more devices) of environment 200 may perform one or more functions described as being performed by another set of devices of environment 200.



FIG. 3 is a diagram of example components of a device 300 associated with precoders for joint communication and sensing. The device 300 may correspond to the testing system 210, the base station 220, and/or the UE 230. In some implementations, the testing system 210, the base station 220, and/or the UE 230 may include one or more devices 300 and/or one or more components of the device 300. As shown in FIG. 3, the device 300 may include a bus 310, a processor 320, a memory 330, an input component 340, an output component 350, and/or a communication component 360.


The bus 310 may include one or more components that enable wired and/or wireless communication among the components of the device 300. The bus 310 may couple together two or more components of FIG. 3, such as via operative coupling, communicative coupling, electronic coupling, and/or electric coupling. For example, the bus 310 may include an electrical connection (e.g., a wire, a trace, and/or a lead) and/or a wireless bus. The processor 320 may include a central processing unit, a graphics processing unit, a microprocessor, a controller, a microcontroller, a digital signal processor, a field-programmable gate array, an application-specific integrated circuit, and/or another type of processing component. The processor 320 may be implemented in hardware, firmware, or a combination of hardware and software. In some implementations, the processor 320 may include one or more processors capable of being programmed to perform one or more operations or processes described elsewhere herein.


The memory 330 may include volatile and/or nonvolatile memory. For example, the memory 330 may include random access memory (RAM), read only memory (ROM), a hard disk drive, and/or another type of memory (e.g., a flash memory, a magnetic memory, and/or an optical memory). The memory 330 may include internal memory (e.g., RAM, ROM, or a hard disk drive) and/or removable memory (e.g., removable via a universal serial bus connection). The memory 330 may be a non-transitory computer-readable medium. The memory 330 may store information, one or more instructions, and/or software (e.g., one or more software applications) related to the operation of the device 300. In some implementations, the memory 330 may include one or more memories that are coupled (e.g., communicatively coupled) to one or more processors (e.g., processor 320), such as via the bus 310. Communicative coupling between a processor 320 and a memory 330 may enable the processor 320 to read and/or process information stored in the memory 330 and/or to store information in the memory 330.


The input component 340 may enable the device 300 to receive input, such as user input and/or sensed input. For example, the input component 340 may include a touch screen, a keyboard, a keypad, a mouse, a button, a microphone, a switch, a sensor, a global positioning system sensor, a global navigation satellite system sensor, an accelerometer, a gyroscope, and/or an actuator. The output component 350 may enable the device 300 to provide output, such as via a display, a speaker, and/or a light-emitting diode. The communication component 360 may enable the device 300 to communicate with other devices via a wired connection and/or a wireless connection. For example, the communication component 360 may include a receiver, a transmitter, a transceiver, a modem, a network interface card, and/or an antenna.


The device 300 may perform one or more operations or processes described herein. For example, a non-transitory computer-readable medium (e.g., memory 330) may store a set of instructions (e.g., one or more instructions or code) for execution by the processor 320. The processor 320 may execute the set of instructions to perform one or more operations or processes described herein. In some implementations, execution of the set of instructions, by one or more processors 320, causes the one or more processors 320 and/or the device 300 to perform one or more operations or processes described herein. In some implementations, hardwired circuitry may be used instead of or in combination with the instructions to perform one or more operations or processes described herein. Additionally, or alternatively, the processor 320 may be configured to perform one or more operations or processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.


The number and arrangement of components shown in FIG. 3 are provided as an example. The device 300 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 3. Additionally, or alternatively, a set of components (e.g., one or more components) of the device 300 may perform one or more functions described as being performed by another set of components of the device 300.



FIG. 4 is a flowchart of an example process 400 associated with precoders for joint communication and sensing. In some implementations, one or more process blocks of FIG. 4 may be performed by one or more components of device 300, such as processor 320, memory 330, input component 340, output component 350, and/or communication component 360.


As shown in FIG. 4, process 400 may include obtaining information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter (block 410). For example, the device may obtain information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter, as described above.


As further shown in FIG. 4, process 400 may include determining a sensing beampattern based at least in part on the information associated with the one or more targets (block 420). For example, the device may determine a sensing beampattern based at least in part on the information associated with the one or more targets, as described above.
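As a non-limiting illustration of block 420, the sketch below builds a desired sensing beampattern over an azimuth grid from a target angular sector. The rectangular mask and the sidelobe floor level are hypothetical choices made only for this example.

```python
import numpy as np

def desired_beampattern(target_sectors_deg, grid_deg, sidelobe_level_db=-20.0):
    """Illustrative desired sensing beampattern: unit gain over each target
    angular sector and a low sidelobe floor elsewhere (hypothetical choice)."""
    d = np.full(grid_deg.shape, 10.0 ** (sidelobe_level_db / 10.0))
    for lo, hi in target_sectors_deg:
        d[(grid_deg >= lo) & (grid_deg <= hi)] = 1.0
    return d

grid = np.arange(-90.0, 90.5, 0.5)
d_phi = desired_beampattern([(-5.0, 5.0)], grid)   # single target in [-5, 5] deg
```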


As further shown in FIG. 4, process 400 may include determining a target sensing autocorrelation matrix for the sensing beampattern (block 430). For example, the device may determine a target sensing autocorrelation matrix for the sensing beampattern, as described above.
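For block 430, one simple heuristic, shown below purely for illustration, is to accumulate steering-vector outer products weighted by the desired beampattern and scale the result to the transmit power budget; the implementations described above may instead solve an explicit beampattern-matching optimization. The helper `ula_steering` repeats the convention used in the earlier sketch.

```python
import numpy as np

def ula_steering(phi_deg, n=16):
    """Half-wavelength ULA steering vector (same convention as the earlier sketch)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(phi_deg)))

def target_autocorrelation(desired_pattern, grid_deg, n_antennas=16, total_power=1.0):
    """Heuristic target sensing autocorrelation matrix: steering-vector outer
    products weighted by the desired beampattern, scaled to the power budget."""
    R = np.zeros((n_antennas, n_antennas), dtype=complex)
    for w, phi in zip(desired_pattern, grid_deg):
        a = ula_steering(phi, n_antennas)
        R += w * np.outer(a, a.conj())
    return total_power * R / np.trace(R).real

grid = np.arange(-90.0, 90.5, 0.5)
d_phi = np.where((grid >= -5.0) & (grid <= 5.0), 1.0, 0.01)   # target in [-5, 5] deg
R_target = target_autocorrelation(d_phi, grid)
```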


As further shown in FIG. 4, process 400 may include identifying a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix (block 440). For example, the device may identify a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix, as described above.
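One possible reading of block 440, sketched below under that assumption, is to identify the candidate matrix as the nearest valid transmit covariance (Hermitian, positive semidefinite, fixed total power) to the target sensing autocorrelation matrix; other selection rules are possible.

```python
import numpy as np

def candidate_autocorrelation(R_target, total_power=1.0):
    """Illustrative candidate matrix: nearest valid transmit covariance
    (Hermitian, positive semidefinite, fixed total power) to the target matrix."""
    R = 0.5 * (R_target + R_target.conj().T)        # enforce Hermitian symmetry
    w, V = np.linalg.eigh(R)
    w = np.maximum(w, 0.0)                          # clip any negative eigenvalues
    R_psd = (V * w) @ V.conj().T
    return total_power * R_psd / np.trace(R_psd).real

# Usage with the R_target from the preceding sketch (or any Hermitian matrix):
# R_cand = candidate_autocorrelation(R_target)
```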


As further shown in FIG. 4, process 400 may include determining a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix (block 450). For example, the device may determine a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix, as described above.
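For block 450, one common way to obtain a precoder from an autocorrelation matrix, shown below as a sketch only, is to factor the candidate matrix as F Fᴴ using its strongest eigenmodes; the number of sensing streams retained is an illustrative choice.

```python
import numpy as np

def sensing_precoder(R_cand, n_streams=None, tol=1e-10):
    """One common factorization: F such that F @ F^H reproduces the candidate
    autocorrelation matrix, built from its strongest eigenmodes."""
    w, V = np.linalg.eigh(R_cand)
    order = np.argsort(w)[::-1]                      # strongest eigenvalues first
    w, V = np.maximum(w[order], 0.0), V[:, order]
    if n_streams is None:
        n_streams = max(int(np.sum(w > tol)), 1)
    return V[:, :n_streams] * np.sqrt(w[:n_streams])

# F_sens = sensing_precoder(R_cand)   # F_sens @ F_sens.conj().T ~= R_cand
```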


As further shown in FIG. 4, process 400 may include generating a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter (block 460). For example, the device may generate a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter, as described above.
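The sketch below illustrates one way a communication performance control parameter could trade communication against sensing: a power split between a zero-forcing communication precoder and the target sensing precoder, controlled by a parameter rho in [0, 1]. This is purely illustrative and is not necessarily the SAZF combining described above; the channel matrix and rho value are hypothetical.

```python
import numpy as np

def jcas_precoder(H_ue, F_sens, rho, total_power=1.0):
    """Purely illustrative combination: a zero-forcing communication precoder
    and the target sensing precoder, power-split by a control parameter rho."""
    F_comm = H_ue.conj().T @ np.linalg.inv(H_ue @ H_ue.conj().T)     # zero-forcing
    F_comm *= np.sqrt(total_power) / np.linalg.norm(F_comm, 'fro')
    F_s = F_sens * np.sqrt(total_power) / np.linalg.norm(F_sens, 'fro')
    # rho = 1 favors communication, rho = 0 favors sensing.
    return np.hstack([np.sqrt(rho) * F_comm, np.sqrt(1.0 - rho) * F_s])

rng = np.random.default_rng(0)
H_ue = (rng.standard_normal((2, 16)) + 1j * rng.standard_normal((2, 16))) / np.sqrt(2)
F_sens = np.exp(1j * np.pi * np.arange(16) * np.sin(0.0))[:, None]   # toy sensing column
F_jcas = jcas_precoder(H_ue, F_sens, rho=0.7)
```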


Process 400 may include additional implementations, such as any single implementation or any combination of implementations described below and/or in connection with one or more other processes described elsewhere herein.


In a first implementation, process 400 includes generating a plurality of joint communication and sensing precoder coefficients using the joint communication and sensing precoder.


In a second implementation, alone or in combination with the first implementation, generating the plurality of joint communication and sensing precoder coefficients comprises generating the plurality of joint communication and sensing precoder coefficients for a mono-static joint communication and sensing system.


In a third implementation, alone or in combination with one or more of the first and second implementations, generating the plurality of joint communication and sensing precoder coefficients comprises generating the plurality of joint communication and sensing precoder coefficients for a bi-static joint communication and sensing system.


In a fourth implementation, alone or in combination with one or more of the first through third implementations, process 400 includes evaluating, using one or more reference symbols for each UE of a plurality of UEs, a first parameter associated with a Type-1 channel estimation error, wherein the first parameter is based at least in part on a maximum eigenvalue of a covariance matrix for the Type-1 channel estimation error, evaluating, using one or more channel vectors for each UE of the plurality of UEs, a second parameter associated with a Type-2 channel estimation error, wherein the second parameter is based at least in part on a maximum error norm value for the Type-2 channel estimation error, and calculating a modified error matrix based at least in part on the first parameter and the second parameter.
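The following sketch illustrates, under stated assumptions only, how the two parameters could be evaluated: the first as the largest eigenvalue of a per-UE Type-1 error sample covariance, the second as the largest Type-2 error norm across UEs. The construction of the modified error matrix shown here (an inflated identity) is a hypothetical placeholder, since the specific combination is implementation dependent.

```python
import numpy as np

def error_parameters(type1_error_samples, type2_error_vectors):
    """Hypothetical evaluation: largest eigenvalue of the per-UE Type-1 error
    covariance and largest Type-2 error norm across UEs."""
    eps1 = 0.0
    for e in type1_error_samples:                    # e: (n_samples, n_antennas) per UE
        cov = e.conj().T @ e / e.shape[0]            # sample covariance of the error
        eps1 = max(eps1, float(np.linalg.eigvalsh(cov)[-1]))
    eps2 = max(float(np.linalg.norm(v)) for v in type2_error_vectors)
    return eps1, eps2

def modified_error_matrix(eps1, eps2, n_antennas):
    """Hypothetical construction: inflate an identity matrix by the combined
    error level so the precoder design can back off accordingly."""
    return (eps1 + eps2 ** 2) * np.eye(n_antennas)
```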


In a fifth implementation, alone or in combination with one or more of the first through fourth implementations, process 400 includes generating a modified joint communication and sensing precoder based at least in part on the modified error matrix, the target sensing precoder, and the communication performance control parameter.


In a sixth implementation, alone or in combination with one or more of the first through fifth implementations, process 400 includes identifying a singular value decomposition of a direct path channel matrix, calculating a null-space matrix based at least in part on the singular value decomposition of the direct path channel matrix, evaluating an eigenvalue decomposition of a transformed sensing autocorrelation matrix, and calculating a modified candidate sensing autocorrelation matrix based at least in part on the eigenvalue decomposition of the transformed sensing autocorrelation matrix.
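A sketch of these four steps is shown below, assuming a NumPy-style workflow: the singular value decomposition of the direct path channel matrix, a null-space matrix built from its trailing right singular vectors, the eigendecomposition of the transformed (projected) sensing autocorrelation matrix, and a modified candidate matrix lifted back to the antenna domain. The function and variable names, and the toy single-antenna sensing Rx usage, are illustrative.

```python
import numpy as np

def direct_path_cancelled_candidate(R_cand, H_los, tol=1e-10):
    """Sketch of the listed steps: SVD of the direct path channel matrix, a
    null-space matrix, eigendecomposition of the transformed autocorrelation
    matrix, and a modified candidate that radiates no power toward the direct path."""
    _, s, Vh = np.linalg.svd(H_los, full_matrices=True)   # SVD of direct path channel
    rank = int(np.sum(s > tol))
    N_mat = Vh.conj().T[:, rank:]                          # null-space matrix of H_los
    R_t = N_mat.conj().T @ R_cand @ N_mat                  # transformed autocorrelation
    w, U = np.linalg.eigh(R_t)                             # its eigendecomposition
    w = np.maximum(w, 0.0)
    R_mod = N_mat @ (U * w) @ U.conj().T @ N_mat.conj().T  # modified candidate matrix
    return np.trace(R_cand).real / np.trace(R_mod).real * R_mod

# Toy usage: a single-antenna sensing Rx with a line-of-sight direct path at 12 deg.
n = 16
a_los = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(12.0)))
R_cand = np.eye(n) / n                                     # isotropic candidate, unit power
R_mod = direct_path_cancelled_candidate(R_cand, a_los[None, :])
```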


In a seventh implementation, alone or in combination with one or more of the first through sixth implementations, determining the target sensing precoder based at least in part on the candidate sensing autocorrelation matrix comprises determining the target sensing precoder based at least in part on the modified candidate sensing autocorrelation matrix, wherein the joint communication and sensing precoder is based at least in part on the target sensing precoder.


Although FIG. 4 shows example blocks of process 400, in some implementations, process 400 includes additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in FIG. 4. Additionally, or alternatively, two or more of the blocks of process 400 may be performed in parallel.


The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise forms disclosed. Modifications and variations may be made in light of the above disclosure or may be acquired from practice of the implementations.


As used herein, the term “component” is intended to be broadly construed as hardware, firmware, or a combination of hardware and software. It will be apparent that systems and/or methods described herein may be implemented in different forms of hardware, firmware, and/or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code—it being understood that software and hardware can be used to implement the systems and/or methods based on the description herein.


As used herein, satisfying a threshold may, depending on the context, refer to a value being greater than the threshold, greater than or equal to the threshold, less than the threshold, less than or equal to the threshold, equal to the threshold, not equal to the threshold, or the like.


Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of various implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of various implementations includes each dependent claim in combination with every other claim in the claim set. As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiple of the same item.


When “a processor” or “one or more processors” (or another device or component, such as “a controller” or “one or more controllers”) is described or claimed (within a single claim or across multiple claims) as performing multiple operations or being configured to perform multiple operations, this language is intended to broadly cover a variety of processor architectures and environments. For example, unless explicitly claimed otherwise (e.g., via the use of “first processor” and “second processor” or other language that differentiates processors in the claims), this language is intended to cover a single processor performing or being configured to perform all of the operations, a group of processors collectively performing or being configured to perform all of the operations, a first processor performing or being configured to perform a first operation and a second processor performing or being configured to perform a second operation, or any combination of processors performing or being configured to perform the operations. For example, when a claim has the form “one or more processors configured to: perform X; perform Y; and perform Z,” that claim should be interpreted to mean “one or more processors configured to perform X; one or more (possibly different) processors configured to perform Y; and one or more (also possibly different) processors configured to perform Z.”


No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Further, as used herein, the article “the” is intended to include one or more items referenced in connection with the article “the” and may be used interchangeably with “the one or more.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, or a combination of related and unrelated items), and may be used interchangeably with “one or more.” Where only one item is intended, the phrase “only one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Also, as used herein, the term “or” is intended to be inclusive when used in a series and may be used interchangeably with “and/or,” unless explicitly stated otherwise (e.g., if used in combination with “either” or “only one of”).

Claims
  • 1. A method, comprising: obtaining information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter; determining a sensing beampattern based at least in part on the information associated with the one or more targets; determining a target sensing autocorrelation matrix for the sensing beampattern; identifying a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix; determining a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix; and generating a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter.
  • 2. The method of claim 1, further comprising generating a plurality of joint communication and sensing precoder coefficients using the joint communication and sensing precoder.
  • 3. The method of claim 2, wherein generating the plurality of joint communication and sensing precoder coefficients comprises generating the plurality of joint communication and sensing precoder coefficients for a mono-static joint communication and sensing system.
  • 4. The method of claim 2, wherein generating the plurality of joint communication and sensing precoder coefficients comprises generating the plurality of joint communication and sensing precoder coefficients for a bi-static joint communication and sensing system.
  • 5. The method of claim 1, further comprising: evaluating, using one or more reference symbols for each user equipment (UE) of a plurality of UEs, a first parameter associated with a Type-1 channel estimation error, wherein the first parameter is based at least in part on a maximum eigenvalue of a covariance matrix for the Type-1 channel estimation error; evaluating, using one or more channel vectors for each UE of the plurality of UEs, a second parameter associated with a Type-2 channel estimation error, wherein the second parameter is based at least in part on a maximum error norm value for the Type-2 channel estimation error; and calculating a modified error matrix based at least in part on the first parameter and the second parameter.
  • 6. The method of claim 5, further comprising generating a modified joint communication and sensing precoder based at least in part on the modified error matrix, the target sensing precoder, and the communication performance control parameter.
  • 7. The method of claim 1, further comprising: identifying a singular value decomposition of a direct path channel matrix; calculating a null-space matrix based at least in part on the singular value decomposition of the direct path channel matrix; evaluating an eigenvalue decomposition of a transformed sensing autocorrelation matrix; and calculating a modified candidate sensing autocorrelation matrix based at least in part on the eigenvalue decomposition of the transformed sensing autocorrelation matrix.
  • 8. The method of claim 7, wherein determining the target sensing precoder based at least in part on the candidate sensing autocorrelation matrix comprises determining the target sensing precoder based at least in part on the modified candidate sensing autocorrelation matrix, wherein the joint communication and sensing precoder is based at least in part on the target sensing precoder.
  • 9. A device, comprising: one or more processors configured to: obtain information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter; determine a sensing beampattern based at least in part on the information associated with the one or more targets; determine a target sensing autocorrelation matrix for the sensing beampattern; identify a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix; determine a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix; and generate a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter.
  • 10. The device of claim 9, wherein the one or more processors are configured to generate a plurality of joint communication and sensing precoder coefficients using the joint communication and sensing precoder.
  • 11. The device of claim 9, wherein the one or more processors are configured to: evaluate, using one or more reference symbols for each user equipment (UE) of a plurality of UEs, a first parameter associated with a Type-1 channel estimation error, wherein the first parameter is based at least in part on a maximum eigenvalue of a covariance matrix for the Type-1 channel estimation error; evaluate, using one or more channel vectors for each UE of the plurality of UEs, a second parameter associated with a Type-2 channel estimation error, wherein the second parameter is based at least in part on a maximum error norm value for the Type-2 channel estimation error; and calculate a modified error matrix based at least in part on the first parameter and the second parameter.
  • 12. The device of claim 11, wherein the one or more processors are configured to generate a modified joint communication and sensing precoder based at least in part on the modified error matrix, the target sensing precoder, and the communication performance control parameter.
  • 13. The device of claim 9, wherein the one or more processors are configured to: identify a singular value decomposition of a direct path channel matrix; calculate a null-space matrix based at least in part on the singular value decomposition of the direct path channel matrix; evaluate an eigenvalue decomposition of a transformed sensing autocorrelation matrix; and calculate a modified candidate sensing autocorrelation matrix based at least in part on the eigenvalue decomposition of the transformed sensing autocorrelation matrix.
  • 14. The device of claim 13, wherein the one or more processors, to determine the target sensing precoder based at least in part on the candidate sensing autocorrelation matrix, are configured to determine the target sensing precoder based at least in part on the modified candidate sensing autocorrelation matrix, wherein the joint communication and sensing precoder is based at least in part on the target sensing precoder.
  • 15. A non-transitory computer-readable medium storing a set of instructions, the set of instructions comprising: one or more instructions that, when executed by one or more processors of a device, cause the device to: obtain information associated with one or more targets, wherein the information associated with the one or more targets includes sensing information and a communication performance control parameter; determine a sensing beampattern based at least in part on the information associated with the one or more targets; determine a target sensing autocorrelation matrix for the sensing beampattern; identify a candidate sensing autocorrelation matrix for a joint communication and sensing transmission based at least in part on the target sensing autocorrelation matrix; determine a target sensing precoder based at least in part on the candidate sensing autocorrelation matrix; and generate a joint communication and sensing precoder based at least in part on the target sensing precoder and the communication performance control parameter.
  • 16. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to generate a plurality of joint communication and sensing precoder coefficients using the joint communication and sensing precoder.
  • 17. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to: evaluate, using one or more reference symbols for each user equipment (UE) of a plurality of UEs, a first parameter associated with a Type-1 channel estimation error, wherein the first parameter is based at least in part on a maximum eigenvalue of a covariance matrix for the Type-1 channel estimation error; evaluate, using one or more channel vectors for each UE of the plurality of UEs, a second parameter associated with a Type-2 channel estimation error, wherein the second parameter is based at least in part on a maximum error norm value for the Type-2 channel estimation error; and calculate a modified error matrix based at least in part on the first parameter and the second parameter.
  • 18. The non-transitory computer-readable medium of claim 17, wherein the one or more instructions further cause the device to generate a modified joint communication and sensing precoder based at least in part on the modified error matrix, the target sensing precoder, and the communication performance control parameter.
  • 19. The non-transitory computer-readable medium of claim 15, wherein the one or more instructions further cause the device to: identify a singular value decomposition of a direct path channel matrix; calculate a null-space matrix based at least in part on the singular value decomposition of the direct path channel matrix; evaluate an eigenvalue decomposition of a transformed sensing autocorrelation matrix; and calculate a modified candidate sensing autocorrelation matrix based at least in part on the eigenvalue decomposition of the transformed sensing autocorrelation matrix.
  • 20. The non-transitory computer-readable medium of claim 19, wherein determining the target sensing precoder based at least in part on the candidate sensing autocorrelation matrix comprises determining the target sensing precoder based at least in part on the modified candidate sensing autocorrelation matrix, wherein the joint communication and sensing precoder is based at least in part on the target sensing precoder.