Phase retrieval using coordinate descent techniques

Information

  • Patent Grant
  • Patent Number
    10,437,560
  • Date Filed
    Friday, November 4, 2016
  • Date Issued
    Tuesday, October 8, 2019
Abstract
Coordinate descent is applied to recover a signal-of-interest from only magnitude information. In doing so, a single unknown value is solved at each iteration, while all other variables are held constant. As a result, only minimization of a univariate quartic polynomial is required, which is efficiently achieved by finding the closed-form roots of a cubic polynomial. Cyclic, randomized, and/or a greedy coordinate descent technique can be used. Each coordinate descent technique globally converges to a stationary point of the nonconvex problem, and specifically, the randomized coordinate descent technique locally converges to the global minimum and attains exact recovery of the signal-of-interest at a geometric rate with high probability when the sample size is sufficiently large. The cyclic and randomized coordinate descent techniques can also be modified via minimization of the l1-regularized quartic polynomial for phase retrieval of sparse signals-of-interest, i.e., those signals with only a few nonzero elements.
Description
TECHNICAL FIELD

The present disclosure generally relates to phase retrieval in signals-of-interest. Specifically, the present disclosure relates to recovering a signal-of-interest from its magnitude information using coordinate descent techniques.


BACKGROUND OF THE INVENTION

Phase retrieval generally refers to the process of recovering a complete signal-of-interest from only intensity (i.e., magnitude) information relating to that signal. Phase retrieval applies to a number of important fields, including astronomy, computational biology, crystallography, digital communications, electron microscopy, neutron radiography, and optical imaging.


The purpose of phase retrieval is to recover a signal using the magnitude-square observations of its linear transformation. In fact, phase retrieval corresponds to a difficult nonconvex optimization problem where minimizing a multivariate quartic (i.e., fourth-order) polynomial is required. This problem is generally “NP-hard,” where NP-hardness (i.e., non-deterministic polynomial-time hardness), in computational complexity theory, is a class of problems that are, informally, “at least as hard as the hardest problems in NP.” More precisely, a problem, H, is NP-hard when every problem, L, in NP can be reduced in polynomial time to H. That is, an algorithm that solves H could be used to solve any problem L in NP with only polynomial overhead. As a consequence, finding a polynomial algorithm to solve any NP-hard problem would give polynomial algorithms for all the problems in NP, which is unlikely as many of them are considered hard.


Traditional phase retrieval techniques involve restoring a time-domain signal from its power spectrum observations, although the Fourier transform can be generalized to any linear mappings. Conventional approaches include the Gerchberg-Saxton (GS) technique, the Wirtinger Flow (WF) technique, the PhaseLift technique, and the PhaseCut technique. The GS and WF techniques directly deal with the nonconvex problem formulation by adopting alternating minimization and gradient descent, respectively. On the other hand, the PhaseLift technique and the PhaseCut technique relax the nonconvex problem to a convex program. However, these approaches have the drawbacks of requiring lengthy observations, a relatively large number of iterations, and/or high computational complexity.


BRIEF SUMMARY OF THE INVENTION

In view of the foregoing, embodiments described herein provide for recovering a complete signal-of-interest from only intensity (or magnitude) information relating to the signal by applying coordinate descent techniques. According to embodiments, a single unknown value is solved at each iteration, while all other variables are held constant. As a result, only minimization of a univariate quartic polynomial is required to fully recover the signal-of-interest, which is efficiently achieved by finding the closed-form roots of a cubic polynomial.


Described embodiments comprise using a cyclic, randomized, and/or greedy coordinate descent technique to minimize the multivariate quartic polynomial. Each of these coordinate descent techniques globally converges to a stationary point of the nonconvex problem, and specifically, the randomized coordinate descent technique locally converges to the global minimum and attains exact recovery of the signal-of-interest at a geometric rate with high probability when the sample size is sufficiently large. The cyclic and randomized coordinate descent techniques can also be modified by minimizing the l1-regularized quartic polynomial for phase retrieval of sparse signals-of-interest, i.e., those signals with only a few nonzero elements.


Further, application of each of the three (3) coordinate descent techniques to blind equalization in digital communications is described herein. As discussed, described embodiments are superior to known approaches in terms of computational simplicity and/or recovery performance, even when the number of signal measurements is relatively small.


The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows can be better understood. Additional features and advantages of the invention will be described hereinafter that form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the concepts and specific embodiments disclosed can be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. The novel features that are believed to be characteristic of the invention, both as to its organization and method of operation, together with further strengths and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:



FIG. 1 shows the normalized reduction of objective function versus number of iterations/cycles at SNR=20 dB according to one or more embodiments;



FIG. 2 shows relative recovery error versus number of iterations/cycles at SNR=20 dB according to one or more embodiments;



FIG. 3 shows empirical probability of success versus number of noise-free measurements according to one or more embodiments;



FIG. 4 shows NMSE of recovered signal versus SNR according to one or more embodiments;



FIG. 5 shows the magnitudes of recovered signals according to one or more embodiments, where lines notated with a star (*) and lines notated with a hollow circle (∘) denote the recovered and true signals, respectively;



FIG. 6 shows the probability of success versus number of noise-free measurements for sparse phase retrieval according to one or more embodiments;



FIG. 7 shows scatter plots of constellations of received signal and equalizer outputs according to one or more embodiments;



FIG. 8 shows ISI versus number of iterations/cycles according to one or more embodiments;



FIG. 9 shows a phase retrieval system according to one or more embodiments; and



FIG. 10 shows a phase retrieval method according to one or more embodiments.





DETAILED DESCRIPTION OF THE INVENTION

Described embodiments provide for recovering a complete signal-of-interest, i.e., the signal's phase information and magnitude information, from only intensity measurements of the signal. Doing so is generally referred to herein as phase retrieval. Conventionally, phase retrieval involves restoring a time-domain signal from power spectrum observations of the signal-of-interest, although the Fourier transform can be generalized to any linear mappings. Nevertheless, phase retrieval is a nonconvex optimization problem where minimizing a multivariate fourth-order polynomial is required. This problem is generally considered to be “NP-hard,” where NP-hardness (i.e., nondeterministic polynomial-time hardness), in computational complexity theory, is a class of problems that are, informally, “at least as hard as the hardest problems in NP.” More precisely, a problem, H, is NP-hard when every problem, L, in NP can be reduced in polynomial time to H. That is, an algorithm that solves H could be used to solve any problem L in NP with only polynomial overhead. As a consequence, finding a polynomial algorithm to solve any NP-hard problem would give polynomial algorithms for all the problems in NP, which is unlikely as many of them are considered hard.


According to the inventive concepts described herein, coordinate descent techniques are utilized to solve the foregoing problems more efficiently than previously known approaches. That is, a single unknown is solved at each iteration while all other variables are held constant. As a result, only minimization of a univariate quartic polynomial is required, which is efficiently achieved by determining the closed-form roots of a cubic polynomial. Embodiments can utilize one or more of three (3) coordinate descent techniques, referred to herein as (1) cyclic, (2) randomized, and (3) greedy coordinate descent techniques. As discussed, these three (3) coordinate descent techniques globally converge to a stationary point of the nonconvex problem, and specifically, the randomized coordinate descent technique locally converges to the global minimum and attains exact recovery of phase information at a geometric rate with high probability when the sample size is sufficiently large.


Further, the cyclic and randomized coordinate descent techniques can be modified by minimizing the l1-regularized quartic polynomial for phase retrieval of sparse signals, i.e., signals with only a few nonzero elements. Also, a specific application of the three (3) coordinate descent techniques to blind equalization in digital communications is described herein. Experimental evaluations demonstrate that described embodiments are superior to known approaches in terms of computational simplicity and/or recovery performance.


To better illustrate the inventive concepts described herein, general mathematical principles of phase retrieval are first discussed. Mathematically, the problem of phase retrieval is to find the complex-valued signal, x ∈ ℂ^N, from M phaseless observations, b_m ∈ ℝ. This is shown by equation (1):

b_m = \left| a_m^H x \right|^2 + v_m, \quad m = 1, \ldots, M  equation (1)

where a_m ∈ ℂ^N are known sampling vectors, v_m ∈ ℝ are additive zero-mean noise terms, and M > N. It should be appreciated that in optical imaging applications, a_m is the discrete Fourier transform vector. If b_m = |a_m^H x| + v_m is observed instead, it can be converted to this form by squaring the measurements. The signal x can be recovered only up to a global phase ϕ ∈ [0, 2π), because e^{jϕ}x is also a solution. Adopting the least squares criterion, x is determined from equation (2):











\min_{x \in \mathbb{C}^{N}} f(x) := \sum_{m=1}^{M} \left( \left| a_m^H x \right|^2 - b_m \right)^2.  equation (2)







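The following sketch illustrates the data model of equation (1) and the least-squares objective of equation (2) numerically. It is a minimal example that assumes complex Gaussian sampling vectors; the function names (gen_measurements, objective) are illustrative and not part of the described embodiments.

```python
import numpy as np

def gen_measurements(N=64, M=384, noise_var=0.0, seed=0):
    """Generate a signal x, sampling vectors a_m (rows of A are a_m^H), and
    phaseless observations b_m = |a_m^H x|^2 + v_m per equation (1)."""
    rng = np.random.default_rng(seed)
    x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    A = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
    b = np.abs(A @ x) ** 2 + np.sqrt(noise_var) * rng.standard_normal(M)
    return x, A, b

def objective(A, b, x):
    """Least-squares objective f(x) of equation (2)."""
    return np.sum((np.abs(A @ x) ** 2 - b) ** 2)

x_true, A, b = gen_measurements()
print(objective(A, b, x_true))  # approximately zero in the noise-free case
```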

In terms of real-valued representations, equation (2) is equivalent to the expression shown at equation (3):











\min_{\bar{x} \in \mathbb{R}^{2N}} f(\bar{x}) := \sum_{m=1}^{M} \left( q_m(\bar{x}) - b_m \right)^2  equation (3)

where

\bar{x} = \begin{bmatrix} \mathrm{Re}(x) \\ \mathrm{Im}(x) \end{bmatrix}  equation (4)

and

q_m(\bar{x}) = \bar{x}^T \bar{A}_m \bar{x}, \quad \bar{A}_m = \begin{bmatrix} \mathrm{Re}(A_m) & -\mathrm{Im}(A_m) \\ \mathrm{Im}(A_m) & \mathrm{Re}(A_m) \end{bmatrix} \in \mathbb{R}^{2N \times 2N}, \quad A_m = a_m a_m^H \in \mathbb{C}^{N \times N}.  equation (5)







The objective function f(x̄) is a multivariate quartic polynomial of x̄ = [x̄_1 . . . x̄_{2N}]^T, implying that its minimization is generally “NP-hard” in the sense discussed above.


According to described embodiments, the coordinate update strategy is exploited to minimize f(x̄). Coordinate descent is an iterative procedure that successively minimizes the objective function along coordinate directions. The result of the kth iteration can be denoted as x̄^k = [x̄_1^k . . . x̄_{2N}^k]^T. In the kth iteration, f(x̄^k) is minimized with respect to the i_kth (i_k ∈ {1, . . . , 2N}) variable while keeping the remaining 2N−1 variables {x̄_i}_{i≠i_k} constant. This is equivalent to performing a one-dimensional search along the i_kth coordinate, which can be expressed as:










\alpha_k = \arg\min_{\alpha} f\left( \bar{x}^k + \alpha e_{i_k} \right)  equation (6)









where e_{i_k} is the unit vector with the i_kth entry being one and all other entries being zero. From the foregoing, x̄ is updated by:

\bar{x}^{k+1} = \bar{x}^k + \alpha_k e_{i_k}  equation (7)

which implies that only the i_kth component is adjusted:

\bar{x}_{i_k}^{k+1} = \bar{x}_{i_k}^k + \alpha_k  equation (8)

while all other components remain constant. Since x̄^k is known, f(x̄^k + αe_{i_k}) is a univariate function of α. Therefore, equation (6) is a one-dimensional minimization problem.


A coordinate descent solution is shown in the following:














Algorithm 1 CD for Phase Retrieval

 Initialization: Determine x̄^0 ∈ ℝ^{2N} using the spectral method, referenced herein.

 for k = 0, 1, . . . do

  Choose index i_k ∈ {1, . . . , 2N};

  α_k = arg min_{α ∈ ℝ} f(x̄^k + α e_{i_k});

  x̄_{i_k}^{k+1} ← x̄_{i_k}^k + α_k;

  Stop if f(x̄^k) − f(x̄^{k+1}) < ϵ is satisfied, where ϵ > 0 is a small tolerance parameter.

 end for
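A compact Python rendering of Algorithm 1 is sketched below, under the assumption that an initial point x0 has already been obtained (e.g., by the spectral method). For brevity, the exact one-dimensional minimizer is recovered here numerically by fitting the quartic f(x̄ + αe_{i_k}) from five samples; the closed-form construction of its coefficients via equations (13)-(22) is sketched later in this description. All names are illustrative, and the greedy rule is omitted from this particular sketch.

```python
import numpy as np

def cd_phase_retrieval(A, b, x0, rule="cyclic", tol=1e-9, max_cycles=200, seed=0):
    """Sketch of Algorithm 1: coordinate descent on the real parameterization of eq. (3)."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    xbar = np.concatenate([x0.real, x0.imag])         # x_bar = [Re(x); Im(x)], eq. (4)
    f = lambda xb: np.sum((np.abs(A @ (xb[:N] + 1j * xb[N:])) ** 2 - b) ** 2)

    def line_min(xb, i):
        # f(x_bar + alpha*e_i) is an exact quartic in alpha: recover its coefficients
        # from five samples and minimize over the real roots of the cubic derivative.
        e = np.zeros(2 * N); e[i] = 1.0
        ts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
        c = np.polyfit(ts, [f(xb + t * e) for t in ts], 4)
        crit = np.roots(np.polyder(c))
        crit = crit[np.abs(crit.imag) < 1e-10].real
        return crit[np.argmin(np.polyval(c, crit))]

    f_old = f(xbar)
    for _ in range(max_cycles):
        for t in range(2 * N):                        # one cycle = 2N iterations
            i = t if rule == "cyclic" else int(rng.integers(2 * N))
            xbar[i] += line_min(xbar, i)              # coordinate update, eqs. (6)-(8)
        f_new = f(xbar)
        if f_old - f_new < tol:                       # stopping rule of Algorithm 1
            break
        f_old = f_new
    return xbar[:N] + 1j * xbar[N:]
```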









According to the inventive concepts, prior to presenting a solution to equation (6), one or more of three (3) selection rules for the coordinate index, i_k, are followed by the phase retrieval system described herein. According to one embodiment, a cyclic rule is followed where i_k is first assigned a value of 1, then 2, and so forth through 2N. These steps are then repeated, starting with i_k=1 again. That is, i_k is cyclically assigned a value from {1, . . . , 2N}. Accordingly, one cycle corresponds to 2N iterations. According to another embodiment, a random rule is followed where i_k is randomly selected from {1, . . . , 2N} with equal probability. According to yet another embodiment, a greedy rule is followed where i_k is chosen as











i_k = \arg\max_{i} \left| \nabla_i f\left( \bar{x}^k \right) \right|, \quad \nabla_i f\left( \bar{x}^k \right) = \frac{\partial f\left( \bar{x}^k \right)}{\partial \bar{x}_i}.  equation (9)








That is, in following the greedy rule, the coordinate with the largest absolute value of the partial derivative is chosen. Therefore, in following the greedy rule, the phase retrieval system must compute the full gradient at each iteration, which is not required in following the cyclic and random rules. The coordinate descent techniques following the cyclic, random, and greedy rules are referred to as CCD, RCD, and GCD, respectively. Also, experimental evaluations show that the GCD converges faster than CCD and RCD at the expense of the extra full gradient calculation.
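A sketch of the three index-selection rules is given below, assuming 0-based indexing. The greedy rule of equation (9) needs the full gradient of the real-valued objective of equation (3); a direct computation of that gradient, following equation (15), is included. The helper names are illustrative only.

```python
import numpy as np

def grad_f(A, b, x):
    """Real-valued gradient of f(x_bar) in eq. (3), stacked as [d/dRe(x); d/dIm(x)]."""
    Ax = A @ x
    w = 4.0 * (np.abs(Ax) ** 2 - b)            # 4 (q_m(x_bar) - b_m)
    g = A.conj().T @ (w * Ax)                  # sum_m w_m (a_m^H x) a_m, cf. eq. (15)
    return np.concatenate([g.real, g.imag])

def choose_index(k, rule, rng, A, b, x):
    """Coordinate-index selection for CCD, RCD, and GCD."""
    twoN = 2 * A.shape[1]
    if rule == "cyclic":                        # CCD: 0, 1, ..., 2N-1, 0, 1, ...
        return k % twoN
    if rule == "random":                        # RCD: uniform over {0, ..., 2N-1}
        return int(rng.integers(twoN))
    if rule == "greedy":                        # GCD: eq. (9), largest |partial derivative|
        return int(np.argmax(np.abs(grad_f(A, b, x))))
    raise ValueError(rule)
```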


Next, the closed-form solution for equation (6) can be derived. For notational simplicity, the superscript and subscript k in equation (6) are omitted. That is, given x̄ for any current iteration and the search direction e_i, equation (10) can be followed:











\min_{\alpha} \varphi(\alpha) := f\left( \bar{x} + \alpha e_i \right).  equation (10)








Employing equation (3), φ(α) is expressed as










\varphi(\alpha) = \sum_{m=1}^{M} \left( q_m\left( \bar{x} + \alpha e_i \right) - b_m \right)^2  equation (11)









where the mth term is

\varphi_m(\alpha) = \left( q_m\left( \bar{x} + \alpha e_i \right) - b_m \right)^2.  equation (12)


Expanding the quadratic function in equation (12) yields:

q_m\left( \bar{x} + \alpha e_i \right) = \alpha^2 e_i^T \bar{A}_m e_i + 2 \alpha e_i^T \bar{A}_m \bar{x} + \bar{x}^T \bar{A}_m \bar{x} \triangleq c_{2,i}^m \alpha^2 + c_{1,i}^m \alpha + c_0^m  equation (13)

where c_{2,i}^m, c_{1,i}^m and c_0^m are the coefficients of the univariate quadratic polynomial. Note that c_0^m has no relation with i. According to equation (5), the coefficient of the quadratic term can be simplified to:










c_{2,i}^m = e_i^T \bar{A}_m e_i = \left[ \bar{A}_m \right]_{i,i} = \begin{cases} \left| [a_m]_i \right|^2, & i = 1, \ldots, N \\ \left| [a_m]_{i-N} \right|^2, & i = N+1, \ldots, 2N \end{cases}  equation (14)









where [A]_{i,i} and [a]_i represent the (i, i) entry of A and the ith element of a, respectively. It is also revealed from equation (5) that












\bar{A}_m \bar{x} = \begin{bmatrix} \mathrm{Re}(A_m x) \\ \mathrm{Im}(A_m x) \end{bmatrix} = \begin{bmatrix} \mathrm{Re}\left( (a_m^H x)\, a_m \right) \\ \mathrm{Im}\left( (a_m^H x)\, a_m \right) \end{bmatrix}.  equation (15)








Therefore, the coefficient of the linear term in equation (13) is computed as










c_{1,i}^m = 2 e_i^T \bar{A}_m \bar{x} = \begin{cases} 2\,\mathrm{Re}\left( (a_m^H x)\, [a_m]_i \right), & i = 1, \ldots, N \\ 2\,\mathrm{Im}\left( (a_m^H x)\, [a_m]_{i-N} \right), & i = N+1, \ldots, 2N \end{cases}  equation (16)








The constant term is

c_0^m = \bar{x}^T \bar{A}_m \bar{x} = x^H A_m x = \left| a_m^H x \right|^2.  equation (17)


Since q_m(x̄ + αe_i) is quadratic, φ_m(α) of equation (12) is a univariate quartic polynomial of α. Inserting equation (13) into equation (12) results in:

\varphi_m(\alpha) = d_{4,i}^m \alpha^4 + d_{3,i}^m \alpha^3 + d_{2,i}^m \alpha^2 + d_{1,i}^m \alpha + d_0^m  equation (18)

where

d_{4,i}^m = \left( c_{2,i}^m \right)^2
d_{3,i}^m = 2 c_{2,i}^m c_{1,i}^m
d_{2,i}^m = \left( c_{1,i}^m \right)^2 + 2 c_{2,i}^m \left( c_0^m - b_m \right)
d_{1,i}^m = 2 c_{1,i}^m \left( c_0^m - b_m \right)
d_0^m = \left( c_0^m - b_m \right)^2.  equation (19)

Note that d_0^m is not related to i.


Recalling \varphi(\alpha) = \sum_{m=1}^{M} \varphi_m(\alpha), it is clear that the coefficients of the quartic polynomial

\varphi(\alpha) = d_{4,i} \alpha^4 + d_{3,i} \alpha^3 + d_{2,i} \alpha^2 + d_{1,i} \alpha + d_0  equation (20)

are the sum of those of \{\varphi_m(\alpha)\}_{m=1}^{M}:











d_0 = \sum_{m=1}^{M} d_0^m, \quad d_{j,i} = \sum_{m=1}^{M} d_{j,i}^m, \quad j = 1, \ldots, 4.  equation (21)








The minimum point of φ(α) is obtained by setting its derivative to zero:

\varphi'(\alpha) = 4 d_{4,i} \alpha^3 + 3 d_{3,i} \alpha^2 + 2 d_{2,i} \alpha + d_{1,i} = 0.  equation (22)


Equation (22) refers to finding the roots of a univariate cubic polynomial, which is easy and fast because there is a closed-form solution. Due to the real-valued coefficients of the cubic polynomial, there are only two possible cases for the roots. The first case is that equation (22) has a real root and a pair of complex conjugate roots. In this case, the minimizer is the unique real root because the optimal solution of a real-valued problem must be real. The second case is that equation (22) has three real roots. Then the optimal α is the real root associated with the minimum objective. Once the coefficients of equation (22) are determined, the complexity of calculating the roots of a cubic polynomial is merely O(1).
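The closed-form one-dimensional minimization can be sketched as follows: the coefficients of equations (14), (16), (17), (19), and (21) are accumulated over the M measurements, and the cubic of equation (22) is solved with a standard root finder. This is a minimal illustration (np.roots is used in place of an explicit cubic formula), and the function name is an assumption rather than part of the described embodiments.

```python
import numpy as np

def min_univariate_quartic(A, b, x, i):
    """Closed-form minimizer of phi(alpha) = f(x_bar + alpha*e_i) via the cubic of eq. (22).
    A has rows a_m^H; coordinate i < N perturbs Re(x_i), i >= N perturbs Im(x_{i-N})."""
    M, N = A.shape
    Ax = A @ x                                   # a_m^H x for m = 1, ..., M
    idx = i if i < N else i - N
    am_i = np.conj(A[:, idx])                    # [a_m]_idx, since the rows of A are a_m^H
    c2 = np.abs(am_i) ** 2                       # eq. (14)
    t = Ax * am_i                                # (a_m^H x)[a_m]_idx
    c1 = 2.0 * (t.real if i < N else t.imag)     # eq. (16)
    c0 = np.abs(Ax) ** 2                         # eq. (17)
    r = c0 - b
    d4, d3 = np.sum(c2 ** 2), np.sum(2.0 * c2 * c1)               # eqs. (19), (21)
    d2, d1 = np.sum(c1 ** 2 + 2.0 * c2 * r), np.sum(2.0 * c1 * r)
    roots = np.roots([4.0 * d4, 3.0 * d3, 2.0 * d2, d1])          # eq. (22)
    roots = roots[np.abs(roots.imag) < 1e-10].real                # keep the real roots
    phi = d4 * roots ** 4 + d3 * roots ** 3 + d2 * roots ** 2 + d1 * roots
    return roots[np.argmin(phi)]
```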


In view of the foregoing, the complexity requirement of the coordinate descent techniques is now examined according to embodiments described herein. The most significant computational cost at each iteration is calculating the coefficients {d_{j,i}}_{j=1}^{4}, or equivalently, computing c_{2,i}^m, c_{1,i}^m, and c_0^m, m = 1, . . . , M, because {d_{j,i}}_{j=1}^{4} can be easily calculated from c_{2,i}^m, c_{1,i}^m, and c_0^m according to equation (19) and equation (21). From equation (14), c_{2,i}^m is the squared modulus of [a_m]_i and can be pre-computed before the iterations begin, which requires O(M) multiplications for calculating all M coefficients {c_{2,i}^m}_{m=1}^{M}. According to equation (16) and equation (17), {a_m^H x}_{m=1}^{M} must be computed to obtain {c_{1,i}^m}_{m=1}^{M} and {c_0^m}_{m=1}^{M}. This involves a matrix-vector multiplication Ax, where the mth row of the matrix A is a_m^H, that is:









A = \begin{bmatrix} a_1^H \\ \vdots \\ a_M^H \end{bmatrix} \in \mathbb{C}^{M \times N}.  equation (23)








At first glance, the matrix-vector multiplication requires a complexity of O(MN). However, this complexity can be reduced to O(M) per iteration for the coordinate descent technique. By observing equation (7), it is determined that only a single element changes between two consecutive iterations. Specifically, this is seen from:












x^{k+1} - x^k = \left( x_j^{k+1} - x_j^k \right) e_j, \quad j = \begin{cases} i_k, & \text{if } i_k \le N \\ i_k - N, & \text{otherwise} \end{cases}  equation (24)









which yields

A x^{k+1} = A x^k + \left( x_j^{k+1} - x_j^k \right) A_{:,j}  equation (25)

where A_{:,j} represents the jth column of A. It is only required to compute Ax^0 before the first iteration. Afterward, the matrix-vector product can be efficiently updated from that of the previous iteration by a scalar-vector multiplication, (x_j^{k+1} − x_j^k)A_{:,j}, which merely requires O(M) operations.
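The O(M) bookkeeping of equation (25) can be sketched as a single rank-one style update of the cached product Ax; the names below are illustrative.

```python
import numpy as np

def update_Ax(Ax, A, x_old_j, x_new_j, j):
    """O(M) update of the cached product A x after a single coordinate change, per eq. (25)."""
    return Ax + (x_new_j - x_old_j) * A[:, j]

# Usage sketch: compute Ax = A @ x0 once before the first iteration, then after each
# coordinate update of element j replace the O(MN) product A @ x by
#     Ax = update_Ax(Ax, A, x[j], x[j] + delta, j); x[j] += delta
```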


In sum, the complexity of CCD and RCD is on the order of O(M) per iteration. Therefore, the complexity of 2N iterations, i.e., one cycle of CCD, is the same as that of the WF using full gradient descent. For GCD, an extra cost for computing the full gradient is needed, which results in a complexity of O(MN).


Further, as previously mentioned, the three (3) coordinate descent techniques converge to a stationary point from any initial value, and the RCD converges to the global minimum and achieves exact retrieval with high probability provided that the sample size is large enough.


According to embodiments described herein, the foregoing concepts can also be utilized to more efficiently retrieve phase values of sparse signals. If x is sparse, then the real-valued x̄ is also sparse. The following l1-regularization, which is commonly used in compressed sensing, is adopted for sparse phase retrieval:











\min_{\bar{x} \in \mathbb{R}^{2N}} g(\bar{x}) := \sum_{m=1}^{M} \left( \bar{x}^T \bar{A}_m \bar{x} - b_m \right)^2 + \tau \left\| \bar{x} \right\|_1  equation (26)









where ∥x̄∥_1 = Σ_i |x̄_i| is the l1-norm, τ > 0 is the regularization factor, and g(x̄) = f(x̄) + τ∥x̄∥_1. As the objective of equation (26) is not differentiable, the GCD cannot be implemented because it requires a gradient for index selection. Therefore, only CCD and RCD are discussed for the l1-regularization, and they are referred to as l1-CCD and l1-RCD, respectively.


The steps of the coordinate descent technique for solving equation (26) are similar to those shown above for recovery of general signals. The only difference is that an l1-norm term is added to the scalar minimization problem of equation (10), which is shown as










\min_{\alpha} \left\{ \varphi(\alpha) + \tau \left\| \bar{x} + \alpha e_i \right\|_1 \right\}.  equation (27)








By ignoring the terms independent of α, equation (27) is equivalent to










\min_{\alpha} \left\{ \varphi(\alpha) + \tau \left| \alpha + \bar{x}_i \right| \right\}.  equation (28)








Making a change of variable β = α + x̄_i, substituting α = β − x̄_i into equation (20), and ignoring the irrelevant components, we obtain an equivalent scalar minimization problem











\min_{\beta} \psi(\beta) := u_4 \beta^4 + u_3 \beta^3 + u_2 \beta^2 + u_1 \beta + \tau |\beta|  equation (29)









where the coefficients of the quartic polynomial \{u_j\}_{j=1}^{4} are calculated as

u_4 = d_{4,i}
u_3 = d_{3,i} - 4 \bar{x}_i d_{4,i}
u_2 = d_{2,i} - 3 \bar{x}_i d_{3,i} + 6 \bar{x}_i^2 d_{4,i}
u_1 = d_{1,i} - 2 \bar{x}_i d_{2,i} + 3 \bar{x}_i^2 d_{3,i} - 4 \bar{x}_i^3 d_{4,i}.  equation (30)


Although ψ(β) is non-smooth due to the absolute term, the closed-form solution of its minimum can still be derived. The minimizer of ψ(β) is evaluated in two intervals, namely, [0, ∞) and (−∞, 0). Next, the set S+ containing the stationary points of ψ(β) in the interval [0, ∞) is defined. That is, S+ is the set of real positive roots of the cubic equation:

4 u_4 \beta^3 + 3 u_3 \beta^2 + 2 u_2 \beta + (u_1 + \tau) = 0, \quad \beta \ge 0.  equation (31)


The set S^+ can include zero (0) elements, or can include one (1) or three (3) elements, since the cubic equation of (31) may have zero (0), one (1), or three (3) real positive roots. Similarly, S^- is the set that contains the stationary points of ψ(β) in (−∞, 0), that is, the real negative roots of

4 u_4 \beta^3 + 3 u_3 \beta^2 + 2 u_2 \beta + (u_1 - \tau) = 0, \quad \beta < 0.  equation (32)


Again, S^- can include zero (0) elements, or can include one (1) or three (3) elements. The minimizer of ψ(β) in β ∈ [0, ∞) must be the boundary, i.e., 0, or one (1) element of S^+. The minimizer in (−∞, 0) must be an element of S^-. In sum, the minimizer of equation (29) is limited to the set {0} ∪ S^+ ∪ S^-, which includes a maximum of seven (7) elements, i.e.,











\beta^* = \arg\min_{\beta} \psi(\beta), \quad \beta \in \{0\} \cup \mathcal{S}^+ \cup \mathcal{S}^-.  equation (33)








Therefore, ψ(β) must be evaluated over a set of, at most, seven (7) elements only, whose computation is straightforward and not computationally intensive. The coordinate of the l1-regularized coordinate descent technique is updated as x̄_{i_k}^{k+1} ← β*. If S^+ ∪ S^- = ∅, then x̄_{i_k}^{k+1} = β* = 0, which makes the solution sparse.
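A sketch of the scalar l1-regularized minimization of equations (29)-(33) is given below. The coefficient conversion of equation (30) and the candidate set {0} ∪ S^+ ∪ S^- are implemented directly; the function names are illustrative assumptions.

```python
import numpy as np

def u_from_d(d4, d3, d2, d1, xi):
    """Coefficient change of variable beta = alpha + x_bar_i, per eq. (30)."""
    return (d4,
            d3 - 4 * xi * d4,
            d2 - 3 * xi * d3 + 6 * xi ** 2 * d4,
            d1 - 2 * xi * d2 + 3 * xi ** 2 * d3 - 4 * xi ** 3 * d4)

def min_scalar_l1(u4, u3, u2, u1, tau):
    """Minimize psi(beta) of eq. (29) over the candidate set {0} U S+ U S- (eqs. (31)-(33))."""
    def real_roots(c):
        r = np.roots(c)
        return r[np.abs(r.imag) < 1e-10].real
    cand = [0.0]
    cand += [bt for bt in real_roots([4 * u4, 3 * u3, 2 * u2, u1 + tau]) if bt > 0]  # S+, eq. (31)
    cand += [bt for bt in real_roots([4 * u4, 3 * u3, 2 * u2, u1 - tau]) if bt < 0]  # S-, eq. (32)
    psi = lambda bt: u4 * bt ** 4 + u3 * bt ** 3 + u2 * bt ** 2 + u1 * bt + tau * abs(bt)
    return min(cand, key=psi)                                                        # eq. (33)
```

The returned β* directly becomes the new value of the selected coordinate, so an empty S^+ ∪ S^- yields an exact zero and hence a sparse iterate.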


As discussed, certain embodiments can be applied to several different fields, including blind equalization in a wireless communication system. To further illustrate the inventive concepts, these embodiments, as applied to blind equalization in a wireless communication system, are described herein. Consider a communication system with the discrete-time complex baseband signal model:

r(n)=s(n)*h(n)+v(n)  equation (34)

where r(n) is the received signal, s(n) is the transmitted data symbol, h(n) is the channel impulse response, v(n) is the additive white noise, and * denotes convolution. The received signal is significantly distorted due to the inter-symbol interference (ISI) induced by the propagation channel.


Blind equalization aims at recovering the transmitted symbols without knowing the channel response. If the equalizer is defined with P coefficients w = [w_0 . . . w_{P−1}]^T and r_n = [r(n) . . . r(n−P+1)]^T, the equalizer output is










y(n) = \sum_{i=0}^{P-1} w_i^* \, r(n-i) = w^H r_n.  equation (35)








As many modulated signals in communication, such as phase shift keying (PSK), frequency modulation (FM), and phase modulation (PM), are of constant modulus, the constant modulus criterion is applied to obtain the equalizer:











\min_{w} \sum_{n} \left( \left| w^H r_n \right|^2 - \kappa \right)^2, \ \text{where}  equation (36)

\kappa = \frac{\mathbb{E}\left[ |s(n)|^4 \right]}{\mathbb{E}\left[ |s(n)|^2 \right]}  equation (37)









is called the dispersion constant. It is obvious that the problem of equation (36) has the same form as the phase retrieval of equation (2). Both of them involve multivariate quartic polynomials. The only difference between phase retrieval and blind equalization is that the decision variable of the former is the unknown signal x while that of the latter is the equalizer w. Therefore, the WF and coordinate descent methods can be applied to solve the blind equalization problem of equation (36). By defining the composite channel-equalizer response as v(n) = h(n) * w(n), the quantified ISI, which is expressed as









\mathrm{ISI} = \frac{\sum_{n} |v(n)|^2 - \max_{n} |v(n)|^2}{\max_{n} |v(n)|^2}  equation (38)









reflects the equalization quality. From the foregoing, smaller values of ISI imply better equalization. Further, all methods use the same initial value obtained from the spectral method. The measurement vectors follow a standard i.i.d. complex Gaussian distribution.
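Because equation (36) has the form of equation (2), with the received vectors r_n playing the role of the sampling vectors and the dispersion constant κ playing the role of every measurement b_m, a coordinate descent phase-retrieval solver can be reused for blind equalization. The sketch below builds that correspondence and evaluates the ISI of equation (38); the helper names and the simple data layout are illustrative assumptions.

```python
import numpy as np

def cm_as_phase_retrieval(r, P, kappa):
    """Cast constant-modulus equalization (eq. (36)) as phase retrieval (eq. (2)):
    the m-th row of the returned matrix is r_n^H and every 'measurement' equals kappa."""
    R = np.array([r[n - P + 1:n + 1][::-1] for n in range(P - 1, len(r))])  # rows r_n^T
    return np.conj(R), np.full(len(R), kappa)     # (A, b) for a CD phase-retrieval solver

def isi(h, w):
    """Quantified ISI of eq. (38) for the composite response v(n) = h(n) * w(n)."""
    v = np.convolve(h, w)
    p = np.abs(v) ** 2
    return (p.sum() - p.max()) / p.max()

h = np.array([0.4, 1.0, -0.7, 0.6, 0.3, -0.4, 0.1])   # FIR channel from the description
print(isi(h, np.array([1.0])))                         # ISI of the unequalized channel
```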


The convergence behavior of the three (3) coordinate descent techniques is now discussed. The signal x and noise vm in equation (1) are i.i.d. Gaussian. In experimental evaluations, the following parameters were set: N=64 and M=6N. The results of WF are included for comparison. Note that it is fair to compare 2N iterations (one cycle) for the coordinate descent technique with one WF iteration because the computational complexity of the CCD and RCD per cycle is the same as the WF per iteration. The GCD has a higher complexity for every 2N iterations than WF, CCD, and RCD.


Two quantities are plotted to evaluate the convergence rate. The first quantity is the reduction of the objective function normalized with respect to ∥b∥2:












\frac{f(\bar{x}^k) - f(\bar{x}^*)}{\|b\|^2}, \quad b = [b_1 \ \cdots \ b_M]^T  equation (39)









where x* denotes the optimal solution. The second quantity is the relative recovery error:














\frac{\left\| \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right\|^2}{\left\| \bar{x}^* \right\|^2}  equation (40)









where ϕ_k is chosen such that ∥x̄^k − T_{ϕ_k}(x̄^*)∥ attains its minimum value; this quantity reflects the convergence speed to the original signal. The signal-to-noise ratio (SNR) in equation (1) is defined as










\mathrm{SNR} = \frac{\mathbb{E}\left[ \|b\|^2 \right]}{M \sigma_v^2}  equation (41)









where σ_v^2 is the variance of v_m. FIGS. 1 and 2 show the normalized objective reduction and recovery error, respectively, at SNR=20 dB. Fifty (50) independent trials were performed and the averaged results are also plotted with thick lines. It can be clearly seen that the three coordinate descent techniques converge faster than the previously known WF technique. As seen, among the observed techniques, the convergence speed of the GCD is the fastest.
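The relative recovery error of equation (40) and the SNR of equation (41) can be computed as sketched below; the optimal global phase rotation has the closed form used in the first function. Names are illustrative.

```python
import numpy as np

def relative_error(x_hat, x_true):
    """Relative recovery error of eq. (40): squared distance to x_true up to a global phase."""
    phi = np.angle(np.vdot(x_true, x_hat))       # phase minimizing ||x_hat - e^{j phi} x_true||
    num = np.linalg.norm(x_hat - np.exp(1j * phi) * x_true) ** 2
    return num / np.linalg.norm(x_true) ** 2

def snr(b, noise_var):
    """SNR of eq. (41): E[||b||^2] / (M sigma_v^2), estimated from the measurements."""
    return np.mean(np.abs(b) ** 2) / noise_var
```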


Statistical Performance


For purposes of discussing statistical performance, the experimental settings are the same as those discussed above, except that M and SNR vary. The performance of the GS method is also examined. The empirical probability of success and the normalized mean square error (NMSE), which is the mean of the relative recovery error in equation (40), are used to measure the statistical performance. In this case, all results are averaged over two hundred (200) independent trials. In the absence of noise, if the relative recovery error of a phase retrieval scheme is smaller than 10^{-5}, it is considered a successful exact recovery.



FIG. 3 shows the empirical probability of success versus the number of measurements M. It is observed that the GCD is slightly better than WF, while CCD and RCD are slightly inferior to the WF.



FIG. 4 plots the NMSEs versus SNR and it is seen that the three (3) coordinate descent techniques and WF have comparable NMSEs, while each is superior to the GS method.


Phase Retrieval of Sparse Signal


Phase retrieval of a sparse signal with K nonzero elements is also discussed with respect to described embodiments. In addition to WF, the two convex relaxation based methods, namely, PhaseLift and PhaseCut, and the sparse GS method using hard-thresholding are examined for comparison. The regularization factor was set to τ=2.35M for the l1-CCD and l1-RCD. In this case, the support of the sparse signal is randomly selected from [1, N] where N=64. The real and imaginary parts of the nonzero coefficients of x are drawn as random uniform variables in the range [−2/√2, −1/√2] ∪ [1/√2, 2/√2].



FIG. 5 shows the recovered signal with K=5 and M=2N in the noise-free case. As seen, the WF, PhaseLift, PhaseCut, and sparse GS methods cannot recover the signal when M is relatively small while the l1-CCD and l1-RCD work well.



FIG. 6 plots the probability of success versus number of measurements, which demonstrates the superiority of l1-CCD and l1-RCD.


Blind Equalization


The application of the coordinate descent techniques to blind equalization is discussed with respect to described embodiments and compared to WF method. The results of the conventional super-exponential (SE) algorithm are also included. The transmitted signal adopts quadrature PSK (QPSK) modulation, namely, s(n)∈{1, −1, j, −j}. A typical finite impulse response (FIR) communication channel with impulse response {0.4, 1, −0.7, 0.6, 0.3, −0.4, 0.1} is adopted.



FIG. 7 shows the constellations of the received signal and equalizer outputs of 1000 samples with 20 dB zero-mean Gaussian noise. The received signal is severely distorted due to the channel propagation. The SE, WF, and coordinate descent techniques succeed in recovering the transmitted signal up to a global phase rotation. Clearly, the CCD and GCD have a higher recovery accuracy.



FIG. 8 plots the ISI versus number of iterations/cycles with 25 dB zero-mean Gaussian noise when there are 2000 samples. The ISI is averaged over 100 independent trials. In addition to faster convergence than the WF and SE methods, the coordinate descent techniques, especially CCD, arrive at a lower ISI. This means that the coordinate descent techniques achieve a more accurate recovery.


With regard to the convergence analysis of the coordinate descent techniques, the inventive concepts described herein prove that the three (3) coordinate descent techniques globally converge to a stationary point from any initial value. Further, the sequence of the iterations generated by the RCD locally converges to the global minimum point in expectation at a geometric rate under a mild assumption. This implies that in the absence of noise, the RCD achieves exact phase retrieval under a moderate condition.


Global Convergence to Stationary Point


Lemma 1: Given any finite initial value x̄^0 ∈ ℝ^{2N} and f(x̄^0) = f_0, the sublevel set of f(x̄)

S_{f_0} = \{ \bar{x} \mid f(\bar{x}) \le f_0 \}  equation (42)

is compact, i.e., bounded and closed. The iterates of the three coordinate descent techniques, namely, x̄^k, k = 0, 1, . . . , are in the compact set S_{f_0}.


Proof: If ∥x̄∥ → ∞, then f(x̄) → ∞ since f(x̄) is quartic. The contrapositive implies that f(x̄) ≤ f_0 < ∞ guarantees ∥x̄∥ < ∞ for all x̄ ∈ S_{f_0}. Hence, S_{f_0} is bounded. Now it is clear that all the points in the sublevel set satisfy f(x̄) ∈ [0, f_0] since f(x̄) ≥ 0.


Since the mapping ƒ(x) is continuous and the image [0, ƒ0] is a closed set, the inverse image {x|0≤ƒ(x)≤ƒ0} is also closed. This completes the proof that Sƒ0 is compact. The coordinate descent technique monotonically decreases ƒ(x), meaning that ƒ(xk)≤ƒ(xk−1)≤ . . . ≤ƒ(x0)=ƒ0. Therefore, all the iterates {xk} must be in the compact set Sƒ0.


Lemma 1 guarantees the analysis can be limited to the compact set S_{f_0} rather than the whole domain of f(x̄). The following lemma states that the partial derivatives of f(x̄) are locally Lipschitz continuous on S_{f_0}, although they are not globally Lipschitz continuous over the whole domain ℝ^{2N}.


Lemma 2: On the compact set S_{f_0}, the gradient ∇f(x̄) is component-wise Lipschitz continuous. That is, for each i = 1, . . . , 2N, it follows

\left| \nabla_i f(\bar{x} + t e_i) - \nabla_i f(\bar{x}) \right| \le L_i |t|, \quad t \in \mathbb{R}  equation (43)

for all x̄, x̄ + te_i ∈ S_{f_0}, where L_i > 0 is called the component-wise Lipschitz constant on S_{f_0}. Further, we have










f(\bar{x} + t e_i) \le f(\bar{x}) + t \nabla_i f(\bar{x}) + \frac{L_i}{2} t^2.  equation (44)








Proof: For t = 0, both sides of (43) are equal to 0 and (43) holds. For t ≠ 0, it means that ∥x̄ + te_i − x̄∥ = |t| ≠ 0. Since the function















\frac{\left| \nabla_i f(\bar{x} + t e_i) - \nabla_i f(\bar{x}) \right|}{\left\| \bar{x} + t e_i - \bar{x} \right\|}  equation (45)









is continuous on the compact set S_{f_0}, its maximum over S_{f_0} is L_i, which is attained by Weierstrass' theorem. Then:










\max \frac{\left| \nabla_i f(\bar{x} + t e_i) - \nabla_i f(\bar{x}) \right|}{\left\| \bar{x} + t e_i - \bar{x} \right\|} = L_i  equation (46)









immediately elicits equation (43). The following second-order partial derivative













\nabla_{i,i}^2 f(\bar{x}) = \left[ \nabla^2 f(\bar{x}) \right]_{i,i} = \left[ \frac{\partial^2 f(\bar{x})}{\partial \bar{x} \, \partial \bar{x}^T} \right]_{i,i} = \frac{\partial^2 f(\bar{x})}{\partial \bar{x}_i^2}  equation (47)









is well defined since f(x̄) is twice continuously differentiable. Noting that ∇_{i,i}^2 f(x̄) is the partial derivative of ∇_i f(x̄) with respect to x̄_i, and using equation (43),













\nabla_{i,i}^2 f(\bar{x}) = \lim_{t \to 0} \frac{\nabla_i f(\bar{x} + t e_i) - \nabla_i f(\bar{x})}{t} \le L_i  equation (48)









is obtained, which holds for all x̄ ∈ S_{f_0}. Applying Taylor's theorem and equation (48), there exists a γ ∈ [0, 1] with x̄ + γte_i ∈ S_{f_0} such that













f(\bar{x} + t e_i) = f(\bar{x}) + t \nabla f(\bar{x})^T e_i + \frac{t^2}{2} e_i^T \nabla^2 f(\bar{x} + \gamma t e_i) e_i
  = f(\bar{x}) + t \nabla_i f(\bar{x}) + \frac{t^2}{2} \nabla_{i,i}^2 f(\bar{x} + \gamma t e_i)
  \le f(\bar{x}) + t \nabla_i f(\bar{x}) + \frac{L_i}{2} t^2.  equation (49)








The component-wise Lipschitz constant L_i is not easy to compute or estimate because the partial derivatives are complicated multivariate polynomials. However, the coordinate descent techniques do not require L_i. This quantity is just used for theoretical convergence analysis. The minimum and maximum of all the component-wise Lipschitz constants are L_min = min_{1≤i≤2N} L_i and L_max = max_{1≤i≤2N} L_i, respectively. Employing similar steps to Lemma 2, we can prove that the full gradient ∇f(x̄) is Lipschitz continuous:

\left\| \nabla f(\bar{x}) - \nabla f(\bar{z}) \right\| \le L \left\| \bar{x} - \bar{z} \right\|  equation (50)

with the “full” Lipschitz constant L. It is not difficult to show L ≤ Σ_i L_i, and thus L ≤ 2N L_max is obtained.


Theorem 1: The CCD, RCD, and GCD globally converge to a stationary point of the multivariate quartic polynomial from an arbitrary initialization.


Proof: Based on the component-wise Lipschitz continuous property of equation (44), it is derived that:













f(\bar{x}^{k+1}) = \min_{\alpha} f(\bar{x}^k + \alpha e_{i_k})
  \le f(\bar{x}^k + \alpha e_{i_k}) \Big|_{\alpha = -\nabla_{i_k} f(\bar{x}^k)/L_{i_k}}
  = f\left( \bar{x}^k - \frac{\nabla_{i_k} f(\bar{x}^k)}{L_{i_k}} e_{i_k} \right)
  \le f(\bar{x}^k) - \frac{\left( \nabla_{i_k} f(\bar{x}^k) \right)^2}{L_{i_k}} + \frac{L_{i_k}}{2} \cdot \frac{\left( \nabla_{i_k} f(\bar{x}^k) \right)^2}{L_{i_k}^2}
  = f(\bar{x}^k) - \frac{1}{2 L_{i_k}} \left( \nabla_{i_k} f(\bar{x}^k) \right)^2
  \le f(\bar{x}^k) - \frac{1}{2 L_{\max}} \left( \nabla_{i_k} f(\bar{x}^k) \right)^2  equation (51)









from which we obtain a lower bound on the progress made by each coordinate descent technique iteration











f(\bar{x}^k) - f(\bar{x}^{k+1}) \ge \frac{1}{2 L_{\max}} \left( \nabla_{i_k} f(\bar{x}^k) \right)^2.  equation (52)








For different rules of index selection, the right-hand side of equation (52) will differ. For GCD, it chooses the index with the largest partial derivative magnitude. Hence, by equation (9), it follows











\left( \nabla_{i_k} f(\bar{x}^k) \right)^2 = \left\| \nabla f(\bar{x}^k) \right\|_{\infty}^2 \ge \frac{1}{2N} \left\| \nabla f(\bar{x}^k) \right\|^2.  equation (53)








Plugging equation (53) into equation (52) leads to the following lower bound of the progress of one GCD iteration











f(\bar{x}^k) - f(\bar{x}^{k+1}) \ge \frac{1}{4 N L_{\max}} \left\| \nabla f(\bar{x}^k) \right\|^2.  equation (54)








This means that one GCD iteration decreases the objective function by at least ∥∇f(x̄^k)∥^2/(4N L_max). Setting k = 0, . . . , j in equation (54) and summing over all inequalities yields













\sum_{k=0}^{j} \left\| \nabla f(\bar{x}^k) \right\|^2 \le 4 N L_{\max} \left( f(\bar{x}^0) - f(\bar{x}^{j+1}) \right) \le 4 N L_{\max} f_0  equation (55)









where ƒ(xj+1)≥0 is used. Taking the limit as j→∞ on equation (55), a convergent series is obtained













\sum_{k=0}^{\infty} \left\| \nabla f(\bar{x}^k) \right\|^2 \le 4 N L_{\max} f_0.  equation (56)








If a series converges, then its terms approach zero, which indicates that ∇f(x̄^k) approaches the zero vector as k→∞, that is, the GCD converges to a stationary point.


For RCD, since ik is a random variable, ƒ(xk+1) is also random:











\mathbb{E}\left[ f(\bar{x}^{k+1}) \right] \le \mathbb{E}\left[ f(\bar{x}^k) - \frac{1}{2 L_{\max}} \left( \nabla_{i_k} f(\bar{x}^k) \right)^2 \right]
  = f(\bar{x}^k) - \frac{1}{2 L_{\max}} \sum_{i=1}^{2N} \frac{1}{2N} \left( \nabla_i f(\bar{x}^k) \right)^2
  = f(\bar{x}^k) - \frac{1}{4 N L_{\max}} \left\| \nabla f(\bar{x}^k) \right\|^2  equation (57)









where the fact that ik is uniformly sampled from {1, . . . , 2N} with equal probability of 1/(2N) is employed. Then the RCD at least obtains a decrease on the objective function in expectation











f(\bar{x}^k) - \mathbb{E}\left[ f(\bar{x}^{k+1}) \right] \ge \frac{1}{4 N L_{\max}} \left\| \nabla f(\bar{x}^k) \right\|^2.  equation (58)








Following similar steps as in the GCD, it is sufficient to prove that the expected gradient of the RCD approaches the zero vector, and thus the RCD converges to a stationary point in expectation.


For CCD, ik takes value cyclically from { 1, . . . , 2N}. Applying the property of CCD, a lower bound of the decrease of the objective function after 2N CCD iterations can be derived:











f(\bar{x}^k) - f(\bar{x}^{k+2N}) \ge \frac{\left\| \nabla f(\bar{x}^k) \right\|^2}{4 L_{\max} \left( 1 + 2 N L^2 / L_{\min}^2 \right)}.  equation (59)








Setting k=0, 2N, . . . , 2jN, in equation (59) and summing over all the inequalities yields
















\sum_{k=0}^{j} \left\| \nabla f(\bar{x}^{2kN}) \right\|^2 \le 4 L_{\max} \left( 1 + 2 N L^2 / L_{\min}^2 \right) \left( f(\bar{x}^0) - f(\bar{x}^{2(j+1)N}) \right) \le 4 L_{\max} \left( 1 + 2 N L^2 / L_{\min}^2 \right) f(\bar{x}^0).  equation (60)








Taking the limit as j→∞ of (60) yields a convergent series. Thus, the gradient approaches zero, indicating that the CCD converges to a stationary point.


Local Convergence to Global Minimum


If x* is an optimal solution of equation (2), then all the elements of the following set

\mathcal{P}_c := \left\{ e^{j\phi} x^*, \ \phi \in [0, 2\pi) \right\}  equation (61)

are also optimal solutions of equation (2). The distance of a vector z ∈ ℂ^N to 𝒫_c is










\mathrm{dist}(z, \mathcal{P}_c) = \min_{\phi} \left\| z - e^{j\phi} x^* \right\|  equation (62)









and the minimum of equation (62) is attained at ϕ = ϕ(z). Similarly, the set of all optimal solutions of the real-valued problem (3) is defined as









\mathcal{P} := \left\{ \begin{bmatrix} \mathrm{Re}(e^{j\phi} x^*) \\ \mathrm{Im}(e^{j\phi} x^*) \end{bmatrix} \triangleq T_{\phi}(\bar{x}^*), \ \phi \in [0, 2\pi) \right\}  equation (63)









where x̄^* = [Re(x^*)^T Im(x^*)^T]^T is a global minimizer of (3). That is, T_ϕ(x̄^*) denotes the effect of a phase rotation on x̄^*. The projection of x̄^k onto 𝒫 is the point in 𝒫 closest to x̄^k, which is denoted as T_{ϕ_k}(x̄^*) where










\phi_k = \arg\min_{\phi} \left\| \bar{x}^k - T_{\phi}(\bar{x}^*) \right\|.  equation (64)








Then the distance of x̄^k to 𝒫 is













\mathrm{dist}(\bar{x}^k, \mathcal{P}) = \min_{\phi} \left\| \bar{x}^k - T_{\phi}(\bar{x}^*) \right\| = \left\| \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right\|.  equation (65)








Proving that dist(x̄^k, 𝒫) → 0 further explains the benefits of embodiments described herein. The following lemma, which essentially states that the gradient of the objective function is well behaved, is crucial to the proof.


Lemma 3: For any z ∈ ℂ^N with dist(z, 𝒫_c) ≤ ϵ, the regularity condition

\mathrm{Re}\left( \left\langle \nabla f(z), z - e^{j\phi(z)} x^* \right\rangle \right) \ge \rho \, \mathrm{dist}^2(z, \mathcal{P}_c) + \eta \left\| \nabla f(z) \right\|^2, \quad \rho > 0, \ \eta > 0  equation (66)

holds with high probability if the number of measurements satisfies M≥C0N log N with C0>0 being a sufficiently large constant.


Although the regularity condition of Lemma 3 corresponds to the complex-valued case, the real-valued version according to equation (63) and equation (64) is obtained at once. For x̄^k satisfying dist(x̄^k, 𝒫) ≤ ϵ, we have:

\left\langle \nabla f(\bar{x}^k), \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right\rangle \ge \rho \, \mathrm{dist}^2(\bar{x}^k, \mathcal{P}) + \eta \left\| \nabla f(\bar{x}^k) \right\|^2.  equation (67)


Theorem 2: Assume that the sample size satisfies M≥C0N log N with a sufficiently large C0. The RCD with a slight modification, in which the one-dimensional search is limited to a line segment, that is:











\alpha_k = \arg\min_{\alpha} f\left( \bar{x}^k + \alpha e_{i_k} \right), \quad \text{s.t. } |\alpha| \le 2 \eta \left| \nabla_{i_k} f(\bar{x}^k) \right|  equation (68)









converges to 𝒫 in expectation with high probability at a geometric rate











\mathbb{E}\left[ \mathrm{dist}^2(\bar{x}^{k+1}, \mathcal{P}) \right] \le \left( 1 - \frac{\rho}{\gamma_{\max} N} \right)^k \mathrm{dist}^2(\bar{x}^0, \mathcal{P}), \quad \gamma_{\max} > 0.  equation (69)








Proof: The updating equation of the CD, i.e., x̄^{k+1} = x̄^k + α_k e_{i_k}, is equivalently expressed as

\bar{x}^{k+1} = \bar{x}^k - \gamma_k \nabla_{i_k} f(\bar{x}^k) e_{i_k}  equation (70)

where γ_k = −α_k/∇_{i_k}f(x̄^k). It requires γ_k > 0 to ensure f(x̄^{k+1}) < f(x̄^k). Hence, |α_k| ≤ 2η|∇_{i_k}f(x̄^k)| means 0 < γ_k ≤ 2η. Beginning from equation (65), it follows














\mathrm{dist}^2(\bar{x}^{k+1}, \mathcal{P}) = \left\| \bar{x}^{k+1} - T_{\phi_{k+1}}(\bar{x}^*) \right\|^2
  \le \left\| \bar{x}^{k+1} - T_{\phi_k}(\bar{x}^*) \right\|^2
  = \left\| \bar{x}^k - \gamma_k \nabla_{i_k} f(\bar{x}^k) e_{i_k} - T_{\phi_k}(\bar{x}^*) \right\|^2
  = \left\| \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right\|^2 + \gamma_k^2 \left( \nabla_{i_k} f(\bar{x}^k) \right)^2 - 2 \gamma_k \nabla_{i_k} f(\bar{x}^k) \, e_{i_k}^T \left( \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right)
  = \mathrm{dist}^2(\bar{x}^k, \mathcal{P}) + \gamma_k^2 \left( \nabla_{i_k} f(\bar{x}^k) \right)^2 - 2 \gamma_k \nabla_{i_k} f(\bar{x}^k) \left[ \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right]_{i_k}.  equation (71)








It is already known that 𝔼[(∇_{i_k}f(x̄^k))^2] = ∥∇f(x̄^k)∥^2/(2N) by (57). This also yields










\mathbb{E}\left[ \nabla_{i_k} f(\bar{x}^k) \left[ \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right]_{i_k} \right] = \frac{1}{2N} \sum_{i=1}^{2N} \nabla_i f(\bar{x}^k) \left[ \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right]_i
  = \frac{1}{2N} \left\langle \nabla f(\bar{x}^k), \bar{x}^k - T_{\phi_k}(\bar{x}^*) \right\rangle
  \ge \frac{\rho}{2N} \mathrm{dist}^2(\bar{x}^k, \mathcal{P}) + \frac{\eta}{2N} \left\| \nabla f(\bar{x}^k) \right\|^2  equation (72)









where the last line follows from equation (67). Combining equation (71) and equation (72) yields










\mathbb{E}\left[ \mathrm{dist}^2(\bar{x}^{k+1}, \mathcal{P}) \right] \le \left( 1 - \frac{\rho \gamma_k}{N} \right) \mathrm{dist}^2(\bar{x}^k, \mathcal{P}) + \frac{\gamma_k}{2N} \left( \gamma_k - 2\eta \right) \left\| \nabla f(\bar{x}^k) \right\|^2
  \le \left( 1 - \frac{\rho \gamma_k}{N} \right) \mathrm{dist}^2(\bar{x}^k, \mathcal{P})  equation (73)









where the last inequality follows from 0 < γ_k ≤ 2η. Successively applying equation (73), the following is obtained:











\mathbb{E}\left[ \mathrm{dist}^2(\bar{x}^{k+1}, \mathcal{P}) \right] \le \prod_{j=1}^{k} \left( 1 - \frac{\rho \gamma_j}{N} \right) \mathrm{dist}^2(\bar{x}^0, \mathcal{P}) \le \left( 1 - \frac{\rho \gamma_{\min}}{N} \right)^k \mathrm{dist}^2(\bar{x}^0, \mathcal{P}), \quad \gamma_{\min} = \min_{1 \le j \le k} \gamma_j.  equation (74)








As seen from the foregoing, embodiments of the phase retrieval system enable recovery of a signal using intensity measurements and, in doing so, recover the phase of the signal given only the magnitude-square component. This is done in a fast manner even when the sample number is small. Certain embodiments can also be applied for the recovery of sparse signals. These features are enabled by a novel framework for employing coordinate descent techniques. These techniques include three variants, namely, CCD, RCD, and GCD, which are devised for phase retrieval of general signals-of-interest. At each iteration, the coordinate descent technique only needs to find the minimum of a univariate quartic polynomial, which is achieved by finding the closed-form roots of a cubic polynomial. Thus, the coordinate descent technique is computationally simple and easy to implement.


Although the phase retrieval problem can be expressed as minimization of a multivariate quartic polynomial, it is still NP-hard. However, by applying coordinate descent, only the minimum of a univariate quartic polynomial at each iteration is determined. This yields a closed-form solution. The described coordinate descent techniques for phase retrieval, namely, CCD, RCD and GCD, are computationally simple and converge faster than WF, which is the state-of-the-art approach.


To further explain these features, it is theoretically proven that the three (3) coordinate descent techniques converge to a stationary point from any initial value. It is further proved that, according to certain embodiments, the RCD converges to the global minimum and achieves exact retrieval with high probability provided that the sample size is sufficiently large. Also, with the use of l1-regularization, CCD and RCD are extended for phase retrieval of sparse signals, referred to as l1-CCD and l1-RCD, respectively. The l1-CCD and l1-RCD outperform previously known techniques, namely, WF, PhaseLift, and PhaseCut, particularly when the number of intensity measurements is small.


According to other embodiments, CCD, RCD and GCD can be applied to blind equalization of constant-modulus communication signals. These techniques are superior to the benchmarks of WF and SE methods in terms of faster convergence and smaller ISI.



FIG. 9 shows a block diagram of phase retrieval system 900 for recovering a full signal-of-interest. Phase retrieval system 900 can be used with, or be part of, a system(s) that include, e.g., a telescope, a microscope, or other optical system using phase retrieval techniques and utilized in various applications, including astronomy, computational biology, crystallography, digital communications, electron microscopy, neutron radiography and optical imaging.


System 900 comprises signal capture block 901, which captures signals, e.g., transmitted or reflected by object 902. Signal capture block 901 receives or otherwise obtains signals or signal data relating to object 902, which can be signals-of-interest which are recovered by system 900.


Object 902, which may be a celestial star, a transmission antenna, or other signal source, emits or reflects signals to form incoming wave front 903. Wave front 903 may be a spherical wave front, a plane wave front, or the like. Wave front 903 propagates from object 902 to phase retrieval system 900. System 900 iterates a series of operations and has an input and an output. A data set having a plurality of elements, each element containing magnitude and phase information, is received at the input. After completing an iteration, the method outputs a new approximation of the received data set, and this approximation forms the basis for the input to the next iteration. It is intended that each iteration is a better approximation than the last iteration.


Within system 900, wave front 903 is applied to optics including lens 904, which can be a Fourier transform lens having its focus at screen 905. For instance, according to embodiments in which lens 904 is a Fourier transform lens, lens 904 performs a frequency-space transformation to produce a reconstruction of wave front 903 at the screen 905 in the spatial domain.


In operation of system 900, phase retrieval engine (PRE) 906 recovers a full signal-of-interest from intensity or magnitude information utilizing the iterative coordinate descent techniques described herein. PRE 906 is in communication with processors of system 900 and other components to enable the functions described herein.


Display unit 907 displays one or more signals-of-interest based on the coordinate descent techniques utilized by system 900 in a given embodiment. Display unit 907 can be electronically accessible and communicatively connected to other components of phase retrieval system 900. According to some embodiments, each pixel of a display device comprising display unit 907 is electronically accessible to other components of phase retrieval system 900. Receiving data from components of phase retrieval system 900, display unit 907 can display the signals-of-interest at or near real time.


Display unit 907 and components of phase retrieval system 900 operate in conjunction with one another to present the signals of interest to a user. Display unit 907 can be, or can comprise, a display device(s), such as an SLM display device or an LCoS display device. In other embodiments, display unit 907 can comprise one or more of high-resolution LCDs, 3D television (TV) displays, high-resolution LCoS display devices, high-resolution SLM display devices, or other desired display devices suitable for displaying the signals-of-interest.


It should be appreciated that signal-of-interest can be communicated over wired or wireless communication channels to display unit 907 or other display components (e.g., remote display components, such as a 3D TV display) to facilitate generation and display of the 3D signals-of-interest.


Phase retrieval system 900 can comprise additional components that enable system 900 to execute functions described herein. For example, phase retrieval system 900 further comprises communication block 908 that communicates information between components of phase retrieval system 900 (e.g., display component(s), capture device(s), processor component(s), user interface(s), data store(s), etc.). The information includes, for example, a source object, signal-of-interests, information relating to defined signal-of-interest generation criterion(s), information relating to an algorithm(s) that facilitate generating signals-of-interest, etc.


Phase retrieval system 900 also comprises aggregator 909 that aggregates data received from various entities (e.g., capture device(s), display component(s), processor component(s), user interface(s), data store(s), etc.). Aggregator 909 correlates respective items of data based at least in part on type of data, source of the data, time or date the data was generated or received, point with which data is associated, image with which data is associated, pixel with which a transparency level is associated, etc., to facilitate processing of the data.


Phase retrieval system 900 also can comprise a processor 910 that can operate in conjunction with the other components to perform the various functions of phase retrieval system 900. Processor 910 can employ one or more processors (e.g., central processing units (CPUs), GPUs, FPGAs, etc.), microprocessors, or controllers that can process data, such as data relating to parameters associated with phase retrieval system 900 and associated components, etc., to perform operations relating to generating signal-of-interests.


Phase retrieval system 900 can also comprise data store 911 that stores data structures (e.g., user data, metadata); code structure(s) (e.g., modules, classes, procedures), commands, or instructions; information relating to generating a signal-of-interest, parameter data; algorithms; criterion(s) relating to signal-of-interest generation; and so on.


Processor 910 can be functionally coupled to data store 911 to store and retrieve information desired to operate and/or confer functionality, at least in part, to other system components and/or substantially any other operational aspects of phase retrieval system 900. It should be appreciated that the various components of phase retrieval system 900 can communicate information between each other and/or between other components associated with phase retrieval system 900 as desired to carry out operations of phase retrieval system 900. It should be further appreciated that respective components of phase retrieval system 900 each can be a stand-alone unit, can be included within phase retrieval system 900 (as depicted), can be incorporated within another component of phase retrieval system 900 or a component separate from phase retrieval system 900, and/or a combination thereof.


Phase retrieval system 900 can further comprise intelligence block 912 that can be associated with (e.g., communicatively connected to) processor 910 and/or other components associated with system 900 to facilitate analyzing data, such as current and/or historical information, and, based at least in part on such information, can make an inference(s) and/or a determination(s) and the like. For example, based in part on current and/or historical evidence, intelligence block 912 can infer or determine a process or algorithm to use.


Intelligence block 912 communicates information related to the inferences and/or determinations to processor 910. Based on the inference(s) or determination(s) made by intelligence block 912, processor 910 can take one or more actions to facilitate generating a signal-of-interest.


Intelligence block 912 reasons about or infers states of system 900, its environment, and/or users from a set of observations as captured via events and/or data. Inferences can be employed to identify a specific context or action, or can generate a probability distribution over states, for example. The inference can be probabilistic—that is, the computation of a probability distribution over states of interest based on a consideration of data and events. Inferences can also refer to techniques employed for composing higher-level events from a set of events and/or data. Such inference results in the construction of new events or actions from a set of observed events and/or stored event data (e.g., historical data), whether or not the events are correlated in close temporal proximity, and whether the events and data come from one or several event and data sources. Various classification (explicitly and/or implicitly trained) schemes and/or systems (e.g., support vector machines, neural networks, expert systems, Bayesian belief networks, fuzzy logic, data fusion engines . . . ) can be employed in connection with performing automatic and/or inferred action in connection with the disclosed subject matter.


System 900 also can comprise presenter 913, which is connected to processor 910. Presenter 913 provides various types of user interfaces to facilitate interaction between a user and any component coupled to processor 910. Presenter 913 can be a separate entity that can be utilized with processor 910 and associated components. However, it is to be appreciated that presenter 913 and/or similar view components can be incorporated into processor 910 and/or can be a stand-alone unit. Presenter 913 provides one or more graphical user interfaces (GUIs) (e.g., a touchscreen GUI), command line interfaces, and the like. For example, a GUI can be rendered that provides a user with a region or means to load, import, read, etc., data, and can include a region to present the results of such actions. These regions can comprise known text and/or graphic regions comprising dialogue boxes, static controls, drop-down menus, list boxes, pop-up menus, edit controls, combo boxes, radio buttons, check boxes, push buttons, and graphic boxes. The user can interact with one or more of the components coupled to and/or incorporated into the processor 910.


The user can also interact with the regions to select and provide information via various devices such as a mouse, a roller ball, a keypad, a keyboard, a touchscreen, a pen, and/or voice activation, for example. Typically, a mechanism such as a push button or the enter key on the keyboard can be employed subsequent to entering the information in order to initiate information conveyance. However, it is to be appreciated that the claimed subject matter is not so limited. For example, merely highlighting a check box can initiate information conveyance. In another example, a command line interface can be employed. For example, the command line interface can prompt the user for information (e.g., via a text message on a display and an audio tone). The user can then provide suitable information, such as alpha-numeric input corresponding to an option provided in the interface prompt or an answer to a question posed in the prompt. It is to be appreciated that the command line interface can be employed in connection with a GUI and/or an API. In addition, the command line interface can be employed in connection with hardware (e.g., video cards) and/or displays (e.g., black-and-white and EGA displays) with limited graphic support, and/or with low-bandwidth communication channels.


In accordance with one embodiment of the disclosed subject matter, the processor 910 and/or other components, can be situated or implemented on a single integrated-circuit chip. In accordance with another embodiment, processor 910, and/or other components, can be implemented on an application-specific integrated-circuit (ASIC) chip. In yet another embodiment, processor 910 and/or other components, can be situated or implemented on multiple dies or chips.



FIG. 10 shows a flow diagram of an exemplary method performed according to the concepts described herein. Method 1000 is performed to efficiently retrieve phase information for a signal-of-interest and can be executed by a phase retrieval system, such as system 900 shown at FIG. 9.


At step 1001, magnitude information for a signal-of-interest is expressed as an objective function, ƒ(x). As previously discussed, the objective function, ƒ(x), can be expressed in the form of a multivariate quartic polynomial.
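

As an illustration only, the sketch below shows one common way to form such an objective, assuming magnitude-square observations, y, of a linear transformation, A, of the signal-of-interest; the names quartic_objective, A, y, and z are illustrative and are not the patent's notation. Because each squared magnitude is quadratic in the real and imaginary parts of the signal, the sum of squared residuals is a quartic polynomial in 2N real variables.

    import numpy as np

    def quartic_objective(z, A, y):
        """Least-squares phase-retrieval objective f = sum_m (|a_m^H z|^2 - y_m)^2.

        z : complex signal-of-interest of length N
        A : complex measurement matrix whose m-th row is a_m^H
        y : real magnitude-square observations of length M
        """
        residual = np.abs(A @ z) ** 2 - y  # model magnitudes minus observations
        return np.sum(residual ** 2)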


At step 1002, a coordinate index, ik, is selected according to one or more predefined coordinate descent rules. The coordinate descent rules can be one or more of a cyclic rule, a random rule, or a greedy rule. In one embodiment, the coordinate index, ik, is cyclically assigned sequential values from 1 to 2N for each of the variable components of the multivariate quartic polynomial. In another embodiment, the coordinate index, ik, is randomly assigned values from 1 to 2N, where each value is assigned with equal probability, for each of the variable components of the multivariate quartic polynomial. In yet another embodiment, the coordinate index, ik, is assigned a value of:











i_k = arg min_i ƒ_i(x_k),  ƒ_i(x_k) = ∂ƒ(x_k)/∂x_{i_k}.     equation (75)


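For illustration, the selection rules of step 1002 can be summarized in the short sketch below. This is a minimal sketch in which select_index and num_vars are assumed names, indices run from 0 to 2N−1 rather than 1 to 2N, and the greedy branch simply returns the index minimizing caller-supplied scores that play the role of ƒ_i(x_k) in equation (75).

    import numpy as np

    def select_index(k, num_vars, rule="cyclic", scores=None, rng=None):
        """Choose the coordinate index i_k for iteration k.

        "cyclic": sweep the num_vars (= 2N) coordinates in order, one per iteration.
        "random": draw an index uniformly, each value with equal probability.
        "greedy": return the index minimizing the caller-supplied scores, where
                  scores[i] plays the role of f_i(x_k) in equation (75).
        """
        if rule == "cyclic":
            return k % num_vars
        if rule == "random":
            rng = rng if rng is not None else np.random.default_rng()
            return int(rng.integers(num_vars))
        if rule == "greedy":
            return int(np.argmin(scores))
        raise ValueError("unknown coordinate descent rule: " + rule)

Cyclic and random selection cost essentially nothing per iteration, whereas the greedy rule requires scoring all 2N coordinates before each update.

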
At step 1003, one variable component of the multivariate quartic polynomial is minimized while every other variable component of the multivariate quartic polynomial is held at a constant value. Preferably, this is done on an iterative basis where, at each iteration, a different component is minimized while the other components are held at constant values. According to one embodiment, for the kth iteration, this is done by minimizing ƒ(xk) with respect to the ikth (ik∈{1, . . . , 2N}) variable while maintaining the remaining 2N−1 variables {xi}i≠ik at a constant value. Also, step 1003 can involve solving only one variable component while maintaining every other variable component at a constant value.
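

Because ƒ restricted to the selected coordinate is a univariate quartic, its minimizer lies among the real roots of the cubic obtained by differentiating that quartic, and those roots are available in closed form. The sketch below illustrates one such update for the least-squares objective sketched at step 1001, using a real/imaginary split of the complex signal; it is a minimal sketch under those assumptions, not the patent's implementation, and cd_update, A, y, and u are illustrative names.

    import numpy as np

    def cd_update(z, A, y, i):
        """One coordinate-descent step for f(z) = sum_m (|a_m^H z|^2 - y_m)^2.

        Coordinate i in {0, ..., 2N-1} indexes the real part of z[i] for i < N
        and the imaginary part of z[i-N] otherwise.  Every other coordinate is
        held constant; the step t along the selected coordinate minimizes the
        univariate quartic f(z + t*u), found from the roots of its cubic
        derivative.
        """
        N = z.size
        u = np.zeros(N, dtype=complex)
        u[i % N] = 1.0 if i < N else 1.0j    # unit direction of the selected coordinate

        c = A @ z                            # current linear measurements a_m^H z
        b = A @ u                            # change in each measurement per unit step
        p = np.abs(c) ** 2 - y               # residuals at t = 0
        q = 2.0 * np.real(np.conj(c) * b)    # |c + t b|^2 = |c|^2 + q t + s t^2
        s = np.abs(b) ** 2

        # Coefficients of the univariate quartic f(t) = sum_m (p_m + q_m t + s_m t^2)^2
        c4 = np.sum(s ** 2)
        c3 = 2.0 * np.sum(q * s)
        c2 = np.sum(q ** 2 + 2.0 * p * s)
        c1 = 2.0 * np.sum(p * q)

        # Stationary points of the quartic: roots of 4*c4*t^3 + 3*c3*t^2 + 2*c2*t + c1
        roots = np.roots([4.0 * c4, 3.0 * c3, 2.0 * c2, c1])
        t = roots[np.abs(roots.imag) < 1e-9].real
        if t.size == 0:                      # degenerate direction: nothing to update
            return z
        # Constant term omitted below; it does not affect the argmin.
        values = c4 * t**4 + c3 * t**3 + c2 * t**2 + c1 * t
        return z + t[np.argmin(values)] * u  # keep the real root minimizing the quartic

A complete pass would call select_index to pick ik and then cd_update for that coordinate, repeating until ƒ(x) stops decreasing.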


The aforementioned systems and/or devices have been described with respect to interaction between several components. It should be appreciated that such systems and components can include those components or sub-components specified therein, some of the specified components or sub-components, and/or additional components. Sub-components could also be implemented as components communicatively coupled to other components rather than included within parent components. Further yet, one or more components and/or sub-components may be combined into a single component providing aggregate functionality. The components may also interact with one or more other components not specifically described herein for the sake of brevity, but known by those of skill in the art.


Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims
  • 1. A method for obtaining phase information for a signal-of-interest from only magnitude information for the signal-of-interest, the method comprising: capturing, using a capture device, magnitude information for the signal-of-interest; determining an objective function, ƒ(x), as a function of the magnitude information, the objective function being in the form of a multivariate quartic polynomial; selecting a coordinate index, ik, according to one or more predefined coordinate descent rules; iteratively minimizing, according to the selected coordinate index, the multivariate quartic polynomial, where minimizing the multivariate quartic polynomial comprises: minimizing only one variable component of the multivariate quartic polynomial while maintaining every other variable component of the multivariate quartic polynomial at a constant value; and displaying, to a user, via a presenter communicatively coupled to the capture device that facilitates interaction between the user and at least the capture device, a graphical user interface, where the graphical user interface comprises: a first graphic region displaying phase information of the signal-of-interest to the user, a second graphic region displaying magnitude information for the signal-of-interest to the user, and a third region to receive user input comprising capture device input comprising capture device operating instructions.
  • 2. The method of claim 1 where selecting the coordinate index, ik, comprises: cyclically assigning ik sequential values for each of the variable components of the multivariate quartic polynomial.
  • 3. The method of claim 1 where selecting the coordinate index, ik, comprises: randomly assigning ik values, where each value is assigned with equal probability, for each of the variable components of the multivariate quartic polynomial.
  • 4. The method of claim 1 where selecting the coordinate index, ik, comprises: assigning ik as:
  • 5. The method of claim 1 where iteratively minimizing only one variable component of the multivariate quartic polynomial while maintaining every other variable component of the multivariate quartic polynomial at a constant value comprises: for the kth iteration, minimizing ƒ(xk) with respect to the ikth (ik∈{1, . . . , 2N}) variable while maintaining the remaining 2N−1 variable components {xi}i≠ik at a constant value.
  • 6. The method of claim 1 where iteratively minimizing only one variable component of the multivariate quartic polynomial while maintaining every other variable component of the multivariate quartic polynomial at a constant value comprises: solving the only one variable component while maintaining every other variable component at a constant value.
  • 7. The method of claim 1 where the one or more predefined coordinate descent rules is selected from the group consisting of: a cyclic rule; a random rule; and a greedy rule.
  • 8. The method of claim 1 where the signal-of-interest is a sparse signal and one or more of the predefined coordinate descent rules is obtained by minimizing an l1-regularized quartic polynomial.
  • 9. A system for obtaining phase information for a signal-of-interest from only magnitude information for the signal-of-interest, the system comprising: a capture device that captures at least magnitude information for the signal-of-interest; a memory; one or more processors coupled to the memory, the one or more processors: determining an objective function, ƒ(x), as a function of the magnitude information, the objective function being in the form of a multivariate quartic polynomial; selecting a coordinate index, ik, according to one or more predefined coordinate descent rules; and iteratively minimizing, according to the selected coordinate index, the multivariate quartic polynomial, where minimizing the multivariate quartic polynomial comprises: minimizing only one variable component of the multivariate quartic polynomial while maintaining every other variable component of the multivariate quartic polynomial at a constant value; and a presenter communicatively coupled to the capture device that facilitates interaction between a user and at least the capture device and displays a graphical user interface, where the graphical user interface comprises: a first graphic region displaying phase information of the signal-of-interest to the user, a second graphic region displaying magnitude information for the signal-of-interest to the user, and a third region to receive user input comprising capture device input comprising capture device operating instructions.
  • 10. The system of claim 9 where selecting the coordinate index, ik, comprises: cyclically assigning ik sequential values for each of the variable components of the multivariate quartic polynomial.
  • 11. The system of claim 9 where selecting the coordinate index, ik, comprises: randomly assigning ik values, where each value is assigned with equal probability, for each of the variable components of the multivariate quartic polynomial.
  • 12. The system of claim 9 where selecting the coordinate index, ik, comprises: assigning ik as:
  • 13. The system of claim 9 where iteratively minimizing only one variable component of the multivariate quartic polynomial while maintaining every other variable component of the multivariate quartic polynomial at a constant value comprises: for the kth iteration, minimizing ƒ(xk) with respect to the ikth (ik∈{1, . . . , 2N}) variable while maintaining the remaining 2N−1 variables {xi}i≠ik at a constant value.
  • 14. The system of claim 9 where iteratively minimizing only one variable component of the multivariate quartic polynomial while maintaining every other variable component of the multivariate quartic polynomial at a constant value comprises: solving the only one variable component while maintaining every other variable component at a constant value.
  • 15. The system of claim 9 where the one or more predefined coordinate descent rules is selected from the group consisting of: a cyclic rule; a random rule; and a greedy rule.
  • 16. The system of claim 9 where the signal-of-interest is a sparse signal and one or more of the predefined coordinate descent rules are obtained by minimizing an l1-regularized quartic polynomial.
US Referenced Citations (1)
Number Name Date Kind
20160204963 Abdoli Jul 2016 A1
Non-Patent Literature Citations (8)
Entry
Waldspurger et al., “Phase Recovery, Maxcut and Complex Semidefinite Programming”, pp. 1-27, 2013.
Wright, “Coordinate Descent Algorithms”, pp. 3-34, 2015.
R. Gerchberg, W. Saxton, “A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures,” Optik, vol. 35, pp. 237-246, 1972.
E. Candes, X. Li, M. Soltanolkotabi, "Phase Retrieval via Wirtinger Flow: Theory and Algorithms," IEEE Trans. Inf. Theory, vol. 61, No. 4, pp. 1985-2007, Apr. 2015.
E. Candes, T. Strohmer, V. Voroninski, “PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming,” Commun. Pure Appl. Math., vol. 66, No. 8, pp. 1241-1274, 2013.
I. Waldspurger, A. d'Aspremont, S. Mallat, “Phase recovery, MaxCut and complex semidefinite programming.” Math. Program., Ser. A, vol. 149, No. 1, pp. 47-81, Feb. 2015.
Y. Shechtman, Y. Eldar, O. Cohen, H. Chapman, J. Miao, M. Segev, "Phase Retrieval with Application to Optical Imaging," IEEE Signal Process. Mag., vol. 32, No. 3, pp. 87-109, May 2015.
O. Shalvi, E. Weinstein, “Super-Exponential Methods for Blind Deconvolution,” IEEE Trans. Inf. Theory, vol. 39, No. 2, pp. 504-519, May 1993.
Related Publications (1)
Number Date Country
20180129630 A1 May 2018 US