The invention relates to a computational method for producing a sequence of high-resolution (HR) images from an input sequence of low-resolution (LR) images. More particularly, the method uses a structured subspace framework to learn pairs of LR dictionaries from the input LR sequence 'and' to employ the learned pairs of LR dictionaries in estimating the HR images. The structured subspace framework itself is based on a pair of specially structured HR basis matrices, wherein each HR basis spans any HR image whose so-called polyphase components (PPCs) are spanned by the corresponding LR dictionary.
This work addresses the problem of multiframe upsampling (also known as super-resolution, upscaling, or de-aliasing) wherein, given as input a sequence of low-resolution (LR) images, the desired output is their corresponding high-resolution (HR) versions. In such a problem, low pixel density (low pixel count per unit area) causes loss of resolution. Applications include all imaging technologies that produce a sequence of images (e.g. a video, MRI scan, satellite images, aerial surveillance, etc.) with a pixel count that is lower than desired.
The conventional approach to the solution of this problem is motion estimation (or motion modeling) across the captured images. But accurate modeling of (complex) motion patterns requires high enough pixel density to begin with: a hopeless chicken-and-egg paradox the signal processing community has been trying to solve for many years.
Instead of the motion estimation route, we adopt a 'signal representation' approach (also known as the example-based, training-based or dictionary learning-based approach), whose relevant (previous) results we summarize as follows.
Fact 1: In the machine learning and signal processing communities, it has long been established that, given a few samples (partial measurements) of an unknown (HR) image, the entire image can be recovered with reasonable accuracy depending on two main factors: 1) the severity of undersampling; and 2) the existence of an efficient dictionary (basis) that can represent the HR image well with only a few dictionary atoms. In particular, the more undersampled the image, the more efficient the dictionary needs to be for the unknown HR image to be recoverable from the partial measurements of it (the available reference (LR) image).
OBSTACLE to Fact 1: The creation of a dictionary that can efficiently represent an image is a process known in the machine learning community as dictionary learning or training. However, the efficiency of a dictionary (for representing a particular image) does not only depend on the learning method in use; it is also highly dependent on the data (images) used to train the dictionary. In particular, the narrower the training set, the more efficient the learned dictionary would be, almost regardless of the learning process. A training set is said to be narrow if the training images (also known as example images) belong to a narrow subclass of images (one that also includes the sought-after image).
What this means in practice is that if the goal is to recover the HR version of a license plate, say, given only a LR version of it, then a training set extracted from high quality example license plate images would be far more useful than a training set based on generic images. In short, the more specialized the training set, the better the outcome. On the other hand, using specialized (narrow) training sets of HR images to learn efficient dictionaries would be entirely useless for estimating generic images. For example, if your training set is made up of HR images of license plates, the learned dictionaries would be completely useless for estimating any images other than license plates.
Fact 2: A simple fact of signal processing is that any 2D signal can be decomposed into multiple LR versions of it, the so-called polyphase components (PPCs), via the basic operations of (2D) shifting and subsampling (decimation). Specifically, for a decimation (subsampling) factor of p, an image can be decomposed into p² PPCs. The first PPC is obtained by starting with the first pixel in the first row of the image, and then decimating (vertically and horizontally) by p. Decimating, starting with the second pixel in the first row, we get the second PPC, and so forth. For example, the (p+1)-th PPC is obtained by decimating starting with the first pixel in the second row, and the last (p²-th) PPC is the result of decimating beginning with the p-th pixel in the p-th row of the image. Therefore, a HR image can be trivially reconstructed from its PPCs simply by interlacing them.
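For concreteness, the following MATLAB sketch (our own illustration; variable names are not from this disclosure) decomposes a toy image into its p² PPCs and recovers the image exactly by interlacing:

    % Decompose an image into its p^2 PPCs and reassemble it.
    p = 3;                          % decimation factor
    u = magic(12);                  % toy "HR" image (size divisible by p)
    ppcs = cell(p, p);
    for row = 1:p
        for col = 1:p
            ppcs{row, col} = u(row:p:end, col:p:end);   % shift, then decimate
        end
    end
    % Reconstruct the image by interlacing the PPCs:
    u_rec = zeros(size(u));
    for row = 1:p
        for col = 1:p
            u_rec(row:p:end, col:p:end) = ppcs{row, col};
        end
    end
    assert(isequal(u, u_rec))       % exact recovery by interlacing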
Fact 3: A LR sequence of images (e.g. a LR video) can provide a very narrow training set for the PPCs of (the sought-after) HR frames. To put it differently, if we have a sequence of LR images, then why not shift our focus from estimating the unknown HR frames to estimating the (low-resolution) PPCs (of each HR frame) instead?
OBSTACLE 1 to Fact 3: While Fact 3 avoids the obstacle associated with Fact 1, it introduces the MAJOR issue of lack of ‘partial measurements’. To be more specific, even if we have available a very efficient dictionary for representing an (unknown) image, Fact 1 tells us that if NO partial measurements of it exist, then the efficient dictionary is of NO value.
Now, since Fact 3 suggests using a LR sequence to train efficient dictionaries for the PPCs of a HR frame, the signals that need to be estimated in this case are the PPCs themselves (the target images in a solution based on Fact 3 are the PPCs, from which the HR image is trivially reconstructed). However, without partial measurements for each target signal (each PPC), there is no way to go further along the route of Fact 3.
OBSTACLE 2 to Fact 3: Almost all example-based (aka dictionary-based or training-based) upsampling methods use patch-based processing, which simply means that local sub-regions of a HR frame are estimated instead of estimating the entire HR frame as a whole. In a solution scenario that involves Fact 3, patch-based processing would be essential if the scene is dynamic (changing quickly across the captured frames). However, working on small patches would require regularization, which simply means adding useful information to make up for the scarcity of the data (that comes from working on small patches).
To resolve OBSTACLE 1 under Fact 3, the authors of U.S. Pat. No. 8,665,342 (“[1]” hereafter) (incorporated herein by reference) proposed exploiting relationships between PPCs corresponding to different decimating factors. Their proposed solution entailed an imaging hardware modification. In particular, their solution would work only for optical imaging systems (cameras) where a secondary sensor, with a different sampling rate (different resolution sensor), must be incorporated into the camera such that it would have the same line of sight as that of the primary sensor (by incorporating, for example, a beam splitter so that both sensors ‘see’ the same scene at the same time).
In what follows, we present the basic premise the authors of the '342 patent used as an 'enabler' of Fact 3 (the same enabler remains the foundation of the work in U.S. Pat. Nos. 9,538,126; 9,693,012 and 9,781,381, each of which is incorporated herein by reference) ("[2]" hereafter). Note: we simply do NOT use said premise in our current solution. Indeed, the reason we include this section is only to help the interested reader gauge how markedly different our current approach is.
In [1], [2], the job of the secondary sensor is to provide the missing 'partial measurements' for the target signals of Fact 3 (the PPCs). In particular, each secondary frame (captured by the secondary sensor) is decimated into multiple (even lower resolution) images, each of which provides partial measurements for each PPC that needs to be estimated. In the following, we provide a quick review of how the relationships between PPCs were exploited by [1] and [2] for image upsampling.
Given either set of PPCs (the primary PPCs (PPPCs), corresponding to decimation of the HR image by a factor of p, or the secondary PPCs (SPPCs), corresponding to decimation by a factor of q), the HR image can be recovered simply by interlacing the PPCs (from either set). Therefore, if we use the LR images (captured by the camera) to create a representative dictionary (of the same low resolution level) for representing the PPPCs, then we can ultimately reconstruct the HR image simply by finding the representations of the PPPCs in terms of said dictionary. However, without knowing any partial measurements for each PPPC, such a scenario would be impossible. Nonetheless, a careful examination of the relationships between PPCs corresponding to different decimation factors shows that the (further decimated) secondary frames supply exactly such partial measurements of the PPPCs, which is what [1] and [2] exploited.
For the case of ‘dynamic scenes’ (OBSTACLE 2 under Fact 3), the work in [2] proposes regularization in the form of the so-called anchors, as well as generative Gaussian models (GGMs), such that working on small patches of a frame, rather than the whole frame at once, becomes possible.
To recap, Fact 1 and Fact 2 are well known. Fact 3, while quite obvious (given Fact 1 and Fact 2), needed an enabler to get over obstacles associated with it. Previous enablers came in the form of an imaging hardware modification coupled with regularization (for dynamic scenes).
Spatial resolution is, obviously, a very fundamental aspect of image quality, so much so that when the physical constraints of an imaging system force us to choose between many important features, sufficient pixel density comes at the top of the list of priorities, overshadowing other important requirements such as high dynamic range or high-speed imaging, for example. Even in non-optical imaging modalities (where samples are not captured in the spatial domain), the pixel density can be severely limited by the physics of the imaging technology unless other useful features are sacrificed. Take medical imaging for instance: doctors would not likely accept low resolution images, even for the sake of a higher density of image 'slices' (per organ), lower radiation doses or shorter scanning times.
Said differently, the reason why upsampling can be such a powerful tool is the fact that pixel density can be traded for so many very important imaging qualities (freezing motion blur, reduced noise, reduced crosstalk, higher sensitivity, higher dynamic range, smaller size, cost and complexity of the imaging system, higher frame rate, reduced scanning time, reduced radiation dose, reduced slice thickness, added angular resolution, and, of course, larger field of view). But pixel density is of such utmost importance that all of the aforementioned imaging aspects are normally sacrificed to satisfy the basic requirement of adequate pixel density; after all, who wants pixelated images? Upsampling offers the daring solution of turning the tables: sacrificing pixels for the sake of other qualities, and thereafter restoring the missing pixels to acceptable quality. Upsampling is therefore a very fundamental problem, and provided there exists a good enough solution, everyone would want a piece of it. A word search of the US patent database for recent patents on upsampling (also known as super-resolution, upscaling, de-aliasing) would find that many big corporations have patents on upsampling/super-resolution solutions.
In the current invention, we entirely forsake the notion of partial measurements for the PPCs (which necessitates the bi-sensor camera setup of [1], [2]). Instead, we adopt a structured subspace perspective, which allows us to relax the condition of the availability (from the outset) of a secondary LR sequence. In particular, beginning the solution with an 'initial guess' of a secondary sequence of images is at odds with the designated task of providing 'the partial measurements'. By contrast, an initial guess of the missing sequence is admissible within the new solution model, as shall become apparent in the remainder of this disclosure.
In particular, the new structured subspace perspective adopted by this invention constitutes a purely algorithmic 'enabler' of Fact 3, and it renders previous enablers of Fact 3 obsolete. In other words, the currently proposed solution, while based on the (well known) Fact 1, the (well known) Fact 2 and the (obvious) Fact 3, requires NO hardware modification (no secondary sensor/beam splitter is required) and it does NOT require any regularization for the case of dynamic scenes. Additionally, this extends the sphere of applications beyond optical imaging systems (to medical imaging systems, for example, which are non-optical systems).
In this disclosure, we show how the structured subspace perspective can be used to eliminate the (conventionally) basic requirement of 'partial measurements'. Said differently, estimating a signal (the PPCs, in the context of Fact 3) without having some partial measurements of it is, quite simply, heretofore unknown. By contrast, the structured subspace model we adopt here completely circumvents the issue of partial measurements (for PPCs) by seeking representations of HR images in terms of specially structured HR bases (instead of seeking representations of PPCs in terms of LR dictionaries, as was the case in [1], [2]).
In particular, the HR bases we use are structured such that they span all HR images whose PPCs are spanned by LR dictionaries learned from the available sequence of LR images. In other words, instead of the 'brute force' exploitation of the relationships between PPCs corresponding to different subsampling rates (which necessitates a hardware modification, beam splitter etc.), the new structured model herein, in effect, 'seamlessly' embeds these relationships, making the essential issue of partial measurements irrelevant, thus capitalizing, without hindrance, on the obvious fact that a sequence of LR images can be used to provide the best (narrowest) training sets for the PPCs of HR images (Fact 3).
The structured subspace framework is based on estimating the representations of the sought-after HR image in terms of a pair of HR bases, each of which embeds a (different resolution) LR dictionary. Specifically, the structured subspace framework can be summarized with the following equation (which is to be solved in whichever meaningful way possible)
Vα=Wβ
where:
V is the first HR basis matrix, constructed such that it spans any HR image whose PPCs are spanned by the LR dictionary Ψ.
W is the second HR basis matrix constructed such that it spans any HR image whose PPCs are spanned by another (different resolution) LR dictionary Φ.
α is the representation of the HR image in terms of V.
β is the representation of the HR image in terms of W.
Imaging applications that can benefit from upsampling using the disclosed technique:
a) Medical imaging. For example, a magnetic resonance imaging (MRI) system cannot produce an MRI sequence with high density of ‘slices’ without sacrificing the spatial resolution per slice. This is akin to how a camera cannot capture very high frame rate videos without sacrificing the resolution per frame (regardless of the price tag).
b) Other applications, where upsampling can computationally make up for the need to sacrifice spatial resolution, include high dynamic range imaging, thermal cameras, flash LADAR/LIDAR and light field imaging.
c) Situations that require imaging over large distances, where a large field of view (FOV) is nevertheless desired (avoiding optically zooming in on a small portion of the FOV). Gigapixel cameras (hundreds of sensors housed in a giant camera) are needed for this reason. With reliable upsampling, a frame captured with a resolution of 100 megapixels can be blown up to 1.6 gigapixels, for example. The ability to cover a larger field of view (while computationally making up for the loss of resolution) is very useful in remote sensing, surveillance, search and rescue, etc.
d) Consumer/commercial applications. Examples include turning a cell phone camera into a very high-resolution camera, and upconverting a standard-definition video (SDTV) signal to an ultra-high definition video (UHDTV), to match the resolution of high-definition displays.
e) “Denoising by upsampling”: the described techniques can be used to denoise images, of either LR or HR origination. ‘Low-resolution (LR)’ not only means low-number of pixels, but it also means ‘noisy’ images. Similarly, the term ‘high-resolution (HR)’ here simply means ‘upsampled’ (primarily for the purpose of noise removal).
Estimating HR images based on LR dictionaries is a notion that is quite alien to conventional wisdom. Indeed, image upsampling would be an intrinsically simple problem if all it takes to solve it is to “learn” LR dictionaries from a LR sequence, but could it really be that simple? The answer by the authors of [1], [2] (above) was ‘no’, unless the issue of partial measurements for the PPCs is resolved via an imaging hardware modification.
Adding the structured subspace perspective, however, reveals that no imaging hardware modification is necessary, and that the problem can indeed be solved solely by learning LR dictionaries. Before we proceed with solution details, we would like first to list a few assumptions, acronyms, and notational conventions that will be used in describing the details.
We assume p is the (positive integer) upsampling factor, and we define the secondary factor

q = p + 1. (1)

We also assume that the dimensions of the (square) HR image are an integer multiple of pq. Namely, if we let u denote the HR image, then u is a matrix of size d×d, where

d = rpq, (2)

and r is a positive integer. As for notation, the superscript * denotes the transpose. For example, if A is a matrix, then its transpose is denoted by A*.
Laying the foundation for our proposed structured subspace framework begins with asking (and answering) the following question: suppose we have available a LR dictionary that spans the PPCs of the sought-after HR image; how do we construct the 'basis matrix' of the subspace of all HR images whose PPCs are spanned by said dictionary?
The answer to that question lies in the basic fact that, instead of simply interlacing its PPCs, a HR image u ∈ ℝ^(d×d) can alternatively be constructed from its PPCs by adding up zero-filled and shifted versions of the PPCs. Said differently, the answer to that question begins by analytically expressing the HR image in terms of its PPCs.
Specifically, let {F_i}, i = 1, 2, …, p², denote the (2D) PPCs of u corresponding to decimation by a factor of p, with each F_i ∈ ℝ^(rq×rq). Then u can be expressed as

u = Σ_{i=1}^{p²} S_{m_p(i)} (Z_p F_i Z_p*) S_{n_p(i)}*, (3)

where

m_p(i) = ⌊(i−1)/p⌋ and n_p(i) = mod(i−1, p), (4)

Z_p ∈ ℝ^(d×rq) zero-fills the columns by a factor of p (zero-filling 'upsamples' a vector by inserting p−1 zeros between any two elements, and p−1 zeros after the last element), post-multiplying with Z_p* zero-fills the rows, and S_{m_p(i)} ∈ ℝ^(d×d) and S_{n_p(i)} ∈ ℝ^(d×d) are circular shift matrices that shift the rows down by m_p(i) and the columns to the right by n_p(i), respectively.
Using the fact that zero-filling matrices and circular shifting matrices are binary matrices with the property that each column contains a sole "1", the row-indices of the "1" elements give us a full description of such matrices. In particular, the expression

Π_{Z_p}(k) = (k−1)p + 1, for k = 1, 2, …, rq, (5)

which encodes the zero-filling matrix Z_p, gives us all we need to know to construct Z_p. Specifically, equation (5) literally says that Z_p is of size d×rq, and that the k-th column has a "1" located at row Π_{Z_p}(k) (with zeros everywhere else). Similarly, the shifting matrices S_{m_p(i)} and S_{n_p(i)} are encoded by

Π_{S_m}(k) = mod(k−1+m, d) + 1, for k = 1, 2, …, d, (6)

and

Π_{S_n}(k) = mod(k−1+n, d) + 1, for k = 1, 2, …, d, (7)

respectively (a circular shift by m rows, or n columns, sends the k-th column's "1" to the indicated row).
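As an illustration of such encodings, the following MATLAB sketch (ours) builds Z_p directly from (5):

    % Build the zero-filling matrix Z_p from its row-index code (5).
    Zp = zeros(d, r*q);
    for k = 1:r*q
        Zp((k-1)*p + 1, k) = 1;   % sole '1' of column k at row (k-1)p+1
    end
    % Zp*v inserts p-1 zeros between consecutive elements of v
    % (and p-1 zeros after the last element).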
Using the fact that the matrix equation B = CAD* can be reshaped into the vector equation vec(B) = (D⊗C)vec(A), we rewrite (3) to express u ∈ ℝ^(d²) (the vectorized HR image) in terms of the vectorized PPCs, f_i = vec(F_i) ∈ ℝ^(r²q²):

u = Σ_{i=1}^{p²} (S_{n_p(i)} Z_p ⊗ S_{m_p(i)} Z_p) f_i. (8)

By the mixed-product property of the Kronecker product, we have

(S_{n_p(i)} Z_p) ⊗ (S_{m_p(i)} Z_p) = (S_{n_p(i)} ⊗ S_{m_p(i)})(Z_p ⊗ Z_p). (9)

Defining

S_p^(i) ≜ S_{n_p(i)} ⊗ S_{m_p(i)} ∈ ℝ^(d²×d²), (10)

and

Z̃_p ≜ Z_p ⊗ Z_p ∈ ℝ^(d²×r²q²), (11)

and using (8)-(11), we have

u = Σ_{i=1}^{p²} S_p^(i) Z̃_p f_i. (12)
Note that (12) is the vector equation form of the matrix equation (3).
Now, assume we have available a LR (primary) dictionary, Ψ ∈ ℝ^(r²q²×M), whose M atoms (columns) span the PPPCs of u, i.e.

f_i = Ψα_i for i = 1, 2, …, p², (13)

where α_i ∈ ℝ^M is the representation of the i-th PPPC in terms of Ψ. Equation (12) can therefore be rewritten as

u = Σ_{i=1}^{p²} S_p^(i) Z̃_p Ψ α_i. (14)

Rewriting (14) in matrix-vector form, we get

u = [S_p^(1) Z̃_p Ψ   S_p^(2) Z̃_p Ψ   …   S_p^(p²) Z̃_p Ψ] α. (15)

Or more concisely,

u = Vα, (16)

where

V ≜ [S_p^(1) Z̃_p Ψ   S_p^(2) Z̃_p Ψ   …   S_p^(p²) Z̃_p Ψ], (17)

and

α ≜ [α_1* α_2* … α_{p²}*]*. (18)

Thus, the matrix V ∈ ℝ^(d²×p²M) is a basis for the subspace of all HR images whose PPPCs are spanned by Ψ, and α ∈ ℝ^(p²M) is the representation of the HR image in terms of V. In other words, if we let 𝒱 denote the subspace of all HR images of dimension d² whose PPPCs are in span(Ψ), then

𝒱 = span(V). (19)
Equation (17) gives us two options as to how to construct V. The first (needlessly computationally expensive) option is to construct the matrices {S_p^(i)}, i = 1, 2, …, p², and Z̃_p (using (5)-(7), (10), (11)), and then carry out the matrix multiplications in (17).
The second option is to simply perform the operations of zero-filling and shifting on the atoms of Ψ, without actually using any matrix operators. In other words, V can be constructed from Ψ without a single floating point operation. For example, the following MATLAB code constructs V from Ψ without any calculations.
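The original listing is not reproduced here; the following MATLAB sketch is our reconstruction of such a routine (variable names are illustrative). It simply reshapes each atom of Ψ and writes it into the zero-filled, shifted position of the corresponding PPC:

    % Construct V from Psi by zero-filling and shifting (no arithmetic).
    % Psi is (r^2*q^2) x M; d = r*p*q.
    d = r*p*q;
    M = size(Psi, 2);
    V = zeros(d^2, p^2*M);
    for i = 1:p^2
        mp = floor((i-1)/p);                  % row offset of the i-th PPC
        np = mod(i-1, p);                     % column offset of the i-th PPC
        for j = 1:M
            F = reshape(Psi(:,j), r*q, r*q);  % 2D form of the j-th atom
            U = zeros(d, d);
            U(mp+1:p:end, np+1:p:end) = F;    % zero-fill and shift at once
            V(:, (i-1)*M + j) = U(:);         % place as a column of V
        end
    end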
However, besides allowing for a formal description of V, the matrices {S_p^(i)} and Z̃_p allow us to rewrite (17) as

V = [S_p^(1) Z̃_p   S_p^(2) Z̃_p   …   S_p^(p²) Z̃_p] (⊕_{i=1}^{p²} Ψ). (20)

If we define

P ≜ [S_p^(1) Z̃_p   S_p^(2) Z̃_p   …   S_p^(p²) Z̃_p], (21)

and

V_Ψ ≜ ⊕_{i=1}^{p²} Ψ, (22)

then

V = P V_Ψ. (23)
Now, instead of wastefully computing the Kronecker product in (11), it can be verified (using the code (5) and equation (11)) that Z̃_p = Z_p ⊗ Z_p is encoded by the row-indices

Π_{Z̃_p}(k) = ((k₂−1)p)d + (k₁−1)p + 1, (24)

where k = (k₂−1)rq + k₁, with k₁, k₂ ∈ {1, 2, …, rq}.
Similarly, the row-indices encoding the i-th (permutation) matrix, S_p^(i) = S_{n_p(i)} ⊗ S_{m_p(i)}, are

Π_{S_p^(i)}(k) = mod(k₂−1+n_p(i), d)·d + mod(k₁−1+m_p(i), d) + 1, (25)

where k = (k₂−1)d + k₁, with k₁, k₂ ∈ {1, 2, …, d}.
Now, let's turn our attention to the matrix P ∈ ℝ^(d²×d²) defined in (21). Combining (24) and (25), the k-th column of P (equivalently, the k′-th column of its i-th block, with i = ⌈k/(r²q²)⌉ and k′ = k − (i−1)r²q² = (k₂−1)rq + k₁) has its sole "1" at row

Π_P(k) = ((k₂−1)p + n_p(i))·d + (k₁−1)p + m_p(i) + 1, (26)

where m_p and n_p are defined in (4).
Equations (22), (23) and (26) offer an alternate formal and efficient description of how to construct V (in lieu of the computationally expensive option of using equations (4)-(7), (10), (11)). In particular, note that V_Ψ (defined in (22)) is just a matrix that contains p² replicas of Ψ on its diagonal (no calculations are required to construct V_Ψ). Additionally, it can be proven that P is a permutation matrix, and therefore V can be constructed simply by shuffling the rows of V_Ψ according to Π_P(k).
By definition, each column and each row in a permutation matrix must have a single "1" element. In other words, if one can verify that the row-indices Π_P(k) are all unique (no repetitions), then P is a permutation matrix. An easier approach to show that P is a permutation matrix is to use the fact that a matrix A is a permutation matrix iff A is square, binary and orthogonal (A*A = I). Now, since P is square and binary, proving that P*P = I proves that P is a permutation matrix. In other words, in light of the block structure of P, we just need to show that

(S_p^(i) Z̃_p)*(S_p^(i) Z̃_p) = I for i = 1, 2, …, p², (27)

and

(S_p^(i) Z̃_p)*(S_p^(j) Z̃_p) = O for i ≠ j. (28)

The first part is trivial. In particular, S_p^(i) is itself a permutation matrix (a Kronecker product of two circular shift matrices), so S_p^(i)*S_p^(i) = I, and the columns of Z̃_p are orthonormal, so Z̃_p*Z̃_p = I, which proves (27).

As to the second part, recall that the function of zero-filling and shifting is to perform interlacing of PPCs using the sum in (12), which means that the non-zero elements of zero-filled and shifted PPCs cannot overlap. This implies that, for any f_i and f_j with i ≠ j,

f_i* (Z̃_p* S_p^(i)*)(S_p^(j) Z̃_p) f_j = 0,

i.e. the columns of S_p^(i) Z̃_p are orthogonal to the columns of S_p^(j) Z̃_p, which proves (28).
Let {g_i}, i = 1, 2, …, q², denote the (vectorized) SPPCs of u, corresponding to decimation by a factor of q, and assume we have available a secondary LR dictionary, Φ ∈ ℝ^(r²p²×N), whose N atoms span the SPPCs, i.e.

g_i = Φβ_i for i = 1, 2, …, q², (29)

where {β_i}, i = 1, 2, …, q², are the corresponding representations (each β_i ∈ ℝ^N). Also, let 𝒲 denote the subspace of all HR images of dimension d² whose SPPCs are in span(Φ), and let W ∈ ℝ^(d²×q²N) denote its basis matrix, i.e.

𝒲 = span(W). (30)
Following the analysis presented in the previous section, we can construct W from Φ simply by exchanging p and q. In particular, the following equations define W:

W = Q W_Φ, (31)

where

W_Φ ≜ ⊕_{i=1}^{q²} Φ, (32)

and Q ∈ ℝ^(d²×d²) is the permutation matrix encoded by the row-indices

Π_Q(k) = ((k₂−1)q + n_q(i))·d + (k₁−1)q + m_q(i) + 1, (33)

obtained from (26) by exchanging p and q, with

m_q(i) = ⌊(i−1)/q⌋ and n_q(i) = mod(i−1, q). (34)

Now, similarly to (16), the HR image can be expressed as

u = Wβ, (35)

where

β ≜ [β_1* β_2* … β_{q²}*]* ∈ ℝ^(q²N). (36)
Now we are ready to ask the following question: could it be that V and W are all we need to find u? If the answer is yes, then, effectively, the pair of LR dictionaries Ψ and Φ are all that is required to find the HR image.
To answer that question, we begin by first combining equations (16) and (35) into one equation
Vα=Wβ, (37)
which we rewrite as

[V W][α* −β*]* = O. (38)

Equation (38) suggests that we need to study the nullspace of the augmented matrix [V W] ∈ ℝ^(d²×(p²M+q²N)). To this end, let

Z ≜ null([V W]), (39)

and

𝒩 = span(Z), (40)
and assume that the sought-after HR image is large enough. Specifically, assume

d² = r²p²q² ≥ p²M + q²N. (41)
Now, define the intersection subspace

𝒮 ≜ 𝒱 ∩ 𝒲, (42)

and let U denote the basis for the intersection subspace 𝒮, i.e.

𝒮 = span(U). (43)

Since u ∈ 𝒱 and u ∈ 𝒲, then u ∈ 𝒮, which means that if dim(𝒮) = 1, then finding U is tantamount to finding (a scaled version of) u. However, thus far, all we know about the dimensionality of the intersection subspace is

1 ≤ dim(𝒮) ≤ min(p²M, q²N), (44)

where the lower bound is due to the fact that u ∈ 𝒮, while the upper bound is due to (42).
To gain additional insight, we need to turn our attention to the nullspace. In particular, let Z be partitioned into a column of two matrices, Z_V ∈ ℝ^(p²M×dim(𝒩)) and Z_W ∈ ℝ^(q²N×dim(𝒩)), i.e.

Z = [Z_V* −Z_W*]*, (45)

then

U = V Z_V. (46)

Equivalently,

U = W Z_W. (47)

Therefore, if V has full rank p²M, or, equivalently, W has full rank q²N, then

dim(𝒮) = dim(𝒩). (48)
To verify that V is full rank, recall equations (22), (23) and that P*P = I, hence

V*V = ⊕_{i=1}^{p²} Ψ*Ψ. (49)

Therefore, rank(V) = rank(V*V) = p²·rank(Ψ*Ψ) = p²·rank(Ψ) = p²M. Similarly,

W*W = ⊕_{i=1}^{q²} Φ*Φ, (50)

and thus rank(W) = q²N.
Now equation (48) says that the dimensionality of the intersection subspace is equal to the nullity of the augmented matrix [V W]. Of course, the nullity of a matrix is simply the multiplicity of its zero singular values, so the question (which we posed before equation (37)) boils down to: what are the necessary and sufficient conditions for [V W] to have no more than one zero singular value?
The first three necessary conditions are obvious and are already satisfied. Namely, we obviously need the augmented matrix to have more rows than columns (41). Also, the columns of V must be linearly independent (rank(V) = p²M). Similarly, the columns of W must be linearly independent as well (rank(W) = q²N).
The fourth necessary condition is also obvious, and it pertains to the span of each of the pair of LR dictionaries Ψ and Φ. In particular, let {f̃_i}, i = 1, 2, …, p², denote the PPPCs, and {g̃_i}, i = 1, 2, …, q², the SPPCs, of an image ũ such that the PPPCs are all in span(Ψ) and the SPPCs are all in span(Φ) (i.e. the pair (Ψ, Φ) 'simultaneously accommodates' the PPCs of ũ); the fourth condition is that there be at most one such image ũ (up to a scale factor), since any two linearly independent images simultaneously accommodated by the pair would imply dim(𝒮) ≥ 2.
Are there any more conditions? Ostensibly, the answer would require the daunting task of examining the very sparse structure of both V and W; particularly the interaction of both sparse structures within the augmented matrix [V W]. Instead, we derive equivalent forms of (38) to see if the associated matrices are more revealing.
We proceed with the pre-multiplication of both sides of (38) with [V W]*, to obtain

[ V*V  V*W
  W*V  W*W ] [α* −β*]* = O. (51)
For convenience, and without loss of generality, we assume that atoms of a LR dictionary are orthonormal, i.e.
Ψ*Ψ=I, (52)
and
Φ*Φ=I. (53)
Using (49) and (52) we get
V*V=I, (54)
Similarly, using (50) and (53),
W*W=I. (55)
Consequently, equation (51) is rewritten as

[ I    V*W
  W*V  I   ] [α* −β*]* = O. (56)
Now the top part of (56) gives us the equation
α=V*Wβ, (57)
and the bottom part reveals that
β=W*Vα. (58)
Plugging (58) in (57), we get
(I−V*WW*V)α=O. (59)
Note that V*WW*V = (W*V)*W*V, and therefore V*WW*V is a symmetric positive semidefinite (PSD) matrix with singular values between 0 and 1 (the upper limit on the largest singular value is due to (54) and (55)). Likewise, the matrix (I−V*WW*V) is also symmetric, PSD, with singular values between 0 and 1. Moreover, since (59) is derived from (51), which in turn is derived from (38), then Z_V = null(I−V*WW*V). In other words, dim(𝒮) is equal to the multiplicity of the smallest ("0") singular value of (I−V*WW*V). Equivalently, dim(𝒮) is equal to the multiplicity of the largest ("1") singular value of V*WW*V, which is the same as the multiplicity of the largest singular value of W*V, i.e.
dim(𝒮) = multiplicity of σ₁(W*V), (60)
where σ1(.) denotes the largest singular value of a matrix.
So let's take a look at the structure of W*V and see if it could reveal anything about the multiplicity of its singular values (particularly, the largest one). Given equations (22), (23) and equations (31), (32) we can write
W*V = (⊕_{j=1}^{q²} Φ)* Q* P (⊕_{i=1}^{p²} Ψ) = W_Φ* (Q*P) V_Ψ. (61)

If we only consider the product Q*P (a product of two permutation matrices, itself a permutation matrix), its pattern is determined by the interplay between the two interlacing patterns corresponding to the factors p and q, and this interplay is richest when GCD(p,q) = 1.
For practical reasons that shall become apparent later, we are only interested in the case q=p+1, which automatically satisfies GCD(p,q)=1. Said differently, we are only interested in what the structure of W*V can reveal about the multiplicity of σ1(W*V) given the assumption (1). Before we proceed, let us first make the following definitions.
Let A_j^i ∈ ℝ^(r²×M) denote the matrix whose columns are the (vectorized) decimated and shifted atoms of Ψ,

A_j^i = D_q R_{k_q(i),k_q(j)} Ψ, (62)

where D_q ∈ ℝ^(r²×r²q²) denotes (2D) decimation by a factor of q, R_{k_q(i),k_q(j)} ∈ ℝ^(r²q²×r²q²) denotes (2D) circular shifting by k_q(i) rows and k_q(j) columns, and

k_q(i) = mod(q−i, q). (63)

Said differently, if we obtain the 2D form of each atom (column) in Ψ, and decimate (by q) each (2D) atom starting at the entry located in column k_q(j)+1 and row k_q(i)+1, and then vectorize the decimated atoms and stack them next to each other, we get A_j^i.

Similarly,

B_j^i = D_p R_{k_p(i),k_p(j)} Φ, (64)

where D_p ∈ ℝ^(r²×r²p²) denotes (2D) decimation by a factor of p, and

k_p(i) = mod(p−i, p). (65)

Now, it can be verified that when (1) is satisfied, W*V ∈ ℝ^(q²N×p²M) is a q×p block-Toeplitz matrix,

W*V = [ T_0      T_{−1}   …  T_{1−p}
        T_1      T_0      …  T_{2−p}
        ⋮        ⋮        ⋱  ⋮
        T_{q−1}  T_{q−2}  …  T_{q−p} ], (66)

with each block T_i ∈ ℝ^(qN×pM), 1−p ≤ i ≤ q−1, being a q×p block-Toeplitz matrix itself,

T_i = [ C_0^i      C_{−1}^i   …  C_{1−p}^i
        C_1^i      C_0^i      …  C_{2−p}^i
        ⋮          ⋮          ⋱  ⋮
        C_{q−1}^i  C_{q−2}^i  …  C_{q−p}^i ], (67)

where the j-th sub-block C_j^i ∈ ℝ^(N×M), 1−p ≤ j ≤ q−1, is given by

C_j^i = B_j^i* A_j^i, (68)

and A_j^i and B_j^i are as defined in (62)-(65).
In other words, when (1) is satisfied, it can be verified that W*V is a Toeplitz-block-Toeplitz (a 2-level Toeplitz) matrix generated by the (p+q−1)² = 4p² sub-blocks {C_j^i}, 1−p ≤ i, j ≤ q−1, as detailed by equations (62)-(68). Moreover, equations (62)-(65) and (68) tell us that all these sub-blocks are unique (i.e. C_j^i ≠ C_l^k unless i = k and j = l).
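To make this concrete, the following MATLAB sketch (our reconstruction, with illustrative names) computes G = W*V directly from the pair (Ψ, Φ) via (62)-(68), without ever forming V or W:

    function G = build_WtV(Psi, Phi, r, p)
    % Build G = W'*V from the dictionary pair via (62)-(68); assumes q = p+1.
    q = p + 1;
    M = size(Psi, 2);  N = size(Phi, 2);
    % Sub-blocks C{i+p, j+p} = (B_j^i)' * (A_j^i), for 1-p <= i,j <= q-1.
    C = cell(p+q-1, p+q-1);
    for i = 1-p : q-1
        for j = 1-p : q-1
            kqi = mod(q-i, q);  kqj = mod(q-j, q);   % (63)
            kpi = mod(p-i, p);  kpj = mod(p-j, p);   % (65)
            A = zeros(r^2, M);                       % (62): decimate Psi by q
            for m = 1:M
                F = reshape(Psi(:,m), r*q, r*q);
                D = F(kqi+1:q:end, kqj+1:q:end);
                A(:,m) = D(:);
            end
            B = zeros(r^2, N);                       % (64): decimate Phi by p
            for n = 1:N
                F = reshape(Phi(:,n), r*p, r*p);
                D = F(kpi+1:p:end, kpj+1:p:end);
                B(:,n) = D(:);
            end
            C{i+p, j+p} = B' * A;                    % (68)
        end
    end
    % Assemble the Toeplitz-block-Toeplitz matrix per (66)-(67).
    G = zeros(q^2*N, p^2*M);
    for bi = 1:q
        for bj = 1:p
            for si = 1:q
                for sj = 1:p
                    rows = ((bi-1)*q + si - 1)*N + (1:N);
                    cols = ((bj-1)*p + sj - 1)*M + (1:M);
                    G(rows, cols) = C{(bi-bj)+p, (si-sj)+p};
                end
            end
        end
    end
    end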
So now, instead of wondering about other conditions we might need for the (sparse) augmented matrix of (38) to have a nullity of exactly 1, the question becomes: given that q=p+1, are the aforementioned necessary conditions all we need for the (dense) Toeplitz-block-Toeplitz matrix W*V to have a distinct largest singular value?
The relevant literature is rich with analysis and properties pertaining to Toeplitz matrices. However, to the best of our knowledge, when it comes to the question of the multiplicity of singular values of Toeplitz-structured matrices, the answer can only be given in the context of asymptotic distributions of singular values (and under assumptions that do not apply in our case). Therefore, we state the following empirically verifiable result.
Empirical Result 1: Given that the basic assumption (1) is satisfied, i.e. q = p+1, the following conditions are both necessary and sufficient for σ₁(W*V) to be distinct: 1) the augmented matrix [V W] has more rows than columns (41); 2) rank(V) = p²M; 3) rank(W) = q²N; and 4) the pair (Ψ, Φ) simultaneously accommodates the PPCs of at most one image (up to a scale factor).
Note that if the pair (Ψ, Φ) simultaneously accommodates the PPCs of exactly one image, then σ₁(W*V) = 1 and, equivalently, dim(𝒮) = 1.
In the previous two sections, we have seen that the intersection subspace of two HR subspaces based on two (different-resolution) LR dictionaries is at most 1-dimensional (if the four conditions listed above are satisfied). Relying on this knowledge, we now turn our attention to finding the HR image.
As is well known, if dim(𝒮) = 1, the easiest (and most computationally efficient) way to find a solution to (37) is to simply compute the right singular vector associated with the smallest ("0") singular value of [V W]. Similarly, when (52) and (53) are satisfied, the solution to the alternative formulation (59) is the singular vector of (I−V*WW*V) associated with its smallest ("0") singular value, which is also the right singular vector associated with the largest ("1") singular value of W*V.
In practice, however, the pair of LR dictionaries are 'learned' from training (example) LR images, and thus they would never simultaneously exactly accommodate the PPCs of the sought-after HR image (or any other image, for that matter). In other words, in practice the intersection subspace is always trivial, even when the conditions of Empirical Result 1 are satisfied (since the pair of LR dictionaries do not, simultaneously, exactly accommodate the PPCs of any image).
Nevertheless, since σ₁(W*V) < 1 is distinct, the optimal (in the Frobenius norm sense) approximate solution to (59) remains the right singular vector of W*V associated with σ₁(W*V). Alternatively, since the smallest singular value of [V W] is distinct iff σ₁(W*V) is distinct, the optimal solution of (37) remains the right singular vector of [V W] associated with its smallest singular value.
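In code, and under the orthonormality assumptions (52) and (53), this solution can be sketched as follows (our illustration; G = W*V, and the interlacing anticipates (75) below):

    % Estimate the HR patch from G = W'*V (orthonormal dictionaries).
    [~, ~, Vr] = svd(G);              % singular values sorted descending
    alpha = Vr(:, 1);                 % right singular vector of sigma_1(G)
    alpha = reshape(alpha, M, p^2);   % columns are alpha_i, i = 1..p^2
    u_hat = zeros(d, d);
    for i = 1:p^2
        f = Psi * alpha(:, i);                    % i-th estimated PPPC (13)
        mp = floor((i-1)/p);  np = mod(i-1, p);
        u_hat(mp+1:p:end, np+1:p:end) = reshape(f, r*q, r*q);
    end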
Of course, our ultimate goal of solving (37) (or its equivalent formulation (59)) is to estimate u. However, an optimal solution to (37) does 'not' necessarily guarantee the optimality of the estimation of u. To explain with an (extreme) example, let u₁ and u₂ be two images such that

‖u − u₁‖ ≪ ‖u − u₂‖,

i.e. u₁ is a much better approximation of u than is u₂, and suppose that the available pair of LR dictionaries simultaneously accommodates the PPCs of u₂ exactly, while the same pair of dictionaries only approximates the PPCs of u₁ (albeit very well). In this case, the exact solution of (37) will lead to the inferior approximation u₂.
Again, in practice, the pair of (learned) dictionaries never do simultaneously accommodate the PPCs of any image, but the previous example helps to highlight the fact that solving (37) can be completely blind to whatever constitutes an optimal estimation of u. Put differently, if we are to rely entirely on (37) (to estimate the representation of the HR image in terms of the HR bases), then the learning process of the pair of LR dictionaries must be carefully designed to try to avoid picking inferior approximations of our sought-after HR image.
Before we suggest an alternative to requiring a painstaking learning process of the pair of LR dictionaries, we first note that the solution to the following optimization problem

min_{α,β} ‖Vα − Wβ‖₂²  subject to ‖α‖₂ = 1, (69)

is the same optimal solution to (59), i.e. the solution to (69) is also the right singular vector of W*V associated with σ₁(W*V).
Now consider the following modified optimization problem

min_{α,β} ‖Vα − Wβ‖₂² + μ‖Ψℳα − x̄‖₂²  subject to ‖α‖₂ = 1, (70)

where x̄, the mean of the PPPCs of the HR image, is assumed to be known (for now), and μ is a control parameter. By adding the term μ‖Ψℳα − x̄‖₂², we require the solution to be such that the mean of the estimated PPPCs is close to x̄. Of course, the mean of the estimated PPPCs is

(1/p²) Σ_{i=1}^{p²} Ψα_i = Ψℳα, (71)

where ℳ ∈ ℝ^(M×p²M) is a 1×p² block matrix, scaled by the factor 1/p², with all p² blocks being equal to the identity matrix (of dimension M).

In practice, the mean of the PPPCs is unknown, but the available LR image x is a close surrogate, and therefore we are now ready to write a practical version of optimization problem (70):

min_{α,β} ‖Vα − Wβ‖₂² + μ‖Ψℳα − x‖₂²  subject to ‖α‖₂ = 1, (72)

which has the closed-form solution (see the Equation Solution Section below)

α̂ = μ[μℳ*ℳ + (1−σ)I − V*WW*V]⁻¹(ℳ*Ψ*x), (73)

where σ is the smallest singular value of the matrix

I − V*WW*V, (74)

and ℳ*Ψ*x ∈ ℝ^(p²M) is simply the p²-fold replication of Ψ*x (scaled by 1/p²).
After estimating the representation of the HR image in terms of the HR basis V, all that is left to get an estimate of the HR image is to use equation (16), i.e. û = Vα̂. However, recall (13) and (18) and partition α̂ into p² vectors, each of length M, i.e.

α̂ = [α̂_1* α̂_2* … α̂_{p²}*]*.

Therefore,

û = Vα̂ = interlace {Ψα̂_i}, i = 1, 2, …, p². (75)
Consequently, the construction of neither V nor W is required here. Only the product W*V is needed, which can be very efficiently computed using equations (62)-(68).
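For illustration, the closed form (73) can be sketched as follows (our own code; G = W*V, x is the vectorized reference LR patch, and the matrix ℳ is formed explicitly for clarity even though its action is a simple block average):

    % Closed-form solution (73), using only G = W'*V and Psi'*x.
    Mc    = repmat(eye(M), 1, p^2) / p^2;            % the matrix M in (71)
    sigma = min(svd(eye(p^2*M) - G' * G));           % cf. (74)
    A     = mu * (Mc' * Mc) + (1 - sigma) * eye(p^2*M) - G' * G;
    alpha = mu * (A \ (Mc' * (Psi' * x)));           % equation (73)
    % The HR patch estimate follows by interlacing {Psi*alpha_i}, as in (75).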
It must be re-emphasized, however, that problem formulation (72) is only one possible answer to the following question: how do we find a solution to (37) that favors an optimal approximation of our sought-after HR image? In other words, the HR basis matrices V and W only outline a structured subspace framework within which we have complete freedom to seek an estimate of u. Indeed, one can come up with a different objective function, combined with a different set of constraints (or lack thereof), in lieu of (72), but whatever optimization problem one chooses to solve, the matrices V and W will always be in the picture, one way or another.
In that sense, the structured subspace framework (37) can be seen as a conduit through which the pair of ‘LR’ dictionaries (Ψ,Φ) can generate (an estimate of) our ‘HR’ image. Moreover, we shall see next that the same framework provides the foundation for training of said ‘pair’ of LR dictionaries from a ‘single’ LR sequence.
Additionally, it goes without saying that any permutations of equation (37) are exactly equivalent to equation (37). Said differently, since a linear system of equations (e.g. (37)) remains exactly the same system of equations if you simply change the ordering of its equations, then any reformulation of V and W that amounts to permuting their rows does not change the solution space for which equation (37) was set up.
Thus far, we have introduced the (unorthodox) notion that a pair of LR dictionaries can indeed be all we need for estimating a HR image, using the structured subspace model. Moreover, we proposed infusing an element of arbitration to the solution framework, to avoid practical difficulties that might arise with learning a pair of LR dictionaries. In this section we show how the same subspace structure can additionally be used for training pairs of LR dictionaries in the context of estimating HR images from a single LR sequence of images.
Before we proceed, we would like to highlight a notational change. Since we estimate the k-th HR frame, u_k, in a sequence of K frames, 𝒰 = [u_1 u_2 … u_K], by estimating patches of it (patch-based processing), we will add the superscript (s,ℓ) to our symbols to indicate that they are associated with estimating the ℓ-th HR patch, u_k^(s,ℓ) ∈ ℝ^(d²), of the s-th partitioning of the k-th frame. We also write

u_k ≡ 𝒞({u_k^(s,ℓ)})

to indicate that the k-th HR frame is estimated by combining all SL estimated patches (L patches per partitioning, S partitionings) using some combination process 𝒞.
Now, let ℒ denote a learning function (or procedure) that learns a LR dictionary (with orthonormal atoms), using a LR training sample and a training set of LR image patches. In particular, if x_k denotes the k-th LR frame in a sequence of LR frames, 𝒳 = [x_1 x_2 … x_K], that corresponds to (2D) decimation of the HR sequence by a factor of p, i.e. 𝒳 = 𝒟_p(𝒰), then we write

Ψ_k^(s,ℓ) ≡ ℒ(x_k^(s,ℓ), 𝒫_k^(s,ℓ)) (76)

to signify that the LR dictionary Ψ_k^(s,ℓ) ∈ ℝ^(r²q²×M) is learned using the LR patch x_k^(s,ℓ) as a training sample, together with a training set of LR patches 𝒫_k^(s,ℓ), where

𝒫_k^(s,ℓ) ≡ 𝒯(𝒳, x_k^(s,ℓ)) (77)

indicates that the training set of patches is extracted from 𝒳, around x_k^(s,ℓ), using some strategy 𝒯.

Similarly, the same learning function can be used to learn the secondary dictionary Φ_k^(s,ℓ) ∈ ℝ^(r²p²×N) from a secondary LR sequence 𝒴 (corresponding to decimation of the HR sequence by a factor of q).

Following the same notational change for estimates corresponding to patches, the symbol V_k^(s,ℓ) denotes the HR basis matrix constructed from Ψ_k^(s,ℓ), i.e. V_k^(s,ℓ) = P(⊕_{i=1}^{p²} Ψ_k^(s,ℓ)), and similarly for W_k^(s,ℓ). We also write

(W*V)_k^(s,ℓ) ≡ 𝒢(Ψ_k^(s,ℓ), Φ_k^(s,ℓ)), (78)

to indicate that the matrix product (W*V)_k^(s,ℓ) is constructed from the learned LR dictionary pair (Ψ_k^(s,ℓ), Φ_k^(s,ℓ)), using equations (62)-(68).
Normally, however, an imaging system would only have one sensor, whose output is 𝒳, with p being simply the desired upsampling factor. In other words, a secondary sequence 𝒴 does not exist and, therefore, neither does Φ_k^(s,ℓ), nor (W*V)_k^(s,ℓ). This obstacle can nevertheless be overcome by starting the solution with an initial estimate of the HR sequence. Specifically, let

𝒰^(0) ≡ ℐ(𝒳) (79)

denote the initial estimate of the unknown HR sequence, based on the available LR sequence 𝒳, using some estimation process ℐ; then the first estimate of the secondary sequence is given by

𝒴^(1) ≡ 𝒟_q(𝒰^(0)), (80)

and the first version of the secondary LR dictionary can be learned, Φ_k^(s,ℓ,1) ≡ ℒ(y_k^(s,ℓ,1), 𝒬_k^(s,ℓ,1)), where 𝒬_k^(s,ℓ,1) and y_k^(s,ℓ,1) denote the training set of LR patches, and the training sample, respectively, extracted from 𝒴^(1).
With both 𝒳 and an initial guess 𝒰^(0) of the HR sequence at hand, our structured subspace model can be used in an iterative joint estimation of the HR sequence 'and' training of secondary LR dictionaries, as follows.
For the current (iteration t) estimate of the ℓ-th HR patch of the s-th partitioning of the k-th frame, first compute (recall (73))

α̂ = μ[μℳ*ℳ + (1−σ)I − G*G]⁻¹(ℳ*(Ψ_k^(s,ℓ))* x_k^(s,ℓ)), (81)

where G ≜ (W*V)_k^(s,ℓ,t), with σ being the smallest singular value of (recall (74))

I − G*G, (82)

and obtain the current estimate of the HR patch via (recall (13), (16) and (18))

û_k^(s,ℓ,t) = interlace {Ψ_k^(s,ℓ) α̂_i}, i = 1, 2, …, p², (83)

where (W*V)_k^(s,ℓ,t) ≡ 𝒢(Ψ_k^(s,ℓ), Φ_k^(s,ℓ,t)), and Φ_k^(s,ℓ,t) ≡ ℒ(y_k^(s,ℓ,t), 𝒬_k^(s,ℓ,t)), with 𝒬_k^(s,ℓ,t) and y_k^(s,ℓ,t) being the training set, and training sample, respectively, extracted from the current secondary LR sequence 𝒴^(t) = 𝒟_q(𝒰^(t−1)).
After computing all SL current estimates of patches, {û_k^(s,ℓ,t)}, corresponding to all S partitionings of the k-th frame, find the current estimate of the k-th HR frame, u_k^(t) ≡ 𝒞({û_k^(s,ℓ,t)}), for k = 1, 2, …, K, to get a current estimate 𝒰^(t) = [u_1^(t) u_2^(t) … u_K^(t)] of the HR sequence.
Repeat until a prescribed number of iterations, T, has been reached to get the final estimate of the HR sequence,

𝒰^(T) = [u_1^(T) u_2^(T) … u_K^(T)]. (84)
Refer to Table 2 and the accompanying figure for a recap of the proposed baseline.
At this point, it is worthwhile to note that all KSL estimates, {û_k^(s,ℓ,t)}, 1 ≤ k ≤ K, 1 ≤ s ≤ S, and 1 ≤ ℓ ≤ L, needed to create the HR sequence 𝒰^(t), are computed completely independently from each other, and hence they can all be concurrently computed. Namely, if the computing hardware allows it, it is possible to parallel-process all KSL patches at once (per iteration).
Moreover, it should be noted that the proposed baseline is the most straightforward and least computationally expensive implementation of the structured subspace framework. However, more complex baselines can still benefit from the structured subspace framework, by jointly estimating HR patches, for example. In other words, the solution baseline can be devised such that the estimation of any patch is made dependent on the estimation of other patches (e.g. neighboring patches within the same frame, or patches across multiple estimates of the same frame, or even patches across different frames). However, we do not believe that the improvement in results, if any, would justify the added complexity. Specifically, the real power of our solution lies in the unprecedented access it provides to very narrow training sets (that are extracted from a naturally highly correlated sequence of LR images).
In the previous section we described a general baseline for iterative estimation of a HR sequence from a LR sequence, by learning pairs of LR dictionaries within a structured subspace framework. However, many details were (intentionally) left out. Specifically, which estimation process ℐ are we going to use to obtain an initial guess of the HR sequence? How about the learning function ℒ? Regarding those local training sets, what are the specifics of their extraction (𝒯) from a LR sequence? Which combination process 𝒞 is going to be used to piece together all SL estimated HR patches per frame? Indeed, what are the S different possible ways to partition a frame into patches?
The discussion of such details was postponed so as to help appreciate the cornerstone role of the structured subspace model, and to emphasize the level of freedom in the design of the baseline's remaining components. Indeed, as we shall see in the experiments section, we can get impressive results despite using some of the simplest outlines for ℐ, 𝒯, ℒ, and 𝒞. This, again, underscores the baseline's most valuable asset: the structured subspace model which, in effect, leverages a sequence of LR images as a very narrow training set for estimating HR images.
A. The Initial Guess (ℐ)
For an initial estimate (79) of the HR sequence from the available LR sequence, we choose ℐ to simply be Lanczos interpolation (by a factor of p). One might try more advanced options, but besides the added complexity, even advanced methods would not give appreciably better estimates compared to simple image interpolation methods (such as bicubic or Lanczos) when the LR sequence contains complex motion patterns and the aliasing is relatively strong.
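For instance, a minimal sketch using MATLAB's imresize (Image Processing Toolbox):

    % Initial guess: Lanczos interpolation of each LR frame by a factor of p.
    U0 = zeros(p*size(X,1), p*size(X,2), K);
    for k = 1:K
        U0(:,:,k) = imresize(X(:,:,k), p, 'lanczos3');
    end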
B. Partitioning a Frame
In its simplest form, patch-based processing works by dividing a frame we desire to estimate into patches (sub-regions), and then applying the proposed estimation process for each individual patch, independently from other patches. By simply putting each estimated patch in its place, we get an estimate of the HR frame (i.e. an estimate of a HR frame is obtained by ‘tiling’ its estimated patches).
However, multiple estimates of the same frame can be obtained by dividing the frame into overlapping patches. To elaborate, since the size of a HR patch is d×d and d = rpq, if we use an overlap of pq pixels, vertically and/or horizontally, then a HR frame can be partitioned into patches in S = r² different ways. In particular, since the location of the 'leading' patch (ℓ = 1) of the s-th partitioning determines the locations of the remaining patches (since patches are tiled next to each other, vertically and horizontally, per partitioning), we shall give a description of the location of the leading HR patch corresponding to partitioning s, as follows [2]:

u_k^(s,1) = u_k(ν(s)pq+1 : ν(s)pq+d, h(s)pq+1 : h(s)pq+d), (85)

where

ν(s) = ⌊(s−1)/r⌋ and h(s) = mod(s−1, r), (86)

and A(r1:r2, c1:c2) denotes the submatrix of A, from row r1 to row r2, and column c1 to column c2.
Therefore, the corresponding leading LR patch x_k^(s,1) of the k-th LR frame x_k is

x_k^(s,1) = x_k(ν(s)q+1 : ν(s)q+rq, h(s)q+1 : h(s)q+rq). (87)

Similarly, the corresponding leading LR patch y_k^(s,1,t) of the k-th (secondary) LR frame y_k^(t) is

y_k^(s,1,t) = y_k^(t)(ν(s)p+1 : ν(s)p+rp, h(s)p+1 : h(s)p+rp). (88)
C. Extracting Local Training Sets (𝒯)
We follow the simple procedure described in [2], wherein (overlapping) LR patches are extracted from within the spatiotemporal neighborhood of the patch x_k^(s,ℓ), as follows.
The result is a (local) training set containing M_tr = κ(2b+1)² LR patches that are extracted from within a spatiotemporal neighborhood around x_k^(s,ℓ), with a temporal width of κ frames, and a vertical (horizontal) spatial width of (2b+1) pixels. The following pseudocode describes 𝒯 more explicitly.
Suppose x_k^(s,ℓ) = x_k(r₀+1 : r₀+rq, c₀+1 : c₀+rq); then a local training set 𝒫_k^(s,ℓ) can be extracted from 𝒳, around x_k^(s,ℓ), as follows.
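The original pseudocode is not reproduced here; the following MATLAB sketch is our reconstruction of one plausible version of 𝒯 (X holds the LR frames as a 3D array, and kappa, b are the window parameters):

    % Collect all rq x rq patches whose top-left corner lies within b pixels
    % of (r0, c0), from the kappa frames temporally nearest to frame k.
    P = zeros(r^2*q^2, kappa*(2*b+1)^2);
    m = 0;
    k1 = max(1, k - floor(kappa/2));
    frames = k1 : min(K, k1 + kappa - 1);
    for kk = frames
        for dr = -b:b
            for dc = -b:b
                r1 = r0 + dr;  c1 = c0 + dc;
                if r1 >= 0 && c1 >= 0 && ...
                   r1 + r*q <= size(X,1) && c1 + r*q <= size(X,2)
                    patch = X(r1+1 : r1+r*q, c1+1 : c1+r*q, kk);
                    m = m + 1;
                    P(:, m) = patch(:);
                end
            end
        end
    end
    P = P(:, 1:m);   % drop unused columns (patches clipped at the borders)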
Similarly, 𝒬_k^(s,ℓ,t) is extracted, around y_k^(s,ℓ,t), from 𝒴^(t), using the same exact procedure (with N_tr = M_tr = κ(2b+1)²).
D. Learning the Dictionaries (ℒ)
Dictionary learning (ℒ) can be based on feature selection (FS), feature extraction (FE), or a combination of both. In our case, FS far outweighs FE. In particular, PPCs are highly correlated signals, so learning a dictionary that represents them well is heavily dependent on choosing those features (LR patches) in the training set that capture the subtle differences between the PPCs [2].
To elaborate, since the PPCs are expected to be highly correlated with x_k^(s,ℓ), we use it as a training sample for selecting the M ≪ M_tr 'best' LR patches in 𝒫_k^(s,ℓ). To that end, we use best individual feature (BIF) selection based on the L1 distance between the normalized training sample and the normalized LR patches in 𝒫_k^(s,ℓ). In other words, we select a subset with only M LR patches out of the M_tr = κ(2b+1)² LR patches in 𝒫_k^(s,ℓ), where the elements of the selected subset are simply the normalized LR patches in the training set that have the smallest L1 distance from the normalized training sample.
Now, since some of our equations are based on the (simplifying) assumption that the dictionary atoms are orthonormal, all that is left to do after selecting the best M LR patches is to simply orthonormalize them. This can be done, for example, using the Gram-Schmidt method (QR factorization). However, we shall be using the singular value decomposition (SVD), since it keeps the door open for further dimensionality reduction if so desired. In other words, our local dictionary Ψ_k^(s,ℓ) is simply the first M left singular vectors of the selected subset. If M̃ < M is desired, then we simply keep the first M̃ left singular vectors instead.
Likewise, Φ_k^(s,ℓ,t) is the first N left singular vectors of the N-element subset selected from the (normalized) training set 𝒬_k^(s,ℓ,t), using the BIF method based on the smallest L1 distance from the normalized training sample y_k^(s,ℓ,t).
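The whole learning function ℒ can thus be sketched in a few lines (our illustration; vecnorm requires MATLAB R2017b or later):

    function Psi = learn_dictionary(P, x, M)
    % BIF selection by smallest L1 distance from the normalized training
    % sample, followed by SVD orthonormalization of the selected subset.
    Pn = P ./ vecnorm(P);               % normalize the training patches
    xn = x(:) / norm(x(:));             % normalize the training sample
    [~, idx] = sort(sum(abs(Pn - xn), 1), 'ascend');
    Sel = Pn(:, idx(1:M));              % the M 'best' LR patches
    [U, ~, ~] = svd(Sel, 'econ');       % orthonormal atoms; keeping fewer
    Psi = U(:, 1:M);                    % (M_tilde < M) reduces dimension
    end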
The dictionary learning process described here is simple, yet it works surprisingly well. It must be noted, however, that this simplicity of dictionary learning is afforded to us by the 'narrowness' of the training sets, which, in turn, is attributed to the 'high correlation' between the images in a LR sequence and the PPCs of a HR frame (recall Fact 3).
E. Combining Multiple Estimates of a HR Frame (𝒞)
Given Ψ_k^(s,ℓ) and x_k^(s,ℓ) (and Φ_k^(s,ℓ,t)) for 1 ≤ ℓ ≤ L, we can estimate û_k^(s,ℓ,t) for 1 ≤ ℓ ≤ L, and tile all estimated patches into the s-th estimate û_k^(s,t) of the k-th HR frame. Repeating this for all s (1 ≤ s ≤ S = r²), we get S = r² estimates {û_k^(s,t)}, s = 1, 2, …, r², of the same k-th frame.
So now, what do we do with all these estimates of the same frame? One of the simplest options is to average all r² estimates. We, nevertheless, choose to follow the solution proposed in [2] to get the final (per iteration t) estimate u_k^(t) of the k-th HR frame as a weighted combination of the r² estimates (89), (90), where the per-pixel weights R_w are computed as described in [2], using a smoothing kernel H and a window parameter ω (specific choices are given in the parameters section below).
Now we turn to the solution parameters, and start with the integer scalar r. Given that, in practice, p is the user-defined upsampling factor, and given that q = p+1 (1), r is the only parameter left to determine the size of the HR patch (2); in particular, r determines how small the HR patch can be (41). Before we continue, now is the time to address the question: why would we want to work on patches of a frame (patch-based processing), instead of upsampling an entire frame as a whole (i.e. with r being large enough to engulf the entire frame)?
The answer to this question is in [2], but we reproduce it here for convenience. First, the short version: the more 'dynamic' the LR sequence, the greater the need for patch-based processing, with particularly small patches. The longer version of the answer would require revisiting OBSTACLE 2 under Fact 3: when the scene changes quickly across the captured frames, only sufficiently small patches can be matched with narrow (local) training sets that are still relevant to their content.
On the other hand, larger patches (larger r) allow for the advantage of a more overdetermined (38). So to keep things simple, when the desired upscaling factor is p = 3, we fix the dimensionality of all LR dictionaries at 3q², i.e. M = N = 3q², and let r = 4. For p = 4, we use r = 3 and M = N = 2q².
Now, how large can a local training set be (recall N_tr = M_tr = κ(2b+1)²)? Ideally, we would want to search for the 'best' LR patches in as large a spatiotemporal neighborhood as possible, i.e. κ and b both can be as large as possible. However, even though BIF selection based on the L1 distance from a training sample is a very low complexity FS policy, with more and more LR patches to choose from, the computational cost of total FS (performed for all patches) can get quite significant if κ and b are too large. For that reason, we pick κ = 30 and b = 5.
Finally, for combining multiple estimates of the same frame (90), we choose H to be a Gaussian kernel with size 5 and standard deviation of 1, and we pick ω=3.
Recall that equations (81) and (82) are simply the patch-based, iteration-dependent versions of equations (73) and (74), respectively. Yet, the control parameter μ in these equations appears patch-independent as well as iteration-independent. Can this indeed be the best strategy for selecting μ: to use a fixed value across all patches and throughout all iterations?
First, recall that μ determines how close the mean of the PPCs of the estimated image should be to the reference LR image. Therefore, if μ is too large, we run the risk of an estimated HR image whose PPCs are too close to each other (a solution with some residual aliasing). On the other hand, in practice, if μ is not high enough, the end result can be too smooth, with low contrast, especially for those parts of the scene that are too dynamic (large scene changes across captured frames). In other words, instead of using the symbol μ in equations (81) and (82), we could use μ_k^(s,ℓ,t) to account for the patch and iteration dependency in selecting the control parameter.
Nevertheless, choosing the "right" value for μ is not easy. According to experiments, the range of good values can be quite large (0.1 ≤ μ ≤ 5), and a good choice can be dependent on many factors including the high spatial frequency content per patch, the patch size, and the diversity of the selected dictionary atoms. Currently, we use an empirical formula for determining good values for μ (per patch and per iteration), in which ** denotes 2D convolution, ‖.‖_F is the Frobenius norm, μ₀ is a user-defined scalar that is fixed across all iterations and all patches (we pick μ₀ = 20), and γ is a measure of the diversity of the selected atoms.
No matter how good an empirical formula for selecting μ might be, if the value for μ₀ is too high for some frames (or sub-regions of a frame), we would end up with estimated HR frames that contain regions with some residual aliasing. If that is the case, we propose the following remedy.
After obtaining the final estimate 𝒰^(T) = [u_1^(T) u_2^(T) … u_K^(T)] of the HR sequence (we pick T = 4), we start the solution over, but this time we use 𝒰^(T) as our input sequence of LR images, and we keep all the parameters the same except for μ and r. Specifically, we keep μ fixed at the low value of 0.2, and use r = 7 for p = 3, and r = 5 for p = 4. This should take care of residual aliasing. The final sequence can then be resized down to the original desired frame size (i.e. each frame can be decimated by a factor of p).
In this section we test the proposed solution using the 'Suzie' sequence of HR frames of size 486×720 each. First, we decimate each frame by a factor of 6 (vertically and horizontally), and then upsample the resulting LR frames by a factor of p = 3. (The accompanying figure shows samples of the LR frames, top row, alongside their upsampled versions.)
We repeat the same experiment using a decimation factor of 8, and an upsampling factor of p=4.
If we compare both experiments, we notice that the quality of the p = 3 results (from frames decimated by 6) is noticeably higher than that of the p = 4 results (from frames decimated by 8).
However, this gap in upsampling quality (e.g. p=4 vs. p=3) would become smaller when the LR sequence is not highly dynamic (specifically, when scene changes, across a set of captured frames, are not too large because the frame rate is sufficiently high).
In section 3 (The Need for an Arbitrator), we explained that solving the equation Vα = Wβ is the equivalent of finding the image in span(V) and the image in span(W) that are the closest to each other, and we chose equation (72) as the 'yardstick' to measure the distance.
To reiterate, since we adopt a feature selection (FS) strategy for selecting the atoms of the primary LR dictionary Ψ (from which the HR basis matrix V is constructed) and the atoms of the secondary LR dictionary Φ (from which the HR basis matrix W is constructed), we chose (72) as the distance measure to guard against picking an inferior solution.
In particular, since images with the simplest structure tend to have the smallest discrepancies, solving Vα=Wβ can favor an overly smooth solution in the absence of a proper distance measure.
On the other hand, smoothing processes are known to be useful for noise removal (denoising), but the challenge is to remove the noise ‘while’ leaving (most of) the underlying signal intact. This leads us to the following question: rather than minimizing it, would it be useful to ‘augment’ this smoothing effect to solve the noise problem?
As it turns out, by making a modification to the upsampling baseline outlined in section 4 (Learning Pairs of LR Dictionaries from a Single LR Sequence: The Baseline), we can harness the smoothing effect to address the noise problem. In particular, while a sequence of images might not necessarily be in need of upsampling per se, we still engage a slightly modified version of the upsampling baseline outlined in section 4 to get rid of the noise, in a process we call 'denoising by upsampling', which we describe next.
Recall that the original purpose of the iterations in the upsampling context was to re-estimate a better version of the (missing) secondary LR sequence. In the proposed noise removal context, however, re-engaging the structured subspace framework is a goal, in and of itself, to invoke the aforementioned smoothing effect multiple times until noise is reduced to acceptable levels.
Said differently, even if the imaging system was built such that it produces two LR sequences with different resolution, if our goal is denoising, then we propose to ignore the originally available secondary sequence.
Indeed, in this embodiment, even the original (primary) LR sequence is disposed of after the first iteration. Specifically, the upsampled output of each iteration is decimated back to the same original pixel count before being fed again as the input for the next iteration. This way, a new iteration starts with a new input sequence with lower noise. Note that here, while the final output is an upsampled version of the original input, upsampling is a means to an end (noise removal).
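As a minimal sketch of this modification (our illustration; upsample_seq is a hypothetical stand-in for one pass of the baseline of section 4):

    % 'Denoising by upsampling': each iteration upsamples the current input
    % by p and decimates the result back to the original pixel count, so the
    % next iteration starts from a less noisy input sequence.
    X = X_noisy;                           % original (noisy) LR sequence
    for t = 1:T
        U_hat = upsample_seq(X, p);        % structured-subspace upsampling
        X = U_hat(1:p:end, 1:p:end, :);    % decimate back to original size
    end
    U_final = upsample_seq(X, p);          % final output remains upsampled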
Please note that in the proposed ‘denoising by upsampling’ context, ‘low-resolution (LR)’ not only means low-number of pixels, but it also means ‘noisy’ images. Similarly, the term ‘high-resolution (HR)’ here simply means ‘upsampled’ (primarily for the purpose of noise removal).
Finally, the proposed modification for ‘denoising by upsampling’ is by no means the only option to engage the structured subspace model for the purpose of denoising, but we believe it to be the simplest and most straightforward.
Because even the input sequence itself is updated with every iteration, starting from equation (76) onward, the only notational change needed in the new context is to simply add the iteration superscript (t) to the primary-sequence quantities as well, i.e. to 𝒳, x_k, x_k^(s,ℓ), 𝒫_k^(s,ℓ), and Ψ_k^(s,ℓ).
C. Slightly Changed Roles of the Estimation Process ℐ and the Control Parameter μ
The original role of the estimation process ℐ was to provide, from the available (primary) LR sequence, an initial guess of the HR sequence, to be then decimated to obtain a first estimate of the (missing) secondary LR sequence (see (79) and (80)).
In the modified (denoising) baseline, however, we obtain updated secondary 'and' primary LR sequences to keep shedding more noise with each iteration. Therefore, the role of ℐ here is nothing more than obtaining an updated secondary sequence from the (updated) input sequence. For this purpose, ℐ is best kept as simple as possible. In fact, we recommend that ℐ simply be a 'nearest-neighbor' interpolator.
Also, because the primary purpose here is denoising, the role of the control parameter μ is greatly simplified. In particular, choosing a fixed, low value for μ (e.g. μ=0.2) for all image patches and across all iterations should suffice. In this context, μ only guarantees that an updated estimate does not venture too far from a previous estimate.
As to all other baseline components, they retain their same exact roles as previously described in section 4 and section 5 (A Working Solution).
Another use for the structured subspace model can go beyond upsampling to recovering detail lost during the process. This can be useful as one last (special) iteration to be applied to the final output of an upsampling baseline.
In particular, recall that the only condition on the desired upsampling factor p and the secondary factor q is that q>p such that GCD(p,q)=1. Therefore, as long as q is a positive integer, then p=1 (i.e. no upsampling) will always satisfy the condition GCD(p,q)=1.
With that in mind, the following question arises: what possibly can we gain by utilizing the structured subspace model without any (further) upsampling? More specifically, if we already got the final sequence of upsampled frames, 𝒰^(T) = [u_1^(T) u_2^(T) … u_K^(T)], is there a way to further benefit from the structured subspace model?
To avoid confusion, we shall add two dots on top of some symbols to indicate that we are engaging the structured subspace model 'without' upsampling. In particular, we shall use the symbols p̈ and q̈, with p̈ = 1 (i.e. no upsampling is desired), and q̈ = p (where p is the previous user-defined upsampling factor for obtaining 𝒰^(T)). Now we are ready to ask the same question using more precise terms.
What kind of a HR image would we be seeking by solving the following equation?
V̈α̈ = Ẅβ̈,  (93)
where V̈ is the first HR basis matrix whose columns are (the vectorized forms of) images learned from the upsampled HR sequence, Ẅ is the second HR basis matrix constructed such that it spans any HR image whose PPCs are spanned by the LR dictionary Φ̈ learned from the original LR sequence, and α̈ and β̈ are the representations of the sought-after HR image in terms of the basis matrices V̈ and Ẅ, respectively.
A. Signal vs. Artifact
Before we explain further, we use the term ‘artifact’ to indicate any undesired signal. For example, noise is an artifact, pixelation (due to low pixel density) is an artifact, etc. Now, since the whole point of computing the upsampled sequence is to minimize (if not completely eliminate) pixelation and/or noise artifacts, the subspace spanned by the basis V̈ would be hardly inhabitable for images with such artifacts. Said differently, since V̈ is learned from the upsampled sequence, it is expected to be hardly conducive to constructing noisy/pixelated images. On the other hand, the subspace spanned by Ẅ is based upon the original LR sequence, where all the original artifacts are present, as well as the underlying signal.
This suggests that, by properly solving equation (93), we are offered a chance to recover at least some of the signal lost during upsampling. In other words, the image we would obtain by utilizing the structured subspace model (without further upsampling) would be an enhanced version. Another way of looking at it is that while solving (37) is the equivalent of conducting a wide-scope search for a good estimate of the underlying image, solving (93) represents an additional, much narrower, search for an even better estimate, where the role of Ẅ (which is extracted from the original LR sequence) is to guide the refined search for a better estimate of the HR image within the span of V̈ (which is extracted from the upsampled sequence).
One option we found particularly useful for solving (93) is to solve an optimization problem of the same form as (72), with the double-dotted quantities in place of their counterparts; its solution (cf. the closed form (A.22) derived in the Appendix) then yields the enhanced estimate of the HR image, which we denote using a ‘double hat’ accent.
All required details (including those pertaining to patch-based processing) for using the structured subspace framework in a last iteration (meant to enhance an already upsampled sequence) can be derived from the disclosure herein; one only needs to adjust for p̈=1 and q̈=p (see Table 4).
A few details, however, need to be revisited here. In particular, each dictionary here has M atoms, where M is the number of columns (atoms) of the corresponding dictionary, and r is the user-defined parameter that determines the patch size (d = r p̈ q̈ = r p).
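For illustration, the following minimal Matlab sketch shows how the baseline’s existing functions can be re-engaged for this last (enhancement) iteration. The dictionary variables PhiHR (learned locally from the upsampled sequence) and PhiLR (learned locally from the original LR sequence), as well as the patch variables xhr and xlr and the control parameter mu, are hypothetical placeholders for the local quantities described elsewhere in this disclosure:

p_dd = 1;  q_dd = p;                            % p̈ = 1 (no further upsampling), q̈ = p
V_dd = LR2HRbasis(PhiHR, r, p_dd, q_dd, M);     % first basis, from the upsampled sequence
W_dd = LR2HRbasis(PhiLR, r, p_dd, q_dd, M);     % second basis, from the original LR sequence
z_dd = computeHRpatch(V_dd, W_dd, xhr, xlr, mu, r, p_dd, q_dd, M);  % enhanced patch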
To test the ‘denoising by upsampling’ functionality, we downloaded a particularly noisy CT-scan sequence from TCIA (a database sponsored by the National Institutes of Health) and used it as the input for creating HR images.
The basic principle of learning-based image upsampling is fairly straightforward. In general, learning-based methods use the available LR image as ‘partial measurements’ (of the unknown HR image) to estimate the representation of the HR image in terms of a HR dictionary learned from a training set of HR example images. While these methods certainly differ in the details, the two most important features that account for most of the variation in results are the training sets they use (generic vs. narrow, specialized sets) and the learning process of the HR dictionaries.
In contrast, the work in [1], [2] uses the same learning-based upsampling principle, although for different target signals: the PPCs of the HR image, which are themselves LR signals. This shift of focus allowed the authors of [1], [2] to exploit a sequence of LR images as a source of very narrow training sets, from which highly efficient LR dictionaries can easily be learned to represent the PPCs (of sought-after HR images). The major obstacle to their idea was the lack of the ‘partial measurements’ required to find the representations of the PPCs in terms of the LR dictionaries. To overcome this obstacle, they proposed a hardware modification whereby the imaging device is equipped with an additional (different resolution) LR sensor (and a beam splitter) to provide the much-needed partial measurements for their target images (the PPCs).
Compared to [1], [2], in the current work we shift the focus back to estimating the representation of the HR image itself, in terms of a pair of HR bases, each of which embeds a (different resolution) LR dictionary. The end result is a structured subspace framework through which pairs of LR dictionaries are used to estimate HR images directly, thus circumventing the very restrictive hardware assumption required in [1], [2], while still benefiting from the huge advantage of harvesting narrow training sets from a LR sequence of images.
In this section, we give a visualization of the basic solution baseline whose details were laid out above in “A Working Solution”. To help with understanding, it is important to keep the big picture in mind, which we summarize as follows:
We estimate the entire HR sequence by estimating patches (pieces) of each frame.
Because we estimate each HR patch based on a pair of (local, different resolution) LR dictionaries, we need a pair of LR sequences (of different resolution) from which to extract the pair of local dictionaries (the algorithm takes in a pair of LR dictionaries, per patch, as input, and spits out an estimate of the HR patch).
The reason we need to iterate the solution is that we do NOT have the secondary (different resolution) LR sequence (we keep estimating a better version of the missing secondary LR sequence).
The initial guess (x̂(0))
Let's first have a look at sample LR frames of an available LR sequence (we call it the primary LR sequence) in
To get the first estimate of the secondary (missing) sequence y(1), we first need an initial guess of the HR sequence x̂(0). This can be done, for example, by using 2D interpolation (by a factor of p; we choose p=3 here) of each frame xk of the primary (available) LR sequence. Using Matlab, this can be done as follows:
x̂(0)k = imresize(xk, p, ‘lanczos3’);  (97)
Now, let's take a look at the sample frames of the initial estimate of the HR sequence, shown in
To obtain the first estimate of the secondary (different/lower resolution) LR sequence, we simply decimate each frame of the HR sequence by a factor of q (q=p+1=4 here). Using Matlab, this can be done using the following code:
y(1)k = x̂(0)k(1:q:end, 1:q:end);  (98)
Because of the size of the figures, comparing
So now that we have two (different resolution) LR sequences as input, the primary sequence and y(1), how do we proceed? Like (almost) everyone else, we do not estimate an entire HR frame as a whole; instead, we estimate patches (pieces) of each frame. So the next question is: how do we partition a frame into patches to be estimated?
In Section 5 (A Working Solution), we provided a description of the simplest partitioning scheme (equations (85)-(88)). For a pictorial illustration, we pick frame #31 of the primary sequence and display, in
The so-called ‘leading’ patch per partitioning pattern s is shown as a brightened square in
Now, what happens when we compute all the HR patches corresponding to all the squares (patches) shown in
But why don't we simply use just one partitioning pattern? There is nothing wrong with that, except that obtaining multiple estimates of the same frame (by partitioning it using S=r² patterns instead of just one) allows us to reduce estimation errors, since different patterns yield different estimation errors. There are many ways one could combine multiple estimates of the same frame to reduce estimation errors (equation (90), for example, is only one of them), and most people simply take the mean, or the median, of all estimates.
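As a simple illustration of the latter option, assume the S = r² per-pattern estimates of frame k are stacked in a hypothetical three-dimensional Matlab array est of size H-by-W-by-S; a pixel-wise median then gives the combined frame:

xk_combined = median(est, 3);   % pixel-wise median across the S per-pattern estimates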
To illustrate the benefits of having multiple estimates of the same frame, we show in
But how did we compute each of the HR patches shown in
Extracting Local Training Sets
Recall that, in our solution, estimating a HR patch requires a local pair of (different resolution) dictionaries to be extracted from a pair of LR sequences (of different resolution). We already have the primary LR sequence (
First of all, recall the parameters K (the width of the temporal neighborhood of frames around frame k from which training LR patches are to be extracted) and b (2b+1 is the width and height, in pixels, of the spatial window whose center is defined by the location of the LR patch corresponding to the HR patch we need to estimate). Here, we choose K=30 frames and b=5 pixels.
With (2b+1)² = 121 LR patches extracted from each frame in a predefined temporal neighborhood of K=30 frames, the size of the entire primary training set is 30 × 121 = 3,630 LR patches (similarly, the secondary training set has 3,630 LR patches).
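A minimal Matlab sketch of gathering such a training set follows. The frame array X, the window anchor (i0, j0), and the symmetric temporal window around frame k are our assumptions (boundary handling is omitted), and the primary LR patch side length is taken to be r*q as implied by the patch geometry:

w = r*q;                                     % side length of a primary LR patch
Tx = zeros(w*w, K*(2*b+1)^2);  n = 0;        % training set: one vectorized patch per column
for kk = k-K/2 : k+K/2-1                     % temporal neighborhood of K = 30 frames
    for di = -b:b
        for dj = -b:b
            P = X{kk}(i0+di : i0+di+w-1, j0+dj : j0+dj+w-1);
            n = n + 1;
            Tx(:, n) = P(:);                 % add one LR training patch
        end
    end
end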
However, we do not need all patches in a training set. Indeed, just by looking at
Before we proceed, we use M = N = 2q² = 32 as the number of (hopefully) ‘good’ enough LR patches to be chosen from each training set. All that is left to do is to define a measure of ‘goodness’. The following steps summarize the dictionary “learning” process we described in Section 5 (A Working Solution).
First, compute a pair of distance vectors dx and dy (a minimal sketch of this computation follows the second step below),
where the two reference LR patches are the ones corresponding to the HR patch you want to estimate (these are the highlighted areas in the left and right LR frames, respectively, shown in the middle row of
The i-th entry in dx is thus the L1 distance between the i-th normalized LR patch in the primary training set and the normalized (reference) primary LR patch. Similarly, the i-th entry in dy is the L1 distance between the i-th normalized LR patch in the secondary training set and the normalized secondary reference LR patch.
Second, pick the M LR patches in the primary training set that correspond to the smallest M entries in dx; these are the atoms of your local primary dictionary. Similarly, the atoms of the local secondary dictionary are the N LR patches in the secondary training set that correspond to the smallest N entries in dy.
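The following Matlab sketch covers both steps. Tx and Ty are hypothetical matrices holding the vectorized primary and secondary training patches as columns, and x and y are the vectorized reference LR patches:

nrm = @(A) A ./ vecnorm(A, 2, 1);        % normalize each column (patch)
dx = sum(abs(nrm(Tx) - nrm(x)), 1);      % L1 distances to the primary reference patch
dy = sum(abs(nrm(Ty) - nrm(y)), 1);      % L1 distances to the secondary reference patch
[~, ix] = sort(dx, 'ascend');
[~, iy] = sort(dy, 'ascend');
Phi_x = Tx(:, ix(1:M));                  % atoms of the local primary dictionary
Phi_y = Ty(:, iy(1:N));                  % atoms of the local secondary dictionary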
Third, since it is easier to work with orthonormal dictionaries, use the singular value decomposition (SVD) to orthonormalize the atoms of each dictionary. After converting the atoms of each dictionary from their 2D form to their vector form, and stacking these (per dictionary) as column vectors next to each other, we can use the following Matlab code to orthonormalize our dictionaries:
[Phi_x, ~, ~] = svd(Phi_x, 0);
[Phi_y, ~, ~] = svd(Phi_y, 0);  (100)
All the previous steps were taken to set the stage for the proposed structured subspace framework to do its work: converting a pair of (different resolution) LR dictionaries into a HR image. In the following, we describe the most straightforward application of the structured subspace solution.
First, construct the HR basis matrix V from the LR dictionary Phi_x, and the HR basis matrix W from the LR dictionary Phi_y, by running the following Matlab code:
V = LR2HRbasis(Phi_x, r, p, q, M);
W = LR2HRbasis(Phi_y, r, p, q, M);  (101)
Recall that we chose p=3 (and thus q=p+1=4), r=4, and M=N=2q2=32. LR2HRbasis is a predefined function written using Matlab as follows:
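As an illustrative sketch of what such a function can look like (our reconstruction based on the defining property of the structured bases, not necessarily the original listing): each column of the output plants one LR dictionary atom as one polyphase component (PPC) of an otherwise zero HR patch, and the decimation factor is inferred from the atom size:

function B = LR2HRbasis(Phi, r, p, q, M)
% Builds a structured HR basis whose span contains every HR patch
% whose polyphase components (PPCs) are spanned by the LR dictionary Phi
% (Phi holds vectorized LR atoms as columns, M = size(Phi, 2)).
d = r*p*q;                          % side length of a HR patch
s = sqrt(size(Phi, 1));             % side length of a LR atom (r*q or r*p)
f = d/s;                            % decimation factor (p or q, respectively)
B = zeros(d*d, f*f*M);              % one column per (PPC position, atom) pair
col = 0;
for i = 1:f
    for j = 1:f
        for m = 1:M
            hr = zeros(d);
            hr(i:f:end, j:f:end) = reshape(Phi(:, m), s, s);  % plant atom as the (i,j) PPC
            col = col + 1;
            B(:, col) = hr(:);
        end
    end
end
end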
Second, solve the following equation in whichever meaningful way possible: Vα = Wβ.
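One meaningful (if basic) way, shown below as our illustration rather than the method actually used, is to take the right singular vector of [V −W] associated with the smallest singular value, which minimizes ∥Vα − Wβ∥ over unit-norm [α; β]:

[~, ~, Q] = svd([V, -W], 'econ');      % right singular vectors of [V -W]
ab = Q(:, end);                        % [alpha; beta] minimizing ||V*alpha - W*beta||
alpha = ab(1:size(V, 2));
beta  = ab(size(V, 2)+1:end);
z = reshape(V*alpha, r*p*q, r*p*q);    % the corresponding HR patch estimate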
The following is the Matlab code we used for computing the HR image patch highlighted in
z = computeHRpatch(V, W, x, y, mu, r, p, q, M);  (103)
where the control parameter mu is determined according to (91), and computeHRpatch is a predefined function as follows:
J = J'*J;  (104)
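Only the line above survives of the original listing. As an illustration of what the core of such a function can look like under our reading of the Appendix (equations (A.21), (A.22)), consider the following sketch; its signature deviates from (103) in that it takes the normalized vector u = Ψ*x/∥Ψ*x∥ and the norm nrmPsix = ∥Ψ*x∥ directly, rather than the raw patches:

function z = computeHRpatch(V, W, u, nrmPsix, mu, r, p, q)
% Sketch (our reconstruction): smallest singular pair of the matrix
% in (A.21), followed by the closed form (A.22).
n = size(V, 2);
G = V'*W;                                 % cross-Gram of the two HR bases
A = (1 + mu)*eye(n) - G*G';               % mu*I + I - V'*W*W'*V
J = [A, -mu*u; -mu*u', mu];               % the (symmetric) matrix in (A.21)
[U, S, ~] = svd(J);
sigma = S(end, end);                      % smallest singular value
e = U(end, end);                          % last element of the last singular vector
alpha = mu * ((A - sigma*eye(n)) \ (e * nrmPsix * u));   % (A.22), scaled back
z = reshape(V*alpha, r*p*q, r*p*q);       % HR patch from its representation in V
end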
Thus far, we explained how we got the HR patch highlighted in
Compared to the first iteration, the second (and each subsequent) iteration is no different except for working with an updated estimate of the secondary LR sequence, which is obtained by simply decimating the previous estimate of the HR sequence (e.g. we get the updated sequence y(2) by decimating each frame in x̂(1) by a factor of q).
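In Matlab, and in analogy with (98), the per-frame update at any iteration reads (xhat_prev{k} is our placeholder for the k-th frame of the previous HR estimate):

y_updated{k} = xhat_prev{k}(1:q:end, 1:q:end);   % cf. (98)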
In particular, the primary LR sequence remains the same for all iterations, so unless we adopt a different training-set extraction strategy or a different dictionary-learning procedure per iteration, there is no need to repeat the training/learning process for the local primary LR dictionaries. For example, if we stored the local primary dictionaries shown in
Appendix
To solve the optimization problem (72) using Lagrange multipliers, we first rewrite (72) as
Also, let
R = [I  −Ψ*x],  (A.4)
and rewrite (A.1) as
Let f(α̃, β) = ∥VEα̃ − Wβ∥² + μ∥Rα̃∥², and recall (54), (55). Therefore, the objective function f(α̃, β) can be rewritten as
f(α̃, β) = α̃*(E*E + μR*R)α̃ − 2α̃*E*V*Wβ + ∥β∥².  (A.6)
The constraint function is g(α̃, β) = ∥α̃∥² − c̃.  (A.7)
Since we only have one constraint (A.7), there is only one Lagrange multiplier, which we denote with λ, and the Lagrangian is therefore
ℒ(α̃, β, λ) = f(α̃, β) − λg(α̃, β).  (A.8)
Now, setting the gradients of the Lagrangian with respect to α̃ and β to zero, and combining the resulting equations (A.9)-(A.14), leads to
(E*E + μR*R − E*V*WW*VE)α̃ = λα̃.  (A.15)
We can now obtain the value of the objective function by plugging (A.13) into (A.6), and then using (A.15) and (A.14):
Now note (A.3), and therefore equation (A.15) can be rewritten as
Since any singular pair of the (symmetric) matrix in (A.17) is a solution, and since we seek to minimize the error (A.16), we pick the solution associated with ν, the last singular vector of the matrix in (A.17), whose associated (smallest) singular value is σν, and where e denotes the last element in ν.
Now recall (A.2) and rewrite (A.17) as
The top part of equation (A.18) is
(μI + I − V*WW*V)α̂ − μeΨ*x = σνα̂,  (A.19)
and therefore
α̂ = μ[μI + (1−σν)I − V*WW*V]⁻¹(eΨ*x).  (A.20)
Now, for numerical stability reasons, the computation of the smallest singular value of the matrix in (A.17) might be inaccurate (because the last row and the last column of the matrix in (A.17) have much higher values compared to the entries of μI + I − V*WW*V), so instead of using Ψ*x in optimization problem (A.1) we use its normalized version Ψ*x/∥Ψ*x∥.
So equation (A.18) becomes (A.21), where σ is the smallest singular value of the matrix in (A.21). Therefore, the corresponding final solution (after scaling back by ∥Ψ*x∥) is
α̂ = μ[μI + (1−σ)I − V*WW*V]⁻¹(eΨ*x).  (A.22)
Test system:
System type: x64-based PC
Processor: Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz, 2208 MHz, 6 core(s)
Add-on toolbox: Parallel Computing Toolbox Version 6.12 (R2018a)
Number | Date | Country
---|---|---
62874698 | Jul 2019 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 16779121 | Jan 2020 | US
Child | 17303466 | | US