IMAGE UPSAMPLING AND DENOISING BY LEARNING PAIRS OF LOW-RESOLUTION DICTIONARIES USING A STRUCTURED SUBSPACE MODEL

Information

  • Patent Application
  • Publication Number
    20210304360
  • Date Filed
    May 28, 2021
  • Date Published
    September 30, 2021
Abstract
A computational method is disclosed for producing a sequence of high-resolution (HR) images from an input sequence of low-resolution (LR) images. The method uses a structured subspace framework to learn pairs of LR dictionaries from the input LR sequence and employs the learned pairs of LR dictionaries in estimating the HR images. The structured subspace framework itself is based on a pair of specially structured HR basis matrices, wherein each HR basis spans any HR image whose so-called polyphase components (PPCs) are spanned by the corresponding LR dictionary. The computational method may also be used to denoise images, whether LR or HR images, by using the structured subspace framework to learn dictionaries of the images and estimate a denoised version of the images from the learned image dictionaries. The denoising process may be iterated until the noise is reduced below a desired threshold.
Description
FIELD OF THE INVENTION

The invention relates to a computational method for producing a sequence of high-resolution (HR) images from an input sequence of low-resolution (LR) images. More particularly, the method uses a structured subspace framework to learn pairs of LR dictionaries from the input LR sequence and employs the learned pairs of LR dictionaries in estimating the HR images. The structured subspace framework itself is based on a pair of specially structured HR basis matrices, wherein each HR basis spans any HR image whose so-called polyphase components (PPCs) are spanned by the corresponding LR dictionary.


BACKGROUND OF THE INVENTION

This work addresses the problem of multiframe upsampling (also known as super-resolution, upscaling, or de-aliasing) wherein, given as input a sequence of low resolution (LR) images, the desired output is their corresponding high resolution (HR) versions. In such a problem, low pixel density (low pixel count per square area) causes loss of resolution. Applications include all imaging technologies that produce a sequence of images (e.g. a video, MRI scan, satellite images, aerial surveillance, etc.) with pixel count that is lower than desired.


The conventional approach to the solution of this problem is motion estimation (or motion modeling) across the captured images. But accurate modeling of (complex) motion patterns requires high enough pixel density: a hopeless chicken-and-egg paradox the signal processing community has been trying to resolve for many years.


Instead of the motion estimation route, we adopt a ‘signal representation’ approach (also known as the example-based, training-based or dictionary learning-based approach), whose relevant (previous) results we summarize as follows.


Fact 1

In the machine learning and signal processing communities, it has long been established that, given a few samples (partial measurements) of an unknown (HR) image, the entire image can be recovered with reasonable accuracy depending on two main factors: 1—the severity of undersampling; and 2—the existence of an efficient dictionary (basis) that can represent the HR image well with only a few dictionary atoms. In particular, the more undersampled the image, the more efficient the dictionary must be for the unknown HR image to be recoverable from the partial measurements of it (the available reference (LR) image).
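For illustration only (this is our own minimal Matlab sketch, not a method taken from the cited works), Fact 1 in its simplest linear-algebraic form: a signal that is well represented by a small dictionary can be recovered from a few of its samples by least squares. All names and sizes below are hypothetical.

% Fact 1 sketch: recover a signal from partial measurements, given an
% efficient dictionary D that represents it with few atoms.
n = 64; K = 8;                    % signal length; dictionary size (K << n)
D = orth(randn(n, K));            % an 'efficient' dictionary for the signal
x = D * randn(K, 1);              % unknown signal, representable by D
idx = randperm(n, 24);            % partial measurements: 24 of 64 samples
a = D(idx, :) \ x(idx);           % least-squares fit on the known samples
disp(norm(D*a - x)/norm(x));      % ~0: the entire signal is recovered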


OBSTACLE to Fact 1: the creation of a dictionary that can efficiently represent an image is a process known in the machine learning community as dictionary learning (or training). However, the efficiency of a dictionary (for representing a particular image) depends not only on the learning method in use, but also, and highly so, on the data (images) used to train the dictionary (the efficiency of the dictionary is highly dependent on the training set). In particular, the narrower the training set, the more efficient the learned dictionary would be, even irrespective of the learning process. A training set is said to be narrow if the training images (also known as example images) belong to a narrow subclass of images (that also includes the sought-after image).


What this means in practice is that if the goal is to recover the HR version of a license plate, say, given only a LR version of it, then a training set extracted from high quality example license plate images would be far more useful than a training set based on generic images. In short, the more specialized the training set, the better the outcome. On the other hand, specialized (narrow) training sets of HR images are useless for estimating generic images: if the training set is made up of HR images of license plates, the learned dictionaries would be useless for estimating any images other than license plates.


Fact 2

A simple fact of signal processing is that any 2D signal can be decomposed into multiple LR versions of itself, the so-called polyphase components (PPCs), via the basic operations of (2D) shifting and subsampling (decimation). Specifically, for a decimation (subsampling) factor of p, an image can be decomposed into p² PPCs. The first PPC is obtained by starting with the first pixel in the first row of the image, and then decimating (vertically and horizontally) by p. Decimating starting with the second pixel in the first row, we get the second PPC, and so forth. For example, the (p+1)-th PPC is obtained by decimating starting with the first pixel in the second row, and the last (p²-th) PPC is the result of decimating beginning with the p-th pixel in the p-th row of the image. Therefore, a HR image can be trivially reconstructed from its PPCs simply by interlacing them.
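As a minimal Matlab illustration of Fact 2 (our own sketch; the variable names are arbitrary), the following decomposes an image into its p² PPCs and reconstructs it exactly by interlacing:

% Decompose an image into p^2 polyphase components, then interlace back.
p = 2;
u = magic(8);                              % any image with size divisible by p
ppc = cell(p^2, 1);
v = zeros(size(u));
for i = 1:p^2
    mp = ceil(i/p) - 1;                    % row offset of the i-th PPC
    np = mod(i-1, p);                      % column offset of the i-th PPC
    ppc{i} = u(1+mp:p:end, 1+np:p:end);    % shift, then decimate by p
    v(1+mp:p:end, 1+np:p:end) = ppc{i};    % interlace back onto the HR grid
end
isequal(u, v)                              % true: reconstruction is exact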


Fact 3

A LR sequence of images (e.g. a LR video) can provide a very narrow training set for the PPCs of (the sought-after) HR frames. To put it differently: if we have a sequence of LR images, then why not shift our focus from estimating the unknown HR frames to estimating the (low-resolution) PPCs of each HR frame instead? FIG. 1 demonstrates the potential of this idea: simply by looking at FIG. 1, it becomes evident that a sequence of LR images can easily provide very narrow training sets for the PPCs of HR images, thus circumventing the obstacle mentioned under Fact 1 above.


OBSTACLE 1 to Fact 3: While Fact 3 avoids the obstacle associated with Fact 1, it introduces the MAJOR issue of lack of ‘partial measurements’. To be more specific, even if we have available a very efficient dictionary for representing an (unknown) image, Fact 1 tells us that if NO partial measurements of it exist, then the efficient dictionary is of NO value.


Now, since Fact 3 suggests using a LR sequence to train efficient dictionaries for the PPCs of a HR frame, the signals that need to be estimated in this case are the PPCs themselves (the target images in a solution based on Fact 3 are the PPCs, from which the HR image is trivially reconstructed). However, without partial measurements for each target signal (each PPC), there is no way to go further along the route of Fact 3.


OBSTACLE 2 to Fact 3: Almost all example-based (aka dictionary-based or training-based) upsampling methods use patch-based processing, which simply means that local sub-regions of a HR frame are estimated instead of estimating the entire HR frame as a whole. In a solution scenario that involves Fact 3, patch-based processing would be essential if the scene is dynamic (changing quickly across the captured frames). However, working on small patches would require regularization, which simply means adding useful information to make up for the scarcity of the data (that comes from working on small patches).


Previous ‘Enablers’ of Fact 3

To resolve OBSTACLE 1 under Fact 3, the authors of U.S. Pat. No. 8,665,342 (“[1]” hereafter) (incorporated herein by reference) proposed exploiting relationships between PPCs corresponding to different decimating factors. Their proposed solution entailed an imaging hardware modification. In particular, their solution would work only for optical imaging systems (cameras) where a secondary sensor, with a different sampling rate (different resolution sensor), must be incorporated into the camera such that it would have the same line of sight as that of the primary sensor (by incorporating, for example, a beam splitter so that both sensors ‘see’ the same scene at the same time).


In what follows, we present the basic premise the authors of the '342 patent used as an ‘enabler’ of Fact 3 (the same enabler remains the foundation of the work in U.S. Pat. Nos. 9,538,126; 9,693,012 and 9,781,381, each of which is incorporated herein by reference) (“[2]” hereafter). Note: we simply do NOT use said premise in our current solution. Indeed, the reason we include this section is only to help the interested reader gauge how markedly different our current approach is.


In [1], [2], the job of the secondary sensor is to provide the missing ‘partial measurements’ for the target signals of Fact 3 (the PPCs). In particular, each secondary frame (captured by the secondary sensor) is decimated into multiple (even lower resolution) images, each of which provides partial measurements for each PPC that needs to be estimated. In the following, we provide a quick review of how the relationships between PPCs were exploited by [1] and [2] for image upsampling. FIG. 2 gives an illustration of decomposing the same 12×12 HR image (color-coded for ease of demonstration) into two different sets of PPCs corresponding to different subsampling rates, p and q=p+1. The first (primary) set, shown in FIG. 2 (b), contains 4 (primary) polyphase components (PPPCs), each corresponding to the same (primary) decimation factor of p=2. FIG. 2 (c) shows the second (or secondary) set, which consists of 9 (secondary) polyphase components (SPPCs), each corresponding to the same (secondary) decimation factor of q=3.


Given either set of PPCs (the PPPCs or the SPPCs), the HR image can be recovered simply by interlacing the PPCs (from either set). Therefore, if we use the LR images (captured by the camera) to create a representative dictionary (of the same low resolution level) for representing the PPPCs, then we can ultimately reconstruct the HR image simply by finding the representations of the PPPCs in terms of said dictionary. However, without knowing any partial measurements for each PPPC, such a scenario would be impossible. Nonetheless, a careful examination of FIG. 2 reveals that, although we do not know any of the PPCs (neither the PPPCs nor the SPPCs), if we knew a single SPPC, then we would already know a decimated version of each PPPC. For example, suppose we know the middle (5th) SPPC; then we get a decimated version of all 4 PPPCs. Moreover, the locations of these known pixels provided by the (assumed) known SPPC are unique for each PPPC. Hence, if the camera is (additionally) equipped with a different (lower) resolution secondary sensor, with a sampling rate that matches the subsampling rate of the SPPCs, then a captured secondary LR frame can play the role of a reference SPPC (a reference secondary LR image) that provides the needed partial measurements for the PPPCs, and the HR sequence can thus be estimated (see [1], [2] for details).


For the case of ‘dynamic scenes’ (OBSTACLE 2 under Fact 3), the work in [2] proposes regularization in the form of the so-called anchors, as well as generative Gaussian models (GGMs), such that working on small patches of a frame, rather than the whole frame at once, becomes possible.


SUMMARY OF THE INVENTION
A Structured Subspace Perspective as a ‘Powerful Enabler’ of Fact 3

To recap, Fact 1 and Fact 2 are well known. Fact 3, while quite obvious (given Fact 1 and Fact 2), needed an enabler to get over obstacles associated with it. Previous enablers came in the form of an imaging hardware modification coupled with regularization (for dynamic scenes).


Spatial resolution is, obviously, a very fundamental aspect of image quality, so much so that when the physical constraints of an imaging system force us to choose between many important features, sufficient pixel density comes at the top of the list of priorities, overshadowing other important requirements such as high dynamic range or high-speed imaging, for example. Even in non-optical imaging modalities (where samples are not captured in the spatial domain), the pixel density can be severely limited by the physics of the imaging technology unless other useful features are sacrificed. Take medical imaging, for instance: doctors would not likely accept low resolution images, even for the sake of a higher density of image ‘slices’ (per organ), lower radiation doses or shorter scanning times.


Said differently, the reason why upsampling can be such a powerful tool is the fact that pixel density can be traded for so many very important imaging qualities (freezing motion blur, reduced noise, reduced crosstalk, higher sensitivity, higher dynamic range, smaller size, cost and complexity of the imaging system, higher frame rate, reduced scanning time, reduced radiation dose, reduced slice thickness, added angular resolution and, of course, larger field of view). But pixel density is of such utmost importance that all of the aforementioned imaging aspects are normally sacrificed to satisfy the basic requirement of adequate pixel density; after all, who wants pixelated images? Upsampling offers the daring solution of turning the table on its head: sacrificing pixels for the sake of other qualities, and then restoring the missing pixels to acceptable quality. Upsampling is therefore a very fundamental problem, and provided there exists a good enough solution, everyone would want a piece of it. A word search of the US patent database for recent patents on upsampling—also known as super-resolution, upscaling, de-aliasing—finds that many big corporations hold patents on upsampling/super-resolution solutions.


In the current invention, we entirely forsake the notion of partial measurements for the PPCs (which necessitates the bi-sensor camera setup of [1], [2]). Instead, we adopt a structured subspace perspective, which allows us to relax the condition of the availability (from the outset) of a secondary LR sequence. In particular, the standpoint of beginning the solution with an ‘initial guess’ of a secondary sequence of images is at odds with the designated task of providing ‘the partial measurements’. By contrast, an initial guess of the missing sequence is admissible within the new solution model, as shall become apparent in the remainder of this disclosure.


In particular, the new structured subspace perspective adopted by this invention constitutes a purely algorithmic ‘enabler’ of Fact 3, and it renders previous enablers of Fact 3 obsolete. In other words, the currently proposed solution, while based on the (well known) Fact 1, (well known) Fact 2 and (obvious) Fact 3, requires NO hardware modification (no secondary sensor/beam splitter is required) and does NOT require any regularization for the case of dynamic scenes. Additionally, this extends the sphere of applications beyond optical imaging systems (to medical imaging systems, for example, which are non-optical systems).


In this disclosure, we show how the structured subspace perspective can be used to eliminate the (conventionally) basic requirement of ‘partial measurements’. Said differently, estimating a signal (the PPCs, in the context of Fact 3) without having some partial measurements of it is, quite simply, heretofore unknown. By contrast, the structured subspace model we adopt here completely circumvents the issue of partial measurements (for PPCs) by seeking representations of HR images in terms of specially structured HR bases (instead of seeking representations of PPCs in terms of LR dictionaries, as was the case in [1], [2]).


In particular, the HR bases we use are structured such that they span all HR images whose PPCs are spanned by LR dictionaries learned from the available sequence of LR images. In other words, instead of the ‘brute force’ exploitation of the relationships between PPCs corresponding to different subsampling rates (which necessitates a hardware modification, beam splitter etc.), the new structured model herein, in effect, ‘seamlessly’ embeds these relationships, making the essential issue of partial measurements irrelevant, thus capitalizing, without hindrance, on the obvious fact that a sequence of LR images can be used to provide the best (narrowest) training sets for the PPCs of HR images (FIG. 1).


The structured subspace framework is based on estimating the representations of the sought-after HR image in terms of a pair of HR bases, each of which embeds a (different resolution) LR dictionary. Specifically, the structured subspace framework can be summarized with the following equation (which is to be solved in whichever—meaningful—way possible)





Vα=Wβ


where:


V is the first HR basis matrix, constructed such that it spans any HR image whose PPCs are spanned by the first LR dictionary Ψ.


W is the second HR basis matrix constructed such that it spans any HR image whose PPCs are spanned by another (different resolution) LR dictionary Φ.


α is the representation of the HR image in terms of V.


β is the representation of the HR image in terms of W.


Imaging applications that can benefit from upsampling using the disclosed technique include:


a) Medical imaging. For example, a magnetic resonance imaging (MRI) system cannot produce an MRI sequence with high density of ‘slices’ without sacrificing the spatial resolution per slice. This is akin to how a camera cannot capture very high frame rate videos without sacrificing the resolution per frame (regardless of the price tag).


b) Other applications, where upsampling can computationally make up for the need to sacrifice spatial resolution, include high dynamic range imaging, thermal cameras, flash LADAR/LIDAR and light field imaging.


c) Situations that require large-distance imaging, yet a large field of view (FOV) is desired (avoiding optically zooming in on a small portion of the FOV). Gigapixel cameras (hundreds of sensors housed in a giant camera) are needed for this reason. With reliable upsampling, a frame captured with a resolution of 100 Megapixels can be blown up to 1.6 Gigapixels, for example. The ability to cover a larger field of view (while computationally making up for the loss of resolution) is very useful in remote sensing, surveillance, search and rescue, etc.


d) Consumer/commercial applications. Examples include turning a cell phone camera into a very high-resolution camera, and upconverting a standard-definition video (SDTV) signal to an ultra-high definition video (UHDTV), to match the resolution of high-definition displays.


e) “Denoising by upsampling”: the described techniques can be used to denoise images of either LR or HR origination. Here, ‘low-resolution (LR)’ not only means a low number of pixels; it also means ‘noisy’ images. Similarly, the term ‘high-resolution (HR)’ here simply means ‘upsampled’ (primarily for the purpose of noise removal).





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 shows unknown polyphase components (PPCs) of an unknown (HR) frame vs. a set of captured (LR) frames.



FIG. 2 is an illustration of the relationships between PPCs corresponding to two different subsampling rates, p and q=p+1.



FIG. 3 is a flow chart illustrating a basic solution baseline based on the proposed structured subspace model. Note: dashed boxes/lines indicate initial guess operations (which are performed only once, before starting the iterative solution).



FIG. 4 is a series of LR to HR image upsampling experiments (p=3).



FIG. 5 is a series of LR to HR image upsampling experiments (p=4).



FIG. 6 is a series of sample frames of the primary (available) LR sequence.



FIG. 7 is a series of sample frames of the initial guess of the HR sequence (iteration 0).



FIG. 8 is a series of sample frames of the first estimate of the secondary (different/lower resolution) LR sequence.



FIG. 9 is the 31st frame from the primary (originally available) LR sequence vs. the 31st frame from the first estimate of the secondary LR sequence. The frames are displayed in this figure with their actual relative sizes.



FIG. 10 is frame #31 from the primary sequence, partitioned in S=r²=16 different patterns. The sub-images within (imaginary) black borders are the LR patches, where k=31, s=1, . . . , 16, and the total number of patches (per pattern s) is 48.



FIG. 11 shows multiple HR estimates of frame #31, obtained by tiling all estimated HR patches, k=31, s=1, . . . , 16.



FIG. 12 shows the first iteration estimate of frame k=31 displayed against the initial estimate, where the first iteration estimate is obtained by computing the median of all estimates shown in FIG. 11.



FIG. 13 highlights an estimated HR patch at k=31 and s=4.



FIG. 14 is an illustration of the extraction process of a pair of local LR training sets (needed for computing the HR patch highlighted in FIG. 13).



FIG. 15 is the pair of LR dictionaries used to compute the HR estimate highlighted in FIG. 13.



FIG. 16 is a flow chart illustrating a modified baseline with the purpose of ‘denoising by upsampling’. Note: the dashed box/line indicates that the original (noisy) sequence is used only once (as input to the first iteration only).



FIG. 17 is a flow chart illustrating a baseline leveraging a structured subspace model for enhancing an upsampled sequence of images.



FIG. 18 shows images taken from a real-world colonography sequence (CT-scan) in the left column, and the corresponding images after processing the original sequence in the right column.





DETAILED DESCRIPTION OF THE INVENTION

Estimating HR images based on LR dictionaries is a notion that is quite alien to conventional wisdom. Indeed, image upsampling would be an intrinsically simple problem if all it takes to solve it is to “learn” LR dictionaries from a LR sequence, but could it really be that simple? The answer by the authors of [1], [2] (above) was ‘no’, unless the issue of partial measurements for the PPCs is resolved via an imaging hardware modification.


Adding the structured subspace perspective, however, reveals that no imaging hardware modification is necessary, and that the problem can indeed be solved solely by learning LR dictionaries. Before we proceed with solution details, we would like first to list a few assumptions, acronyms, and notational conventions that will be used in describing the details.

    • 1—Without loss of generality, we assume that the HR image we seek to estimate is square, with a dimension that is an integer multiple of pq, where p is the primary decimation factor (equivalently, p is also the desired upsampling factor), and q is the secondary decimation factor, with the added assumption that

q = p+1.   (1)

Namely, if we let u denote the HR image, then u is a matrix of size d×d, where

d = rpq,   (2)

and r is a positive integer.

    • 2—The acronym PPCs stands for polyphase components. PPCs associated with the primary decimation factor p are called primary PPCs, or PPPCs for short. SPPCs is the acronym for secondary PPCs, which are associated with the secondary decimation factor q.
    • 3—An underlined symbol denotes a (column) vector. Underlined symbols are also used to denote the vector form of matrices. To elaborate, vec(.) is conventionally used to denote the vectorization operator, which turns a matrix into its (column) vector form (by lexicographical reordering of its elements). Therefore, if the symbol A is used to denote a matrix, then the underlined A is shorthand for vec(A).
    • 4—The (vector) subspace that contains all (real) vectors (or signals) of dimension n is denoted by ℝⁿ. For example, writing u ∈ ℝ^{d²} is simply saying that u (the HR image, in vector form) is in ℝ^{d²}, and thus has dimension d². Similarly, ℝ^{m×n} denotes the subspace of all (real) matrices of size m×n.
    • 5—We denote the identity matrix with the letter I. If a subscript is added, then it signifies the dimension of the identity matrix. For example, Iₙ is the identity matrix of dimension n. Also, for an all-zero matrix, we use the letter O.
    • 6—The transpose of a matrix (or a vector) is always denoted by adding the superscript ‘*’


For example, if A is a matrix, then its transpose is denoted by A*.

    • 7—A binary matrix A ∈ ℝ^{m×n} is a matrix whose entries are either “0” or “1”. A binary matrix A that is tall or square (m≥n), with each of its columns containing a sole “1”, can be encoded (parameterized) as follows. Let π_m^A(k) denote the row-index of the sole “1” in the k-th column of A; then the set of all n row-indices, π_m^A(1:n), completely encodes A (1:n is shorthand for k=1, 2, . . . , n). Note: if m=n, we drop the subscript m. A small helper sketch after this list illustrates the encoding. For example,

π_4^A(1:3) = [2 3 1] encodes A = [0 0 1; 1 0 0; 0 1 0; 0 0 0], and

π^B(1:3) = [3 2 3] encodes B = [0 0 0; 0 1 0; 1 0 1].
    • 8—We denote the Kronecker product and the Hadamard (element-wise) product using the symbols ⊗ and ⊙, respectively. Also, the so-called ‘direct sum’ is denoted by ⊕. In particular, the notation ⊕_{k=1}^{s} A is exactly equivalent to I_s ⊗ A. In other words, if A ∈ ℝ^{m×n}, then ⊕_{k=1}^{s} A is simply a matrix of size sm×sn that has s ‘replicas’ of A along the main diagonal (and zeros everywhere else).

    • 9—We use the expression mod(a,b) as a shorthand for ‘a modulo b’. Also, GCD(a,b) is the ‘greatest common divisor of a and b’.

    • 10—Finally, if c is a real number, the smallest integer greater than or equal to c (the ‘ceiling’ of c) is denoted by ┌c┐. On the other hand, └c┘ denotes the largest integer smaller than or equal to c (i.e., └c┘ is the ‘floor’ of c).
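For concreteness, a small (hypothetical) Matlab helper that reverses the encoding of item 7, i.e., builds a binary matrix from its row-index code, could read:

function A = binary_from_code(pi_rows, m)
% Build a tall/square binary matrix with a sole "1" per column, placed at
% the row-indices given by the code pi_rows (the encoding of item 7).
n = numel(pi_rows);
A = zeros(m, n);
A(sub2ind([m, n], pi_rows(:).', 1:n)) = 1;
end
% Example: binary_from_code([2 3 1], 4) reproduces the matrix A encoded above.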





1. A HR Subspace Based on a LR Dictionary

Laying the foundation for our proposed structured subspace-based framework begins with asking (and answering) the following question: suppose we have available a LR dictionary that spans the PPCs of the sought after HR image, how do we construct the ‘basis matrix’ of the subspace of all HR images whose PPCs are spanned by said dictionary?


The answer to that question lies in the basic fact that, instead of simply interlacing its PPCs, a HR image U ∈ ℝ^{d×d} can alternatively be constructed from its PPCs by adding up zero-filled and shifted versions of the PPCs. Said differently, the answer to that question begins by analytically expressing the HR image in terms of its PPCs.


Specifically, let {f_i}_{i=1}^{p²} ⊂ ℝ^{rq×rq} denote the set of all p² PPPCs of the HR image (corresponding to the primary decimation factor p); then












u = Σ_{i=1}^{p²} S_{m_p(i)} (Z_p f_i Z_p*) S_{n_p(i)}* = Σ_{i=1}^{p²} (S_{m_p(i)} Z_p) f_i (Z_p* S_{n_p(i)}*),   (3)







where Z_p ∈ ℝ^{d×rq} zero-fills the columns by a factor of p (zero-filling ‘upsamples’ a vector by inserting p−1 zeros between any two elements, and p−1 zeros after the last element), post-multiplying with Z_p* zero-fills the rows, S_{m_p(i)} ∈ ℝ^{d×d} circularly shifts by m_p(i) rows down, and post-multiplying by S_{n_p(i)}* ∈ ℝ^{d×d} circularly shifts by n_p(i) columns to the right. Both S_{m_p(i)} and S_{n_p(i)} are therefore (cyclic) permutation matrices, where m_p(i) and n_p(i) are in correspondence with the location of the i-th PPPC f_i on the HR grid, and are given by












m_p(i) = ⌈i/p⌉ − 1,   n_p(i) = mod(i−1, p),   (4)







Using the fact that zero-filling matrices and circular shifting matrices are binary matrices with the property that each column contains a sole “1”, the row-indices of the “1” elements give us a full description of such matrices. In particular, the expression





Π_d^{Z_p}(k) = 1 + (k−1)p,   k = 1, 2, . . . , rq,   (5)


which encodes the zero-filling matrix Z_p, gives us all we need to know to construct Z_p. Specifically, equation (5) literally says that Z_p is of size d×rq, and that the k-th column has a “1” located at row Π_d^{Z_p}(k).
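As a minimal sketch (assuming r, p, q and d = rpq are already defined), Z_p can be built in Matlab directly from code (5):

% Build the zero-filling matrix Zp from its code (5): the k-th column has
% its sole "1" at row 1+(k-1)*p.
Zp = zeros(d, r*q);
for k = 1:r*q
    Zp(1 + (k-1)*p, k) = 1;
end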


Similarly, the shifting matrices S_{m_p(i)} and S_{n_p(i)} can be encoded as follows (without using the subscript d, since S_{m_p(i)} and S_{n_p(i)} are square matrices)












Π^{S_{m_p(i)}}(k) = 1 + mod(k + m_p(i) − 1, d),   k = 1, 2, . . . , d.   (6)

Π^{S_{n_p(i)}}(k) = 1 + mod(k + n_p(i) − 1, d),   k = 1, 2, . . . , d.   (7)







Using the fact that the matrix equation B = CAD* can be reshaped into the vector equation vec(B) = (D⊗C)vec(A), we rewrite (3) to express u ∈ ℝ^{d²} in terms of {f_i}_{i=1}^{p²} ⊂ ℝ^{r²q²} as follows.










u = Σ_{i=1}^{p²} ((S_{n_p(i)} Z_p) ⊗ (S_{m_p(i)} Z_p)) f_i.   (8)







By the mixed-product property of the Kronecker product, we have





(S_{n_p(i)} Z_p) ⊗ (S_{m_p(i)} Z_p) = (S_{n_p(i)} ⊗ S_{m_p(i)})(Z_p ⊗ Z_p).   (9)


Let




S_{p²}^i ≜ S_{n_p(i)} ⊗ S_{m_p(i)},   (10)


and





Z_{p²} ≜ Z_p ⊗ Z_p.   (11)


S_{p²}^i ∈ ℝ^{d²×d²} therefore represents 2D (circular) shifting, by m_p(i) rows down and by n_p(i) columns to the right, and Z_{p²} ∈ ℝ^{d²×r²q²} represents 2D zero-filling by a factor of p. Note that S_{p²}^i is a permutation matrix, since the Kronecker product of two permutation matrices (S_{m_p(i)} and S_{n_p(i)}) is a permutation matrix.


Using (8)-(11), we have










u = Σ_{i=1}^{p²} S_{p²}^i Z_{p²} f_i.   (12)







Note that (12) is the vector equation form of the matrix equation (3).
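As a quick numerical sanity check of the vectorization identity used above (illustrative only; the matrices are arbitrary):

% Verify vec(C*A*D') == kron(D, C)*vec(A) on random matrices.
C = randn(4,3); A = randn(3,5); D = randn(6,5);
disp(norm(reshape(C*A*D', [], 1) - kron(D, C)*A(:)));   % ~0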


Now, assume we have available a LR (primary) dictionary Ψ ∈ ℝ^{r²q²×M}, with M linearly independent atoms as its columns, and that Ψ spans the set of PPPCs {f_i}_{i=1}^{p²}, and let {α_i}_{i=1}^{p²} ⊂ ℝ^M denote the corresponding representations in terms of Ψ, i.e.,





f_i = Ψα_i for i = 1, 2, . . . , p².   (13)


Equation (12) can therefore be rewritten as











u = Σ_{i=1}^{p²} S_{p²}^i Z_{p²} Ψ α_i.   (14)







Rewriting (14) in matrix-vector form, we get










u = [S_{p²}^1 Z_{p²} Ψ   S_{p²}^2 Z_{p²} Ψ   ⋯   S_{p²}^{p²} Z_{p²} Ψ] [α_1; α_2; ⋮ ; α_{p²}].   (15)







Or more concisely,











u = Vα,   (16)

with

V ≜ [S_{p²}^1 Z_{p²} Ψ   S_{p²}^2 Z_{p²} Ψ   ⋯   S_{p²}^{p²} Z_{p²} Ψ],   (17)

and

α ≜ [α_1; α_2; ⋮ ; α_{p²}].   (18)







Thus, the matrix V ∈ ℝ^{d²×p²M} is the answer to the question we posed at the beginning of this section. In particular, if we pick any vector ά ∈ ℝ^{p²M}, then ú = Vά is a HR image with p² PPPCs that correspond to p² linear combinations {ά_i}_{i=1}^{p²} of the M atoms of Ψ, where the coefficients of the linear combinations are stored in ά.


In other words, if we let 𝒱 denote the subspace of all HR images of dimension d² whose PPPCs are in span(Ψ), then






𝒱 = span(V).   (19)


Equation (17) gives us two options as to how to construct V. The first (needlessly computationally expensive) option is to construct the matrices {S_{p²}^i}_{i=1}^{p²} and Z_{p²} (using equations (4), (6), (7), (10) and equations (5), (11), respectively) and then perform p² multiplications with Ψ.


The second option is to simply perform the operations of zero-filling and shifting on the atoms of Ψ, without actually using any matrix operators. In other words, V can be constructed from Ψ without a single floating-point operation. For example, the following Matlab code constructs V from Ψ without any calculations.












TABLE 1

Matlab code to construct V from Ψ

% input: Psi (and r,p,q,M)
d = r*p*q;
Psi_vec2mat = reshape(Psi,r*q,r*q,M);             % 2D form of each atom
V = zeros(d*d,p*p*M);
zrs = zeros(d,d,M);
c = 0;
for i = 1:p
  for j = 1:p
    c = c + 1;
    atomZshift = zrs;
    atomZshift(i:p:end,j:p:end,:) = Psi_vec2mat;  % zero-fill and shift every atom
    V(:,(c-1)*M+1:c*M) = reshape(atomZshift,d*d,M);
  end
end
% output: V
However, besides allowing for a formal description of V, the matrices {S_{p²}^i}_{i=1}^{p²} and Z_{p²} pave the way for useful insights. To elaborate, note first that V can be factored as the product of two matrices as follows.






V = [S_{p²}^1 Z_{p²}   S_{p²}^2 Z_{p²}   ⋯   S_{p²}^{p²} Z_{p²}] (⊕_{i=1}^{p²} Ψ).   (20)


If we define






P ≜ [S_{p²}^1 Z_{p²}   S_{p²}^2 Z_{p²}   ⋯   S_{p²}^{p²} Z_{p²}],   (21)

and

V_Ψ ≜ ⊕_{i=1}^{p²} Ψ,   (22)


then





V = P V_Ψ.   (23)


Now, instead of wastefully computing the Kronecker product in (11), it can be verified (using code (5) and equation (11)) that Z_{p²} is a binary matrix with the following code














Π_{d²}^{Z_{p²}}(k) = 1 + p((⌈k/(rq)⌉ − 1)d + mod(k−1, rq)),   k = 1, 2, . . . , r²q².   (24)
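As an illustrative check of code (24) against definition (11), for small sizes (assuming Zp was built as in the sketch after equation (5)):

% Column k of Zp2 = kron(Zp, Zp) has its sole "1" at the row predicted by (24).
Zp2 = kron(Zp, Zp);
for k = 1:(r*q)^2
    row = 1 + p*((ceil(k/(r*q)) - 1)*d + mod(k-1, r*q));
    assert(Zp2(row, k) == 1);
end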







Similarly, the row-indices encoding the i-th (permutation) matrix S_{p²}^i are

Π^{S_{p²}^i}(k) = 1 + mod(mod(n_p(i)d + m_p(i) + k − 1, d²), d) + mod(d(⌊(k−1)/d⌋ + n_p(i)), d²),   k = 1, 2, . . . , d².   (25)







Now, let's turn our attention to the matrix P ∈ ℝ^{d²×d²} defined in (21). P consists of p² blocks, {S_{p²}^i Z_{p²}}_{i=1}^{p²}, and can be encoded given (24) and (25). In particular, knowing the fact that a zero-filling (or ‘upsampling’) matrix acts as a ‘decimation’ matrix when we post-multiply with it, all that post-multiplying S_{p²}^i with Z_{p²} does is simply knock out (p²−1)r²q² columns of S_{p²}^i. Said differently, Z_{p²}, conveniently represented by its code (24), tells us which (of the remaining r²q²) columns of S_{p²}^i are stored as the i-th block in P. Using all these facts together, one can deduce the following encoding of P.














Π^P(k) = 1 + mod(mod(n_p(ī)d + m_p(ī) + k̄ − 1, d²), d) + mod(d(⌊(k̄−1)/d⌋ + n_p(ī)), d²),

k̄ = mod(k̂ − 1, d²) + 1,   ī = ⌈k̂/d²⌉,

k̂ = 1 + p((⌈k̿/(rq)⌉ − 1)d + mod(k̿ − 1, rq)) + (⌈k/(r²q²)⌉ − 1)d²,

k̿ = mod(k − 1, r²q²) + 1,   k = 1, 2, . . . , d²,   (26)







where m_p and n_p are defined in (4).


Equations (22), (23) and (26) offer an alternative, formal and efficient description of how to construct V (in lieu of the computationally expensive option of using equations (4)-(7), (10), (11)). In particular, note that V_Ψ (defined in (22)) is just a matrix that contains p² replicas of Ψ on its diagonal (no calculations are required to construct V_Ψ). Additionally, it can be proven that P is a permutation matrix, and therefore V can be constructed simply by shuffling the rows of V_Ψ according to Π^P(k).
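The following minimal Matlab sketch (small sizes only; all variable names are ours) assembles P from its blocks per (4), (5), (10), (11) and (21), and checks numerically that P*P = I:

% Assemble P = [S1*Zp2, S2*Zp2, ..., Sp2*Zp2] and verify it is a permutation.
r = 1; p = 2; q = p + 1; d = r*p*q;
Zp = zeros(d, r*q); Zp(1:p:end, :) = eye(r*q);    % zero-filling matrix, code (5)
Zp2 = kron(Zp, Zp);                               % 2D zero-filling, eq. (11)
P = [];
for i = 1:p^2
    mp = ceil(i/p) - 1; np = mod(i-1, p);         % eq. (4)
    Sm = circshift(eye(d), mp);                   % circular shift by mp rows
    Sn = circshift(eye(d), np);                   % circular shift by np rows
    P = [P, kron(Sn, Sm) * Zp2];                  % eqs. (10), (21)
end
disp(norm(P'*P - eye(d^2)));                      % ~0, so P is a permutation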


By definition, each column and each row in a permutation matrix must have a single “1” element. In other words, if one can verify that the row-indices Π^P(k) are all unique (no repetitions), then P is a permutation matrix. An easier approach to show that P is a permutation matrix is to use the fact that a matrix A is a permutation matrix iff A is square, binary and orthogonal (A*A=I). Now, since P is square and binary, proving that P*P=I proves that P is a permutation matrix. In other words, in light of the block structure of P, we just need to show that












(S_{p²}^i Z_{p²})* S_{p²}^j Z_{p²} = { I, i = j;  O, i ≠ j }.   (27)







The first part is trivial. In particular, S_{p²}^i is a permutation matrix, and therefore its columns are orthonormal. Obviously, deleting columns (by post-multiplying with Z_{p²}) does not change the fact that the remaining columns are still orthonormal, i.e. (S_{p²}^i Z_{p²})* S_{p²}^i Z_{p²} = I.


As to the second part, recall that the function of zero-filling and shifting is to perform interlacing of PPCs using the sum in (12), which means that the non-zero elements of zero-filled and shifted PPCs cannot overlap. This implies that












(S_{p²}^i Z_{p²} f_i) ⊙ (S_{p²}^j Z_{p²} f_j) = 0̄  ⟹  (S_{p²}^i Z_{p²} f_i)* S_{p²}^j Z_{p²} f_j = 0,   (28)







i.e. f_i*(Z_{p²}* S_{p²}^{i*} S_{p²}^j Z_{p²}) f_j = 0. Now consider the special case where f_i = f_j = 1̄ for some i ≠ j, where 1̄ is an all-one vector. In this case, we have 1̄*(Z_{p²}* S_{p²}^{i*} S_{p²}^j Z_{p²}) 1̄ = 0, i.e. the sum of elements of the matrix T = Z_{p²}* S_{p²}^{i*} S_{p²}^j Z_{p²} must be zero, so what does that tell us about T? We know that the product of two permutation matrices is a permutation matrix, and therefore S_{p²}^{i*} S_{p²}^j is a permutation matrix. Also, post-multiplying by Z_{p²} and pre-multiplying by Z_{p²}* only deletes columns and rows, respectively. In particular, the product (S_{p²}^{i*} S_{p²}^j) Z_{p²} removes (p²−1)r²q² columns of the permutation matrix S_{p²}^{i*} S_{p²}^j, which means that (S_{p²}^{i*} S_{p²}^j) Z_{p²} has only r²q² binary rows, each containing a single “1”, with the remaining (p²−1)r²q² rows being entirely zero-row vectors. Now, if we delete (p²−1)r²q² rows of S_{p²}^{i*} S_{p²}^j Z_{p²} by pre-multiplying with Z_{p²}*, we get T; thus T can be either a binary matrix or an all-zero matrix, depending on which rows are deleted from S_{p²}^{i*} S_{p²}^j Z_{p²} to get T. However, since the sum of all elements of T must be zero, T cannot be a binary matrix, which leaves us with only one possibility: Z_{p²}* S_{p²}^{i*} S_{p²}^j Z_{p²} = O.


2. The Intersection Subspace of a Pair of HR Subspaces Based on a Pair of LR Dictionaries

Let {g_i}_{i=1}^{q²} ⊂ ℝ^{r²p²} denote the set of all q² SPPCs of the HR image u (corresponding to the secondary decimation factor q), and assume we have available a (secondary) LR dictionary Φ ∈ ℝ^{r²p²×N}, with N linearly independent atoms as its columns, and that Φ spans the set of SPPCs, i.e.





g_i = Φβ_i for i = 1, 2, . . . , q²,   (29)


where {β_i}_{i=1}^{q²} ⊂ ℝ^N denote the corresponding representations of {g_i}_{i=1}^{q²} in terms of Φ.


Also, let 𝒲 denote the subspace of all HR images of dimension d² whose SPPCs are in span(Φ), and let W ∈ ℝ^{d²×q²N} be the basis matrix of 𝒲, i.e.






𝒲 = span(W).   (30)


Following the analysis presented in the previous section, we can construct W from Φ, simply by exchanging p and q. In particular, the following equations define W.





W = Q W_Φ,   (31)


where





W_Φ = ⊕_{i=1}^{q²} Φ,   (32)


and Q ∈ ℝ^{d²×d²} is a permutation matrix with row-indices given by

Π^Q(k) = 1 + mod(mod(n_q(ī)d + m_q(ī) + k̄ − 1, d²), d) + mod(d(⌊(k̄−1)/d⌋ + n_q(ī)), d²),

k̄ = mod(k̂ − 1, d²) + 1,   ī = ⌈k̂/d²⌉,

k̂ = 1 + q((⌈k̿/(rp)⌉ − 1)d + mod(k̿ − 1, rp)) + (⌈k/(r²p²)⌉ − 1)d²,

k̿ = mod(k − 1, r²p²) + 1,   k = 1, 2, . . . , d²,   (33)

where

m_q(ī) = ⌈ī/q⌉ − 1,   n_q(ī) = mod(ī − 1, q).   (34)







Now, similarly to (16), the HR image can be expressed as










u = Wβ,   (35)

where

β ≜ [β_1; β_2; ⋮ ; β_{q²}].   (36)







Now we are ready to ask the following question: could it be possible that V and W are all we need to find u? If the answer is yes, then, effectively, the pair of LR dictionaries Ψ and Φ are all that is required to find the HR image.


To answer that question, we begin by first combining equations (16) and (35) into one equation





Vα=Wβ,   (37)


which we rewrite as











[V  W] [α; −β] = 0̄.   (38)







Equation (38) suggests that we need to study the nullspace of the augmented matrix [V W] ∈ ℝ^{d²×(p²M+q²N)}. Before we explain, let 𝒵 denote the nullspace of [V W], and let Z ∈ ℝ^{(p²M+q²N)×z} be its basis, where z ≜ dim(𝒵), i.e.





Z ≜ null([V W]),   (39)


and






𝒵 = span(Z),   (40)


and assume that the sought after HR image is large enough. Specifically, assume






d² = r²p²q² ≥ (p²M + q²N).   (41)


Now, define






𝒰 ≜ 𝒱 ∩ 𝒲,   (42)


and let U denote the basis for the intersection subspace 𝒰, i.e.






𝒰 = span(U).   (43)


Since u ∈ 𝒱 and u ∈ 𝒲, then u ∈ 𝒰, which means that if dim(𝒰) = 1, then finding U is tantamount to finding (a scaled version of) u. However, thus far, all we know about the dimensionality of the intersection subspace is





1 ≤ dim(𝒰) ≤ min(p²M, q²N),   (44)


where the lower bound is due to the fact that u ∈ 𝒰, while the upper bound is due to (42).


To gain additional insight, we need to turn our attention to the nullspace. In particular, let Z be partitioned into a column of two matrices, Z_V ∈ ℝ^{p²M×z} and Z_W ∈ ℝ^{q²N×z}, i.e.










Z = [Z_V; Z_W],   (45)







then





U = V Z_V,   (46)


Equivalently,





U = W Z_W.   (47)


Therefore, if V has full rank p²M, or, equivalently, W has full rank q²N, then












z ≜ dim(𝒵) = rank(Z) = rank(Z_V) = rank(Z_W) = rank(U) = dim(𝒰).   (48)







To verify that V is full rank, recall equations (22), (23) and that P*P=I, hence





V*V = ⊕_{i=1}^{p²} (Ψ*Ψ).   (49)


Therefore, rank(V) = rank(V*V) = p²·rank(Ψ*Ψ) = p²·rank(Ψ) = p²M. Similarly,





W*W = ⊕_{i=1}^{q²} (Φ*Φ),   (50)


and thus rank(W) = q²N.


Now equation (48) says that the dimensionality of the intersection subspace is equal to the nullity of the augmented matrix [V W]. Of course, the nullity of a matrix is simply the multiplicity of its zero singular values, so the question (which we posed before equation (37)) boils down to: what are the necessary and sufficient conditions for [V W] to have no more than 1 zero singular value?


The first three necessary conditions are obvious and are already satisfied. Namely, we obviously need the augmented matrix to have more rows than columns (41). Also, the columns of V must be linearly independent (rank(V) = p²M). Similarly, the columns of W must be linearly independent as well (rank(W) = q²N).


The fourth necessary condition is also obvious, and it pertains to the span of each of the pair of LR dictionaries Ψ and Φ. In particular, let {f̃_i}_{i=1}^{p²} and {g̃_i}_{i=1}^{q²} denote the two sets of PPPCs and SPPCs, respectively, of a different HR image ũ ≠ cu, c ∈ ℝ (namely, ũ is a different image and is not simply a scaled version of u). If {f̃_i}_{i=1}^{p²} ∈ span(Ψ), then we know that ũ ∈ 𝒱. If, additionally, {g̃_i}_{i=1}^{q²} ∈ span(Φ), then ũ ∈ 𝒲, and therefore ũ ∈ 𝒰. But we already know that u ∈ 𝒰, and therefore, in this case, dim(𝒰) ≥ 2. In other words, the fourth necessary condition necessitates that the pair of LR dictionaries cannot, simultaneously, accommodate the PPCs of any HR image other than u.


Are there any more conditions? Ostensibly, the answer would require the daunting task of examining the very sparse structure of both V and W; particularly the interaction of both sparse structures within the augmented matrix [V W]. Instead, we derive equivalent forms of (38) to see if the associated matrices are more revealing.


We proceed with the pre-multiplication of both sides of (38) with [V W]*, to obtain










[V*V  V*W; W*V  W*W] [α; −β] = 0̄.   (51)







For convenience, and without loss of generality, we assume that atoms of a LR dictionary are orthonormal, i.e.





Ψ*Ψ=I,   (52)


and





Φ*Φ=I.   (53)


Using (49) and (52) we get






V*V=I,   (54)


Similarly, using (50) and (53),






W*W=I.   (55)


Consequently, equation (51) is rewritten as










[I  V*W; W*V  I] [α; −β] = 0̄.   (56)







Now the top part of (56) gives us the equation





α=V*Wβ,   (57)


and the bottom part reveals that





β=W*Vα.   (58)


Plugging (58) in (57), we get





(I − V*WW*V)α = 0̄.   (59)


Note that V*WW*V = (W*V)*W*V, and therefore V*WW*V is a symmetric positive semidefinite (PSD) matrix with singular values between 0 and 1 (the upper limit on the largest singular value is due to (54) and (55)). Likewise, the matrix (I − V*WW*V) is also symmetric, PSD, with singular values between 0 and 1. Moreover, since (59) is derived from (51), which in turn is derived from (38), then span(Z_V) = null(I − V*WW*V). In other words, dim(𝒵) is equal to the multiplicity of the smallest (“0”) singular value of (I − V*WW*V). Equivalently, dim(𝒰) is equal to the multiplicity of the largest (“1”) singular value of V*WW*V, which is the same as the multiplicity of the largest singular value of W*V, i.e.





dim(𝒰) = multiplicity of σ₁(W*V),   (60)


where σ1(.) denotes the largest singular value of a matrix.


So let's take a look at the structure of W*V and see if it reveals anything about the multiplicity of its singular values (particularly the largest one). Given equations (22), (23) and equations (31), (32), we can write






W*V = (⊕_{j=1}^{q²} Φ*) Q*P (⊕_{i=1}^{p²} Ψ).   (61)


If we only consider the product (⊕_{j=1}^{q²} Φ*)(⊕_{i=1}^{p²} Ψ), then it can be proven (with relative ease) that GCD(p,q)=1 is a necessary condition for the largest singular value to be distinct. Unfortunately, the presence of the permutation matrix Q*P in the middle, between (⊕_{j=1}^{q²} Φ*) and (⊕_{i=1}^{p²} Ψ), complicates the picture. Yet, the same necessary condition can still be observed to hold for σ₁(W*V) to be distinct.


For practical reasons that shall become apparent later, we are only interested in the case q=p+1, which automatically satisfies GCD(p,q)=1. Said differently, we are only interested in what the structure of W*V can reveal about the multiplicity of σ1(W*V) given the assumption (1). Before we proceed, let us first make the following definitions.


Let A_j^i ∈ ℝ^{r²×M} and B_j^i ∈ ℝ^{r²×N} denote submatrices of Ψ and Φ, respectively, such that






A_j^i = D_{q²} R_{k_q(i),k_q(j)} Ψ,   (62)


where D_{q²} ∈ ℝ^{r²×r²q²} represents 2D decimation by a factor of q, and R_{k_q(i),k_q(j)} ∈ ℝ^{r²q²×r²q²} represents 2D shifting by k_q(i) rows up and k_q(j) columns to the left, where






k_q(i) = mod(q − i, q).   (63)


Said differently, if we obtain the 2D form of each atom (column) in Ψ, decimate (by q) each (2D) atom starting at the entry located in column k_q(j)+1 and row k_q(i)+1, and then vectorize the decimated atoms and stack them next to each other, we get A_j^i.


Similarly,






B_j^i = D_{p²} R_{k_p(i),k_p(j)} Φ,   (64)


where D_{p²} ∈ ℝ^{r²×r²p²} represents 2D decimation by a factor of p, and R_{k_p(i),k_p(j)} ∈ ℝ^{r²p²×r²p²} represents 2D shifting by k_p(i) rows up and k_p(j) columns to the left, where






k_p(i) = mod(p − i, p).   (65)


Now, it can be verified that when (1) is satisfied, W*V ∈ ℝ^{q²N×p²M} is a dense q×p block-Toeplitz matrix, i.e.












W*V = [T_0      T_{−1}   T_{−2}   ⋯   T_{1−p}
       T_1      T_0      T_{−1}   ⋯   T_{2−p}
       T_2      T_1      T_0      ⋯   T_{3−p}
       ⋮                               ⋮
       T_{q−1}  T_{q−2}  T_{q−3}  ⋯   T_0],   (66)







with each block T_i ∈ ℝ^{qN×pM}, 1−p ≤ i ≤ q−1, being a q×p block-Toeplitz matrix itself,











T_i = [C_0^i      C_{−1}^i   C_{−2}^i   ⋯   C_{1−p}^i
       C_1^i      C_0^i      C_{−1}^i   ⋯   C_{2−p}^i
       C_2^i      C_1^i      C_0^i      ⋯   C_{3−p}^i
       ⋮                                     ⋮
       C_{q−1}^i  C_{q−2}^i  C_{q−3}^i  ⋯   C_0^i],   (67)







where the j-th sub-block C_j^i ∈ ℝ^{N×M}, 1−p ≤ j ≤ q−1, is given by






C_j^i = (B_j^i)* A_j^i,   (68)


and A_j^i and B_j^i are as defined in (62)-(65).


In other words, when (1) is satisfied, it can be verified that W*V is a Toeplitz-block-Toeplitz (a 2-level Toeplitz) matrix generated by the (p+q−1)² = 4p² sub-blocks {C_j^i}_{i,j=1−p}^{q−1}, as detailed by equations (62)-(68). Moreover, equations (62)-(65) and (68) tell us that all these sub-blocks are unique (i.e. C_j^i ≠ C_{j̄}^{ī} if either ī ≠ i or j̄ ≠ j).
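As a minimal sketch of (62)-(63), following the verbal recipe above (the function name and the use of a circular shift are our own choices):

function Aji = subdict_A(Psi, r, q, i, j)
% Build A_j^i per (62)-(63): reshape each atom of Psi to 2D, shift k_q(i)
% rows up and k_q(j) columns left, decimate by q, and stack the results.
M = size(Psi, 2);
kqi = mod(q - i, q); kqj = mod(q - j, q);   % eq. (63)
Aji = zeros(r^2, M);
for m = 1:M
    atom = reshape(Psi(:, m), r*q, r*q);
    atom = circshift(atom, [-kqi, -kqj]);   % 2D shift up/left
    dec  = atom(1:q:end, 1:q:end);          % 2D decimation by q
    Aji(:, m) = dec(:);
end
end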


So now, instead of wondering about other conditions we might need for the (sparse) augmented matrix of (38) to have a nullity of exactly 1, the question becomes: given that q=p+1, are the aforementioned necessary conditions all we need for the (dense) Toeplitz-block-Toeplitz matrix W*V to have a distinct largest singular value?


The relevant literature is rich with analysis and properties pertaining to Toeplitz matrices. However, to the best of our knowledge, when it comes to the question of the multiplicity of singular values of Toeplitz-structured matrices, the answer can only be given in the context of asymptotic distributions of singular values (and under assumptions that do not apply in our case). Therefore, we state the following empirically verifiable result.


Empirical Result 1: Given that the basic assumption (1) is satisfied, i.e. q=p+1, the following conditions are both necessary and sufficient for σ1 (W*V) to be distinct.







    • 1—r ≥ ⌈√(p²M + q²N)/(pq)⌉. (This is a reformulation of (41) in light of the fact that r must be an integer.)

    • 2—rank(Ψ) = M (the atoms of Ψ must be linearly independent for the columns of V to be linearly independent).

    • 3—rank(Φ) = N (the atoms of Φ must be linearly independent for the columns of W to be linearly independent).

    • 4—At most, the pair (Ψ, Φ) simultaneously accommodates the PPCs of only one image.





Note that if the pair (Ψ, Φ) simultaneously accommodates the PPCs of exactly one image, then σ₁(W*V) = 1 and, equivalently, dim(𝒰) = 1.


3. The Need for an Arbitrator

In the previous two sections, we have seen that the intersection subspace of two HR subspaces based on two (different-resolution) LR dictionaries is at most 1-dimensional (if the four conditions listed above are satisfied). Relying on this knowledge, we now turn our attention to finding the HR image.


As is well known, if dim(𝒰) = 1, the easiest (and most computationally efficient) way to find a solution to (37) is to simply compute the right singular vector associated with the smallest (“0”) singular value of [V W]. Similarly, when (52) and (53) are satisfied, the solution to the alternative formulation (59) is the singular vector of (I − V*WW*V) associated with its smallest (“0”) singular value, which is also the right singular vector associated with the largest (“1”) singular value of W*V.
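In Matlab terms, a minimal sketch of this step (assuming V and W have already been constructed) reads:

% The solution of (37) is the right singular vector of [V W] associated
% with the smallest singular value; recall x = [alpha; -beta] in (38).
[~, ~, X] = svd([V, W]);
x = X(:, end);                     % right singular vector, smallest sigma
alpha = x(1:size(V,2));            % representation in terms of V
beta  = -x(size(V,2)+1:end);       % representation in terms of W
u_hat = V * alpha;                 % HR image estimate (up to scale)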


In practice, however, the pair of LR dictionaries are ‘learned’ from training (example) LR images, and thus they would never simultaneously exactly accommodate the PPCs of the sought-after HR image u (or any other image, for that matter). In other words, the intersection subspace is always trivial (it contains only the zero image) when the conditions of Empirical Result 1 are satisfied, since, in practice, the pair of LR dictionaries do not simultaneously exactly accommodate the PPCs of any image.


Nevertheless, since σ₁(W*V) < 1 is distinct, the optimal (in the Frobenius norm sense) approximate solution to (59) remains the right singular vector of W*V associated with σ₁(W*V). Alternatively, since the smallest singular value of [V W] is distinct iff σ₁(W*V) is distinct, the optimal solution of (37) remains the right singular vector of [V W] associated with its smallest singular value.


Of course, our ultimate goal of solving (37) (or its equivalent formulation (59)) is to estimate u. However, an optimal solution to (37) does ‘not’ necessarily guarantee the optimality of the estimation of u. To explain with an (extreme) example, let u_1 and u_2 be two images such that












‖u − u_1‖/‖u_1‖ ≪ ‖u − u_2‖/‖u_2‖,




i.e. u_1 is a much better approximation of u than is u_2, and suppose that the available pair of LR dictionaries simultaneously accommodate the PPCs of u_2 exactly, while the same pair of dictionaries only approximate the PPCs of u_1 (albeit very well). In this case, the exact solution of (37) will lead to the inferior approximation u_2.


Again, in practice, the pair of (learned) dictionaries never simultaneously accommodate the PPCs of any image, but the previous example helps to highlight the fact that solving (37) can be completely blind to whatever constitutes an optimal estimation of u. Put differently, if we are to rely entirely on (37) (to estimate the representation of the HR image in terms of the HR bases), then the learning process of the pair of LR dictionaries must be carefully designed to avoid picking inferior approximations of our sought-after HR image.


Before we suggest an alternative to requiring a painstaking learning process of the pair of LR dictionaries, we first note that the solution to the following optimization problem











min_{α,β} ‖Vα − Wβ‖²   s.t. ‖α‖² = 1   (69)







is the same optimal solution to (59), i.e. the solution to (69) is also the right singular vector of W*V associated with σ1(W*V).


Now consider the following modified optimization problem












min_{α,β} (‖Vα − Wβ‖² + μ‖(1/p²) Σ_{i=1}^{p²} Ψα_i − f̿‖²)   s.t. ‖α‖² = c > 0,   (70)







where f̿ ≜ (1/p²) Σ_{i=1}^{p²} f_i, the mean of the PPPCs of the HR image, is assumed to be known (for now), and μ is a control parameter. By adding the term












‖(1/p²) Σ_{i=1}^{p²} Ψα_i − f̿‖²,

we require the solution to be such that the mean of the estimated PPPCs is close to f̿, which has the effect of solution ‘arbitration’ in favor of u.


Of course, f̿ is unknown, but what we do have in practice is the LR image whose HR version we seek, and which we denote by x. Now, since the (unknown) PPPCs of the (unknown) HR image would be highly correlated with x, their unknown mean f̿ would also be highly correlated with x. Moreover, if x is well approximated by Ψ, i.e. ‖x − Ψ(Ψ*x)‖ ≈ 0, then we can use the following expression for arbitration in favor of u,













$$\left\| \frac{1}{p^2}\sum_{i=1}^{p^2} \Psi\underline{\alpha}_i - \Psi(\Psi^*\underline{x}) \right\|^2 = \left\| \Psi\!\left( \frac{1}{p^2}\sum_{i=1}^{p^2} \underline{\alpha}_i - \Psi^*\underline{x} \right) \right\|^2 = \left\| \frac{1}{p^2}\sum_{i=1}^{p^2} \underline{\alpha}_i - \Psi^*\underline{x} \right\|^2 = \left\| \mathcal{J}\underline{\alpha} - \Psi^*\underline{x} \right\|^2,$$
where $\mathcal{J} \in \mathbb{R}^{M \times p^2 M}$ is a $1 \times p^2$ block matrix, scaled by the factor $1/p^2$, with all $p^2$ blocks being equal to the identity matrix (of dimension M),










$$\mathcal{J} = \frac{1}{p^2}\, \underbrace{\begin{bmatrix} I_M & I_M & \cdots & I_M \end{bmatrix}}_{p^2 \text{ blocks}}, \tag{71}$$
and therefore, $\mathcal{J}\underline{\alpha} = \frac{1}{p^2}\sum_{i=1}^{p^2} \underline{\alpha}_i$.
We are now ready to write a practical version of optimization problem (70)












$$\min_{\underline{\alpha},\,\underline{\beta}} \left( \left\|V\underline{\alpha} - W\underline{\beta}\right\|^2 + \mu \left\| \mathcal{J}\underline{\alpha} - \Psi^*\underline{x} \right\|^2 \right) \quad \text{s.t.} \quad \|\underline{\alpha}\|^2 = c > 0, \tag{72}$$
which has the closed-form solution (see the Equation Solution Section below)





$$\hat{\underline{\alpha}} = \mu\left[ \mu\mathcal{J}^*\mathcal{J} + (1-\sigma)I - V^*WW^*V \right]^{-1}\left( \mathcal{J}^*\Psi^*\underline{x} \right) \tag{73}$$


where σ is the smallest singular value of the matrix











$$\begin{bmatrix} \mu\mathcal{J}^*\mathcal{J} + I - V^*WW^*V & \mu\,\mathcal{J}^*\Psi^*\underline{x} \\ \mu\left(\mathcal{J}^*\Psi^*\underline{x}\right)^* & \mu\left\|\Psi^*\underline{x}\right\|^2 \end{bmatrix} \in \mathbb{R}^{(p^2M+1)\times(p^2M+1)}, \tag{74}$$
and $\mathcal{J}^*\Psi^*\underline{x} \in \mathbb{R}^{p^2M}$ is simply $p^2$ replicas of $\Psi^*\underline{x} \in \mathbb{R}^{M}$, scaled by $1/p^2$.
After estimating the representation of the HR image in terms of the HR basis V, all that is left to get an estimate of the HR image is to use equation (16), i.e. û̲ = Vα̲̂. However, recall (13) and (18) and partition α̲̂ into p² vectors, each of length M, i.e.








$$\hat{\underline{\alpha}} = \begin{bmatrix} \hat{\underline{\alpha}}_1 \\ \hat{\underline{\alpha}}_2 \\ \vdots \\ \hat{\underline{\alpha}}_{p^2} \end{bmatrix}.$$
Therefore,






$$\hat{\underline{u}} = V\hat{\underline{\alpha}} = \mathrm{interlace}\,\{\Psi\hat{\underline{\alpha}}_i\}_{i=1}^{p^2}.$$


Consequently, the construction of neither V nor W is required here. Only the product W*V is needed, which can be very efficiently computed using equations (62)-(68).
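To make the recipe concrete, the following Matlab fragment sketches the whole per-patch computation, i.e. (71), (74), (73), and the final interlacing. It is a minimal sketch under stated assumptions: Psi is the orthonormal primary dictionary (r²q²-by-M), WV holds the product W*V (computed via (62)-(68)), x is the vectorized reference LR patch, and mu, p, M are scalars; the column-major ordering of the p² PPCs and the bottom-right entry of the bordered matrix follow our reconstruction of (74).

J = kron(ones(1, p^2), eye(M)) / p^2;       % the 1-by-p^2 block matrix of (71)
c = Psi' * x;                               % partial measurements Psi*x
b = J' * c;                                 % J*Psi*x: p^2 scaled replicas of c
A = mu*(J'*J) + eye(p^2*M) - WV'*WV;        % quadratic form underlying (72)
B = [A, mu*b; mu*b', mu*(c'*c)];            % bordered matrix of (74)
sigma = min(svd(B));                        % its smallest singular value
alpha = mu * ((A - sigma*eye(p^2*M)) \ b);  % closed-form solution (73)
rq = sqrt(size(Psi, 1));                    % each PPC is rq-by-rq
U = zeros(p*rq);                            % the estimated HR patch (d = p*r*q)
for i = 1:p^2                               % interlace the PPCs Psi*alpha_i
    [i1, i2] = ind2sub([p p], i);           % assumed PPC ordering (illustrative)
    U(i1:p:end, i2:p:end) = reshape(Psi * alpha((i-1)*M+1:i*M), rq, rq);
end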


It must be re-emphasized, however, that problem formulation (72) is only one possible answer to the following question: how do we find a solution to (37) that favors an optimal approximation of our sought-after HR image? In other words, the HR basis matrices V and W only outline a structured subspace framework within which we have complete freedom to seek an estimate of u̲. Indeed, one can come up with a different objective function, combined with a different set of constraints (or lack thereof), in lieu of (72), but whatever optimization problem one chooses to solve, the matrices V and W will always be in the picture, one way or another.


In that sense, the structured subspace framework (37) can be seen as a conduit through which the pair of ‘LR’ dictionaries (Ψ,Φ) can generate (an estimate of) our ‘HR’ image. Moreover, we shall see next that the same framework provides the foundation for training of said ‘pair’ of LR dictionaries from a ‘single’ LR sequence.


Additionally, it goes without saying that any permutation of equation (37) is exactly equivalent to equation (37). Said differently, since a linear system of equations (e.g. (37)) remains exactly the same system if one simply changes the ordering of its equations, any reformulation of V and W that amounts to permuting their rows does not change the solution space for which equation (37) was set up.


4. Learning Pairs of LR Dictionaries from a Single LR Sequence: The Baseline

Thus far, we have introduced the (unorthodox) notion that a pair of LR dictionaries can indeed be all we need for estimating a HR image, using the structured subspace model. Moreover, we proposed infusing an element of arbitration into the solution framework, to avoid practical difficulties that might arise when learning a pair of LR dictionaries. In this section we show how the same subspace structure can additionally be used for training pairs of LR dictionaries in the context of estimating HR images from a single LR sequence of images.


Before we proceed, we would like to highlight a notational change. Since we estimate the k-th HR frame, u̲^k, in a sequence of K frames 𝕌 = [u̲^1 u̲^2 . . . u̲^K], by estimating patches of it (patch-based processing), we will add the superscript 'ℓ_s' to our symbols to indicate that they are associated with estimating the ℓ_s-th HR patch, u̲^{k,ℓ_s} ∈ ℝ^{d²}, in the k-th frame u̲^k, where d = rpq (recall (2)). Specifically, since there is more than one way to partition the same frame into patches, we end up with multiple estimates of the same frame by computing all sets of patches corresponding to all possible partitionings. Letting S denote the number of ways a frame can be partitioned, with L patches per partitioning, we use the notation












$$\hat{\bar{\underline{u}}}^{k} \equiv \mathcal{P}\left( \left\{ \left\{ \hat{\underline{u}}^{k,\ell_s} \right\}_{\ell_s=1}^{L} \right\}_{s=1}^{S} \right) \tag{75}$$
to indicate that the k-th HR frame is estimated by combining all SL estimated patches $\{\{\hat{\underline{u}}^{k,\ell_s}\}_{\ell_s=1}^{L}\}_{s=1}^{S}$, using some combination process 𝒫.


Now, let ℒ denote a learning function (or procedure) that learns a LR dictionary (with orthonormal atoms), using a LR training sample and a training set of LR image patches. In particular, if x^k denotes the k-th LR frame in a sequence of LR frames 𝕏 = [x̲^1 x̲^2 . . . x̲^K] that corresponds to (2D) decimation of the HR sequence 𝕌 by a factor of p, i.e. 𝕏 = 𝒟_{p²}𝕌, where 𝒟_{p²} represents 2D decimation by a factor of p, then the LR patch x̲^{k,ℓ_s} ∈ ℝ^{r²q²} (corresponding to the HR patch u̲^{k,ℓ_s}) is used as the training sample, around which a local set of M_tr LR patches 𝕏^{k,ℓ_s} ∈ ℝ^{r²q²×M_tr} is extracted from 𝕏. Specifically, we use the notation






$$\Psi^{k,\ell_s} \equiv \mathcal{L}\left( \mathbb{X}^{k,\ell_s},\, \underline{x}^{k,\ell_s} \right) \tag{76}$$


to signify that the LR dictionary Ψ^{k,ℓ_s} ∈ ℝ^{r²q²×M} (expected to approximate the PPPCs of u̲^{k,ℓ_s}) is learned from the training set 𝕏^{k,ℓ_s} and the training sample x̲^{k,ℓ_s}, using the learning function ℒ. Also, the expression






$$\mathbb{X}^{k,\ell_s} \equiv \mathcal{T}\left( \mathbb{X},\, \underline{x}^{k,\ell_s} \right) \tag{77}$$


indicates that the training set of patches is extracted from 𝕏 around x̲^{k,ℓ_s} using some strategy 𝒯.


Similarly, the same learning function ℒ can be used to learn the secondary dictionary Φ^{k,ℓ_s} ∈ ℝ^{r²p²×N} (to approximate the SPPCs of u̲^{k,ℓ_s}) if there exists a secondary sequence of LR images 𝕐 = [y̲^1 y̲^2 . . . y̲^K], such that y̲^k = 𝒟_{q²}u̲^k for k = 1,2, . . . ,K, where 𝒟_{q²} represents 2D decimation by a factor of q. Namely, Φ^{k,ℓ_s} ≡ ℒ(𝕐^{k,ℓ_s}, y̲^{k,ℓ_s}), where y̲^{k,ℓ_s} ∈ ℝ^{r²p²} is the secondary training sample (corresponding to u̲^{k,ℓ_s}), and 𝕐^{k,ℓ_s} ∈ ℝ^{r²p²×N_tr} is the secondary training set containing N_tr LR patches, i.e. 𝕐^{k,ℓ_s} ≡ 𝒯(𝕐, y̲^{k,ℓ_s}).


Following the same notational change for estimates corresponding to patches, the symbol V^{k,ℓ_s} denotes the HR basis matrix constructed from Ψ^{k,ℓ_s}, i.e. V^{k,ℓ_s} = P(⊕_{i=1}^{p²} Ψ^{k,ℓ_s}). Similarly, W^{k,ℓ_s} = Q(⊕_{i=1}^{q²} Φ^{k,ℓ_s}) is the secondary HR basis matrix. Now since we are going to use (72) to solve (37), we only need the construction of the product W^{k,ℓ_s*}V^{k,ℓ_s}, so to simplify presentation we use the following notation






$$\left(W^*V\right)^{k,\ell_s} \equiv \mathcal{C}\left( \Psi^{k,\ell_s},\, \Phi^{k,\ell_s} \right), \tag{78}$$


to indicate that the matrix product W^{k,ℓ_s*}V^{k,ℓ_s} is constructed from the learned LR dictionary pair (Ψ^{k,ℓ_s}, Φ^{k,ℓ_s}), using equations (62)-(68).


Normally, however, an imaging system would only have one sensor, whose output is 𝕏, with p being simply the desired upsampling factor. In other words, a secondary sequence 𝕐 does not exist, and, therefore, neither does Φ^{k,ℓ_s}, nor W^{k,ℓ_s}. This obstacle can nevertheless be overcome by starting the solution with an initial estimate of the HR sequence. Specifically, let






$$\mathbb{U}^{(0)} \equiv \mathcal{E}\left( \mathbb{X} \right) \tag{79}$$


denote the initial estimate of the unknown HR sequence, based on the available LR sequence 𝕏, using some estimation process ℰ; then the first estimate of the secondary sequence is given by






$$\mathbb{Y}^{(1)} \equiv \mathcal{D}_{q^2}\, \mathbb{U}^{(0)}, \tag{80}$$


and the first version of the secondary LR dictionary can be learned, Φ_(1)^{k,ℓ_s} ≡ ℒ(𝕐_(1)^{k,ℓ_s}, y̲_(1)^{k,ℓ_s}), where 𝕐_(1)^{k,ℓ_s} and y̲_(1)^{k,ℓ_s} denote the training set of LR patches, and the training sample, respectively, extracted from 𝕐^(1).


With both 𝕏 and an initial guess 𝕌^(0) of the HR sequence at hand, our structured subspace model can be used in an iterative joint estimation of the HR sequence 'and' training of secondary LR dictionaries, as follows.


For a current estimate û̲_(t)^{k,ℓ_s} of the ℓ_s-th HR patch in the k-th frame, first compute (recall (73))






$$\hat{\underline{\alpha}} = \mu\left[ \mu\mathcal{J}^*\mathcal{J} + \left(1-\sigma_{(t)}^{k,\ell_s}\right)I_{p^2M} - V^{k,\ell_s*}\, W_{(t)}^{k,\ell_s}\, W_{(t)}^{k,\ell_s*}\, V^{k,\ell_s} \right]^{-1}\left( \mathcal{J}^*\, \Psi^{k,\ell_s*}\, \underline{x}^{k,\ell_s} \right), \tag{81}$$


with σ_(t)^{k,ℓ_s} being the smallest singular value of (recall (74))










$$\begin{bmatrix} \mu\mathcal{J}^*\mathcal{J} + I - V^{k,\ell_s*}W_{(t)}^{k,\ell_s}W_{(t)}^{k,\ell_s*}V^{k,\ell_s} & \mu\,\mathcal{J}^*\Psi^{k,\ell_s*}\underline{x}^{k,\ell_s} \\ \mu\left(\mathcal{J}^*\Psi^{k,\ell_s*}\underline{x}^{k,\ell_s}\right)^* & \mu\left\|\Psi^{k,\ell_s*}\underline{x}^{k,\ell_s}\right\|^2 \end{bmatrix}, \tag{82}$$
and obtain the current estimate of the HR patch via (recall (13), (16) and (18)),






$$\hat{\underline{u}}_{(t)}^{k,\ell_s} = V^{k,\ell_s}\hat{\underline{\alpha}} = \mathrm{interlace}\,\left\{\Psi^{k,\ell_s}\hat{\underline{\alpha}}_i\right\}_{i=1}^{p^2}, \tag{83}$$


where Ψ^{k,ℓ_s} ≡ ℒ(𝕏^{k,ℓ_s}, x̲^{k,ℓ_s}), and Φ_(t)^{k,ℓ_s} ≡ ℒ(𝕐_(t)^{k,ℓ_s}, y̲_(t)^{k,ℓ_s}), with 𝕐_(t)^{k,ℓ_s} and y̲_(t)^{k,ℓ_s} being the training set, and training sample, respectively, extracted from the current secondary LR sequence 𝕐^(t) = 𝒟_{q²}𝕌^(t−1), where 𝕌^(t−1) is the previous estimate of the HR sequence.


After computing all SL current estimates of patches, $\{\{\hat{\underline{u}}_{(t)}^{k,\ell_s}\}_{\ell_s=1}^{L}\}_{s=1}^{S}$, corresponding to all S partitionings of the k-th frame, find the current estimate of the k-th HR frame, $\hat{\bar{\underline{u}}}_{(t)}^{k} \equiv \mathcal{P}(\{\{\hat{\underline{u}}_{(t)}^{k,\ell_s}\}_{\ell_s=1}^{L}\}_{s=1}^{S})$, for k = 1,2, . . . , K, to get a current estimate $\mathbb{U}^{(t)} = [\hat{\bar{\underline{u}}}_{(t)}^{1}\; \hat{\bar{\underline{u}}}_{(t)}^{2}\; \cdots\; \hat{\bar{\underline{u}}}_{(t)}^{K}]$ of the HR sequence.









TABLE 2
Pseudo code illustrating the basic solution baseline based on the proposed structured subspace framework.

Input: A sequence 𝕏 of K LR images, and the desired upsampling factor, p.
Compute the initial estimate 𝕌^(0) ≡ ℰ(𝕏) of the HR sequence.
For t = 1:T
 Obtain 𝕐^(t) = 𝒟_{q²} 𝕌^(t−1).
 For k = 1:K
  For s = 1:S
   For ℓ_s = 1:L
    Learn the primary LR dictionary:
     Extract x̲^{k,ℓ_s}.
     Extract 𝕏^{k,ℓ_s} ≡ 𝒯(𝕏, x̲^{k,ℓ_s}).
     Compute Ψ^{k,ℓ_s} ≡ ℒ(𝕏^{k,ℓ_s}, x̲^{k,ℓ_s}).
     (Note: x̲^{k,ℓ_s}, 𝕏^{k,ℓ_s}, Ψ^{k,ℓ_s} are iteration-independent.)
    Learn the secondary LR dictionary:
     Extract y̲_(t)^{k,ℓ_s}.
     Extract 𝕐_(t)^{k,ℓ_s} ≡ 𝒯(𝕐^(t), y̲_(t)^{k,ℓ_s}).
     Compute Φ_(t)^{k,ℓ_s} ≡ ℒ(𝕐_(t)^{k,ℓ_s}, y̲_(t)^{k,ℓ_s}).
    Estimate the HR image within the structured subspace framework (37). We recommend (72):
     Compute the product W_(t)^{k,ℓ_s*}V^{k,ℓ_s} from the dictionary pair (recall (78)).
     Compute 𝒥*Ψ^{k,ℓ_s*}x̲^{k,ℓ_s}.
     Compute the smallest singular value σ_(t)^{k,ℓ_s} of matrix (82).
     Compute α̲̂ via (81).
     Compute û̲_(t)^{k,ℓ_s} via (83).
   End
  End
  Compute $\hat{\bar{\underline{u}}}_{(t)}^{k} \equiv \mathcal{P}(\{\{\hat{\underline{u}}_{(t)}^{k,\ell_s}\}_{\ell_s=1}^{L}\}_{s=1}^{S})$.
 End
 Construct 𝕌^(t) from the K combined frame estimates.
End
Output: $\hat{\mathbb{U}} \triangleq \mathbb{U}^{(T)}$.









Repeat until a prescribed number of iterations, T, has been reached, to get the final estimate 𝕌̂ of the HR sequence,






$$\hat{\mathbb{U}} \triangleq \mathbb{U}^{(T)} = \left[ \hat{\bar{\underline{u}}}_{(T)}^{1}\; \hat{\bar{\underline{u}}}_{(T)}^{2}\; \cdots\; \hat{\bar{\underline{u}}}_{(T)}^{K} \right]. \tag{84}$$


Refer to Table 2 and FIG. 3 for a summary of the baseline.


At this point, it is worthwhile to note that all KSL estimates, {û̲_(t)^{k,ℓ_s}}, 1≤k≤K, 1≤s≤S, and 1≤ℓ_s≤L, needed to create the HR sequence 𝕌^(t), are computed completely independently of each other, and hence they can all be computed concurrently. Namely, if the computing hardware allows it, it is possible to parallel-process all KSL patches at once (per iteration).


Moreover, it should be noted that the proposed baseline is the most straightforward and least computationally expensive way of implementing the structured subspace framework. However, more complex baselines can still benefit from the structured subspace framework, for example by jointly estimating HR patches. In other words, the solution baseline can be devised such that the estimation of any patch is made dependent on the estimation of other patches (e.g. neighboring patches within the same frame, patches across multiple estimates of the same frame, or even patches across different frames). However, we do not believe that the improvement in results, if any, would justify the added complexity. Specifically, the real power of our solution lies in the unprecedented access it provides to very narrow training sets (that are extracted from a naturally highly correlated sequence of LR images).


5. A Working Solution

In the previous section we described a general baseline for iterative estimation of a HR sequence from a LR sequence, by learning pairs of LR dictionaries within a structured subspace framework. However, many details were (intentionally) left out. Specifically, which estimation process ℰ are we going to use to obtain an initial guess of the HR sequence? How about the learning function ℒ? Regarding those local training sets, what are the specifics of their extraction 𝒯 from a LR sequence? Which combination process 𝒫 is going to be used to piece together all SL estimated HR patches per frame? Indeed, what are the S different possible ways to partition a frame into patches?


The discussion of such details was postponed so as to help appreciate the cornerstone role of the structured subspace model, and to emphasize the level of freedom in the design of the baseline's remaining components. Indeed, as we shall see in the experiments section, we can get impressive results despite using some of the simplest outlines for ℰ, 𝒯, ℒ, and 𝒫. This, again, underscores the baseline's most valuable asset: the structured subspace model which, in effect, leverages a sequence of LR images as a very narrow training set for estimating HR images.


A. The Initial Guess (ℰ)


For an initial estimate (79) of the HR sequence from the available LR sequence, we choose custom-character to simply represent Lanczos interpolation (by a factor of p). One might try more advanced options, but besides added complexity, even advanced methods would not give appreciably better estimates compared to simple image interpolation methods (such as bicubic or Lanczos) when the LR sequence contains complex motion patterns, and the aliasing is relatively strong.


B. Partitioning a frame


In its simplest form, patch-based processing works by dividing a frame we desire to estimate into patches (sub-regions), and then applying the proposed estimation process for each individual patch, independently from other patches. By simply putting each estimated patch in its place, we get an estimate of the HR frame (i.e. an estimate of a HR frame is obtained by ‘tiling’ its estimated patches).


However, multiple estimates of the same frame can be obtained by dividing the frame into overlapping patches. To elaborate, since the size of a HR patch is d×d and d = rpq, if we use an overlap of pq pixels, vertically and/or horizontally, then a HR frame can be partitioned into patches in S = r² different ways. In particular, since the location of the 'leading' patch ℓ_s = 1 of the s-th partitioning determines the locations of the remaining patches (since patches are tiled next to each other, vertically and horizontally, per partitioning), we shall give a description of the location of the leading HR patch corresponding to partitioning s, as follows [2].











$$\underline{u}^{k,\ell_s=1} = u^k\!\left( v(s)pq+1 : v(s)pq+d,\;\; h(s)pq+1 : h(s)pq+d \right), \tag{85}$$

where

$$v(s) = \left\lceil \frac{s}{r} \right\rceil - 1, \qquad h(s) = \operatorname{mod}(s-1,\, r), \tag{86}$$
and A(r1:r2, c1:c2) denotes the submatrix of A, from row r1 to row r2, and column c1 to column c2.


Therefore, the corresponding leading LR patch x̲^{k,ℓ_s=1} of the k-th LR frame x^k is






$$\underline{x}^{k,\ell_s=1} = x^k\!\left( v(s)q+1 : v(s)q+rq,\;\; h(s)q+1 : h(s)q+rq \right). \tag{87}$$


Similarly, the corresponding leading LR patch y̲_(t)^{k,ℓ_s=1} of the k-th (secondary) LR frame y_(t)^k is






$$\underline{y}_{(t)}^{k,\ell_s=1} = y_{(t)}^k\!\left( v(s)p+1 : v(s)p+rp,\;\; h(s)p+1 : h(s)p+rp \right). \tag{88}$$
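For illustration, equations (85)-(88) translate into the following Matlab sketch, which loops over all S = r² partitioning patterns and slices out the leading patches; here u, x, y stand for one HR frame, the corresponding primary LR frame, and the secondary LR frame, with r, p, q, and d = r*p*q as defined above (a minimal sketch, not the full tiling logic):

for s = 1:r^2
    v = ceil(s/r) - 1;                                  % vertical offset (86)
    h = mod(s-1, r);                                    % horizontal offset (86)
    u_lead = u(v*p*q+1 : v*p*q+d, h*p*q+1 : h*p*q+d);   % leading HR patch (85)
    x_lead = x(v*q+1 : v*q+r*q, h*q+1 : h*q+r*q);       % leading primary LR patch (87)
    y_lead = y(v*p+1 : v*p+r*p, h*p+1 : h*p+r*p);       % leading secondary LR patch (88)
end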


C. Extracting Local Training Sets (𝒯)


We follow the simple procedure 𝒯 described in [2], wherein (overlapping) LR patches are extracted from within the spatiotemporal neighborhood of the patch x̲^{k,ℓ_s}, as follows.

    • Pick the κ LR frames in 𝕏 that are (temporally) closest to the k-th frame.
    • Pick the (spatial) location of the patch x̲^{k,ℓ_s} as the origin.
    • Extract, from each one of the κ frames, (2b+1)² spatially overlapping patches, starting at the origin, and moving (vertically and horizontally, and in opposite directions) with a step of only one pixel, and for as far as b pixels from the origin.


The result is a (local) training set 𝕏^{k,ℓ_s} containing M_tr = κ(2b+1)² LR patches that are extracted from within a spatiotemporal neighborhood around x̲^{k,ℓ_s}, with a temporal width of κ frames, and a vertical (horizontal) spatial width of (2b+1) pixels. The following pseudocode describes 𝒯 more explicitly.


Suppose x̲^{k,ℓ_s} = x^k(r0+1:r0+rq, c0+1:c0+rq); then a local training set can be extracted from 𝕏, around x̲^{k,ℓ_s}, as follows.
















For ik = −0.5κ + k : 0.5κ + k − 1
 For ir = −b : b
  For ic = −b : b
   A = x^{ik}(r0 + ir + 1 : r0 + ir + rq, c0 + ic + 1 : c0 + ic + rq).
   Put vec(A) as a column in 𝕏^{k,ℓ_s}.
  End
 End
End









Similarly, 𝕐_(t)^{k,ℓ_s} is extracted, around y̲_(t)^{k,ℓ_s}, from 𝕐^(t), using the same exact procedure (with N_tr = M_tr = κ(2b+1)²).
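In Matlab, the extraction procedure 𝒯 can be sketched as below (a minimal sketch: X is a cell array of the K LR frames, r0 and c0 locate the training sample as above, kappa is assumed even, and frame-boundary and frame-index clipping are omitted for brevity):

Xs = [];                                   % training set, one patch per column
for ik = k - kappa/2 : k + kappa/2 - 1     % temporal neighborhood of kappa frames
    for ir = -b:b                          % vertical spatial offsets
        for ic = -b:b                      % horizontal spatial offsets
            A = X{ik}(r0+ir+1 : r0+ir+rq, c0+ic+1 : c0+ic+rq);
            Xs(:, end+1) = A(:);           % vec(A) as a new column
        end
    end
end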


D. Learning the Dictionaries (ℒ)


Dictionary learning (ℒ) can be based on feature selection (FS), feature extraction (FE), or a combination of both. In our case, FS far outweighs FE. In particular, PPCs are highly correlated signals, so learning a dictionary that represents them well depends heavily on choosing those features (LR patches) in the training set that capture the subtle differences between the PPCs [2].


To elaborate, since the PPCs are expected to be highly correlated with x̲^{k,ℓ_s}, we use it as a training sample for selecting the M ≪ M_tr 'best' LR patches in 𝕏^{k,ℓ_s}. To that end, we use best individual feature (BIF) selection based on the L1 distance between x̲^{k,ℓ_s}/‖x̲^{k,ℓ_s}‖₁ and the normalized LR patches in 𝕏^{k,ℓ_s}. In other words, we select a subset 𝕏̃^{k,ℓ_s} with only M LR patches out of the M_tr = κ(2b+1)² LR patches in 𝕏^{k,ℓ_s}, where the elements of the selected subset are simply the normalized LR patches in the training set that have the smallest L1 distance from the normalized training sample.


Now, since some of our equations are based on the (simplifying) assumption that the dictionary atoms are orthonormal, all that is left to do after selecting the best M LR patches is to simply orthonormalize them. This can be done, for example, using the Gram-Schmidt method (QR factorization). However, we shall use the singular value decomposition (SVD), since it keeps the door open for further dimensionality reduction if so desired. In other words, our local dictionary Ψ^{k,ℓ_s} is simply the first M left singular vectors of the selected subset 𝕏̃^{k,ℓ_s}. If M̃ < M is desired, then we simply keep the first M̃ left singular vectors instead.
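A minimal Matlab sketch of this BIF-selection-plus-SVD step follows; Xs is the training-set matrix (one vectorized LR patch per column), xs the vectorized training sample, and M the desired number of atoms, all as defined above:

xs_n = xs / norm(xs, 1);           % L1-normalized training sample
Xs_n = Xs ./ sum(abs(Xs), 1);      % L1-normalize each training patch (column)
d1 = sum(abs(Xs_n - xs_n), 1);     % L1 distance of each patch to the sample
[~, idx] = sort(d1, 'ascend');
subset = Xs_n(:, idx(1:M));        % the M 'best' patches (BIF selection)
[Psi, ~, ~] = svd(subset, 0);      % thin SVD orthonormalizes the atoms
Psi = Psi(:, 1:M);                 % keep M atoms (or Mtilde < M if desired)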


Likewise, Φ_(t)^{k,ℓ_s} is the first N left singular vectors of the N-element subset 𝕐̃_(t)^{k,ℓ_s} selected from the (normalized) training set 𝕐_(t)^{k,ℓ_s}, using the BIF method based on the smallest L1 distance from y̲_(t)^{k,ℓ_s}/‖y̲_(t)^{k,ℓ_s}‖₁.




The dictionary learning process described here is simple, yet it works surprisingly well. It must be noted, however, that this simplicity of dictionary learning is afforded to us by the ‘narrowness’ of the training sets, which, in turn, is attributed to the ‘high correlation’ between images in a LR sequence and the PPCs of a HR frame (FIG. 1). Yet, without the structured subspace framework as a tool to harness the power of narrow training LR sets for estimating HR frames, we would have been confined to the imaging hardware requirement of [1], [2].


E. Combining Multiple Estimates of a HR Frame (𝒫)


Given Ψ^{k,ℓ_s} and Φ_(t)^{k,ℓ_s} (and x̲^{k,ℓ_s}) for 1 ≤ ℓ_s ≤ L, we can estimate û̲_(t)^{k,ℓ_s} for 1 ≤ ℓ_s ≤ L, and tile all estimated patches into the s-th estimate û̲_{s,(t)}^k of the k-th HR frame. Repeating this for all s (1 ≤ s ≤ S = r²), we get S = r² estimates {û̲_{s,(t)}^k}_{s=1}^{r²} of the k-th HR frame.


So now, what do we do with all these estimates of the same frame? One of the simplest options is to average all r² estimates. We, nevertheless, choose to follow the solution proposed in [2] to get the final (per iteration t) estimate of the k-th HR frame, as follows.













$$\hat{\bar{\underline{u}}}_{(t)}^{k} = \arg\min_{\underline{u}_{(t)}^{k}} \sum_{s=1}^{r^2} \sum_{w_1} \sum_{w_2} \left\| H\,\underline{u}_{(t)}^{k} - R_{w_1,w_2}\,\hat{\underline{u}}_{s,(t)}^{k} \right\|_1, \tag{90}$$
where Rw1,w2 shifts the s-th estimate by w1 and w2 pixels in the horizontal and vertical direction, respectively, −(ω−1)/2≤w1,w2≤(ω−1)/2, and H represents a blurring kernel, to be defined by the user, to counteract the softening effect of median filtering (across all r2 estimates, and within a spatial window of size ω×ω, per pixel).
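As a point of reference, in the degenerate case H = I and ω = 1 (no shifts), the minimizer of (90) is simply the per-pixel median of the r² estimates, since a sum of L1 distances is minimized by the median. In Matlab, with the estimates stacked along the third dimension of an array est (illustrative name):

% est(:,:,s) = s-th tiled estimate of the k-th HR frame, s = 1..r^2
u_med = median(est, 3);   % L1-optimal per-pixel combination when H = I, omega = 1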


F. Solution Parameters

Now we turn to the solution parameters, and start with the integer scalar r. Given that, in practice, p is the user-defined upsampling factor, and given that q = p + 1 (recall (1)), r is the only parameter left to determine the size of the HR patch (2) or, in particular, how small (41) the HR patch can be. Before we continue, now is the time to address the question: why would we want to work on patches of a frame (patch-based processing), instead of upsampling an entire frame as a whole (i.e. with r being large enough to engulf the entire frame)?


The answer to this question is in [2], but we reproduce it here for convenience. First, the short version: the more ‘dynamic’ the LR sequence, the greater the need for patch-based processing, with particularly small patches. The longer version of the answer would require revisiting FIG. 1 to notice that the inter-correlations between the PPCs are much greater than the correlations that exist between the LR frames (of a dynamic scene) and the PPCs (of a frame). In other words, a ‘training’ set of LR frames would not capture well the subtle differences between the PPCs of a HR frame as a whole. Note that in addition to a basic common component the PPCs might share, these subtle differences are, ultimately, what gives us the HR frame. Working on smaller sub-regions (patches), however, allows us to harness, from the LR frames, local training sets (as described above) from which local dictionaries can be learned for estimating HR patches. In other words, the smaller the patch, the more ‘local’ the training sets (and the dictionaries) get.


On the other hand, larger patches (larger r) allow for the advantage of a more overdetermined system (38). So to keep things simple, when the desired upscaling factor is p = 3, we fix the dimensionality of all LR dictionaries at 3q², i.e. M = N = 3q², and let r = 4. For p = 4, we use r = 3 and M = N = 2q².


Now, how large can a local training set be (recall N_tr = M_tr = κ(2b+1)²)? Ideally, we would want to search for 'best' LR patches in as large a spatiotemporal neighborhood as possible, i.e. κ and b can both be as large as possible. However, even though BIF selection based on L1 distance from a training sample is a very low complexity FS policy, with more and more LR patches to choose from, the computational cost of total FS (performed for all patches) can get quite significant if κ and b are too large. For that reason, we pick κ = 30 and b = 5.


Finally, for combining multiple estimates of the same frame (90), we choose H to be a Gaussian kernel with size 5 and standard deviation of 1, and we pick ω=3.


G. The Control Parameter μ

Recall that equations (81) and (82) are simply the patch-based, iteration-dependent versions of equations (73) and (74), respectively. Yet, the control parameter μ in these equations appears patch-independent as well as iteration-independent. Can this indeed be the best strategy for selecting μ? That is, to use a fixed value across all patches and throughout all iterations?


First, recall that μ determines how close to the reference LR image the mean of the PPCs of the estimated image should be. Therefore, if μ is too large, we run the risk of an estimated HR image whose PPCs are too close to each other (a solution with some residual aliasing). On the other hand, in practice, if μ is not high enough, the end result can be too smooth, with low contrast, especially for those parts of the scene that are too dynamic (large scene changes across captured frames). In other words, instead of using the symbol μ in equations (81) and (82), we could use μ_(t)^{k,ℓ_s} to account for the patch and iteration dependency in selecting the control parameter.


Nevertheless, choosing the "right" value for μ is not easy. According to experiments, the range of good values can be quite large (0.1 ≤ μ ≤ 5), and a good choice can depend on many factors, including the high spatial frequency content per patch, the patch size, and the diversity of the selected dictionary atoms. Currently, we use the following empirical formula for determining good values for μ (per patch and per iteration).











$$\mu_{(t)}^{k,\ell_s} = \min\!\left( \max\!\left( \mu_0\, \frac{d^2}{p^2 M + q^2 N}\, \gamma\, \tau,\; 0.1 \right),\; 5 \right), \tag{91}$$

where




$$\tau = \frac{\left\| y_{(t)}^{k,\ell_s} \,**\, \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \right\|_F^2}{r^2 p^2}, \tag{92}$$
** denotes 2D convolution, ‖·‖F is the Frobenius norm, μ0 is a user-defined scalar that is fixed across all iterations and all patches (we pick μ0 = 20), and γ is a measure of the diversity of the selected atoms, computed as follows (a Matlab sketch follows the list below).

    • 1—Find the correlation coefficient between y̲_(t)^{k,ℓ_s} and each atom in Φ_(t)^{k,ℓ_s}.
    • 2—Assign a value of 1 for correlation coefficients higher than 0.99, a value of 3 for correlation coefficients between 0.95 and 0.99, and a value of 10 for correlation coefficients below 0.95.
    • 3—γ is the average of all assigned values.
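Under the stated assumptions, the recipe (91)-(92) plus the γ computation can be sketched in Matlab as follows, with y the (2D) secondary training sample, Phi the selected atoms (one per column), and r, d, p, q, M, N, mu0 as defined above (variable names are illustrative):

g = zeros(1, size(Phi, 2));
yc = y(:) - mean(y(:));
for j = 1:size(Phi, 2)                    % correlation coefficient per atom
    a = Phi(:, j) - mean(Phi(:, j));
    c = abs((yc' * a) / (norm(yc) * norm(a)));
    if c > 0.99
        g(j) = 1;
    elseif c > 0.95
        g(j) = 3;
    else
        g(j) = 10;
    end
end
gamma = mean(g);                                               % atom diversity measure
tau = norm(conv2(y, [1 -1; -1 1]), 'fro')^2 / (r^2 * p^2);     % (92)
mu = min(max(mu0 * d^2/(p^2*M + q^2*N) * gamma * tau, 0.1), 5);  % (91)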


No matter how good an empirical formula for selecting μ_(t)^{k,ℓ_s} might be, if the value of μ0 is too high for some frames (or sub-regions of a frame), we would end up with estimated HR frames that contain regions with some residual aliasing. If that is the case, we propose the following remedy.


After obtaining the final estimate $\hat{\mathbb{U}} \triangleq \mathbb{U}^{(T)} = [\hat{\bar{\underline{u}}}_{(T)}^{1}\; \hat{\bar{\underline{u}}}_{(T)}^{2}\; \cdots\; \hat{\bar{\underline{u}}}_{(T)}^{K}]$ of the HR sequence (we pick T = 4), we start the solution over, but this time we use 𝕌̂ as our input sequence of LR images, and we keep all the parameters the same except for μ and r. Specifically, we keep μ fixed at the low value of 0.2, and use r = 7 for p = 3, and r = 5 for p = 4. This should take care of residual aliasing. The final sequence can then be resized back down to the original desired frame size (i.e. each frame can be decimated by a factor of p).


6. Results

In this section we test the proposed solution using the 'Suzie' sequence of HR frames, each of size 486×720. First, we decimate each frame by a factor of 6 (vertically and horizontally). The top row of FIG. 4 shows some of the LR frames magnified by a factor of 3 using Lanczos interpolation. Corresponding upsampling results of the proposed solution, using an upsampling factor of p = 3, are shown in the bottom row.


We repeat the same experiment using a decimation factor of 8 and an upsampling factor of p = 4. FIG. 5 shows the Lanczos-magnified LR images vs. the corresponding upsampling results.


Comparing FIG. 4 and FIG. 5, it becomes immediately evident that the upsampling quality is higher for p = 3 than for p = 4. Of course, this can be justified by the fact that the LR sequence of FIG. 4 is less degraded than that of FIG. 5. But why, specifically in our case, is it easier to get better results for lower upsampling factors? It all goes back to the (local) LR dictionaries' ability to capture the subtle differences between PPCs. Specifically, the higher the upsampling factor needs to be, the higher the number of PPCs, and the more difficult it becomes to represent the subtle details differentiating between PPCs, especially when the scene is highly dynamic. Moreover, with larger upsampling factors, the patch size has to be larger (recall (2) and (41)), and the larger the patches, the more difficult it becomes to find good example images for learning local dictionaries.


However, this gap in upsampling quality (e.g. p=4 vs. p=3) would become smaller when the LR sequence is not highly dynamic (specifically, when scene changes, across a set of captured frames, are not too large because the frame rate is sufficiently high).


7. Denoising by Upsampling: An Additional Use for the Structured Subspace Model

In section 3 (The Need for an Arbitrator), we explained that solving the equation Vα̲ = Wβ̲ is the equivalent of finding the image in span(V) and the image in span(W) that are the closest to each other, and we chose equation (72) as the 'yardstick' to measure the distance.


To reiterate, since we adopt a feature selection (FS) strategy for selecting the atoms of the primary LR dictionary Ψ (from which the HR basis matrix V is constructed) and the atoms of the secondary LR dictionary Φ (from which the HR basis matrix W is constructed), we chose (72) as the distance measure to guard against picking an inferior solution.


In particular, since images with the simplest structure tend to have the smallest discrepancies, solving Vα=Wβ can favor an overly smooth solution in the absence of a proper distance measure.


On the other hand, smoothing processes are known to be useful for noise removal (denoising), but the challenge is to remove the noise ‘while’ leaving (most of) the underlying signal intact. This leads us to the following question: rather than minimizing it, would it be useful to ‘augment’ this smoothing effect to solve the noise problem?


As it turns out, by making a modification to the upsampling baseline outlined in section 4 (Learning Pairs of LR Dictionaries from a Single LR Sequence: The Baseline), we can harness the smoothing effect to address the noise problem. In particular, while a sequence of images might not necessarily be in need of upsampling per se, we still engage a slightly modified version of the upsampling baseline outlined in section 4 to get rid of the noise in a process we call 'denoising by upsampling', which we describe next.


A. Iterative Noise Shedding

Recall that the original purpose of the iterations in the upsampling context was to re-estimate a better version of the (missing) secondary LR sequence. In the proposed noise removal context, however, re-engaging the structured subspace framework is a goal, in and of itself, to invoke the aforementioned smoothing effect multiple times until noise is reduced to acceptable levels.


Said differently, even if the imaging system was built such that it produces two LR sequences with different resolution, if our goal is denoising, then we propose to ignore the originally available secondary sequence.


Indeed, in this embodiment, even the original (primary) LR sequence is disposed of after the first iteration. Specifically, the upsampled output of each iteration is decimated back to the same original pixel count before being fed again as the input for the next iteration. This way, a new iteration starts with a new input sequence with lower noise. Note that here, while the final output is an upsampled version of the original input, upsampling is a means to an end (noise removal).
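Schematically, the noise-shedding loop reduces to the following Matlab-style sketch, where one_baseline_pass is a hypothetical stand-in for one full pass of the modified baseline of Table 3 (below), and X is a cell array of the current input frames:

for t = 1:T
    U = one_baseline_pass(X, p);          % upsampled (denoised) estimate, per Table 3
    for k = 1:numel(X)
        X{k} = U{k}(1:p:end, 1:p:end);    % decimate back to the original pixel count
    end                                    % the decimated output seeds iteration t+1
end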



FIG. 16 illustrates the modification on the baseline with the aim of augmenting the smoothing effect. In particular, note that in the new proposed baseline, both the primary LR sequence and the secondary LR sequence are updated with each iteration. The modification is also reflected in a pseudocode shown in Table 3.


Please note that in the proposed 'denoising by upsampling' context, 'low-resolution (LR)' not only means a low number of pixels, but also 'noisy' images. Similarly, the term 'high-resolution (HR)' here simply means 'upsampled' (primarily for the purpose of noise removal).


Finally, the proposed modification for ‘denoising by upsampling’ is by no means the only option to engage the structured subspace model for the purpose of denoising, but we believe it to be the simplest and most straightforward.









TABLE 3
Pseudocode illustrating the proposed changes on the basic solution baseline to achieve 'denoising by upsampling'.

Original input: A sequence 𝕏 of K LR (noisy) images, and the user-defined upsampling factor, p.
Define 𝕏^(1) ≜ 𝕏, i.e. we use the original input (noisy) sequence for the first iteration only.
For t = 1:T
 Obtain 𝕏^(t) = 𝒟_{p²} 𝕌^(t−1) for t > 1.
 Obtain 𝕐^(t) = 𝒟_{q²} ℰ(𝕏^(t)).
 For k = 1:K
  For s = 1:S
   For ℓ_s = 1:L
    Learn the primary LR dictionary:
     Extract x̲_(t)^{k,ℓ_s}.
     Extract 𝕏_(t)^{k,ℓ_s} ≡ 𝒯(𝕏^(t), x̲_(t)^{k,ℓ_s}).
     Compute Ψ_(t)^{k,ℓ_s} ≡ ℒ(𝕏_(t)^{k,ℓ_s}, x̲_(t)^{k,ℓ_s}).
    Learn the secondary LR dictionary:
     Extract y̲_(t)^{k,ℓ_s}.
     Extract 𝕐_(t)^{k,ℓ_s} ≡ 𝒯(𝕐^(t), y̲_(t)^{k,ℓ_s}).
     Compute Φ_(t)^{k,ℓ_s} ≡ ℒ(𝕐_(t)^{k,ℓ_s}, y̲_(t)^{k,ℓ_s}).
    Estimate the upsampled image within the structured subspace framework (37). We recommend (72):
     Compute the product W_(t)^{k,ℓ_s*}V_(t)^{k,ℓ_s} from the dictionary pair (recall (78)).
     Compute 𝒥*Ψ_(t)^{k,ℓ_s*}x̲_(t)^{k,ℓ_s}.
     Compute the smallest singular value σ_(t)^{k,ℓ_s} of matrix (82).
     Compute α̲̂ via (81).
     Compute û̲_(t)^{k,ℓ_s} via (83).
   End
  End
  Compute $\hat{\bar{\underline{u}}}_{(t)}^{k} \equiv \mathcal{P}(\{\{\hat{\underline{u}}_{(t)}^{k,\ell_s}\}_{\ell_s=1}^{L}\}_{s=1}^{S})$.
 End
 Construct 𝕌^(t) from the K combined frame estimates.
End
Output: $\hat{\mathbb{U}} \triangleq \mathbb{U}^{(T)}$.









B. Notational Change

Because even the input sequence itself is updated with every iteration, starting from equation (76) onward the only notational change needed in the new context is to simply add the subscript (t) to 𝕏, x̲^{k,ℓ_s}, 𝕏^{k,ℓ_s}, Ψ^{k,ℓ_s}, and V^{k,ℓ_s}.


C. Slightly Changed Roles of the Estimation Process ℰ and the Control Parameter μ


The original role of the estimation process ℰ was to provide, from the available (primary) LR sequence, an initial guess of the HR sequence, which is then decimated to obtain a first estimate of the (missing) secondary LR sequence (see FIG. 3).


In the modified (denoising) baseline, however, we obtain updated secondary 'and' primary LR sequences to keep shedding more noise with each iteration. Therefore, the role of ℰ here is nothing more than obtaining an updated secondary sequence from the (updated) input sequence. For this purpose, ℰ is best kept as simple as possible. In fact, we recommend that ℰ simply be a 'nearest-neighbor' interpolator.
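For instance, with ℰ taken to be nearest-neighbor interpolation, the per-iteration update of the secondary sequence can be sketched in Matlab as follows (the cell arrays X_t and Y_t are illustrative names):

for k = 1:K
    U0 = imresize(X_t{k}, p, 'nearest');   % the (simple) estimation process E
    Y_t{k} = U0(1:q:end, 1:q:end);         % 2D decimation by q gives the secondary frame
end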


Also, because the primary purpose here is denoising, the role of the control parameter μ is greatly simplified. In particular, choosing a fixed, low value for μ (e.g. μ=0.2) for all image patches and across all iterations should suffice. In this context, μ only guarantees that an updated estimate does not venture too far from a previous estimate.


As to all other baseline components, they retain their same exact roles as previously described in section 4 and section 5 (A Working Solution).


8. Reclaiming Detail Lost to Upsampling: A Final Special Iteration

Another use for the structured subspace model can go beyond upsampling to recovering detail lost during the process. This can be useful as one last (special) iteration to be applied to the final output of an upsampling baseline.


In particular, recall that the only condition on the desired upsampling factor p and the secondary factor q is that q>p such that GCD(p,q)=1. Therefore, as long as q is a positive integer, then p=1 (i.e. no upsampling) will always satisfy the condition GCD(p,q)=1.


With that in mind, the following question arises: what can we possibly gain by utilizing the structured subspace model without any (further) upsampling? More specifically, if we already have the final sequence of upsampled frames, $\mathbb{U}^{(T)} = [\hat{\bar{\underline{u}}}_{(T)}^{1}\; \hat{\bar{\underline{u}}}_{(T)}^{2}\; \cdots\; \hat{\bar{\underline{u}}}_{(T)}^{K}]$, is there a way to further benefit from the structured subspace model?


To avoid confusion, we shall add two dots on top of some symbols to indicate that we are engaging the structured subspace model 'without' upsampling. In particular, we shall use the symbols p̈ and q̈, with p̈ = 1 (i.e. no upsampling is desired) and q̈ = p (where p is the previous user-defined upsampling factor for obtaining 𝕌̂). Now we are ready to ask the same question using more precise terms.


What kind of a HR image would we be seeking by solving the following equation?





$$\ddot{V}\ddot{\underline{\alpha}} = \ddot{W}\ddot{\underline{\beta}}, \tag{93}$$


where V̈ is the first HR basis matrix whose columns are (the vectorized forms of) images learned from the HR sequence 𝕌̂, Ẅ is the second HR basis matrix constructed such that it spans any HR image whose PPCs are spanned by the LR dictionary Φ̈ learned from the original LR sequence 𝕏, and α̲̈ and β̲̈ are the representations of the sought-after HR image in terms of the basis matrices V̈ and Ẅ, respectively.


A. Signal vs. Artifact


Before we explain further, we use the term 'artifact' to indicate any undesired signal. For example, noise is an artifact, pixelation (due to low pixel density) is an artifact, etc. Now, since the whole point of computing the upsampled sequence 𝕌̂ is to minimize (if not completely eliminate) pixelation and/or noise artifacts, the subspace spanned by the basis V̈ would be hardly inhabitable for images with such artifacts. Said differently, since V̈ is learned from 𝕌̂, it is expected to be hardly conducive to constructing noisy/pixelated images. On the other hand, the subspace spanned by Ẅ is based upon the original LR sequence 𝕏, where all the original artifacts are present, as well as the underlying signal.


This suggests that, by properly solving equation (93), we are offered a chance to recover at least some of the signal lost during upsampling. In other words, the image we would obtain by utilizing the structured subspace model (without further upsampling) would be an enhanced version. Another way of looking at it is that while solving (37) is the equivalent of conducting a wide-scope search for a good estimate of the underlying image u̲, solving (93) represents an additional, much narrower, search for an even better estimate of u̲, where the role of Ẅ (which is extracted from 𝕏) is to guide the refined search for a better estimate of the HR image within V̈ (which is extracted from 𝕌̂).


B. Solving Equation (93)

One option we found particularly useful to solve (93) is via solving the optimization problem











$$\min_{\ddot{\underline{\alpha}},\,\ddot{\underline{\beta}}} \left( \left\| \ddot{V}\ddot{\underline{\alpha}} - \ddot{W}\ddot{\underline{\beta}} \right\|^2 + \left\| \mathcal{D}\ddot{V}\ddot{\underline{\alpha}} - \underline{x} \right\|^2 + \mu_0 \left\| \ddot{V}\Lambda\ddot{\underline{\alpha}} \right\|^2 \right), \tag{94}$$

which has the solution












$$\hat{\ddot{\underline{\alpha}}} = \left[ I + \mu_0\Lambda^2 + \left(\mathcal{D}\ddot{V}\right)^*\mathcal{D}\ddot{V} - \ddot{V}^*\ddot{W}\ddot{W}^*\ddot{V} \right]^{-1} \left(\mathcal{D}\ddot{V}\right)^*\underline{x}, \tag{95}$$


and therefore the enhanced estimate of the HR image (which we denote by using a ‘double hat’ accent) is given by












$$\hat{\hat{\underline{u}}} = \ddot{V}\,\hat{\ddot{\underline{\alpha}}}, \tag{96}$$
where

    • 1—The basis matrix V̈ is orthonormal, with columns that are the first few left singular vectors of a matrix containing (vectorized) images extracted from 𝕌̂.
    • 2—The matrix Λ is a diagonal matrix that contains the inverses of the singular values corresponding to the aforementioned left singular vectors. Therefore, the product V̈Λ simply scales each column of V̈ by the inverse of the corresponding singular value (note: we observed that a more useful variation is to enter '0' for the first element on the diagonal of Λ, and divide all other diagonal elements by the first (i.e. largest) singular value).
    • 3—The LR dictionary Φ̈ is also orthonormalized using the singular value decomposition (recall that this makes the HR basis matrix Ẅ orthonormal as well).
    • 4—The term ‖𝒟V̈α̲̈ − x̲‖² in (94) ensures that an enhanced estimate of the HR image is such that a downsampled version of it is close to the available (reference) LR image x̲ (𝒟 denotes downsampling). Said differently, ‖𝒟V̈α̲̈ − x̲‖² is simply a 'data-fitting' term.
    • 5—Adding the term μ0‖V̈Λα̲̈‖² penalizes the contribution of an atom (column) of V̈ in proportion to the corresponding diagonal entry of Λ. This term was added because we noticed that higher-order left singular vectors, if their contribution to the solution is not regulated, can destabilize the solution. This is why we chose Λ to be a diagonal matrix as described above. The parameter μ0 simply controls how much stabilization is needed, and it is usually a small number (e.g. 1e−7 to 1e−5). (A Matlab sketch of this solve follows the list.)
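A minimal Matlab sketch of the solve (95)-(96) follows, under these assumptions: Vdd holds the orthonormal columns of V̈, S the corresponding singular values (a vector), WV = Wdd'*Vdd, D is a matrix realizing the downsampling operator 𝒟, x the vectorized reference LR patch, and mu0 the small stabilization scalar; all variable names are illustrative only.

lam = 1 ./ S(:);                   % inverses of the singular values (bullet 2)
lam(1) = 0;                        % variation: zero the first diagonal element
lam(2:end) = lam(2:end) / S(1);    % and scale the rest by the largest singular value
Lam = diag(lam);
DV = D * Vdd;                      % downsampled basis images
A = eye(size(Vdd, 2)) + mu0*Lam^2 + DV'*DV - WV'*WV;
alpha = A \ (DV' * x);             % closed-form solution (95)
u_enh = Vdd * alpha;               % enhanced HR patch (96)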


C. Patch-Based Processing

All required details (including those pertaining to patch-based processing) for using a structured subspace framework in a last iteration (meant to enhance an already upsampled sequence) can be derived from the disclosure herein. In other words, one only needs to adjust for p̈ = 1 and q̈ = p. Table 4 and FIG. 17 summarize the proposed patch-wise processing for the purpose of enhancing an already upsampled sequence.









TABLE 4
Pseudocode illustrating one last (special) iteration for enhancing the final output of an upsampling baseline.

Input 1: the upsampled sequence 𝕌̂ (to be enhanced).
Input 2: the original LR sequence 𝕏.
For k = 1:K
 For s = 1:S
  For ℓ_s = 1:L
   Learn the HR basis:
    Extract the HR patch û̲^{k,ℓ_s}.
    Extract the local training set ≡ 𝒯(𝕌̂, û̲^{k,ℓ_s}).
    Compute V̈^{k,ℓ_s} ≡ ℒ (from that training set and û̲^{k,ℓ_s}).
   Learn the LR dictionary:
    Extract the LR patch x̲^{k,ℓ_s}.
    Extract 𝕏^{k,ℓ_s} ≡ 𝒯(𝕏, x̲^{k,ℓ_s}).
    Compute Φ̈^{k,ℓ_s} ≡ ℒ(𝕏^{k,ℓ_s}, x̲^{k,ℓ_s}).
    Construct Ẅ^{k,ℓ_s} from Φ̈^{k,ℓ_s}.
   Estimate an enhanced version of û̲^{k,ℓ_s}:
    Compute $\hat{\ddot{\underline{\alpha}}}^{k,\ell_s}$ (apply (95) per patch).
    Compute $\hat{\hat{\underline{u}}}^{k,\ell_s} = \ddot{V}^{k,\ell_s}\hat{\ddot{\underline{\alpha}}}^{k,\ell_s}$.
  End
 End
 Compute $\hat{\hat{\bar{\underline{u}}}}^{k} \equiv \mathcal{P}(\{\{\hat{\hat{\underline{u}}}^{k,\ell_s}\}_{\ell_s=1}^{L}\}_{s=1}^{S})$.
End
Output: $\hat{\hat{\mathbb{U}}} \triangleq [\hat{\hat{\bar{\underline{u}}}}^{1}\; \hat{\hat{\bar{\underline{u}}}}^{2}\; \cdots\; \hat{\hat{\bar{\underline{u}}}}^{K}]$.










A few details, however, need to be revisited here.

    • 1—The HR basis V̈^{k,ℓ_s} is best learned from a narrow 3D neighborhood around the current (k,ℓ_s-th) patch to be enhanced. This is consistent with the notion that this additional final iteration for enhancing an already upsampled image is the equivalent of a narrower search for a better estimate of the HR image. For example, choosing κ = 3 and b = 3, we extract, from 𝕌̂, only κ(2b+1)² = 147 atoms, from which we pick, say, 100 atoms (those most similar to the current patch û̲^{k,ℓ_s}), which are then vectorized and stacked next to each other as columns of a matrix whose SVD is computed. We retain a few (e.g. 30) of the left singular vectors to be the columns of the basis matrix V̈^{k,ℓ_s}.
    • 2—Since we seek to reclaim as much lost signal as possible, we choose a high number of atoms for the LR dictionary Φ̈^{k,ℓ_s}. In particular, we recommend






$$N = r^2 - \left\lceil \frac{M}{p^2} \right\rceil$$

atoms, where M is the number of columns (atoms) of V̈^{k,ℓ_s} and r is the user-defined parameter that determines the patch size (d = r p̈ q̈ = rp).


A Real-World Noisy Sequence

To test the ‘denoising by upsampling’ functionality, we downloaded a particularly noisy CT-scan sequence from TCIA (a database sponsored by the National Institutes of Health), and utilized the ‘denoising by upsampling’ functionality to create HR images from the CT-scan sequence. FIG. 18 compares the originally downloaded CT-scan sequence on the left-hand side of FIG. 18 with the HR image results of the ‘denoising by upsampling’ functionality on the right-hand side of FIG. 18. The noise reduction between the original CT-scan sequence and the HR images is stark. The HR images provide a clear and enhanced image that may be easier for medical practitioners to read and utilize.



FIG. 18 demonstrates the versatility of the structured subspace framework. The structured subspace framework provides a search method for high-quality images within a pair of subspaces constructed from a ‘pair’ of (different-resolution) sequences of images (that are ultimately based on a ‘single’ source of images).


9. Conclusion

The basic principle of learning-based image upsampling is fairly straightforward. In general, learning-based methods use the available LR image as 'partial measurements' (of the unknown HR image) to estimate the representation of the HR image in terms of a HR dictionary learned from a training set of HR example images. While these methods certainly differ in the details, the two most important features that account for most of the variation in results are the training sets they use (generic vs. narrow, special, sets) and the learning process of the HR dictionaries.


In contrast, the work in [1], [2], uses the same learning-based upsampling principle, although for different target signals: the PPCs of the HR image, which are LR signals themselves. This shift of focus allowed the authors of [1], [2] to exploit a sequence of LR images as a source of very narrow training sets, from which highly efficient LR dictionaries can be easily learned to represent the PPCs (of sought after HR images). The major obstacle to their idea was the lack of ‘partial measurements’ required to find the representations of the PPCs in terms of the LR dictionaries. To overcome this obstacle, they proposed a hardware modification such that the imaging device is equipped with another (different resolution) LR sensor (and a beam splitter) to provide the much needed partial measurements for their target images (the PPCs).


Compared to [1], [2], in the current work we shift the focus back to estimating the representation of the HR image itself, in terms of a pair of HR bases, each of which embeds a (different resolution) LR dictionary. The end result is a structured subspace framework through which pairs of LR dictionaries are used to estimate HR images directly, thus circumventing the very restrictive hardware assumption required in [1], [2], while still benefiting from the huge advantage of harvesting narrow training sets from a LR sequence of images.


Applied Example of Working Solution

In this section, we give a visualization of the basic solution baseline whose details were laid out above in "A Working Solution". To help with understanding, it is important to keep the big picture in mind, which we summarize as follows:


We estimate the entire HR sequence by estimating patches (pieces) of each frame.


Because we estimate each HR patch based on a pair of (local, different resolution) LR dictionaries, we need a pair of LR sequences (of different resolution) from which to extract the pair of local dictionaries (the algorithm takes in a pair of LR dictionaries, per patch, as input, and spits out an estimate of the HR patch).


The reason we need to iterate the solution is because we do NOT have the secondary (different resolution) LR sequence (we keep estimating a better version of the missing secondary LR sequence).


The initial guess (ℰ)


Let's first have a look at sample LR frames of an available LR sequence (we call it the primary LR sequence) in FIG. 6.


To get the first estimate of the secondary (missing) sequence 𝕐^(1), we need to first get an initial guess of the HR sequence 𝕌^(0). This can be done, for example, by using 2D interpolation (by a factor of p; we choose p = 3 here) of each frame x^k of the primary (available) LR sequence 𝕏. Using Matlab, this can be done as follows:

for k = 1:K
 u_(0)^k = imresize(x^k, p, 'lanczos3');   (97)
end


Now, let's take a look at the sample frames of the initial estimate of the HR sequence, shown in FIG. 7. Comparing FIG. 7 to FIG. 6, it becomes clear that the initial guesses of the HR frames don't look appreciably better than the available (primary) LR frames, but this shall suffice for an initial guess. Note: each frame shown in FIG. 7 is actually p² times bigger than the corresponding frame shown in FIG. 6.


First Iteration (t=1)

To obtain the first estimate of the secondary (different/lower resolution) LR sequence, we simply decimate each frame of the HR sequence by a factor of q (q=p+1=4 here). Using Matlab, this can be done using the following code:

for k = 1:K
 y_(1)^k = u_(0)^k(1:q:end, 1:q:end);   (98)
end


Because of the size of the figures, comparing FIG. 8 to FIG. 6, it might not be easy to see that the first estimate of the secondary LR sequence is lower resolution than the primary LR sequence, so in FIG. 9, we compare bigger versions of one of the frames (frame#31) from the primary and secondary sequences. It might still not be easy to see, but careful examination reveals the resolution is different.


Partitioning a Frame

So now that we have two (different resolution) LR sequences as input, 𝕏 and 𝕐^(1), how do we proceed? Like (almost) everyone else, we do not estimate an entire HR frame as a whole; instead we estimate patches (pieces) of each frame, so the next question is: how do we partition a frame into patches to be estimated?


In Section 5 (A Working Solution), we provided a description of the simplest partitioning scheme (equations (85)-(88)). For a pictorial illustration, we pick frame #31 of the primary sequence 𝕏 and display, in FIG. 10, the S = r² possible ways of partitioning the same frame (we pick r = 4 here, so each primary LR patch x̲^{k,ℓ_s} has size rq×rq = 16×16).


The so-called ‘leading’ patch per partitioning pattern s is shown as a brightened square in FIG. 10. We use the description ‘leading’ patch because it simply tells us how to cut up a frame (its location defines the partitioning pattern). So a frame is estimated in patches (pieces), and there are multiple ways to cut up a frame into patches.


Now, what happens when we compute all the HR patches corresponding to all the squares (patches) shown in FIG. 10? We get S=r2=16 estimates of the same HR frame.


But why don't we simply use just one partitioning pattern? There's nothing wrong with that, except that getting more estimates of the same frame (by partitioning it using S = r² patterns instead of just one) allows us to reduce estimation errors, since different patterns give different estimation errors. There are many different ways one could use multiple estimates of the same frame to reduce estimation errors (equation (90), for example, is only one of them), and most people simply take the mean, or the median, of all estimates.


To illustrate the benefits of having multiple estimates of the same frame, we show in FIG. 11 the S = r² = 16 HR estimates of frame #31 after tiling (in place) all estimated HR patches {û̲^{k,ℓ_s}} corresponding to all LR patches {x̲^{k,ℓ_s}} shown in FIG. 10. Some of the most obvious estimation errors are present in some of the 'edge' patches, but beyond these obvious errors, subtle errors are present (and different) per the same image part. For example, if we concentrate on the mouth region, we'd see that it looks a little different across all 16 estimates.



FIG. 12 shows what we get if we compute the median of all 16 estimates of frame#31 shown in FIG. 11 vs. the initial estimate of the same frame (obtained using Lanczos interpolation).


But how did we compute each of the HR patches shown in FIG. 11? For example, how did we compute the HR patch û̲^{k,ℓ_s} at k = 31, s = 4, and ℓ_s = 20? Let's highlight the patch in question in FIG. 13.


Extracting Local Training Sets (𝒯)


Recall that, in our solution, estimating a HR patch requires a local pair of (different resolution) dictionaries to be extracted from a pair of LR sequences (of different resolution). We already had the LR sequence 𝕏 (FIG. 6), from which we created 𝕐^(1) (FIG. 8). So now, we visualize how we extract the training sets 𝕏^{k,ℓ_s} and 𝕐_(1)^{k,ℓ_s} from the LR sequences 𝕏 and 𝕐^(1), respectively, in preparation for computing the highlighted patch shown in FIG. 13.


First of all, recall the parameters κ (the width of the temporal neighborhood of frames around frame k from which training LR patches are to be extracted) and b (2b+1 is the width and height, in pixels, of the spatial window whose center is defined by the location of the LR patch corresponding to the HR patch we need to estimate). Here, we choose κ = 30 frames and b = 5 pixels. FIG. 14 illustrates the extraction process (𝒯) of example LR patches to be used as elements of the pair of training sets.


Learning the Dictionaries

With (2b+1)² = 121 LR patches extracted from each frame in a predefined temporal neighborhood of K=30 frames, the size of the entire training set Xₖˢ is 30×121 = 3,630 LR patches (similarly, training set Y(1)ₖˢ has 3,630 LR patches).


However, we do not need all patches in a training set. Indeed, just by looking at FIG. 14, it becomes evident that most of the extracted patches are irrelevant anyway. But then why do we extract so many LR patches? The point is to find LR patches that are similar to the LR patch we want to upsample, so the extraction process described here is our way of 'fishing' for good example LR patches. And although most of the 'catch' is useless, that doesn't matter as long as we have enough 'good' LR patches in our training set.


Before we proceed, we set M=N=2q²=32 as the number of (hopefully) 'good' enough LR patches to be chosen from each training set. All that is left to do is define a measure of 'goodness'. The following steps summarize the dictionary "learning" process we described in Section 5 (A Working Solution).


First, execute the following pseudocode:











For i = 1 : K(2b+1)²

    dx[i] = ∥ Xₖˢ[i]/∥Xₖˢ[i]∥ − x̄ₖˢ/∥x̄ₖˢ∥ ∥₁

    dy[i] = ∥ Y(1)ₖˢ[i]/∥Y(1)ₖˢ[i]∥ − ȳ(1)ₖˢ/∥ȳ(1)ₖˢ∥ ∥₁

End   (99)







where the LR patches x̄ₖˢ and ȳ(1)ₖˢ correspond to the HR patch we want to estimate (these are the highlighted areas in the left and right LR frames, respectively, shown in the middle row of FIG. 14).


The i-th entry in dx is thus the L1 distance between the i-th normalized LR patch in training set Xₖˢ and the normalized (reference) LR patch x̄ₖˢ. Similarly, the i-th entry in dy is the L1 distance between the i-th normalized LR patch in training set Y(1)ₖˢ and the normalized LR patch ȳ(1)ₖˢ.
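A direct Matlab transcription of (99) for dx might look as follows (Xset and xref are our stand-in names for the training set Xₖˢ and the reference patch x̄ₖˢ; the random data is only there to make the sketch self-contained):

Xset = arrayfun(@(t) rand(16, 16), 1:3630, 'UniformOutput', false);  % stand-in training set
xref = rand(16, 16);                     % stand-in reference LR patch
xn = xref(:) / norm(xref(:));            % normalized reference patch
dx = zeros(numel(Xset), 1);
for i = 1:numel(Xset)
    a = Xset{i}(:);
    dx(i) = norm(a/norm(a) - xn, 1);     % L1 distance between normalized patches
end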


Second, pick the M LR patches in the training set Xₖˢ that correspond to the smallest M entries in dx. These are the atoms of the local primary dictionary Ψ. Similarly, the atoms of the local secondary dictionary Φ are the N LR patches in the training set Y(1)ₖˢ that correspond to the smallest N entries in dy.
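Picking the atoms then reduces to a sort. In the sketch below, Xmat and Ymat are our stand-in names for matrices whose columns are the vectorized training patches, and dx and dy are the distance vectors from (99):

M = 32; N = 32;
[~, idx] = sort(dx, 'ascend');
PSI = Xmat(:, idx(1:M));     % the M closest patches become the atoms of Ψ
[~, idy] = sort(dy, 'ascend');
PHI = Ymat(:, idy(1:N));     % likewise for the secondary dictionary Φ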


Third, since it is easier to work with orthonormal dictionaries, we use the SVD to orthonormalize the atoms of each dictionary. After converting the atoms of each dictionary from their 2D form to vector form, and stacking them (per dictionary) as column vectors next to each other, we can use the following Matlab code to orthonormalize our dictionaries:





[PSI, ~, ~] = svd(PSI, 0);

[PHI, ~, ~] = svd(PHI, 0);   (100)
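(In Matlab, svd(·,0) is the economy-size SVD, so the columns of the first output form an orthonormal basis for the column space of its input; those columns are what we keep as the orthonormalized dictionary.)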



FIG. 15 visualizes the pair of dictionaries we used to compute the highlighted patch in FIG. 13.


Computing the HR Patch

All the previous steps were taken to set the stage for the proposed structured subspace framework to do its work: converting a pair of (different resolution) LR dictionaries into a HR image. In the following, we describe the most straightforward application of the structured subspace solution.


First, construct the HR basis matrix V from the LR dictionary Ψ, and the HR basis matrix W from the LR dictionary Φ, by running the following Matlab code:






V = LR2HRbasis(PSI, r, p, q, M);

W = LR2HRbasis(PHI, r, q, p, N);   (101)

(Note that p and q swap roles, and N replaces M, in the second call: the secondary dictionary Φ has N atoms of size rp×rp, and W stacks their q² polyphase shifts.)


Recall that we chose p=3 (and thus q=p+1=4), r=4, and M=N=2q²=32. LR2HRbasis is a predefined function written in Matlab as follows:



















% function to construct HR basis from LR dictionary
function V = LR2HRbasis(PSI, r, p, q, M)
d = r*p*q;
PSI = reshape(PSI, r*q, r*q, M);  % this line converts the vectorized
                                  % atoms (columns) of PSI back to
                                  % their 2D form
V = zeros(d*d, p*p*M);
zrs = zeros(d, d, M);
c = 0;
for i = 1:p
    for j = 1:p
        c = c + 1;
        atomZshift = zrs;
        atomZshift(i:p:end, j:p:end, :) = PSI;
        V(:, (c-1)*M+1:c*M) = reshape(atomZshift, d*d, M);
    end
end
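As a quick sanity check on dimensions (a sketch with our example parameters and a random orthonormal stand-in dictionary, not actual image data):

p = 3; q = 4; r = 4; M = 32;
PSI = orth(randn((r*q)^2, M));   % random orthonormal stand-in dictionary
V = LR2HRbasis(PSI, r, p, q, M);
size(V)                          % expect [(r*p*q)^2, p^2*M] = [2304, 288]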










Second, solve the following equation in whichever meaningful way possible: Vα = Wβ. Recall:

    • V is the HR basis matrix that spans any HR image (patch) whose PPCs are spanned by the LR dictionary Ψ.
    • W is the HR basis matrix that spans any HR image (patch) whose PPCs are spanned by another (different resolution) LR dictionary Φ.
    • α is the representation of (the first estimate of) the HR image (patch) in terms of V.
    • β is the representation of the same HR image in terms of W.
    • Solving this equation is equivalent to searching for the pair of HR images (one image in span(V), and the other image in span(W)) that are 'closest' to each other. What remains is to define the "distance measure", or the 'yardstick', with which distance is measured. We believe a good yardstick for measuring distance, in this context, is defined by optimization problem (72).
    • Using this yardstick (72), it so happens that we do not need to construct V or W explicitly; only the product W*V is needed, which, due to the particular structure of V and W, can be computed very efficiently using equations (62)-(68). Generally speaking, however, other yardsticks one might choose (other than (72)) might require the explicit construction of V and W. For this reason, we shall ignore, in this demonstration, the special computational efficiency advantage imparted by (72).
    • Since we are working on patches, x̄ is the reference LR image here (the highlighted square in the LR frame shown on the left of the middle row of FIG. 14) whose HR version (the highlighted square in FIG. 13) we seek to compute.


The following is the Matlab code we used for computing the HR image patch highlighted in FIG. 13.






u = computeHRpatch(V, W, PSI, x, mu, r, p, q, M);   (103)


where the control parameter mu (μ) is determined according to (91), and computeHRpatch is a predefined function as follows:

% function to solve Vα=Wβ, according to
% optimization problem (72).
function u = computeHRpatch(V, W, PSI, x, mu, r, p, q, M)
A = W'*V;
A = A'*A;
x = x(:);                 % get the vector form of
                          % the reference LR image.
b = PSI'*x;               % i.e. b = Ψ*x̄.
brep = repmat(mu/p/p/norm(b)*b, p*p, 1);
                          % this is μ𝒥*Ψ*x̄/∥Ψ*x̄∥.
J = 1/p/p*repmat(eye(M), 1, p*p);   % this is eqn (71).
J = J'*J;   (104)
E = zeros(p*p*M+1);       % construct the matrix in (74).
E(1:p*p*M, 1:p*p*M) = mu*J + eye(p*p*M) - A;
E(1:p*p*M, p*p*M+1) = brep;
E(p*p*M+1, 1:p*p*M) = brep';
E(p*p*M+1, p*p*M+1) = mu;
sigma = svd(E); sigma = sigma(end);
A = mu*J + (1-sigma)*eye(p*p*M) - A;
brep = repmat(mu/p/p*b, p*p, 1);
alpha = A\brep;           % this is eqn (73).
u = V*alpha;
u = reshape(u, r*p*q, r*p*q);   % get the 2D form of
                                % the estimated HR image.


Parallel Processing

Thus far, we explained how we got the HR patch highlighted in FIG. 13, which is the patch at k=31, s=4, and ℓ_s=20, but that's just one patch among 48 patches (ℓ_s=1, . . . , 48) per partitioning pattern s=4, and we have S=16 partitioning patterns with 48 patches each (FIG. 11). In our simple baseline, all of these HR patches are computed independently of each other, so if the computing hardware allows it, they can all be computed simultaneously, and for all K frames too. So if we have K=100 frames, we can compute 100×16×48=76,800 patches simultaneously, then arrange them as 16 HR estimates per frame, and then compute the median of all 16 estimates per frame to get the first estimate of our HR sequence X̂(1). Of course, this can also be done in series, computing one HR patch at a time. In any case, parallel computing is beyond the scope of this invention.
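A hedged sketch of this embarrassingly parallel loop, using the Parallel Computing Toolbox, might look as follows. The per-patch inputs Vc, Wc, PSIc, and xc are our own stand-in cell arrays (one entry per (frame, pattern, patch) triple), not names from our implementation:

nPatches = K * S * 48;
u = cell(nPatches, 1);
parfor n = 1:nPatches
    u{n} = computeHRpatch(Vc{n}, Wc{n}, PSIc{n}, xc{n}, mu, r, p, q, M);
end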


2nd and Subsequent Iterations

Compared to the first iteration, the second iteration (and subsequent iterations) is no different except for working with an updated estimate of the secondary LR sequence, which is obtained by simply decimating the previous estimate of the HR sequence (e.g. we get the updated sequence Y(2) by decimating each frame in the HR estimate X̂(1) by a factor of q).
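A minimal sketch of this update (our stand-in names, with random frames only to make it runnable; plain decimation that keeps every q-th pixel, matching how the PPCs are defined):

q = 4; numFrames = 100;
Xhat1 = arrayfun(@(t) rand(192, 256), 1:numFrames, 'UniformOutput', false);  % stand-in HR estimates
Y2 = cell(numFrames, 1);
for k = 1:numFrames
    Y2{k} = Xhat1{k}(1:q:end, 1:q:end);   % keep every q-th pixel per axis
end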


In particular, the primary LR sequence X remains the same for all iterations, so unless we adopt a different training-set extraction strategy or a different dictionary learning procedure per iteration, there is no need to repeat the training/learning process for the local primary LR dictionaries. For example, if we stored the dictionary Ψ shown in FIG. 15, we do not need to repeat its extraction process for any iteration beyond the first one. This is why the symbol Ψ is missing the iteration subscript (t).


Equation Solution Section

To solve the optimization problem (72) using Lagrange multipliers, we first rewrite (72) as












    min_{α,β} ( ∥Vα − Wβ∥² + μ∥ [𝒥  Ψ*x̄] [α; −1] ∥² )
    s.t. ∥α∥² = c > 0.   (A.1)

Now let

    α̃ = [α; −1]  ⇒  α = Eα̃,   (A.2)

where

    E = [ I_{p²M}   0̄ ]_{p²M × (p²M+1)}.   (A.3)







Also, let






    R = [ 𝒥   Ψ*x̄ ],   (A.4)


and rewrite (A.1) as











    min_{α̃,β} ( ∥VEα̃ − Wβ∥² + μ∥Rα̃∥² )
    s.t. ∥α̃∥² = c̃ > 0.   (A.5)







Let f(α̃,β) = ∥VEα̃ − Wβ∥² + μ∥Rα̃∥², and recall (54), (55) (the orthonormality of V and W). Therefore, the objective function f(α̃,β) can be rewritten as

    f(α̃,β) = α̃*(E*E + μR*R)α̃ − 2α̃*E*V*Wβ + ∥β∥².   (A.6)


Also, let





    g(α̃,β) = ∥α̃∥² − c̃.   (A.7)


Since we only have one constraint (A.7), there is only one Lagrange multiplier, which we denote with λ, and the Lagrangian is therefore






    𝓛(α̃,β,λ) = f(α̃,β) − λg(α̃,β).   (A.8)


Now the gradient of f(α̃,β) with respect to α̃ and β is

    ∇_{α̃,β} f = [ 2(E*E + μR*R)α̃ − 2E*V*Wβ
                   2(β − W*VEα̃) ].   (A.9)

Also,

    ∇_{α̃,β} g = [ 2α̃
                   0̄ ].   (A.10)

Therefore,












    ∇_{α̃,β,λ} 𝓛 = 2 [ (E*E + μR*R)α̃ − E*V*Wβ − λα̃
                        β − W*VEα̃
                        0.5(c̃ − ∥α̃∥²) ].   (A.11)

Setting ∇_{α̃,β,λ} 𝓛 = 0̄ gives

    (E*E + μR*R)α̃ − E*V*Wβ = λα̃,   (A.12)

    β = W*VEα̃,   (A.13)

    ∥α̃∥² = c̃.   (A.14)







Combining (A.12), (A.13), we get




    (E*E + μR*R − E*V*WW*VE)α̃ = λα̃.   (A.15)


We can now obtain the value of the objective function by plugging (A.13) into (A.6), and then using (A.15) and (A.14):













    f = α̃*(E*E + μR*R − E*V*WW*VE)α̃ = α̃*(λα̃) = λ∥α̃∥² = λc̃.   (A.16)







Now note that (A.3) implies

    E*E = [ I_{p²M}   0̄
            0̄         0 ]

and

    E*V*WW*VE = [ V*WW*V   0̄
                  0̄        0 ],

and (A.4) implies

    R*R = [ 𝒥*𝒥          𝒥*Ψ*x̄
            (𝒥*Ψ*x̄)*     ∥Ψ*x̄∥² ],




and therefore equation (A.15) is rewritten as











    [ μ𝒥*𝒥 + I − V*WW*V     μ𝒥*Ψ*x̄
      μ(𝒥*Ψ*x̄)*             μ∥Ψ*x̄∥² ] α̃ = λα̃.   (A.17)







Since any singular pair of the (symmetric) matrix in (A.17) is a solution, and since we seek to minimize the error (A.16), we pick the solution










    α̃̂ = √c̃ ν,

where ν is the last singular vector of the matrix in (A.17), with associated (smallest) singular value σ_ν (and thus √c̃ = −1/e, where e is the last element in ν).


Now recall (A.2) and rewrite (A.17) as











    [ μ𝒥*𝒥 + I − V*WW*V     μ𝒥*Ψ*x̄
      μ(𝒥*Ψ*x̄)*             μ∥Ψ*x̄∥² ] [α̂; −1] = σ_ν [α̂; −1].   (A.18)







The top part of equation (A.18) is





    (μ𝒥*𝒥 + I − V*WW*V)α̂ − μ𝒥*Ψ*x̄ = σ_ν α̂   (A.19)


and therefore





    α̂ = μ[μ𝒥*𝒥 + (1−σ_ν)I − V*WW*V]⁻¹(𝒥*Ψ*x̄).   (A.20)


Now, for numerical stability reasons, the computation of the smallest singular value of the matrix in (A.17) might be inaccurate (because the last row and the last column of the matrix in (A.17) have much larger values compared to μ𝒥*𝒥 + I − V*WW*V), so instead of using Ψ*x̄ in optimization problem (A.1) we use its normalized version

    Ψ*x̄/∥Ψ*x̄∥.




So equation (A.18) becomes

    [ μ𝒥*𝒥 + I − V*WW*V        μ𝒥*Ψ*x̄/∥Ψ*x̄∥
      μ(𝒥*Ψ*x̄)*/∥Ψ*x̄∥          μ ] [α̂; −1] = σ [α̂; −1],   (A.21)







where σ is the smallest singular value of the matrix in (A.21). Therefore, the corresponding final solution (after scaling back by ∥Ψ*x̄∥) is





    α̂ = μ[μ𝒥*𝒥 + (1−σ)I − V*WW*V]⁻¹(𝒥*Ψ*x̄).   (A.22)
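Note that (A.22) is precisely the linear solve implemented in the computeHRpatch listing above: the matrix μ𝒥*𝒥 + (1−σ)I − V*WW*V is built as A = mu*J + (1-sigma)*eye(p*p*M) - A, the right-hand side μ𝒥*Ψ*x̄ is brep, and the solve is alpha = A\brep.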


System Requirements for Implementing Algorithm on Laptop
Laptop Information
OS Name Microsoft Windows 10 Home Version 10.0.17134 Build 17134
System Manufacturer Dell Inc.
System Model G3 3579

System Type x64-based PC


Processor Intel(R) Core™ i7-8750H CPU @ 2.20 GHz, 2,208 MHz, 6 core(s).


Installed Physical Memory (RAM) 16.0 GB
Matlab Information:
MATLAB Version 9.4 (R2018a)

Add-on toolbox: Parallel Computing Toolbox Version 6.12 (R2018a)

    • This embodiment of the invention uses the basic Matlab package plus the Parallel Computing Toolbox to take advantage of the laptop's 6-core CPU.
    • Matlab is a programming language for scientific computing.
    • The computations in the algorithm are basic, so they can also be carried out on other (free) programming platforms such as Python (e.g. the Anaconda distribution), C, or Java.
    • The 100-frame Suzie sequence in the experiment section can be upsampled using Matlab and the listed laptop in a couple of hours (or even half an hour if we change some parameters and sacrifice a bit of quality for speed). The algorithm's complexity is comparatively low for the field of upsampling, but it is still too high for real-time/low-power applications, which makes optimization for faster computations potentially very useful. One might also consider solutions beyond computational optimization; for instance, images captured by a low-power device (e.g. an iPhone or a satellite) can be processed using cloud computing.
    • A 10-100× speedup can be obtained by converting the Matlab code to C.
    • The algorithm is parallelizable, so besides writing it in C, using a GPU (or the cloud) can considerably speed up calculations.

Claims
  • 1. A computer implemented method of denoising digital images, comprising the steps of: upsampling an input sequence of frames, using a structured subspace framework that leverages pairs of dictionaries, respectively, wherein one member of each of said pairs of dictionaries is directly created, and a second member of each of said pairs of dictionaries is indirectly created, from the input sequence of frames to create denoised versions of said frames.
  • 2. The method of claim 1, wherein each pair of said dictionaries is comprised of a first dictionary that is learned from the input sequence of frames, and a second, different-resolution, dictionary that is learned from a second sequence of frames obtained by decimating the input sequence.
  • 3. The method of claim 2, wherein the structured subspace framework is based upon estimating at least one of two representations (α and β) of a denoised image in terms of two bases (V and W), by solving Vα=Wβ.
  • 4. The method of claim 3, wherein said denoised frames can be estimated by estimating patches thereof using patch-based processing, where the structured subspace framework is applied locally per patch to leverage a pair of local dictionaries learned per patch.
  • 5. The method of claim 4, wherein the first dictionary is learned from local image patches extracted from the input sequence of frames, whereas the second dictionary is learned from local patches extracted from the second, different-resolution, sequence obtained by decimating the input sequence.
  • 6. The method of claim 2, wherein the estimation of the denoised sequence is repeated, such that the input sequence for a new iteration is a decimated version of the output sequence of the previous iteration.
  • 7. The method of claim 1, further comprising the step of enhancing said denoised versions by creating a first dictionary learned from the input sequence of original frames, creating a second dictionary learned from the denoised versions, and utilizing the structured subspace framework to leverage both dictionaries to create enhanced denoised frames of said denoised versions.
  • 8. A computer implemented computational method for denoising digital images by creating a sequence of denoised frames from an input sequence of frames, using a structured subspace framework that leverages pairs of dictionaries, comprising the steps of: scanning into a computer memory an input sequence of frames; creating a sequence of denoised frames using said structured subspace framework by estimating representations of denoised images in terms of pairs of bases, each of which embeds a different dictionary, by solving, per image, the equation: Vα=Wβ, where: V is the first basis matrix constructed such that it spans any image whose polyphase components (PPCs) are spanned by a first dictionary Ψ; W is the second basis matrix constructed such that it spans any image whose PPCs are spanned by another different resolution second dictionary Φ; α is the representation of the denoised image in terms of V; and, β is the representation of the denoised image in terms of W; multiplying the basis matrix V with the representation α, computed from the previous step, to obtain the denoised image.
  • 9. The method of claim 8, further comprising: dividing a frame, to be upsampled (for denoising), into patches, and applying the upsampling process for each individual patch, independently of other patches.
  • 10. The method of claim 9, wherein: said patches have size d×d pixels, and overlap, vertically and horizontally, by pq pixels, where: d=rpq, p is the desired upsampling factor, q=p+1, and r is a user-defined factor that determines the size of patches.
  • 11. The method of claim 10, wherein: per patch, a first dictionary is learned from a first set of local image patches that are extracted from the input sequence of images, whereas a second dictionary is learned from a second set of local image patches that are extracted from the second different-resolution sequence obtained by decimating the input sequence, in accord with the steps of: extracting a first set of patches from a spatiotemporal neighborhood within the input sequence, where said neighborhood is centered at the spatiotemporal location of the patch that is to be upsampled into a denoised version, and patches extracted from a frame, that belong to said neighborhood, can overlap vertically and horizontally by at most rq−1 pixels; wherein the M patches, from the first set of extracted patches, that are most similar to the patch that is to be upsampled into the denoised version, are used as atoms of the first dictionary; extracting a second set of patches from a spatiotemporal neighborhood within the second different-resolution sequence, where said neighborhood is centered at the spatiotemporal location of the patch, in the second sequence, that corresponds to the patch that is to be upsampled into the denoised version, and patches extracted from a frame, that belong to said neighborhood, can overlap vertically and horizontally by at most rp−1 pixels; wherein the N patches, from the second set of extracted patches, that are most similar to the patch in the second sequence that corresponds to the patch that is to be upsampled into the denoised version, are used as atoms of the second dictionary.
  • 12. The method of claim 10, wherein a denoised frame is obtained by: tiling its upsampled patches in their respective locations, forming r² different estimates of the same denoised frame; and, combining all r² estimates into one single denoised frame.
  • 13. The method of claim 12, where the denoised frames are decimated by p to create an updated version of the input sequence, and the solution is repeated, resulting in updated estimates of denoised frames until the denoised frames stop getting any better in accord with a predetermined criterion, defined as when a user-defined number of iterations is exhausted.
Provisional Applications (1)
    Number: 62874698   Date: Jul 2019   Country: US
Continuation in Parts (1)
    Parent: 16779121 (Jan 2020, US)   Child: 17303466 (US)