Assignment 5: CS 754, Advanced Image Processing

1. Consider an inverse problem of the form y = H(x) + η, where y is the observed degraded and noisy image,
x is the underlying image to be estimated, η is a noise vector, and H represents a transformation operator.
In the case of denoising, this operator is the identity matrix; in the case of compressed sensing,
it is the sensing matrix; and in the case of deblurring, it represents a convolution. The aim is to estimate
x given y and H, as well as the noise model. This is often framed as a Bayesian problem: maximize
p(x|y, H) ∝ p(y|x, H)p(x). In this relation, the first term in the product on the right-hand side is the
likelihood term, and the second term represents a prior probability imposed on x (a worked rewrite of this
MAP formulation is given after this question for reference).
With this in mind, we refer to the paper ‘User assisted separation of reflections from a single image using
a sparsity prior’ by Anat Levin, IEEE Transactions on Pattern Analysis and Machine Intelligence. Answer
the following questions:
• In Eqn. (7), explain what A_j and b_j represent, for each of the four terms in Eqn. (6).
• In Eqn. (6), which terms are obtained from the prior and which terms are obtained from the likelihood?
What is the prior used in the paper? What is the likelihood used in the paper?
• Why does the paper use a likelihood term that is different from the Gaussian likelihood? (An earlier
version of this question said ‘Gaussian prior’, which was incorrect.) [7+12+6=25 points]
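For reference, here is the standard rewrite of the general MAP formulation above. This is not specific to
Levin's paper (the paper's actual prior and likelihood are what the questions ask you to identify): taking
negative logarithms turns the posterior maximization into a minimization, and under a Gaussian noise
model η ∼ N(0, σ²I) the likelihood term becomes a quadratic data-fidelity term:

    \hat{x}_{\text{MAP}}
      = \arg\max_x \, p(y \mid x, H)\, p(x)
      = \arg\min_x \, \big[ -\log p(y \mid x, H) - \log p(x) \big]
      = \arg\min_x \, \frac{1}{2\sigma^2} \| y - H(x) \|_2^2 + R(x),

where R(x) = −log p(x) (up to additive constants) is the regularizer induced by the prior.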
2. Consider compressive measurements of the form y = Φx + η under the usual notation, with y ∈ R^m,
Φ ∈ R^{m×n}, m ≪ n, x ∈ R^n, and η ∼ N(0, σ²I_{m×m}). Instead of the usual model of assuming signal
sparsity in an orthonormal basis, consider that x is a random draw from a zero-mean Gaussian distribution
with known covariance matrix Σ_x (of size n × n). Derive an expression for the maximum a posteriori
(MAP) estimate of x given y, Φ, and Σ_x. Also, run the following simulation: Generate Σ_x = UΛU^T of
size 128 × 128, where U is a random orthonormal matrix and Λ is a diagonal matrix of eigenvalues of the
form c·i^{−α}, where c = 1 is a constant, i is an index for the eigenvalues with 1 ≤ i ≤ n, and α is a decay
factor for the eigenvalues. Generate 10 signals from N(0, Σ_x). For m ∈ {40, 50, 64, 80, 100, 120}, generate
compressive measurements of the form y = Φx + η for each signal x. In each case, Φ should be a matrix of
iid Gaussian entries with mean 0 and variance 1/m, and σ = 0.01 × the average absolute value of the entries
of Φx. Reconstruct x using the MAP formula, and plot the average RMSE versus m for the cases α = 3
and α = 0. Comment on the results: is there any difference in the reconstruction performance when α is
varied? If so, what could be the reason for the difference? [25 points]
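A minimal NumPy sketch of this simulation follows. It assumes the standard closed-form Gaussian MAP
estimate x̂ = Σ_x Φ^T (Φ Σ_x Φ^T + σ²I)^{-1} y (verify this against your own derivation before relying on
it), and it reports the relative error ‖x̂ − x‖₂/‖x‖₂ averaged over the 10 signals as the RMSE:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n = 128
    ms = [40, 50, 64, 80, 100, 120]

    def map_estimate(Phi, y, Sigma, sigma):
        # Gaussian MAP / posterior mean: Sigma Phi^T (Phi Sigma Phi^T + sigma^2 I)^{-1} y
        m = Phi.shape[0]
        return Sigma @ Phi.T @ np.linalg.solve(Phi @ Sigma @ Phi.T + sigma**2 * np.eye(m), y)

    for alpha in (3.0, 0.0):
        # Sigma_x = U Lambda U^T with random orthonormal U and eigenvalues i^{-alpha} (c = 1)
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        lam = np.arange(1, n + 1) ** (-alpha)
        Sigma = (U * lam) @ U.T
        # 10 draws from N(0, Sigma): x = U Lambda^{1/2} z with z ~ N(0, I)
        X = (U * np.sqrt(lam)) @ rng.standard_normal((n, 10))
        avg_rmse = []
        for m in ms:
            errs = []
            for x in X.T:
                Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # iid N(0, 1/m) entries
                clean = Phi @ x
                sigma = 0.01 * np.mean(np.abs(clean))
                y = clean + sigma * rng.standard_normal(m)
                xhat = map_estimate(Phi, y, Sigma, sigma)
                errs.append(np.linalg.norm(xhat - x) / np.linalg.norm(x))
            avg_rmse.append(np.mean(errs))
        plt.plot(ms, avg_rmse, marker="o", label=f"alpha = {alpha:g}")
    plt.xlabel("m"); plt.ylabel("average RMSE"); plt.legend(); plt.show()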
3. Read through the proof of Theorem 3.3 from the paper ‘Guaranteed Minimum-Rank Solutions of Linear
Matrix Equations via Nuclear Norm Minimization’ from the homework folder. This theorem refers to the
optimization problem in Eqn. 3.1 of the same paper. Answer all the questions highlighted within the proof.
You may directly use linear algebra results quoted in the paper without proving them from scratch, but
mention very clearly which result you used and where. [12 × 2 + 1 = 25 points]
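For context, the optimization problem in Eqn. 3.1 of that paper is the nuclear-norm heuristic: minimize
‖X‖_* subject to A(X) = b, where A is a linear measurement map. A minimal numerical sketch, assuming
cvxpy is available (the matrix sizes, rank, and number of measurements below are arbitrary illustrative
choices, not values from the paper):

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n1, n2, r, p = 20, 20, 2, 220

    # Random rank-r ground truth and p random linear measurements b_i = <A_i, M>
    M = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
    A = rng.standard_normal((p, n1, n2))
    b = np.tensordot(A, M, axes=([1, 2], [0, 1]))

    # Eqn. 3.1: minimize ||X||_* subject to A(X) = b
    X = cp.Variable((n1, n2))
    constraints = [cp.sum(cp.multiply(A[i], X)) == b[i] for i in range(p)]
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
    prob.solve()
    print("relative recovery error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))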
4. Read section 1 of the paper ‘Exact Matrix Completion via Convex Optimization’ from the homework folder.
Answer the following questions: (1) Why do the theorems on low-rank matrix completion require that the
singular vectors be incoherent with the canonical basis (i.e., the columns of the identity matrix)? (2) How
would this coherence condition change if the sampling operator were changed to the one in Eqn. 1.13 of the
paper? (3) The paper gives an example of a matrix which is low rank but cannot be recovered from its
randomly sampled entries. What is that example, and why can it not be recovered by the techniques in the
paper? [5+5+5=15 points]
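To make the coherence condition in (1) concrete: the paper measures the coherence of an r-dimensional
subspace U of R^n against the canonical basis as μ(U) = (n/r)·max_i ‖P_U e_i‖², where P_U is the
orthogonal projection onto U. A small NumPy sketch (the two subspace choices below are illustrative
assumptions, not the paper's example):

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 100, 5

    def coherence(U):
        # mu(U) = (n/r) * max_i ||P_U e_i||^2; for orthonormal U, ||P_U e_i||^2 is
        # the squared norm of the i-th row of U. Ranges from 1 (best) to n/r (worst).
        n, r = U.shape
        return (n / r) * np.max(np.sum(U ** 2, axis=1))

    # Column space of a random Gaussian matrix: incoherent with the canonical basis
    U_random, _ = np.linalg.qr(rng.standard_normal((n, r)))
    # Subspace spanned by r canonical basis vectors: maximally coherent (mu = n/r)
    U_spiky = np.eye(n)[:, :r]

    print(coherence(U_random), coherence(U_spiky))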
5. Read section 5.9 of the paper ‘Low-Rank Modeling and Its Applications in Image Analysis’ from the
homework folder. You will find numerous image analysis or computer vision applications of low-rank matrix
modelling and/or RPCA, which we did not cover in class. Your task is to glance through any one of the
papers cited in this section and answer the following: (1) State the title and venue of the paper; (2) Briefly
explain the problem being solved in the paper; (3) Explain how low-rank matrix recovery/completion or
RPCA is being used to solve that problem. Write down the objective function being optimized in the paper,
with the meaning of all symbols clearly explained. [10 points]
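For reference while answering (3): the RPCA formulation that most of the cited applications build on is
principal component pursuit, which decomposes a data matrix M into a low-rank part L and a sparse part
S. The paper you pick may use a variant (e.g. with an extra noise term or weighted penalties):

    \min_{L, S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad L + S = M,

where ‖L‖_* is the nuclear norm (sum of singular values), ‖S‖_1 is the entrywise ℓ₁ norm, and λ > 0
trades off the two terms (a common default is λ = 1/√max(n₁, n₂) for an n₁ × n₂ matrix M).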