1. Suppose that S is an n × n matrix where n may be very large and the elements of S may
not be explicitly defined. We are interested in approximating the trace of S, that is, the sum
of its diagonal elements. For example, if S is a smoothing matrix in regression ($\hat{y} = Sy$),
then the trace of S gives a measure of the effective number of parameters used in the
smoothing method. (In multiple regression models, the smoothing matrix is the projection
matrix $X(X^T X)^{-1}X^T$, whose trace is the number of columns of X.)
(a) Show that if A and B are m×n and n×m matrices, respectively, then tr(AB) = tr(BA).
(This is a well-known fact but humour me with a proof!)
(b) Suppose that V is a random vector of length n such that $E[VV^T] = I$. If S is an n × n
non-random matrix, show that
$$E\left[V^T S V\right] = E\left[\mathrm{tr}\left(SVV^T\right)\right] = \mathrm{tr}\left(S\,E\left[VV^T\right]\right) = \mathrm{tr}(S)$$
and so tr(S) can be estimated by
$$\widehat{\mathrm{tr}}(S) = \frac{1}{m}\sum_{i=1}^m V_i^T S V_i$$
where $V_1, \cdots, V_m$ are independent random vectors with $E[V_i V_i^T] = I$.
(c) Suppose that the elements of each $V_i$ are independent, identically distributed random
variables with mean 0 and variance 1. Show that $\mathrm{Var}(\widehat{\mathrm{tr}}(S))$ is minimized by taking the
elements of $V_i$ to be ±1 each with probability 1/2.
Hint: This is easier than it looks – $\mathrm{Var}(V^T S V) = E[(V^T S V)^2] - \mathrm{tr}(S)^2$, so it suffices to
minimize
$$E[(V^T S V)^2] = \sum_{i=1}^n \sum_{j=1}^n \sum_{k=1}^n \sum_{\ell=1}^n s_{ij}\, s_{k\ell}\, E(V_i V_j V_k V_\ell).$$
Given our conditions on the elements $V_1, \cdots, V_n$ of $V_i$, most of the $E(V_i V_j V_k V_\ell)$ are either 0 or
1. You should be able to show that
$$E[(V^T S V)^2] = \sum_{i=1}^n s_{ii}^2\, E(V_i^4) + \text{constant}$$
and find $V_i$ to minimize $E(V_i^4)$ subject to $E(V_i^2) = 1$.
(d) Suppose we estimate the function g in the non-parametric regression model
yi = g(xi) + εi
for i = 1, · · · , n
using loess (i.e. the R function loess) where the smoothness is determined by the parameter
span lying between 0 and 1. Given a set of predictors {xi} and a value of span, write an R
function to approximate the effective number of parameters.
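One possible shape for such a function is sketched below (the function name loess.df, the default m, and the use of fitted are my own choices, not part of the assignment). Since loess is linear in the response for fixed {xi} and span, Sv is just the vector of fitted values obtained by smoothing v, so the estimator from part (b) with the ±1 vectors from part (c) applies directly:

```r
# Sketch: randomized trace estimate of the loess smoother matrix S.
# For fixed x and span, loess is linear in the response, so S v is
# the fitted vector from smoothing v itself.
loess.df <- function(x, span, m = 50) {
  n <- length(x)
  est <- numeric(m)
  for (i in 1:m) {
    v <- sample(c(-1, 1), n, replace = TRUE)  # Rademacher vector: E[v v^T] = I
    sv <- fitted(loess(v ~ x, span = span))   # S v
    est[i] <- sum(v * sv)                     # v^T S v
  }
  mean(est)  # approximates tr(S), the effective number of parameters
}
```

Increasing m reduces the Monte Carlo error, and by part (c) the ±1 elements give the smallest variance among i.i.d. mean-0, variance-1 choices.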
2. Suppose that $X_1, \cdots, X_n$ are independent Gamma random variables with common density
$$f(x; \alpha, \lambda) = \frac{\lambda^\alpha x^{\alpha-1} \exp(-\lambda x)}{\Gamma(\alpha)} \quad \text{for } x > 0$$
where α > 0 and λ > 0 are unknown parameters.
(a) The mean and variance of the Gamma distribution are $\alpha/\lambda$ and $\alpha/\lambda^2$, respectively. Use
these to define method of moments estimates of α and λ based on the sample mean and
variance of the data $x_1, \cdots, x_n$.
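As a minimal sketch (the function name gamma.mom is mine): equating $\bar{x} = \alpha/\lambda$ and $s^2 = \alpha/\lambda^2$ and solving gives $\hat{\alpha} = \bar{x}^2/s^2$ and $\hat{\lambda} = \bar{x}/s^2$, which in R is simply

```r
# Sketch: method of moments estimates for the Gamma parameters,
# obtained by solving xbar = alpha/lambda and s^2 = alpha/lambda^2.
gamma.mom <- function(x) {
  xbar <- mean(x)
  s2 <- var(x)  # divisor n - 1; the divisor n would also be defensible
  c(alpha = xbar^2 / s2, lambda = xbar / s2)
}
```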
(b) Derive the likelihood equations for the MLEs of α and λ and derive a Newton-Raphson
algorithm for computing the MLEs based on x1, · · · , xn. Implement this algorithm in R and
test on data generated from a Gamma distribution (using the R function rgamma). Your
function should also output an estimate of the variance-covariance matrix of the MLEs –
this can be obtained from the Hessian of the log-likelihood function.
Important note: To implement the Newton-Raphson algorithm, you will need to compute
the first and second derivatives of ln Γ(α). These two derivatives are called (respectively)
the digamma and trigamma functions, and these functions are available in R as digamma
and trigamma; for example,
> gamma(2) # gamma function evaluated at 2
[1] 1
> digamma(2) # digamma function evaluated at 2
[1] 0.4227843
> trigamma(2) # trigamma function evaluated at 2
[1] 0.6449341
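One way the iteration might be organized is sketched below (the function name gamma.mle and the method of moments starting values are my own choices; the score and Hessian follow from the log-likelihood $\ell(\alpha,\lambda) = n\alpha\log\lambda + (\alpha-1)\sum\log x_i - \lambda\sum x_i - n\log\Gamma(\alpha)$):

```r
# Sketch: Newton-Raphson for the Gamma MLEs, with the variance-covariance
# estimate taken as the inverse of the negative Hessian at the MLE.
gamma.mle <- function(x, tol = 1e-8, maxit = 100) {
  n <- length(x); sx <- sum(x); slx <- sum(log(x))
  # method of moments starting values
  a <- mean(x)^2 / var(x); l <- mean(x) / var(x)
  for (it in 1:maxit) {
    score <- c(n * log(l) + slx - n * digamma(a),  # d l / d alpha
               n * a / l - sx)                     # d l / d lambda
    H <- matrix(c(-n * trigamma(a), n / l,
                  n / l, -n * a / l^2), 2, 2)      # Hessian of l
    step <- solve(H, score)
    a <- a - step[1]; l <- l - step[2]
    if (sum(abs(step)) < tol) break
  }
  list(alpha = a, lambda = l, varcov = solve(-H))
}
```

This sketch assumes reasonable data; for poorly behaved starting values the Newton step could leave the parameter space, and a step-halving safeguard would be a natural addition.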
Supplemental problems:
3. Consider LASSO estimation in linear regression where we define $\hat{\beta}_\lambda$ to minimize
$$\sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 + \lambda \sum_{j=1}^p |\beta_j|$$
for some λ > 0. (We assume that the predictors are centred and scaled to have mean 0 and
variance 1, in which case $\bar{y}$ is the estimate of the intercept.) Suppose that the least squares
estimate (i.e. for λ = 0) is non-unique — this may occur, for example, if there is some exact
linear dependence in the predictors or if p > n. Define
$$\tau = \min_\beta \sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2$$
and the set
$$C = \left\{\beta : \sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 = \tau\right\}.$$
We want to look at what happens to the LASSO estimate $\hat{\beta}_\lambda$ as λ ↓ 0.
(a) Show that $\hat{\beta}_\lambda$ minimizes
$$\frac{1}{\lambda}\left\{\sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 - \tau\right\} + \sum_{j=1}^p |\beta_j|.$$
(b) Find the limit of
$$\frac{1}{\lambda}\left\{\sum_{i=1}^n (y_i - \bar{y} - x_i^T\beta)^2 - \tau\right\}$$
as λ ↓ 0 as a function of β. (What happens when $\beta \notin C$?) Use this to deduce that as λ ↓ 0,
$\hat{\beta}_\lambda \to \hat{\beta}_0$ where $\hat{\beta}_0$ minimizes $\sum_{j=1}^p |\beta_j|$ on the set C.
(c) Show that $\hat{\beta}_0$ is the solution of a linear programming problem. (Hint: Note that C can
be expressed in terms of β satisfying p linear equations.)
4. Consider minimizing the function
$$g(x) = x^2 - 2\alpha x + \lambda|x|^\gamma$$
where λ > 0 and 0 < γ < 1. (This problem arises, in a somewhat more complicated form, in
shrinkage estimation in regression.) The function $|x|^\gamma$ has a “cusp” at 0, which means that if
λ is sufficiently large then g is minimized at x = 0.
(a) g is minimized at x = 0 if, and only if,
$$\lambda \ge \frac{2}{2-\gamma}\left[\frac{2-2\gamma}{2-\gamma}\right]^{1-\gamma} |\alpha|^{2-\gamma}. \qquad (1)$$
Otherwise, g is minimized at $x^*$ satisfying $g'(x^*) = 0$. Using R, compare the following two
iterative algorithms for computing $x^*$ (when condition (1) does not hold):
(i) Set $x_0 = \alpha$ and define
$$x_k = \alpha - \frac{\lambda\gamma}{2}\,\frac{|x_{k-1}|^\gamma}{x_{k-1}}, \quad k = 1, 2, 3, \cdots$$
(ii) The Newton-Raphson algorithm with $x_0 = \alpha$.
Use different values of α, γ, and λ to test these algorithms. Which algorithm is faster?
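A possible starting point for the comparison is sketched below (the function names, the convergence criterion, and the iteration caps are my own choices). The iteration counts returned by each function are one way to judge speed; note that Newton-Raphson uses $g''(x) = 2 + \lambda\gamma(\gamma-1)|x|^{\gamma-2}$, which need not stay positive near 0:

```r
# Sketch: the fixed-point iteration (i) for solving g'(x) = 0.
fixed.point <- function(alpha, lambda, gam, tol = 1e-10, maxit = 1000) {
  x <- alpha
  for (k in 1:maxit) {
    xnew <- alpha - (lambda * gam / 2) * abs(x)^gam / x
    if (abs(xnew - x) < tol) return(list(x = xnew, iter = k))
    x <- xnew
  }
  list(x = x, iter = maxit)
}

# Sketch: Newton-Raphson (ii) applied to g'(x) = 0, starting at alpha.
newton <- function(alpha, lambda, gam, tol = 1e-10, maxit = 1000) {
  x <- alpha
  for (k in 1:maxit) {
    g1 <- 2 * x - 2 * alpha + lambda * gam * abs(x)^(gam - 1) * sign(x)
    g2 <- 2 + lambda * gam * (gam - 1) * abs(x)^(gam - 2)
    xnew <- x - g1 / g2
    if (abs(xnew - x) < tol) return(list(x = xnew, iter = k))
    x <- xnew
  }
  list(x = x, iter = maxit)
}

# e.g. compare fixed.point(2, 1, 0.5)$iter with newton(2, 1, 0.5)$iter
```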
(b) Functions like g arise in so-called bridge estimation in linear regression (which are
generalizations of the LASSO) – such estimation combines the features of ridge regression
(which shrinks least squares estimates towards 0) and model selection methods (which
produce exact 0 estimates for some or all parameters). Bridge estimates $\hat{\beta}$ minimize (for
some γ > 0 and λ > 0)
$$\sum_{i=1}^n (y_i - x_i^T\beta)^2 + \lambda \sum_{j=1}^p |\beta_j|^\gamma. \qquad (2)$$
See the paper by Huang, Horowitz and Ma (2008) (“Asymptotic properties of bridge
estimators in sparse high-dimensional regression models”, Annals of Statistics 36, 587–613)
for details. Describe how the algorithms in part (a) could be used to define a coordinate
descent algorithm to find $\hat{\beta}$ minimizing (2) iteratively one parameter at a time.
(c) Prove that g is minimized at 0 if, and only if, condition (1) in part (a) holds.
5. Suppose that A is a symmetric non-negative definite matrix with eigenvalues $\lambda_1 \ge \lambda_2 \ge
\cdots \ge \lambda_n \ge 0$. Consider the following algorithm for computing the maximum eigenvalue $\lambda_1$:
given $x_0$, define for $k = 0, 1, 2, \cdots$,
$$x_{k+1} = \frac{Ax_k}{\|Ax_k\|_2} \quad \text{and} \quad \mu_{k+1} = \frac{x_{k+1}^T A x_{k+1}}{x_{k+1}^T x_{k+1}}.$$
Under certain conditions, $\mu_k \to \lambda_1$, the maximum eigenvalue of A; this algorithm is known
as the power method and is particularly useful when A is sparse.
(a) Suppose that $v_1, \cdots, v_n$ are the eigenvectors of A corresponding to the eigenvalues
$\lambda_1, \cdots, \lambda_n$. Show that $\mu_k \to \lambda_1$ if $x_0^T v_1 \ne 0$ and $\lambda_1 > \lambda_2$.
(b) What happens to the algorithm if the maximum eigenvalue is not unique, that is,
$\lambda_1 = \lambda_2 = \cdots = \lambda_k$?
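For reference, a minimal sketch of the iteration as stated above (the function name power.method and the stopping rule on successive µ values are my own choices; convergence assumes $x_0^T v_1 \ne 0$ and a spectral gap, as in part (a)):

```r
# Sketch: the power method for the largest eigenvalue of a symmetric
# non-negative definite matrix A.
power.method <- function(A, x0, tol = 1e-10, maxit = 1000) {
  x <- x0
  mu <- NA
  for (k in 1:maxit) {
    y <- A %*% x
    x <- y / sqrt(sum(y^2))                      # x_{k+1} = A x_k / ||A x_k||_2
    mu.new <- sum(x * (A %*% x)) / sum(x^2)      # Rayleigh quotient mu_{k+1}
    if (!is.na(mu) && abs(mu.new - mu) < tol)
      return(list(value = mu.new, vector = x, iter = k))
    mu <- mu.new
  }
  list(value = mu, vector = x, iter = maxit)
}
```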
6. Consider the estimation procedure in problem 2 of Assignment #2 (where we used the
Gauss-Seidel algorithm to estimate {θi}). Use both gradient descent and accelerated gradient
descent to estimate {θi}. To find an appropriate value of the step size, it is useful to
approximate the maximum eigenvalue of the Hessian matrix of the objective function – the
algorithm in problem 5 is useful in this regard.