## Description

(1) (Strong Duality) Prove part (ii) of the Strong Duality Theorem for Inequality Constraints that was left out in Lecture 15.

(Hint: Consider the non-vertical hyperplane from the proof of part (i) with normal $(\tilde{\mu}, 1)^t$ that achieves $q^*$, and show that

$$\sum_{j=1}^{m} \tilde{\mu}_j \le \frac{f(y) - q^*}{\min_j |g_j(y)|} < \infty,$$

where $y$ is the vector achieving Slater's condition.)

Remark: It turns out that, conversely, Slater's condition must hold for a problem with a bounded set of dual optimal solutions.

(2) (Optimal Primal-Dual Pairs) Prove the primal-dual optimality theorem that was left out in Lecture 16.

(Hint: Since we have already proven a very similar theorem stated in Lecture 15, all you need to prove is that condition (iv) of the previous theorem is true if and only if the complementary slackness condition of this theorem holds.)

(3) (Lagrangian Duality) Problem 5.1 from Boyd & Vandenberghe.

(4) (Multiple Duals of the same Primal) Consider the following optimization problem in $\mathbb{R}^2$:

$$\text{minimize } e^{-x_2} \quad \text{subject to } \|x\| \le x_1,\; x_2 \ge 0.$$

(a) What is the constraint set? What is the optimal value of the problem?

(b) Consider a dual problem obtained by relaxing only the constraint $x_2 \ge 0$. What is the dual optimal value? Is there a duality gap?

(c) Consider a dual problem obtained by relaxing only the constraint $\|x\| \le x_1$. What is the dual optimal value in this case? Is there a duality gap?

(d) What is your interpretation of the meaning of the results obtained in (b) and (c)?

(5) (Convexity and Duality) Problem 5.22 parts (a), (b), and (d) from Boyd & Vandenberghe.

(6) (KKT Conditions) Problem 5.29 from Boyd & Vandenberghe.

(7) (Dual Formulation and Solution) Derive the dual of the following problem as an unconstrained maximization over a $k$-dimensional space, comment on the existence of primal and dual solutions, and clearly relate the dual optimal solutions (if they exist) to the primal optimal solutions.

$$\text{minimize } \sum_{i=1}^{n} \left[ x_i \log \frac{x_i}{c_i} - x_i \right] \quad \text{subject to } x \ge 0,\; Ax = b,$$

where $c > 0$ (componentwise), $b \in \mathbb{R}^k$, and $A \in \mathbb{R}^{k \times n}$ is a $k \times n$ matrix with $i$th column denoted $A_i \in \mathbb{R}^k$.
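A numerical weak-duality check on a small made-up instance can help validate whatever dual you derive. In the sketch below (the instance data, bracket bounds, and function names are assumptions for illustration, not part of the problem), the Lagrangian separates over the coordinates of $x$, so the dual function is evaluated by one-dimensional ternary search:

```python
import math

def primal_value(x, c):
    """Objective sum_i [x_i log(x_i / c_i) - x_i] at a point x > 0."""
    return sum(t * math.log(t / ci) - t for t, ci in zip(x, c))

def dual_value(lam, A, b, c, lo=1e-9, hi=10.0, iters=200):
    """q(lam) = inf_{x > 0} L(x, lam); the infimum splits into n
    independent convex 1-D problems, each solved by ternary search."""
    k, n = len(b), len(c)
    q = sum(lam[j] * b[j] for j in range(k))
    for i in range(n):
        a_i = sum(lam[j] * A[j][i] for j in range(k))  # (A^T lam)_i
        phi = lambda t: t * math.log(t / c[i]) - t - a_i * t
        l, h = lo, hi
        for _ in range(iters):  # shrink a bracket around the minimizer
            m1, m2 = l + (h - l) / 3.0, h - (h - l) / 3.0
            if phi(m1) < phi(m2):
                h = m2
            else:
                l = m1
        q += phi(0.5 * (l + h))
    return q
```

For example, with the toy data $A = [1\ 1\ 1]$, $b = (1)$, $c = (1, 1, 1)$, the point $x = (1/3, 1/3, 1/3)$ is feasible, and `dual_value` stays below `primal_value` for every multiplier, as weak duality requires.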

(8) (Gradient-Projection Method Implementation) Write the code (submit it along with your solution) for the gradient-projection method that solves:

$$\text{minimize } x_1^2 + 9x_2^2 \quad \text{subject to } x \ge 0,\; 2x_1 + x_2 \ge 1,\; x_1 + 3x_2 \ge 1.$$

Choose different stepsizes, demonstrate how the algorithm converges and/or diverges, and provide a justified suggestion for an optimal stepsize choice.
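As a minimal starting sketch (not a complete submission), one way to implement the method is to compute the Euclidean projection onto the polyhedral feasible set with Dykstra's alternating-projection algorithm; the function names, iteration counts, and defaults below are illustrative assumptions:

```python
def proj_halfspace(x, a, b):
    """Euclidean projection of x onto the halfspace {z : a.z >= b}."""
    s = a[0] * x[0] + a[1] * x[1] - b
    if s >= 0:
        return x
    n2 = a[0] * a[0] + a[1] * a[1]
    return (x[0] - s * a[0] / n2, x[1] - s * a[1] / n2)

def proj_feasible(x, sweeps=100):
    """Dykstra's algorithm: approximate projection onto the intersection
    of the nonnegative orthant with 2x1 + x2 >= 1 and x1 + 3x2 >= 1."""
    projs = [
        lambda z: (max(z[0], 0.0), max(z[1], 0.0)),    # x >= 0
        lambda z: proj_halfspace(z, (2.0, 1.0), 1.0),  # 2x1 + x2 >= 1
        lambda z: proj_halfspace(z, (1.0, 3.0), 1.0),  # x1 + 3x2 >= 1
    ]
    p = [(0.0, 0.0)] * len(projs)  # Dykstra correction terms
    for _ in range(sweeps):
        for i, P in enumerate(projs):
            y = P((x[0] + p[i][0], x[1] + p[i][1]))
            p[i] = (x[0] + p[i][0] - y[0], x[1] + p[i][1] - y[1])
            x = y
    return x

def gradient_projection(x0, step, iters=500):
    """Iterate x <- P(x - step * grad f(x)) for f(x) = x1^2 + 9*x2^2."""
    x = x0
    for _ in range(iters):
        g = (2.0 * x[0], 18.0 * x[1])
        x = proj_feasible((x[0] - step * g[0], x[1] - step * g[1]))
    return x
```

Since the Hessian of the objective is $\mathrm{diag}(2, 18)$, constant stepsizes below $2/18 \approx 0.11$ should converge (e.g. `gradient_projection((1.0, 1.0), 0.05)` settles near $(1/2, 1/6)$), while substantially larger stepsizes can be used to illustrate divergence.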