ROB 501 – Mathematics for Robotics HW 1


1. Many problems with matrices are easier to work out when the matrices are partitioned nicely, often
into rows or columns. Let A be an n × m matrix and B an m × p matrix. Denote the i-th row of A
by $a^i$ and the j-th column of B by $b^j$. Show the following:

(a) $AB = \big[\, Ab^1 \mid Ab^2 \mid \cdots \mid Ab^p \,\big]$

(b) $AB = \begin{bmatrix} a^1 B \\ a^2 B \\ \vdots \\ a^n B \end{bmatrix}$

(c) $[AB]_{ij} = a^i b^j$

Notation: For a matrix M, where the entry of the i-th row and j-th column is $m_{ij}$, we use the
notation $[M]_{ij} = m_{ij}$.
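
Not a substitute for the proofs, but if you would like a quick numerical sanity check of (a)-(c), something along these lines works in MATLAB; the sizes n, m, p below are arbitrary choices.

% Numerical sanity check of the partitioned-product identities (sizes are arbitrary).
n = 4; m = 3; p = 5;
A = rand(n, m);  B = rand(m, p);
C = A*B;
norm(C(:,2) - A*B(:,2))        % (a): column 2 of AB vs. A*b^2, ~0 up to round-off
norm(C(3,:) - A(3,:)*B)        % (b): row 3 of AB vs. a^3*B, ~0
abs(C(2,4) - A(2,:)*B(:,4))    % (c): [AB]_{24} vs. a^2*b^4, ~0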

2. Let A be an n × n (i.e., square) real matrix, and denote the entry of the i-th row and j-th column by
$a_{ij}$. The trace of a matrix is the sum of its diagonal entries, and thus
trace of A = $\mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii}$.
Compute the trace of the following matrices:

(a) $A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}$
(b) Let $x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$ be a vector in $\mathbb{R}^n$. Compute $\mathrm{trace}(x x^\top)$.

(c) Suppose that K is an n × m real matrix and Q is a square n × n matrix. Let $k^i$ be the i-th column
of K. Compute $\mathrm{tr}(K^\top Q K)$, where $K^\top$ denotes the transpose of K. Your answer should be in terms of $k^i$ and Q.
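
A quick MATLAB check for your closed-form answer to (b); n here is an arbitrary choice, and the comparison against your own expression is left to you.

% Compare trace(x*x') against your hand-derived expression (n is arbitrary).
n = 6;
x = rand(n, 1);
trace(x*x')                    % trace of the n-by-n outer product
% ...should agree with your closed-form answer evaluated on the same x.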

3. Recall that a real matrix M is symmetric if it is equal to its transpose: $M^\top = M$. Hence, a symmetric
matrix is square.
(a) By hand, compute the eigenvalues and eigenvectors of $M = \begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}$.
(b) Let $v^1$ and $v^2$ be the e-vectors computed above and compute $(v^1)^\top v^2$.

(c) Show that $M = A^\top A$ is symmetric for any real n × m matrix A. [Yes, this is trivial, but useful
to know].

(d) Use the rand command in MATLAB to form 10 different real n × m matrices A, for n ≥ m of
your choice, as long as some of the m are greater than or equal to 3. For each matrix, do the
following (a MATLAB sketch of these steps is given after the list):
i. Form the m × m matrix $M = A^\top A$.
ii. Use the eig command in MATLAB to compute e-values and e-vectors of M.
iii. Denoting the e-vectors by $\{v^1, \cdots, v^m\}$, form the “inner product” $(v^i)^\top v^j$ for a few of the
e-vectors, with $i \neq j$, and see what you get. Note that $(v^i)^\top v^j$ should be a real number.

iv. Sum up the e-values and compare to the trace of the matrix.
v. Multiply the e-values and compare to the determinant of the matrix (use the det command in
MATLAB to compute the determinant).

vi. Summarize your observations on the e-values and the e-vectors in a few sentences. Please do
not report your matrices and calculations. There is no need to turn in your MATLAB code.
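
Here is a minimal MATLAB sketch of one pass through steps (i)-(v); the sizes n and m are example choices, and you would repeat this for your 10 matrices.

% One pass of the experiment in (d); repeat for your own choices of n >= m.
n = 6; m = 4;                  % example sizes (some m should be >= 3)
A = rand(n, m);
M = A'*A;                      % (i)   m-by-m matrix M = A'A
[V, D] = eig(M);               % (ii)  columns of V are e-vectors, diag(D) are e-values
V(:,1)'*V(:,2)                 % (iii) inner product of two e-vectors with i ~= j
V'*V                           % (iii) all inner products at once (see Hints: Prob. 3)
sum(diag(D)) - trace(M)        % (iv)  sum of e-values vs. trace of M
prod(diag(D)) - det(M)         % (v)   product of e-values vs. determinant of M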

4. Let X be a Gaussian random variable with mean µ and standard deviation σ > 0; hence X has density
$f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.$
If you prefer to call it a normal distribution, that is fine too!

(a) On the same graph, plot the density for µ = 0 and σ = 1 as well as for µ = 0 and σ = 3.
Obviously, you cannot represent −∞ < x < ∞, so just choose a reasonable subset.
(b) For µ = 2 and σ = 5.0, determine
i. P{X ≥ 4}
ii. P{−2 ≤ X ≤ 4}
iii. P(X ∈ A), where A = [−2, 4] ∪ [8, 100]

(c) What is the density of the random variable Y = 2X + 4? Use the X defined in (b).

5. Let X and Y be jointly distributed random variables with joint density
$f_{XY}(x, y) = K(x + y)^2$, for $0 \le x \le 1$, $0 \le y \le 2$.

(a) Determine the constant K so that fXY (x, y) is a density. Fix K at this value for the rest of the
problem.
(b) Determine the marginal distributions of X and Y ; in particular, give their densities.
(c) Determine the conditional distribution of X given Y = y; in particular, give the conditional
density.

6. Use the method of Lagrange multipliers to solve the following minimization problem, for $x_1 \in \mathbb{R}$ and
$x_2 \in \mathbb{R}$:
$\min\ (x_1)^2 + (x_2)^2 \quad \text{subject to: } x_1 + 3x_2 = 4.$

No cheating: If you solve for $x_1$ in terms of $x_2$ and substitute into the minimization problem, you get
zero points! The purpose is really to review something you learned in Calculus on a simple problem.

7. Challenge Problem: Let X and Y be jointly Gaussian random variables with means $\mu_X = 1$ and
$\mu_Y = 2$, and covariance matrix $\Sigma = \begin{bmatrix} 3 & \sqrt{5} \\ \sqrt{5} & 2 \end{bmatrix}$. (If you prefer to call these bivariate normal random
variables, that is fine too.)

(a) Determine the marginal distributions of X and Y ; in particular, give their densities.
(b) Determine the conditional distribution of X given Y = y; in particular, give the conditional
density.

(c) Is the variance of X given Y = y greater or less than the variance of X without knowing anything
about the value of Y ?
(d) Plot the conditional density of X given Y = 10.

Remark: If you cannot work Prob. 7, do not worry too much about it. However, do take it as
motivation to start reading on your own about jointly Gaussian random variables. We will need this
material when we cover the Kalman filter.

Hints
Hints: Prob. 1 While these are technically proofs, they are direct proofs, and the approach for each of the
three is the same: simply show that the ij entry of the left hand side is equal to the ij entry of the right
hand side. Note that by definition of matrix multiplication,
$[AB]_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}.$

Notation that is useful for (a): For a column vector, we let subscript i denote its i-th component. Hence,
$[b^j]_i = b_{ij}$.

Notation that is useful for (b): For a row vector, we let subscript j denote its j-th component. Hence,
$[a^i]_j = a_{ij}$.

Hints: Prob. 2 For facts about the trace operator, see
http://en.wikipedia.org/wiki/Trace_(linear_algebra).
Among the many facts on the above web page, it is worth knowing that if A is an n × m matrix and B is an
m × n matrix, then
trace(AB) = trace(BA).
This can be used to quickly simplify part (b), for example. For part (c), use your results on partitioned
matrices from Prob. 1.
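
For instance, the identity is easy to confirm numerically in MATLAB; the sizes below are arbitrary.

% Numerical illustration of trace(AB) = trace(BA); sizes are arbitrary.
n = 4; m = 7;
A = rand(n, m);  B = rand(m, n);
trace(A*B) - trace(B*A)        % ~0 up to round-off (A*B is n-by-n, B*A is m-by-m)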

Hints: Prob. 3 In MATLAB, try these commands:
>> help rand
>> A=rand(3,4)

The rand command draws samples from the uniform distribution on (0, 1). In (d)-(iii), you can form ALL of the inner products
very quickly as follows:
• Define a matrix V with the columns being the eigenvectors; in fact, this is what the eig command
gives in MATLAB.
• Evaluate $V^\top V$, and note from Prob. 1(c) that $[V^\top V]_{ij} = (v^i)^\top v^j$.

Hints: Prob. 4
• >> help plot
• >> help hold
• Part (b) requires integration of the density. Once you set up the integrals, you do NOT have to do
them analytically. You can do them numerically in MATLAB. Try >> help quad. (A sketch of this numerical route is given after this list.)
• Part (c) We will review this later in the term. See Example 4 here:
https://www.statlect.com/probability-distributions/normal-distribution-linear-combinations
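
A sketch of the numerical route for (b)(i), using the integral command (quad works similarly on older releases but needs a finite upper limit in place of Inf); µ, σ, and the interval come from the problem statement.

% Numerical evaluation of P{X >= 4} for mu = 2, sigma = 5 (Prob. 4(b)(i)).
mu = 2;  sigma = 5;
f = @(x) exp(-(x - mu).^2 ./ (2*sigma^2)) ./ (sigma*sqrt(2*pi));   % Gaussian density
P = integral(f, 4, Inf)        % the other parts of (b) are integrals of f over the given sets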

Hints: Prob. 5
• It must integrate to 1. You may find it easier to expand $(x + y)^2$ before integrating. (A numerical check is sketched after this list.)
• Recall $f_X(x) = \int_0^2 f_{XY}(x, y)\, dy$.
• There is a ratio of two densities involved!
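
A minimal numerical check for (a), using integral2; this only verifies your hand computation, it is not the requested derivation.

% The density must integrate to 1 over the rectangle [0,1] x [0,2].
c = integral2(@(x, y) (x + y).^2, 0, 1, 0, 2);   % integral of (x+y)^2 over the rectangle
K = 1/c                                          % compare with your hand-computed K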

Hints: Prob. 6 Review your Lagrange multipliers online, for example
• https://www.youtube.com/watch?v=ry9cgNx1QV8
• Google “lagrange multipliers” if you need more help.
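
For reference, the general form of the conditions you will be writing down (stated for a generic objective $f$ and constraint $g$, not for the specific problem above): form the Lagrangian $L(x, \lambda) = f(x) + \lambda\, g(x)$ and solve the stationarity conditions $\nabla_x L = \nabla f(x) + \lambda \nabla g(x) = 0$ together with the constraint $g(x) = 0$.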

Hints: Prob. 7
• No further hints given in office hours. This problem will not count towards assignment points.
• X and Y are Gaussian random variables. You can read their means and variances from the provided
data without doing any calculations.

• X given Y is also a Gaussian random variable. You can determine its mean and variance from the
given data with very simple calculations. If you try to derive it from
$f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)},$
you are in for a lot of algebra. I realize that you may not have covered this material in your undergraduate
probability course, and we will review it later in the term. In the meantime, search on the web to figure
out the answer.

• If your search fails, look at the following link.
https://www.probabilitycourse.com/chapter5/5_3_2_bivariate_normal_dist.php
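
For orientation, the standard bivariate-normal conditioning facts you will find at that link: X given Y = y is again Gaussian, with $\mathrm{E}[X \mid Y = y] = \mu_X + \frac{\sigma_{XY}}{\sigma_Y^2}(y - \mu_Y)$ and $\mathrm{Var}(X \mid Y = y) = \sigma_X^2 - \frac{\sigma_{XY}^2}{\sigma_Y^2}$, where $\sigma_{XY}$ denotes the covariance of X and Y.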