Description
13. Write a routine to generate an n by n matrix with a given 2-norm condition number. You
can make your routine a function in MATLAB that takes two input arguments – the matrix
size n and the desired condition number condno – and produces an n by n matrix A with the
given condition number as output:
function A = matgen(n, condno)
Form A by generating two random orthogonal matrices U and V and a diagonal matrix Σ
with σ_ii = condno^(−(i−1)/(n−1)), and setting A = UΣV^T. (Note that the largest diagonal entry
of Σ is 1 and the smallest is condno^(−1), so the ratio is condno.) You can generate a random
orthogonal matrix in MATLAB by first generating a random matrix, Mat = randn(n,n), and
then computing its QR decomposition, [Q,R] = qr(Mat). The matrix Q is then a random
orthogonal matrix. You can check the condition number of the matrix you generate by using
the function cond in MATLAB. Turn in a listing of your code.
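A minimal sketch of such a routine, following exactly the construction just described (the variable names are only illustrative):
function A = matgen(n, condno)
% MATGEN  Generate an n by n matrix with 2-norm condition number condno.
% (Sketch only; assumes n > 1.)
[U,RU] = qr(randn(n,n));            % random orthogonal U
[V,RV] = qr(randn(n,n));            % random orthogonal V
s = condno.^(-(0:n-1)/(n-1));       % singular values 1, ..., condno^(-1)
A = U*diag(s)*V';                   % A = U*Sigma*V^T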
For condno = (1, 10^4, 10^8, 10^12, 10^16), use your routine to generate a random matrix A with
condition number condno. Also generate a random vector xtrue of length n, and compute
the product b = A*xtrue.
Solve Ax = b using Gaussian elimination with partial pivoting. This can be done in MATLAB
by typing x = A\b. Determine the 2-norm of the relative error in your computed solution,
‖x − xtrue‖/‖xtrue‖. Explain how this is related to the condition number of A. Compute the
2-norm of the relative residual, ‖b − Ax‖/(‖A‖ ‖x‖). Does the algorithm for solving Ax = b
appear to be backward stable; that is, is the computed solution the exact solution to a nearby
problem?
Solve Ax = b by inverting A: Ainv = inv(A), and then multiplying b by A^(−1): x = Ainv*b.
Again look at relative errors and residuals. Does this algorithm appear to be backward stable?
Finally, solve Ax = b using Cramer’s rule. Use the MATLAB function det to compute determinants. (Don’t worry: it does not use the n! algorithm discussed in Section ??.) Again look
at relative errors and residuals and determine whether this algorithm is backward stable.
Turn in a table showing the relative errors and residuals for each of the three algorithms and
each of the condition numbers tested, along with a brief explanation of the results.
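A driver along the following lines can then produce the requested table (an illustrative sketch; the output format and the matrix size n are up to you):
% Compare Gaussian elimination, explicit inversion, and Cramer's rule
% on matrices of increasing condition number (sketch).
n = 10;
for condno = [1 1e4 1e8 1e12 1e16],
  A = matgen(n, condno);
  xtrue = randn(n,1);
  b = A*xtrue;
  x1 = A\b;                         % Gaussian elimination with partial pivoting
  x2 = inv(A)*b;                    % explicit inverse
  x3 = zeros(n,1); dA = det(A);     % Cramer's rule
  for j=1:n,
    Aj = A; Aj(:,j) = b;            % replace column j of A by b
    x3(j) = det(Aj)/dA;
  end;
  xs = [x1 x2 x3]; names = {'A\b      ', 'inv(A)*b ', 'Cramer   '};
  for k=1:3,
    x = xs(:,k);
    fprintf('condno %8.1e  %s  relerr %8.1e  relres %8.1e\n', condno, ...
      names{k}, norm(x-xtrue)/norm(xtrue), norm(b-A*x)/(norm(A)*norm(x)));
  end;
end;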
Following are the results obtained in MATLAB for Gaussian elimination with partial pivoting, computing the inverse, and using Cramer’s rule:
Fall 2017
Problem Set 3: Due Oct 26, 2017.
• (7.7.16)
16. Consider the following least squares approach for ranking sports teams. Suppose we have
four college football teams, called simply T1, T2, T3, and T4. These four teams play each
other with the following outcomes:
• T1 beats T2 by 4 points: 21 to 17.
• T3 beats T1 by 9 points: 27 to 18.
• T1 beats T4 by 6 points: 16 to 10.
• T3 beats T4 by 3 points: 10 to 7.
• T2 beats T4 by 7 points: 17 to 10.
To determine ranking points r1,…,r4 for each team, we do a least squares fit to the overdetermined linear system:
r1 − r2 = 4
r3 − r1 = 9
r1 − r4 = 6
r3 − r4 = 3
r2 − r4 = 7.
This system does not have a unique least squares solution, however, since if (r1, …, r4)^T is
one solution and we add to it any constant vector, such as the vector (1, 1, 1, 1)^T, then we
obtain another vector for which the residual is exactly the same. Show that if (r1, …, r4)^T
solves the least squares problem above then so does the vector (r1 + c, …, r4 + c)^T for any
constant c.
Since (ri + c) − (rj + c) = ri − rj for any constant c, the residual vector
(4 − (r1 − r2), 9 − (r3 − r1), …, 7 − (r2 − r4))^T in the above least squares problem is the same
for (r1, …, r4)^T as for (r1 + c, …, r4 + c)^T. Hence if one of these is a solution then
so is the other.
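This invariance is also easy to confirm numerically; here is a small sketch (A5 and b5 are just the five game equations written out, and the names are illustrative):
% The residual norm is unchanged when a constant is added to every rating.
A5 = [1 -1 0 0; -1 0 1 0; 1 0 0 -1; 0 0 1 -1; 0 1 0 -1];  % five game equations
b5 = [4; 9; 6; 3; 7];                                      % point differentials
r  = pinv(A5)*b5;      % one least squares solution of the rank-deficient system
c  = 3.7;              % any constant
norm(b5 - A5*r)        % residual norm for r
norm(b5 - A5*(r + c))  % identical residual norm for r + c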
To make the solution unique, we can fix the total number of ranking points, say, at 20. To
do this, we add the following equation to those listed above:
r1 + r2 + r3 + r4 = 20.
Note that this equation will be satisfied exactly since it will not affect how well the other
equalities can be approximated. Use MATLAB to determine the values r1, r2, r3, r4 that most
closely satisfy these equations, and based on your results, rank the four teams. [This method
of ranking sports teams is a simplification of one introduced by Ken Massey in 1997 [?]. It
has evolved into a part of the famous BCS (Bowl Championship Series) model for ranking
college football teams and is one factor in determining which teams play in bowl games.]
The following MATLAB code returned the ranking points below for teams 1 through
4:
% Rank sports teams using least squares.
A = zeros(6,4); % 6 eqs in 4 unknowns
A(1,1)=1; A(1,2)=-1;
A(2,3)=1; A(2,1)=-1;
A(3,1)=1; A(3,4)=-1;
A(4,3)=1; A(4,4)=-1;
A(5,2)=1; A(5,4)=-1;
A(6,:) = ones(1,4);
b = [4; 9; 6; 3; 7; 20];
r = A\b
r =
5.2500
4.6250
9.1250
1.0000
Based on these ranking points, T3 is best, followed by T1, then T2, and then T4.
• (7.7.17)
17. This exercise uses the MNIST database of handwritten digits, which contains a training set of
60,000 numbers and a test set of 10,000 numbers. Each digit in the database was placed in a
28×28 grayscale image such that the center of mass of its pixels is at the center of the picture.
To load the database, download it from the book’s web page and type load mnist_all.mat
in MATLAB. Type who to see the variables containing training digits (train0 – train9) and
test digits (test0 – test9). You will find digits intended to train an algorithm to recognize
a handwritten 0 in the matrix train0, which has 5923 rows and 784 columns. Each row
corresponds to one handwritten zero. To visualize the first image in this matrix, type:
digit = train0(1,:);
digitImage = reshape(digit,28,28);
image(rot90(flipud(digitImage),-1)),
colormap(gray(256)), axis square tight off;
Note, the rot90 and flipud commands are used so the digits appear as we write them, which
is more noticeable with digits like 2 or 3.
(a) Create a 10 × 784 matrix T whose ith row contains the average pixel values over all the
training images of the number i − 1. For instance, the first row of T can be formed
by typing T(1,:) = mean(train0);. Visualize these average digits using the subplot
command as seen below.
(b) A simple way to identify a test digit is to compare its pixels to those in each row of T and
determine which row most closely resembles the test digit. Set d to be the first test digit
in test0 by typing d = double(test0(1,:));. For each row i = 1, …, 10, compute
norm(T(i,:) - d), and determine for which value of i this is smallest; d probably is
the digit i − 1. Try some other digits as well and report on your results.
[A more sophisticated approach called Principal Component Analysis or PCA attempts
to identify characteristic properties of each digit, based on the training data, and compare
these properties with those of the test digit in order to make an identification.]
The following MATLAB code tries to identify test digits by comparing them
with the average values of the training digits. On one run, it misidentified three
of the ten digits. Looking at those digits, one can see that they do not really look
much like any of the average digits; the well-written test digits were correctly
identified.
% Digit recognition using MNIST database
load mnist_all.mat
% Visualize the first image
figure(1)
digit = train0(1,:);
digitImage = reshape(digit,28,28);
image(rot90(flipud(digitImage),-1))
colormap(gray(256)), axis square tight off;
% Create a matrix of mean pixel values for each digit.
T = zeros(10,784);
T(1,:) = mean(train0); T(2,:) = mean(train1);
T(3,:) = mean(train2); T(4,:) = mean(train3);
T(5,:) = mean(train4); T(6,:) = mean(train5);
T(7,:) = mean(train6); T(8,:) = mean(train7);
T(9,:) = mean(train8); T(10,:) = mean(train9);
% Plot these “average” digits.
figure(2)
for i=1:10,
subplot(2,5,i)
digit = T(i,:);
digitImage = reshape(digit,28,28);
image(rot90(flipud(digitImage),-1))
colormap(gray(256)), axis square tight off;
end;
% Compare test digits to these “average” digits to identify
% the closest match. First try the first test 0.
d = double(test0(1,:));
diffmin = norm(T(1,:)-d); imin = 1;
for i=2:10,
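% (The rest of this listing was cut off at the page break.  The loop body
% below is a reconstruction based on the description in part (b), not the
% original code.)
di = norm(T(i,:)-d);
if di < diffmin, diffmin = di; imin = i; end;
end;
fprintf('Closest average digit: %d\n', imin-1); % the test digit is probably imin-1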
Note: mnist_all.mat is available from the course Blackboard page, the Piazza site, and the textbook web site.
• (8.8.4)
4. (a) Use MATLAB to fit a polynomial of degree 12 to the Runge function
f(x) = 1/(1 + x^2),
interpolating the function at 13 equally spaced points between −5 and 5. (You can
set the points with the command x = [-5:5/6:5];.) You may use MATLAB routine
polyfit to do this computation or you may use your own routine. (To find out how to
use polyfit type help polyfit in MATLAB.) Plot the function and your 12th degree
interpolant on the same graph. You can evaluate the polynomial at points throughout the
interval [−5, 5] using MATLAB’s polyval routine, or you may write your own routine.
Turn in your plot and a listing of your code.
The following code produces the plot below:
% Demonstration of the Runge phenomenon
f = inline('1 ./ (1 + x.^2)'); % Runge function
npts = 13;
x = [-5:5/6:5]; % 13 equally spaced interpolation pts
y = f(x);
p = polyfit(x,y,npts-1); % Fit a 12th degree polynomial
xx = [-5:.01:5]; % Plot function and its interpolant
fxx = f(xx); % along with data pts
pxx = polyval(p,xx);
plot(xx,fxx,'-', xx,pxx,'--r', x,y,'o')
xlabel('x')
ylabel('1/(1 + x^2)')
title('Runge function and its interpolant at equally spaced points')
[Figure: Runge function and its interpolant at equally spaced points; horizontal axis x, vertical axis 1/(1 + x^2).]
(Do parts a, b, and c.)
(b) Repeat part (a) with a polynomial of degree 12 that interpolates f at 13 scaled Chebyshev
points:
x_j = 5 cos(πj/12),  j = 0, …, 12.
Again you may use MATLAB’s polyfit and polyval routines to fit the polynomial to
the function at these points and evaluate the result at points throughout the interval,
or you may use the barycentric interpolation formula (??) with weights defined by (??)
or (??) to evaluate the interpolation polynomial at points other than the interpolation
points in the interval [−5, 5]. Turn in your plot and a listing of your code.
We can just replace the line x = [-5:5/6:5]; in the above code by the line
x = 5*cos(pi*[0:12]/12); and then we obtain the following plot:
[Figure: Runge function and its interpolant at Chebyshev points; horizontal axis x, vertical axis 1/(1 + x^2).]
(c) If one attempts to fit a much higher degree interpolation polynomial to the Runge function using MATLAB’s polyfit routine, MATLAB will issue a warning that the problem
is ill-conditioned and answers may not be accurate. Still, the barycentric interpolation
formula (??) with the special weights (??) for the values of a polynomial that interpolates f at the Chebyshev points may be used. Use this formula with 21 scaled Chebyshev
interpolation points x_j = 5 cos(πj/20), j = 0, …, 20, to compute a more accurate approximation to this function. Turn in a plot of the interpolation polynomial p20(x) and the
errors |f(x) − p20(x)| over the interval [−5, 5].
The following code produces the interpolation polynomial p20 and plots it along
with the Runge function. It also plots the errors f(x) − p20(x), and these can
be seen to be less than .02 throughout the interval [−5, 5].
% Interpolation of the Runge function at Chebyshev points
% using Barycentric formula.
f = inline('1 ./ (1 + x.^2)'); % Runge function
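% (The remainder of this listing was lost at the page break.  The lines below
% are a reconstruction sketch, not the original code: they use the second
% barycentric formula with Chebyshev weights (-1)^j, halved at the endpoints.)
n = 20;
xj = 5*cos(pi*[0:n]/n);             % 21 scaled Chebyshev points
yj = f(xj);                         % function values at the points
w = (-1).^(0:n);                    % barycentric weights for Chebyshev points
w(1) = w(1)/2; w(n+1) = w(n+1)/2;   % endpoint weights are halved
xx = [-5:.01:5];
p20 = zeros(size(xx));
for k=1:length(xx),
  dk = xx(k) - xj;
  [dmin,j] = min(abs(dk));
  if dmin == 0,                     % xx(k) coincides with an interpolation point
    p20(k) = yj(j);
  else
    p20(k) = sum(w.*yj./dk)/sum(w./dk);
  end;
end;
subplot(2,1,1), plot(xx,f(xx),'-', xx,p20,'--r')
title('Runge function and its interpolant at 21 Chebyshev points')
subplot(2,1,2), plot(xx,abs(f(xx)-p20))
title('Error |f(x) - p_{20}(x)|')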