Find largest eigenvalue of specific nonnegative matrix
I am looking for the largest eigenvalue of the following matrix (or at least a small upper bound). The only thing I know is that the eigenvalue is smaller than $1$ and converges to $\cos\left(\frac{\pi}{2n+1}\right)$ as $n$ grows. In general, it is very hard to compute the characteristic polynomial to calculate the eigenvalue, which is why I hope for an easier way. Does anyone have any ideas?
The dimension of the matrix is $n \times n$.
$A = \begin{bmatrix}
\frac{1}{2-\frac{1}{n+1}} & \frac{1}{2} & 0 & 0 & \dots & 0 \\
\frac{1}{2} & 0 & \frac{1}{2} & 0 & \dots & 0 \\
0 & \frac{1}{2} & 0 & \frac{1}{2} & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \frac{1}{2} & 0
\end{bmatrix}$
linear-algebra matrices eigenvalues-eigenvectors
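Not from the original post: for orientation, a minimal NumPy sketch that builds the matrix (with the correction $a_{n-1,n} = 1/2$ from the comment thread) and prints its largest eigenvalue next to $\cos(\pi/(2n+1))$ for a few values of $n$.

```python
import numpy as np

def largest_eigenvalue(n):
    """Largest eigenvalue of the n x n matrix A: tridiagonal with 1/2
    on both off-diagonals (a_{n-1,n} = 1/2, per the comment thread),
    zeros on the diagonal, except a_11 = 1/(2 - 1/(n+1)) = (n+1)/(2n+1).
    """
    off = 0.5 * np.ones(n - 1)
    A = np.diag(off, 1) + np.diag(off, -1)
    A[0, 0] = (n + 1) / (2 * n + 1)
    return np.linalg.eigvalsh(A).max()  # A is symmetric, so eigvalsh

for n in (5, 50, 500):
    print(n, largest_eigenvalue(n), np.cos(np.pi / (2 * n + 1)))
```

Since $A$ is real symmetric, `eigvalsh` is both faster and numerically safer than the general `eigvals` here.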
Is the dimension of the matrix $n \times n$, and should $a_{n-1,n}$ be $1/2$ instead of zero?
– Maxim
Nov 14 at 17:56
Yes, you are right. Instead of the zero I wanted to include $\vdots$, but you are right that $a_{n-1,n}$ is $\frac{1}{2}$.
– Jannik
Nov 14 at 18:12
This is equivalent to maximizing $(n+1)x_1^2/(2n+1) + x_1 x_2 + \ldots + x_{n-1} x_n$ over the sphere $\lVert x \rVert = 1$. But that doesn't seem to make things simpler.
– Maxim
Nov 14 at 18:25
$1-\frac{1}{2(n-1)(2n-1)}$ seems to be an upper bound on the largest eigenvalue (verified via testing for $1 \leq n \leq 10\,000$ in MATLAB). Now, only the proof is missing ...
– Jannik
Nov 14 at 19:31
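The conjectured bound from the last comment can be re-checked outside MATLAB; a sketch in NumPy (the bound is the commenter's empirical conjecture, not a proven result; the matrix uses the corrected $a_{n-1,n} = 1/2$):

```python
import numpy as np

def lambda_vs_bound(n):
    """Return (largest eigenvalue of A, conjectured bound 1 - 1/(2(n-1)(2n-1)))."""
    off = 0.5 * np.ones(n - 1)
    A = np.diag(off, 1) + np.diag(off, -1)
    A[0, 0] = (n + 1) / (2 * n + 1)
    lam = np.linalg.eigvalsh(A).max()
    return lam, 1 - 1 / (2 * (n - 1) * (2 * n - 1))

for n in (10, 100, 1000):
    lam, bound = lambda_vs_bound(n)
    print(n, lam, bound, lam <= bound)
```

One caveat: the bound appears to fail for very small $n$. For $n = 2$ the matrix is $\begin{bmatrix} 3/5 & 1/2 \\ 1/2 & 0 \end{bmatrix}$ with $\lambda_{\max} = (0.6 + \sqrt{1.36})/2 \approx 0.883$, while the bound is $1 - 1/6 \approx 0.833$; any proof attempt should probably target $n$ above some small threshold.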
edited Nov 14 at 18:11
asked Nov 14 at 15:53
Jannik
1 Answer
An asymptotic expansion for large $n$ can be obtained as follows. Expanding $A - \lambda I$ by the first row and expanding one of the resulting matrices by the first column gives
$$\det(A - \lambda I) =
\left( \frac{n + 1}{2 n + 1} - \lambda \right) \det \tilde A_{n - 1} -
\frac{1}{4} \det \tilde A_{n - 2},$$
where $\tilde A_n$ is a tridiagonal Toeplitz matrix with $-\lambda$ on the main diagonal and $1/2$ on the two adjacent diagonals. The determinant of $\tilde A_n$ is $2^{-n} U_n(-\lambda)$, where $U_n$ is the Chebyshev polynomial of the second kind. Since $U_n(-\cos \theta) = (-1)^n \csc \theta \,\sin\,(n + 1) \theta$, $\det(A - \lambda I) = 0$ reduces to
$$\cot n \theta =
\left( \frac{2 n + 2}{2 n + 1} - \cos \theta \right) \csc \theta.$$
Taking $\theta = \alpha/n$ and expanding both sides of the equation into series, we obtain $\cot \alpha = 1/(2 \alpha) + O(1/n)$, determining $\alpha$. Then
$$\lambda_{\max} = \cos \theta \sim
1 - \frac{\alpha^2}{2 n^2},$$
where $\alpha$ is the smallest positive root of $\cot \alpha = 1/(2 \alpha)$. The next terms can be obtained in the same manner.
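Not part of the original answer: the constant $\alpha$ and the asymptotic $1 - \alpha^2/(2n^2)$ can be checked numerically. A sketch in plain NumPy with a hand-rolled bisection, rewriting the root condition $\cot \alpha = 1/(2\alpha)$ as a zero of $2a\cos a - \sin a$ on $(0, \pi/2)$:

```python
import numpy as np

def bisect(f, lo, hi, tol=1e-13):
    """Bisection on [lo, hi], assuming f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# cot(a) = 1/(2a)  <=>  2a*cos(a) - sin(a) = 0; the smallest positive
# root lies below pi/2, where the function has already turned negative.
alpha = bisect(lambda a: 2 * a * np.cos(a) - np.sin(a), 0.1, np.pi / 2)

def largest_eigenvalue(n):
    """Largest eigenvalue of the (corrected) matrix A from the question."""
    off = 0.5 * np.ones(n - 1)
    A = np.diag(off, 1) + np.diag(off, -1)
    A[0, 0] = (n + 1) / (2 * n + 1)
    return np.linalg.eigvalsh(A).max()

for n in (10, 100, 1000):
    print(n, largest_eigenvalue(n), 1 - alpha**2 / (2 * n**2))
```

The root comes out near $\alpha \approx 1.166$, and the exact and asymptotic values agree increasingly well as $n$ grows, consistent with the $O(1/n)$ error term above.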
Wow, thanks for your help. This seems to be correct, but I have a question. Why do you compute $U_n(-\cos\theta)$? Is it because the largest eigenvalue of $A$ seems to be related to a cosine term, or is there different mathematical evidence?
– Jannik
Nov 15 at 7:25
The change of variables is $\lambda = \cos \theta$, because we want to use the identity $U_n(-\cos x) = (-1)^n \csc x \,\sin\,(n + 1) x$. Then it's just some algebraic manipulation to get the equation for $\theta$. $\lambda$ close to one corresponds to $\theta$ close to zero.
– Maxim
Nov 15 at 15:58
edited Nov 15 at 16:24
answered Nov 15 at 0:05
Maxim