Find largest eigenvalue of specific nonnegative matrix











I am looking for the largest eigenvalue of the following matrix (or at least a tight upper bound).
The only thing I know is that this eigenvalue is smaller than $1$ and converges to $\cos\left(\frac{\pi}{2n+1}\right)$ as $n$ grows.

In general, it is very hard to compute the characteristic polynomial explicitly to obtain the eigenvalue, which is why I am hoping for an easier way.

Does anyone have any ideas?

The dimension of the matrix is $n \times n$:
$$A = \begin{bmatrix}
\frac{1}{2-\frac{1}{n+1}} & \frac{1}{2} & 0 & 0 & \dots & 0 \\
\frac{1}{2} & 0 & \frac{1}{2} & 0 & \dots & 0 \\
0 & \frac{1}{2} & 0 & \frac{1}{2} & \dots & 0 \\
\vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & 0 & \frac{1}{2} & 0
\end{bmatrix}$$
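
For readers who want to experiment numerically, here is a small Python/NumPy sketch (an editorial addition, not part of the original question) that builds $A$ and prints its largest eigenvalue next to $\cos\left(\frac{\pi}{2n+1}\right)$; the helper name largest_eigenvalue is just an illustrative choice.

    import numpy as np

    def largest_eigenvalue(n: int) -> float:
        """Largest eigenvalue of the n-by-n matrix A defined above."""
        # 1/2 on the sub- and superdiagonal, zeros on the diagonal ...
        A = np.diag(np.full(n - 1, 0.5), 1) + np.diag(np.full(n - 1, 0.5), -1)
        # ... except for the top-left entry 1/(2 - 1/(n+1)) = (n+1)/(2n+1).
        A[0, 0] = (n + 1) / (2 * n + 1)
        return np.linalg.eigvalsh(A)[-1]   # A is symmetric, eigvalsh returns sorted eigenvalues

    for n in (5, 50, 500):
        print(n, largest_eigenvalue(n), np.cos(np.pi / (2 * n + 1)))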










linear-algebra matrices eigenvalues-eigenvectors

asked Nov 14 at 15:53, edited Nov 14 at 18:11 – Jannik

  • Is the dimension of the matrix $n \times n$, and should $a_{n-1, n}$ be $1/2$ instead of zero?
    – Maxim
    Nov 14 at 17:56

  • Yes, you are right. Instead of the zero I wanted to include $\vdots$; $a_{n-1,n}$ is indeed $\frac{1}{2}$.
    – Jannik
    Nov 14 at 18:12

  • This is equivalent to maximizing $(n + 1) x_1^2/(2 n + 1) + x_1 x_2 + \ldots + x_{n - 1} x_n$ over the sphere $\lVert x \rVert = 1$. But that doesn't seem to make things simpler.
    – Maxim
    Nov 14 at 18:25

  • $1-\frac{1}{2(n-1)(2n-1)}$ seems to be an upper bound on the largest eigenvalue (verified numerically for $1 \leq n \leq 10\,000$ in MATLAB). Now only the proof is missing ...
    – Jannik
    Nov 14 at 19:31
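
A comparable check can be sketched in Python/NumPy rather than MATLAB (this snippet is an editorial illustration, not the commenter's code); it simply prints the computed largest eigenvalue next to the conjectured bound for a few values of $n$, so the reader can judge the comparison directly.

    import numpy as np

    def largest_eigenvalue(n: int) -> float:
        """Largest eigenvalue of the n-by-n matrix A from the question."""
        A = np.diag(np.full(n - 1, 0.5), 1) + np.diag(np.full(n - 1, 0.5), -1)
        A[0, 0] = (n + 1) / (2 * n + 1)
        return np.linalg.eigvalsh(A)[-1]

    # Conjectured bound 1 - 1/(2(n-1)(2n-1)); the formula requires n >= 2.
    for n in (10, 100, 1000):
        bound = 1 - 1 / (2 * (n - 1) * (2 * n - 1))
        print(n, largest_eigenvalue(n), bound)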

















1 Answer
An asymptotic expansion for large $n$ can be obtained as follows. Expanding $A - \lambda I$ by the first row and expanding one of the resulting matrices by the first column gives
$$\det(A - \lambda I) =
\left( \frac{n + 1}{2 n + 1} - \lambda \right) \det \tilde A_{n - 1} -
\frac{1}{4} \det \tilde A_{n - 2},$$

where $\tilde A_n$ is a tridiagonal Toeplitz matrix with $-\lambda$ on the main diagonal and $1/2$ on the two adjacent diagonals. The determinant of $\tilde A_n$ is $2^{-n} U_n(-\lambda)$, where $U_n$ is the Chebyshev polynomial of the second kind. Since $U_n(-\cos \theta) = (-1)^n \csc \theta \,\sin\,(n + 1) \theta$, $\det(A - \lambda I) = 0$ reduces to
$$\cot n \theta =
\left( \frac{2 n + 2}{2 n + 1} - \cos \theta \right) \csc \theta.$$

Taking $\theta = \alpha/n$ and expanding both sides of the equation into series, we obtain $\cot \alpha = 1/(2 \alpha) + O(1/n)$, determining $\alpha$. Then
$$\lambda_{\max} = \cos \theta \sim
1 - \frac{\alpha^2}{2 n^2},$$

where $\alpha$ is the smallest positive root of $\cot \alpha = 1/(2 \alpha)$. The next terms can be obtained in the same manner.
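
As a sanity check of this expansion (an editorial addition, not part of the original answer), one can locate $\alpha$ numerically by bisection and compare $1 - \alpha^2/(2 n^2)$ with the directly computed largest eigenvalue; a minimal Python sketch:

    import numpy as np

    def largest_eigenvalue(n: int) -> float:
        """Directly computed largest eigenvalue of the n-by-n matrix A."""
        A = np.diag(np.full(n - 1, 0.5), 1) + np.diag(np.full(n - 1, 0.5), -1)
        A[0, 0] = (n + 1) / (2 * n + 1)
        return np.linalg.eigvalsh(A)[-1]

    def smallest_alpha() -> float:
        """Smallest positive root of cot(a) = 1/(2a), found by bisection on (0, pi/2)."""
        f = lambda a: 1.0 / np.tan(a) - 1.0 / (2.0 * a)   # positive near 0+, negative near pi/2
        lo, hi = 1e-9, np.pi / 2 - 1e-9
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    alpha = smallest_alpha()               # approximately 1.166
    for n in (10, 100, 1000):
        print(n, largest_eigenvalue(n), 1 - alpha**2 / (2 * n**2))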






answered Nov 15 at 0:05, edited Nov 15 at 16:24 – Maxim (accepted)

  • Wow, thanks for your help. This seems to be correct, but I have a question: why do you compute $U_n(\cos(\Delta))$? Is it because the largest eigenvalue of $A$ seems to be related to a cosine term, or is there different mathematical evidence?
    – Jannik
    Nov 15 at 7:25

  • The change of variables is $\lambda = \cos \theta$, because we want to use the identity $U_n(-\cos x) = (-1)^n \csc x \,\sin\,(n + 1) x$. Then it's just some algebraic manipulation to get the equation for $\theta$. $\lambda$ close to one corresponds to $\theta$ close to zero.
    – Maxim
    Nov 15 at 15:58










