Proving $\lambda_{\text{min}}\leq R(x)$ without Spectral Theorem














Given a real symmetric (or Hermitian), positive definite matrix $A$, it is well known that: $$\lambda_{\min}\leq\dfrac{(x,Ax)}{(x,x)}. \tag{1}$$



This is a direct consequence of the min-max theorem, and it also follows easily from the fact that such an $A$ has an orthonormal eigenbasis. But is there any way to prove this without invoking the spectral theorem, the SVD, or anything similarly powerful?



The best I could do was:
$$\dfrac{(x,Ax)}{(x,x)}\geq \lambda\cdot\dfrac{(x,v)^2}{(v,v)(x,x)} \tag{2}$$
where $\lambda$ and $v$ are an arbitrary eigenpair of $A$; by Cauchy-Schwarz, this is weaker than $(1)$.
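
A quick numerical sanity check of $(1)$ and $(2)$ (a sketch assuming NumPy; `eigh` is used only to produce reference eigenpairs for the check, not as part of any proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                  # random symmetric positive definite matrix

lams, vecs = np.linalg.eigh(A)           # reference eigenpairs, for checking only
lam_min = lams[0]

for _ in range(1000):
    x = rng.standard_normal(n)
    R = (x @ A @ x) / (x @ x)            # Rayleigh quotient R(x)
    assert lam_min <= R + 1e-10          # inequality (1)
    k = rng.integers(n)                  # an arbitrary eigenpair (lambda, v)
    lam, v = lams[k], vecs[:, k]
    assert R >= lam * (x @ v) ** 2 / ((v @ v) * (x @ x)) - 1e-10  # inequality (2)

print("(1) and (2) held on 1000 random vectors")
```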










linear-algebra eigenvalues-eigenvectors spectral-theory

asked Dec 14 '18 at 18:47 by dezdichado · edited Dec 18 '18 at 10:08 by Rócherz

  • How about showing that the minimizer of $(Ax,x)$ over the unit sphere $\{x:\|x\|=1\}$ exists (due to the compactness of the unit sphere in a finite-dimensional normed space) and is an eigenvector corresponding to that very minimum? This is a usual variational argument. (A numerical sketch of this idea appears after the comments.) – Sangchul Lee, Dec 14 '18 at 18:55










  • @SangchulLee Though it is a nice argument, if the spectral theorem is considered too powerful, then certainly the use of compactness here is. – SvanN, Dec 14 '18 at 18:56










  • @SvanN We can 'indirectly' appeal to compactness by considering a minimizing sequence (positive-definiteness guarantees a minimum exists) and then using Bolzano-Weierstrass to extract the convergent subsequence by hand. – Strants, Dec 14 '18 at 19:04










  • @SangchulLee I like that idea, but how does one argue that the minimizer is an eigenvector? Is it by finding the gradient to show the critical points are the eigenvectors and then computing the Hessian? – dezdichado, Dec 14 '18 at 19:15










  • I am puzzled by the fact that you want to prove the result for a symmetric definite matrix although this result is valid for a general symmetric matrix. Do you desire an alternate proof specific to the SDP case? – Jean Marie, Dec 14 '18 at 21:51
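
A minimal numerical sketch of the variational argument suggested by Sangchul Lee above (projected gradient descent on the unit sphere; NumPy assumed, and the step size and iteration count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)            # random symmetric positive definite matrix

step = 0.5 / np.trace(A)           # trace(A) >= lambda_max for A >= 0, so this step is safe
x = rng.standard_normal(n)
x /= np.linalg.norm(x)             # start on the unit sphere
for _ in range(20000):
    g = 2 * A @ x                  # gradient of x -> <Ax, x>
    g -= (g @ x) * x               # project onto the tangent space of the sphere
    x -= step * g                  # gradient step
    x /= np.linalg.norm(x)         # retract back onto the sphere

R = x @ A @ x                      # value of the minimized quadratic form
print("min of <Ax,x> on the sphere ~", R)
print("eigenvector residual ||Ax - Rx|| ~", np.linalg.norm(A @ x - R * x))
```

The residual being (numerically) zero is exactly the claim in the comment: the minimizer is an eigenvector, and the minimum value is its eigenvalue.
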
2 Answers

Let $m = \inf_{x\ne 0}\frac{\langle Ax,x\rangle}{\|x\|^2} = \inf_{\|x\|=1}\langle Ax,x\rangle$. We claim that $\lambda_{\text{min}} \le m$.

It suffices to show that $m$ is indeed an eigenvalue of $A$.



For that, consider a positive semidefinite matrix $B \ge 0$ and recall that $$\|B\| = \sup_{\|x\| = 1} |\langle Bx,x\rangle| = \sup_{\|x\| = 1} \langle Bx,x\rangle.$$

Pick a sequence $(x_n)_n$ of vectors on the unit sphere such that $\|B\| = \lim_{n\to\infty} \langle Bx_n, x_n\rangle$. We have
$$\|Bx_n - \|B\|x_n\|^2 = \|Bx_n\|^2 + \|B\|^2 - 2\|B\|\langle Bx_n, x_n\rangle \le 2\|B\|\bigl(\|B\| - \langle Bx_n, x_n\rangle\bigr) \xrightarrow{n\to\infty} 0,$$

so $\lim_{n\to\infty} \|Bx_n - \|B\|x_n\| = 0$. We conclude that $B - \|B\|I$ is not bounded from below, so it cannot be invertible. Hence $\|B\|$ is an eigenvalue of $B$.
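
A small numerical illustration of this step (a sketch, NumPy assumed): for a random positive semidefinite $B$, the matrix $B - \|B\|I$ is indeed singular, i.e. $\|B\|$ sits in the spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
B = M @ M.T                            # random positive semidefinite matrix

normB = np.linalg.norm(B, 2)           # operator norm = largest singular value
shifted = B - normB * np.eye(n)
smin = np.linalg.svd(shifted, compute_uv=False).min()
print("||B|| =", normB)
print("smallest singular value of B - ||B||I =", smin)   # ~0: not invertible
```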



Now consider $B = \|A\|I - A \ge 0$. We have $$\|B\| = \sup_{\|x\| = 1} \langle Bx,x\rangle = \|A\| - \inf_{\|x\| = 1} \langle Ax,x\rangle = \|A\| - m.$$

Above we showed that $\|B\| = \|A\|-m$ is an eigenvalue of $B = \|A\|I - A$, so $m$ is an eigenvalue of $A$.






In fact, we have $\lambda_{\text{min}} = m$.

To see this, notice that every eigenvalue $\lambda$ of $A$ satisfies $\lambda \ge m$. Indeed, consider $\lambda = m-\varepsilon$ for some $\varepsilon > 0$.

Then for any vector $x$ we have $$\langle (A-\lambda I)x,x\rangle = \langle Ax,x\rangle - \lambda\langle x,x\rangle \ge (m-\lambda)\|x\|^2 = \varepsilon\|x\|^2,$$

so $$\|(A-\lambda I)x\|\,\|x\| \ge |\langle (A-\lambda I)x,x\rangle| \ge \varepsilon\|x\|^2.$$

Hence $A-\lambda I$ is bounded from below and thus injective. Therefore $\lambda$ cannot be an eigenvalue.
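
Putting the two halves together numerically (again a NumPy sketch, with `eigvalsh` used only to produce a reference value): $m$ is recovered as $\|A\| - \|B\|$ for $B = \|A\|I - A$, and it agrees with $\lambda_{\min}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                  # random symmetric positive definite matrix

normA = np.linalg.norm(A, 2)
B = normA * np.eye(n) - A                # B >= 0 since <Ax,x> <= ||A|| on the sphere
m = normA - np.linalg.norm(B, 2)         # from ||B|| = ||A|| - m

print("m from the construction:", m)
print("lambda_min (reference) :", np.linalg.eigvalsh(A)[0])   # agrees up to rounding
```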






answered Dec 15 '18 at 0:05 · edited Dec 15 '18 at 0:12 – mechanodroid

  • Thanks, this is excellent. So I guess this proof is possible because $A$ is SPD, which is what I wanted anyway. – dezdichado, Dec 15 '18 at 1:08










  • @dezdichado It is sufficient that $A$ is Hermitian (or symmetric); I haven't used $A \ge 0$ anywhere. This answer claims the same thing. – mechanodroid, Dec 15 '18 at 8:08






















Let $$R(X) = \frac{\langle X, AX \rangle}{\langle X, X \rangle},$$

which (by setting $U=X/\|X\|$, i.e. restricting our attention to the unit sphere) is the same as

$$R(U)=\langle U, AU \rangle.$$

There is a classical result.

Proposition: For any $U$ belonging to the unit sphere, $R(U)$ is a barycenter (normalized weighted mean) with positive coefficients of the eigenvalues of $A$.

Corollary: $R(U)$ necessarily lies in the range $[\lambda_{\min},\lambda_{\max}]$.

As I haven't seen it explained in a simple, non-allusive way, I prove it again:

Proof: consider an eigendecomposition

$$A=P^TDP \quad\text{with}\quad D=\operatorname{diag}(\lambda_1,\dots,\lambda_n).$$

Then: $$R(U)=\langle U, P^TDPU \rangle =\langle \underbrace{PU}_{V}, DPU \rangle=\langle V, DV \rangle. \tag{1}$$

Letting $v_1,v_2,\dots,v_n$ be the coordinates of $V$ (recall that
$v_1^2+v_2^2+\cdots+v_n^2=1$ because $U$ belongs to the unit sphere and $P$ is orthogonal), we can write

$$R(U)=v_1\lambda_1 v_1+ v_2\lambda_2 v_2+\cdots + v_n\lambda_n v_n,$$

i.e.,

$$R(U)=v_1^2\lambda_1+ v_2^2\lambda_2+\cdots + v_n^2\lambda_n,$$

which is the looked-for weighted average of the $\lambda_k$ with positive weights $v_k^2$ whose sum is $1$.
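
A short numerical check of this identity (a sketch, NumPy assumed; `eigh` supplies the orthogonal $P$ of the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                        # random symmetric matrix

lams, Q = np.linalg.eigh(A)              # A = Q diag(lams) Q^T, so P = Q^T
U = rng.standard_normal(n)
U /= np.linalg.norm(U)                   # a point on the unit sphere

V = Q.T @ U                              # V = P U; note sum(V**2) == 1
print("R(U)                 :", U @ A @ U)
print("sum_k v_k^2 lambda_k :", np.sum(V**2 * lams))         # equal up to rounding
print("inside [l_min, l_max]:", lams[0] <= U @ A @ U <= lams[-1])
```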






answered Dec 14 '18 at 22:42 · edited Dec 14 '18 at 23:30 – Jean Marie