Uniqueness in Bernstein's theorem of calculus of variations























I'm working through Gelfand and Fomin's book on calculus of variations. One of the book's exercises is to prove the uniqueness portion of a result called "Bernstein's theorem" on solutions to equations of the form $y'' = F(x, y, y')$. The book states the theorem thus:




If the functions $F$, $F_y$, and $F_{y'}$ are continuous at every finite point $(x, y)$ for every finite $y'$, and if a constant $k > 0$ and functions $$\alpha = \alpha(x, y) \geq 0, \qquad \beta = \beta(x, y) \geq 0$$ (which are bounded in every finite region of the plane) can be found such that $$F_y(x, y, y') > k, \quad |F(x, y, y')| < \alpha y'^2 + \beta,$$ then one and only one integral curve satisfying $y'' = F(x, y, y')$ passes through any two points $(a, A)$ and $(b, B)$ with different abscissas ($a \neq b$).




(Subscripts on $F$ mean partial derivatives.) The hint for the exercise is:




Let $\Delta(x) = \varphi_2(x) - \varphi_1(x)$, where $\varphi_1(x)$ and $\varphi_2(x)$ are two solutions of $y'' = F(x, y, y')$, write an expression for $\Delta''$ and use the condition $F_y(x, y, y') > k$.




Following the hint, I got the expression $$\Delta''(x) = F(x, \varphi_2(x), \varphi'_2(x)) - F(x, \varphi_1(x), \varphi_1'(x)).$$



I thought I could use the condition on $F_y$ to get a lower bound on the magnitude of the RHS of this equation, and then turn that into a proof that $\Delta(a)$ and $\Delta(b)$ cannot both be zero. But because $\varphi_1'(x) \neq \varphi_2'(x)$ in general, I don't know what I can conclude about $F(x, \varphi_2(x), \varphi'_2(x)) - F(x, \varphi_1(x), \varphi_1'(x))$ unless I also know something about $F_{y'}$, and the theorem imposes only a very weak hypothesis on $F_{y'}$: continuity.
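As a sanity check on the statement itself (separate from the proof I'm after), here is a quick numerical illustration with the hypothetical choice $F(x, y, y') = y$, which satisfies the hypotheses with $k = 1/2$, $\alpha = 0$, $\beta = |y| + 1$: SciPy's solve_bvp recovers the same integral curve $\sinh x / \sinh 1$ through $(0, 0)$ and $(1, 1)$ from several different initial guesses, as uniqueness predicts.

import numpy as np
from scipy.integrate import solve_bvp

# Hypothetical example: F(x, y, y') = y, so F_y = 1 > k = 1/2 and
# |F| = |y| < 0 * y'^2 + (|y| + 1), with alpha, beta bounded on finite regions.
# Bernstein's theorem then says y'' = y, y(0) = 0, y(1) = 1 has exactly one
# solution, namely sinh(x)/sinh(1).

def rhs(x, Y):
    y, yp = Y
    return np.vstack((yp, y))  # y'' = F(x, y, y') = y

def bc(Ya, Yb):
    return np.array([Ya[0] - 0.0, Yb[0] - 1.0])  # y(0) = 0, y(1) = 1

x = np.linspace(0.0, 1.0, 11)
exact = np.sinh(x) / np.sinh(1.0)

# Different initial guesses all converge to the same curve.
for guess in (0.0, 5.0, -5.0):
    sol = solve_bvp(rhs, bc, x, np.full((2, x.size), guess))
    print(guess, np.max(np.abs(sol.sol(x)[0] - exact)))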

























Bounty (+50 from Connor Harris): I'd especially like to see how the book's hint can be made into a proof.
















  • Related question here: math.stackexchange.com/questions/1910803/…
    – Connor Harris
    Nov 16 at 16:04










  • Hi @ConnorHarris, is there a way I can reach you by email?
    – Get Off The Internet
    Nov 19 at 21:13

















functional-analysis calculus-of-variations






asked May 29 at 13:25 – Connor Harris









2 Answers






Fix $x$ and use the mean value theorem applied to the function $$g(t):=F(x, t\varphi_2(x)+(1-t)\varphi_1(x), t\varphi'_2(x)+(1-t)\varphi_1'(x))$$
to find
\begin{align}\Delta''(x)&=g(1)-g(0)=g'(c)\cdot 1=F_y(x,f_c(x),f'_c(x))(\varphi_2(x)-\varphi_1(x))\\
&\quad+F_{y'}(x,f_c(x),f'_c(x))(\varphi_2'(x)-\varphi_1'(x))\\
&=-G(x)\Delta(x)-H(x)\Delta'(x),\end{align}

where $f_c(x):=c\varphi_2(x)+(1-c)\varphi_1(x)$, $G(x):=-F_y(x,f_c(x),f'_c(x))$, and $H(x):=-F_{y'}(x,f_c(x),f'_c(x))$. So now you have the linear equation
$$\Delta''(x)+H(x)\Delta'(x)+G(x)\Delta(x)=0,$$ where you know that $\Delta(a)=0$, $\Delta(b)=0$, and $G(x)\le -k<0$. Now apply the maximum principle, which says that if $H$ and $G$ are bounded, $G\le 0$, and $\Delta$ achieves a nonnegative maximum value $M$ at an interior point $d$, then $\Delta(x)\equiv M$.
Assume by contradiction that $\Delta>0$ somewhere in $(a,b)$; then by continuity it must attain a maximum value $M>0$ at some $d\in (a,b)$, and so $\Delta(x)\equiv M$ by the maximum principle, which contradicts the fact that $\Delta(b)=0$. This shows that $\Delta\le 0$. Interchanging $\varphi_1$ with $\varphi_2$ gives $\Delta\ge 0$ as well, so $\Delta=0$.



Are you familiar with the maximum principle? You can find it in the book of Protter and Weinberger (Theorem 3). The trick is to take the function
$$z(x):=\Delta (x)+\varepsilon (e^{\alpha (x-d)}-1),$$
where $\varepsilon$ is small and $\alpha$ very large. Let me know if you want more details.
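For intuition, here is one way to see why such an auxiliary function helps (a sketch, using only the boundedness of $H$, $G$ and $G \le -k$): writing $L[u] := u'' + H(x)u' + G(x)u$, a direct computation gives
$$L\bigl[e^{\alpha (x-d)}-1\bigr] = \bigl(\alpha^2 + H(x)\alpha + G(x)\bigr)e^{\alpha (x-d)} - G(x) \ge \bigl(\alpha^2 - \alpha\sup|H| - \sup|G|\bigr)e^{\alpha (x-d)} + k > 0$$
for $\alpha$ large enough. Since $L[\Delta]=0$, it follows that $L[z]>0$ for every $\varepsilon>0$. But at an interior point where $z$ attains a nonnegative maximum we would have $z'=0$ and $z''\le 0$, hence $L[z]=z''+G(x)z\le 0$; and for $\varepsilon$ small enough $z$ does attain its (positive) maximum in the interior, since $z(a)<0$ and $z(b)=\varepsilon(e^{\alpha(b-d)}-1)<M=z(d)$. This contradiction is the engine behind the maximum principle argument above.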






answered Nov 17 at 13:58, edited Nov 17 at 14:05 – Gio67











































First of all, we may assume without loss of generality that $\Delta$ does not change sign on the interval $(a,b)$. For if it does, we instead prove the theorem on a smaller interval $(a,b_*)$, where $b_*$ is the first point after $a$ at which $\Delta$ vanishes. We conclude from the proof in the special case that $\Delta$ is identically $0$ on that shorter interval, and by the uniqueness theorem for the initial value problem at $b_*$, both solutions are identical on the rest of the interval, too.



So let us assume that $\Delta>0$ on that interval (otherwise we swap the two solutions). You prove $$\Delta''>k\Delta + h(x)\Delta'$$ with some function $h(x)$. We get $h(x)$ from the mean value theorem applied to $F$ as a function of $y'$. Can you justify this step?
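One way to justify it, splitting the difference and applying the mean value theorem separately in the $y$ and $y'$ slots:
\begin{align}\Delta''(x)&=F(x,\varphi_2,\varphi_2')-F(x,\varphi_1,\varphi_2')+F(x,\varphi_1,\varphi_2')-F(x,\varphi_1,\varphi_1')\\
&=F_y(x,\xi(x),\varphi_2')\,\Delta(x)+F_{y'}(x,\varphi_1,\eta(x))\,\Delta'(x)\\
&>k\,\Delta(x)+h(x)\,\Delta'(x),\end{align}
where $\xi(x)$ lies between $\varphi_1(x)$ and $\varphi_2(x)$, $\eta(x)$ lies between $\varphi_1'(x)$ and $\varphi_2'(x)$, $h(x):=F_{y'}(x,\varphi_1(x),\eta(x))$, and the last inequality uses $F_y>k$ together with $\Delta>0$.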



Let us skip a technicality and assume without proof that $h$ is at least good enough to have a definite integral $$H(x):= \int_a^x h(t)\,dt.$$ Of course $h$ may not be uniquely defined, so we mean to assume that some choice of $h$ is good enough to have a definite integral.



Then multiply the inequality $$\Delta'' - h(x)\Delta' >k\Delta$$ by $\exp[-H(x)]$ and obtain $$[ \exp(-H(x)) \Delta'(x) ]' > k \exp(-H(x))\Delta(x) >0.$$ Can you finish up from here?
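The identity behind this step is just the product rule, since $H'(x) = h(x)$:
$$\bigl[\exp(-H(x))\,\Delta'(x)\bigr]' = \exp(-H(x))\bigl(\Delta''(x) - h(x)\Delta'(x)\bigr) > k\exp(-H(x))\,\Delta(x) > 0.$$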



Now to fix the gap with that technical point about the construction of $H(x)$, we may say that we can write $h(x)$ as an integral expression from which it is clear that $h$ is continuous: $$F(\varphi_2')-F(\varphi_1') = \int_0^1 \frac{d}{dt} F((1-t)\varphi_1'+t\varphi_2') \, dt$$ from the Fundamental Theorem of Calculus (and we omitted the other variables $x$, $\varphi_1(x)$ here).
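Carrying out the $t$-derivative with the chain rule makes one admissible choice of $h$ explicit:
$$F(x,\varphi_1,\varphi_2')-F(x,\varphi_1,\varphi_1') = \left(\int_0^1 F_{y'}\bigl(x,\varphi_1(x),(1-t)\varphi_1'(x)+t\varphi_2'(x)\bigr)\,dt\right)\Delta'(x),$$
so one may take $h(x)$ to be the integral in parentheses, and this is continuous in $x$ because $F_{y'}$ is continuous.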




Are you familiar with the maximum principle? You can find it in the book of Protter and Weinberger (Theorem 3). The trick is to take the function
$$z(x):=\Delta (x)+\varepsilon (e^{\alpha (x-d)}-1),$$
where $\varepsilon$ is small and $\alpha$ very large.




    The argument I intended is tantamount to the $1$-dimensional maximum principle.



The trick with the integrating factor $\exp[-H(x)]$ is standard in ordinary differential equations: it is normally used for first-order equations, but here it is aimed at the first-order term $\Delta'$.



The reasoning I had in mind when arguing from $$[\exp(-H(x))\Delta'(x)]' > k\exp(-H(x))\Delta(x) > 0$$ is that the expression $\exp(-H(x)) \Delta'(x)$ is strictly increasing over the interval $[a, b]$ because it has positive derivative. On the other hand, at $x = a$, this expression is nonnegative because $\Delta > 0$ for $x > a$ and $\Delta(a) = 0$, whereas at $x = b$, it is nonpositive for an analogous reason. This is a contradiction.



    Of course this very argument can be used to prove the maximum principle in $1$ dimension.






answered Nov 19 at 21:12 – Get Off The Internet




















