Links between difference and differential equations?

32 votes


Does there exist any correspondence between difference equations and differential equations?



In particular, can one cast some classes of ODEs into difference equations or vice versa?










ordinary-differential-equations recurrence-relations






asked May 15 '12 at 16:18 by Alexey Bobrick

  • Correspondence in what sense? Could you suggest an example?
    – Pedro Tamaroff, May 15 '12 at 16:19

  • Mostly I am interested to know whether it is possible to reformulate one in terms of the other, but more general relations that have been studied would also be interesting.
    – Alexey Bobrick, May 15 '12 at 16:30

  • They are a bit, but not totally, different.
    – AD., May 15 '12 at 16:34

  • This might generate some interesting answers, I hope.
    – AD., May 15 '12 at 16:36

  • Huh, at the moment I'm trying to understand the methods mentioned at risc.jku.at/research/combinat/software/HolonomicFunctions/… As far as I understand, there is a way to treat certain difference and differential equations (correspondingly, definite integrals and sums) in one fashion. I can say nothing more, but I hope somebody here can clarify whether this is what you are looking for.
    – Yrogirg, May 15 '12 at 17:05

5 Answers

8 votes

Yes, obviously there is some correspondence: numerical solutions of ordinary differential equations are discrete dynamical systems, many proofs in bifurcation theory use continuous-time dynamical systems to analyze originally discrete problems, and so on.



However, the most profound attempt to build a theory that unites difference and differential equations is the time scale calculus.






answered May 15 '12 at 16:24 by Artem





















34 votes

First order example



Consider the difference equation
$$\begin{equation*}
x_{n+1} = a x_n,\tag{1}
\end{equation*}$$
where $x_0$ is given, with solution
$$\begin{equation*}
x_n = a^n x_0. \tag{2}
\end{equation*}$$

Let $x_n = x(t_n)$, where $t_{n+1} = t_n + h$ and $t_0 = 0$, so $t_n = n h$.
Rewrite the difference equation (1) as
$$\frac{x(t_{n}+h)-x(t_n)}{h} = \frac{(a-1)}{h} x(t_n).$$
If $h$ is small, this can be approximated by the differential equation
$$\begin{equation*}
x'(t) = \frac{a-1}{h}x(t),\tag{3}
\end{equation*}$$
with solution
$$\begin{equation*}
x(t) = x(0) \exp\left(\frac{a-1}{h} t\right).\tag{4}
\end{equation*}$$
Notice that $\exp\left(\frac{a-1}{h} t_n\right) = a^{n} + O(n(a-1)^2)$, so for $n \ll 1/(a-1)^2$ we find good agreement with (2).

Going in the reverse direction, from the differential equation (3) to the difference equation (1), is called Euler's method.

There is a subtlety when approximating a difference equation by a differential equation.
For this problem we require that $|a-1| \ll 1$, since we need $x_{n+1} - x_n = O(h) \ll 1$.
Otherwise, we should expect (and will find) the approximation of (1) by (3) to be poor.
Sometimes it is necessary to reformulate the difference equation so that the difference between successive terms is small.${}^\dagger$

${}^\dagger$ For example, suppose $a$ in (1) were large and positive.
Let $y_0 = x_0$ and let $y_{n+1} = a^{1/p} y_n$, where $p$ is some large integer, so that $|a^{1/p} - 1| \ll 1$.
Note that $y_{n p} = x_n$.
The solution to the corresponding differential equation for $y$ will be a good approximation to $y_n$, and this in turn can be used to approximate $x_n$.
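
As a quick numerical illustration of (1) versus (3) (a sketch added here for illustration; the parameter values are arbitrary, not part of the original answer), one can iterate the recurrence and compare it with the ODE solution (4) sampled at $t_n = nh$:

    # Compare the recurrence (1), x_{n+1} = a*x_n, with the ODE solution (4),
    # x(t) = x(0)*exp((a-1)*t/h), sampled at t_n = n*h.  Here |a-1| << 1.
    import math

    a, h, x0 = 1.01, 0.1, 1.0
    x = x0
    for n in range(51):
        ode = x0 * math.exp((a - 1) / h * (n * h))  # solution (4) at t_n
        if n % 10 == 0:
            print(f"n={n:2d}  recurrence: {x:.6f}  ODE: {ode:.6f}")
        x = a * x                                   # one step of (1)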



Second order example



Consider the differential equation for the harmonic oscillator
$$\begin{equation*}
x''+x = 0, \hspace{5ex} x(0)=1, \hspace{5ex} x'(0)=0, \tag{5}
\end{equation*}$$
the solution of which is $x(t) = \cos t$.
The simplest related difference equation is
$$\begin{equation*}
\frac{1}{h^2}(x_{n+2}-2x_{n+1}+x_n) + x_n = 0, \tag{6}
\end{equation*}$$
with $x_0 = x_1 = 1$.
(We show how to get (6) from (5) below.)
There are standard techniques for solving simple recurrence relations such as (6) in closed form.
We find
$$\begin{equation*}
x_n = \frac{1}{2}\left((1+i h)^{n}+(1-i h)^{n}\right).
\end{equation*}$$
Note that $x_n$ is real, since $x_n^* = x_n$.
This is the closed form of the solution a computer would find by solving (6) iteratively.
Recall that $n = t_n/h$.
For small $h$,
$x_n = \cos t_n + \frac{1}{2} h t_n \cos t_n + O(h^2)$,
so the numerical solution of (6) will approximate $\cos t_n$ well for $h t_n \ll 1$.
In the limit $h \to 0$,
$x_n \to \cos t_n,$
as expected.
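
Here is a small sketch (added for illustration; the step size is arbitrary) that iterates (6) and compares the result with $\cos t_n$:

    # Iterate the difference equation (6), rearranged as
    # x_{n+2} = 2*x_{n+1} - x_n - h^2*x_n, and compare with cos(t_n).
    import math

    h = 0.01
    x_prev, x = 1.0, 1.0               # x_0 = x_1 = 1
    for n in range(2, 201):
        x_prev, x = x, 2 * x - x_prev - h * h * x_prev
        if n % 50 == 0:
            t = n * h
            print(f"t={t:.2f}  recurrence: {x:.6f}  cos(t): {math.cos(t):.6f}")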



Derivatives and finite differences



Morally, a difference equation is a discrete version of a differential equation and a differential equation is a continuous version of a difference equation.
The method of numerical integration of ODEs is essentially the rewriting of a differential equation as a difference equation which is then solved iteratively by a computer.
This is a big subject with many subtleties.
The method can also be applied to nonlinear and partial differential equations.
Below we give a brief dictionary between finite difference and differential operators.



Define the shift operator $E$ such that $E x_n = x_{n+1}$.
The difference operator $\Delta = E - 1$ then gives $\Delta x_n = x_{n+1} - x_n$.
These operators are connected to differentiation by Taylor series.
Let $D x_n = x_n' = d x(t)/d t|_{t=t_n}$. Then
$$x_{n+1} = E x_n = \sum_{k=0}^\infty \frac{h^k}{k!} D^k x_n = e^{h D} x_n.$$
Thus, as an operator,
$$E = e^{h D},$$
and so
$$D = \frac{1}{h} \ln E = \frac{1}{h} \ln(1+\Delta) = \frac{1}{h}\sum_{k=1}^\infty (-1)^{k+1} \frac{\Delta^k}{k}.$$
(Think of these operators as acting on polynomials, possibly of very high order.)
This formalism gives us a way to convert any ODE into a difference equation and vice versa.
Notice that higher order derivatives can be approximated by $D^k \approx (\Delta/h)^k$.
Thus, for example,
$$x'' = D^2 x \rightarrow \frac{1}{h^2}\Delta^2 x_n = \frac{1}{h^2}\Delta(x_{n+1}-x_n) = \frac{1}{h^2}(x_{n+2}-2 x_{n+1}+x_n).$$



When using Euler's method we let $D \approx \Delta/h$.
We could just as well keep higher order terms in $\Delta$ to get recursion relations with three or more terms.
It is a sign of the nontrivial nature of the subject that this simple change leads to numerical instabilities.
There are many named algorithms that improve and generalize Euler's method, building on the ideas sketched above.
(See, for example, the Runge-Kutta methods, a family of robust algorithms used to numerically solve linear and nonlinear differential equations.)
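
The series for $D$ can be checked directly on a polynomial, for which $\Delta^k$ vanishes beyond the degree, so the truncated sum is exact. The following sketch (my own illustration; the test function and step are arbitrary) recovers the derivative of $t^3$ from forward differences alone:

    # D = (1/h) * (Delta - Delta^2/2 + Delta^3/3 - ...) applied to samples of
    # p(t) = t^3; for a cubic, Delta^k p = 0 for k > 3, so three terms suffice.
    h, t0 = 0.5, 2.0
    samples = [(t0 + j * h) ** 3 for j in range(5)]   # p(t_0), p(t_0+h), ...

    def delta(seq):
        """Forward difference: (Delta x)_n = x_{n+1} - x_n."""
        return [b - a for a, b in zip(seq, seq[1:])]

    deriv, term, sign = 0.0, samples, 1.0
    for k in range(1, 4):
        term = delta(term)             # term now holds Delta^k of the samples
        deriv += sign * term[0] / k
        sign = -sign
    print(deriv / h, 3 * t0 ** 2)      # both are 12.0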



Addendum



This is in response to Drew N's question about seeing directly that
$$D = \frac{1}{h}\sum_{k=1}^\infty (-1)^{k+1} \frac{\Delta^k}{k}$$
is the differential operator.
Here we show that $D t^n = n t^{n-1}$ for $n=0,1,\ldots$, which shows that $D$ is the differential operator on polynomials of arbitrarily high degree.
(It is straightforward to extend this proof to $n\in\mathbb{C}$.)
We have
\begin{align*}
D t^n &= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}\Delta^k t^n \\
&= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}(E-1)^k t^n \\
&= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}
\sum_{j=0}^k {k\choose j}(-1)^{k-j} E^j t^n
& \textrm{binomial theorem} \\
&= \frac{1}{h} \sum_{k,j} (-1)^{j+1}\frac{1}{k}{k\choose j}(t+j h)^n \\
&= \frac{1}{h} \sum_{k,j} (-1)^{j+1}\frac{1}{k}{k\choose j}
\sum_{l=0}^n {n\choose l} j^l h^l t^{n-l}
& \textrm{binomial theorem} \\
&= \left.\frac{1}{h} \sum_{k,j,l} (-1)^{j+1}\frac{1}{k}
{k\choose j}{n\choose l}
(x D)^l x^j h^l t^{n-l}\right|_{x=1}
& D = d/dx,\ (x D)^l = \underbrace{(x D)\cdots(x D)}_{l} \\
&= \left.-\frac{1}{h} \sum_{k,l} \frac{1}{k}
{n\choose l} h^l t^{n-l}
(x D)^l \sum_{j=0}^k {k\choose j}(-x)^j \right|_{x=1} \\
&= \left.-\frac{1}{h} \sum_{k,l} \frac{1}{k}
{n\choose l} h^l t^{n-l}
(x D)^l (1-x)^k \right|_{x=1}
& \textrm{binomial theorem} \\
&= \left.-\frac{1}{h} \sum_{l} {n\choose l} h^l t^{n-l}
(x D)^l \sum_{k=1}^\infty \frac{1}{k} (1-x)^k \right|_{x=1} \\
&= \left.\frac{1}{h} \sum_{l} {n\choose l} h^l t^{n-l}
(x D)^l \log x\right|_{x=1}
& \textrm{series for natural logarithm} \\
&= \frac{1}{h} \sum_{l=0}^n {n\choose l} h^l t^{n-l} \delta_{l1}
& \textrm{see below} \\
&= \frac{1}{h} {n\choose 1} h t^{n-1} \\
&= n t^{n-1}.
\end{align*}
Note that
$(x D)x^j = j x^j$,
so
$$(x D)^l x^j = j^l x^j.$$
Also,
\begin{align*}
(x D)^0 \log x|_{x=1} &= \log 1 = 0 \\
(x D)^1 \log x &= x\,\frac{1}{x} = 1 \\
(x D)^2 \log x &= (x D)1 = 0 \\
(x D)^3 \log x &= (x D)^2\, 0 = 0.
\end{align*}
Thus,
$$(x D)^l \log x|_{x=1} = \delta_{l1},$$
where $\delta_{ij}$ is the Kronecker delta.
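
The claim $D t^n = n t^{n-1}$ can also be verified symbolically. The following sketch (an addition of mine, using sympy) expands the series, which terminates because $\Delta^k t^n = 0$ for $k > n$:

    # Verify symbolically that (1/h) * sum_{k=1}^{n} (-1)^(k+1) Delta^k/k,
    # applied to t^n, simplifies to n*t^(n-1).
    import sympy as sp

    t, h = sp.symbols('t h')
    n = 5

    def delta_k(expr, k):
        """Delta^k = (E-1)^k, with E^j acting as t -> t + j*h."""
        return sum(sp.binomial(k, j) * (-1) ** (k - j) * expr.subs(t, t + j * h)
                   for j in range(k + 1))

    series = sum((-1) ** (k + 1) * delta_k(t ** n, k) / k
                 for k in range(1, n + 1)) / h
    print(sp.simplify(series))         # prints 5*t**4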






answered by user26872









  • Great work. Very helpful, concise, and clear. Thank you.
    – wesssg, Oct 7 '16 at 5:10

  • Why did you apply the limit only to the left side in formula (3)? I thought that $h$ should not appear on the right side.
    – LmTinyToon, Dec 16 '16 at 10:28

  • @LmTinyToon: For small $h$, the left hand side of (3) will be well approximated by $x'(t)$. (But only as long as $x_{n+1}-x_n$ is also small.) The right hand side does not simplify in this limit and fundamentally depends on $h$. This is discussed in some detail in the last paragraph of "First order example" and in the footnote to that section. Cheers!
    – user26872, Dec 16 '16 at 21:40

  • Using the operator formalism you establish $D = (1/h) \sum_{k=1}^\infty (-1)^{k+1} \Delta^{k}/k$, which I believe expands to $1/h\,\big[(x_{n+1} - x_n) - (1/2)(x_{n+2} - 2x_{n+1} + x_n) + \dots\big]$. But I just don't intuit why this expression for $D$ must correctly give $Dx_n = x'(t_n)$. One can follow the operator formalism line by line, but is there a way to be convinced this really is $x'(t_n)$ just from the form of this expression alone?
    – Drew N, Dec 16 '18 at 4:05

  • @DrewN: Good question! I added an addendum to address this.
    – user26872, Jan 2 at 19:57



















3 votes

One example:



You can take the Laplace transform of a discrete function by expressing the target of the Laplace transform as the product of a continuous function and a train of shifted delta functions, and summing up: $\sum_{k=0}^\infty\int_{0}^\infty f(t)\,\delta(t - kT)\,e^{-st}\,dt = \sum_{k=0}^\infty f(kT)e^{-skT}$, where $T$ is the sampling period.

The (unilateral) Z transform is defined as $\sum_{k=0}^{\infty}x[k]z^{-k}$ for a discrete time signal $x[k]$. By substituting $z = e^{sT}$ into the formula for the discretized Laplace transform, the Z transform is obtained, and the $s$ and $z$ domains are related by $s = \frac{1}{T}\log(z)$. This is obviously a nonlinear transformation; points to the left of the imaginary axis in the $s$ plane are mapped to the interior of the unit circle in the $z$ plane, while points to the right of the imaginary axis are mapped to the exterior. One can use the bilinear transform, $s = \frac{2(z - 1)}{T(z + 1)}$, as a first order approximation to $s = \frac{1}{T}\log(z)$.

So it is possible to (approximately) transform any equation in the $s$ domain into a corresponding equation in the $z$ domain. This is useful for deriving a difference equation, since it can be shown that the Z transform of a shift, $Z(x[k - n])$, is simply $z^{-n}X(z)$. Due to this property it is often possible to pass easily from the Z transform of an expression to a difference equation, simply by inspecting the inverse powers of $z$ and their coefficients in the equation. In short, if the differential equation is amenable to the Laplace transform (i.e. linear and time invariant), it can be quite straightforward to obtain a difference equation representation by this method.
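
For instance (a sketch I added, with an arbitrary first order system; not from the original answer), one can discretize $H(s) = 1/(s+1)$ with the bilinear transform and read off the difference equation:

    # Discretize H(s) = 1/(s+1) via the bilinear transform
    # s = 2(z-1)/(T(z+1)) and inspect inverse powers of z.
    import sympy as sp

    s, z, T = sp.symbols('s z T', positive=True)
    H = 1 / (s + 1)                                  # continuous-time system
    Hz = sp.cancel(H.subs(s, 2 * (z - 1) / (T * (z + 1))))
    num, den = sp.fraction(Hz)
    print(sp.expand(num), '/', sp.expand(den))
    # Y(z)*((T+2)*z + T-2) = X(z)*(T*z + T); dividing by z gives the
    # difference equation (T+2)*y[k] + (T-2)*y[k-1] = T*(x[k] + x[k-1]).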






3 votes

(Adding to user26872's answer, as this was not obvious to me, so it might help someone else going through his derivation.)

The identity

$$x_{n+1} = E x_n = \sum\limits_{k=0}^\infty \frac{h^k}{k!} D^k x_n = e^{hD} x_n$$

is true if we consider the following. Let $x_n = x(t_n)$, and assume that $x(t)$ is analytic around some point $t_n$; then its Taylor series is

$$x(t) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!}(t-t_n)^k.$$

If we now evaluate this expansion at $t = t_n + h$, we have

$$x(t_n+h) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!} h^k = \sum\limits_{k=0}^\infty \frac{h^k D^k}{k!} x(t_n) = e^{hD} x(t_n),$$

and so

$$x_{n+1} = E x_n \leftrightarrow x(t_n + h) = E x(t_n),$$

thus giving

$$E = e^{hD}.$$
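
A quick numerical check of $E = e^{hD}$ (my sketch; the test function and step are arbitrary): truncating $\sum_k \frac{h^k}{k!} D^k$ at larger and larger $K$ reproduces the shifted value $x(t_0 + h)$ for an analytic $x$:

    # Truncate e^{hD} x = sum_k (h^k/k!) x^{(k)}(t) and compare with x(t+h).
    import math
    import sympy as sp

    t = sp.symbols('t')
    x = sp.sin(t)                      # an analytic test function
    t0, h = 1.0, 0.3
    exact = math.sin(t0 + h)           # E x evaluated at t_0
    for K in (2, 4, 8):
        approx = sum(float(sp.diff(x, t, k).subs(t, t0)) * h ** k / math.factorial(k)
                     for k in range(K + 1))
        print(f"K={K}: {approx:.10f}  (exact {exact:.10f})")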






2 votes

First see the correspondence between discrete and continuous dynamical systems. Both are actions of a monoid on some set $X$.

For discrete dynamical systems the monoid that acts is $\mathbb{N}$. For continuous dynamical systems the monoid that acts is $\mathbb{R}$.

This action comes in the form of a left multiplication
$(\cdot,\cdot):\mathbb{N}\times X \rightarrow X$ or $(\cdot,\cdot):\mathbb{R}\times X \rightarrow X$
and is compatible with the addition structure of $\mathbb{N}$ and $\mathbb{R}$ respectively.

That is, for all $n,m \in \mathbb{N}$ or $\mathbb{R}$ and $x \in X$ we have $(0,x) = x$ and $(n,(m,x)) = (n + m,x)$.

Now on to the story of difference and differential equations. A first order difference equation $x_{n+1} = f(x_n)$ is a discrete dynamical system. Note that any difference equation can be converted to a system of first order difference equations (see higher order difference equations). Hence any difference equation yields a discrete dynamical system, where the $\mathbb{N}$-action is given by $(n,x) = f^n(x)$.

If the differential equation is sufficiently smooth, there exists a unique solution through any point, $\phi_x^t$. We use this solution as the $\mathbb{R}$-action, i.e. $(t,x) = \phi_x^t$. Hence any sufficiently smooth differential equation yields a continuous dynamical system.
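
A minimal sketch of the discrete case (my illustration; the map $f$ is arbitrary): iteration of $f$ really does define an $\mathbb{N}$-action, since $f^0 = \mathrm{id}$ and $f^n \circ f^m = f^{n+m}$:

    # The N-action of a first order difference equation x_{n+1} = f(x_n):
    # phi(n, x) = f^n(x) satisfies phi(0,x) = x and phi(n, phi(m,x)) = phi(n+m, x).
    def f(x):
        return 0.5 * x * (1.0 - x)     # an arbitrary first order map

    def phi(n, x):
        """N-action by iteration: phi(n, x) = f^n(x)."""
        for _ in range(n):
            x = f(x)
        return x

    x0 = 0.3
    assert phi(0, x0) == x0                        # identity axiom
    assert phi(2, phi(3, x0)) == phi(5, x0)        # compatibility axiom
    print(phi(5, x0))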






        share|cite|improve this answer









        $endgroup$













          Your Answer





          StackExchange.ifUsing("editor", function () {
          return StackExchange.using("mathjaxEditing", function () {
          StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
          StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
          });
          });
          }, "mathjax-editing");

          StackExchange.ready(function() {
          var channelOptions = {
          tags: "".split(" "),
          id: "69"
          };
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function() {
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled) {
          StackExchange.using("snippets", function() {
          createEditor();
          });
          }
          else {
          createEditor();
          }
          });

          function createEditor() {
          StackExchange.prepareEditor({
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader: {
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          },
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          });


          }
          });














          draft saved

          draft discarded


















          StackExchange.ready(
          function () {
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f145523%2flinks-between-difference-and-differential-equations%23new-answer', 'question_page');
          }
          );

          Post as a guest















          Required, but never shown

























          5 Answers
          5






          active

          oldest

          votes








          5 Answers
          5






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          8












          $begingroup$

          Yes, obviously there is some correspondence (numerical solutions of ordinary differential equations are discrete dynamical systems, many proofs in bifurcation theory uses continuous time dynamical systems to analyze discrete original problems, etc).



          However, the most profound attempt to build a theory that unites difference and differential equations is the time scale calculus.






          share|cite|improve this answer









          $endgroup$


















            8












            $begingroup$

            Yes, obviously there is some correspondence (numerical solutions of ordinary differential equations are discrete dynamical systems, many proofs in bifurcation theory uses continuous time dynamical systems to analyze discrete original problems, etc).



            However, the most profound attempt to build a theory that unites difference and differential equations is the time scale calculus.






            share|cite|improve this answer









            $endgroup$
















              8












              8








              8





              $begingroup$

              Yes, obviously there is some correspondence (numerical solutions of ordinary differential equations are discrete dynamical systems, many proofs in bifurcation theory uses continuous time dynamical systems to analyze discrete original problems, etc).



              However, the most profound attempt to build a theory that unites difference and differential equations is the time scale calculus.






              share|cite|improve this answer









              $endgroup$



              Yes, obviously there is some correspondence (numerical solutions of ordinary differential equations are discrete dynamical systems, many proofs in bifurcation theory uses continuous time dynamical systems to analyze discrete original problems, etc).



              However, the most profound attempt to build a theory that unites difference and differential equations is the time scale calculus.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered May 15 '12 at 16:24









              ArtemArtem

              11.5k32245




              11.5k32245























                  34












                  $begingroup$

                  First order example



                  Consider the difference equation
                  $$begin{equation*}
                  x_{n+1} = a x_n,tag{1}
                  end{equation*}$$

                  where $x_0$ is given, with solution
                  $$begin{equation*}
                  x_n = a^n x_0. tag{2}
                  end{equation*}$$



                  Let $x_n = x(t_n)$, where $t_{n+1} = t_n + h$ and $t_0 = 0$, so $t_n = n h$.
                  Rewrite the difference equation (1) as
                  $$frac{x(t_{n}+h)-x(t_n)}{h} = frac{(a-1)}{h} x(t_n).$$
                  If $h$ is small, this can be approximated by the differential equation
                  $$begin{equation*}
                  x'(t) = frac{a-1}{h}x(t),tag{3}
                  end{equation*}$$

                  with solution
                  $$begin{equation*}
                  x(t) = x(0) exp left(frac{a-1}{h} tright).tag{4}
                  end{equation*}$$

                  Notice that $expleft(frac{a-1}{h} tright) = a^{n} + O(n(a-1)^2)$, so for $nll 1/(a-1)^2$ we find good agreement with (2).



                  Going in the reverse direction, from the differential equation (3) to the difference equation (1), is called Euler's method.



                  There is a subtlety when approximating a difference equation by a differential equation.
                  For this problem we require that $|a-1| ll 1$ since we need $x_{n+1}- x_n = O(h) ll 1$.
                  Otherwise, we should expect (and will find) the approximation of (1) by (3) to be poor.
                  Sometimes it is necessary to reformulate the difference equation so the difference between successive terms is small.${}^dagger$



                  ${}^dagger$ For example, suppose $a$ in (1) were large and positive.
                  Let $y_0=x_0$ and let $y_{n+1} = a^{1/p} y_n$,
                  where $p$ is some large integer so $|a^{1/p} - 1|ll 1$.
                  Note that $y_{n p} = x_n$.
                  The solution to the corresponding differential equation for $y$ will be a good approximation to $y_n$, and this in turn can be used to approximate $x_n$.



                  Second order example



                  Consider the differential equation for the harmonic oscillator
                  $$begin{equation*}
                  x''+x = 0, hspace{5ex} x(0)=1, hspace{5ex}x'(0)=0, tag{5}
                  end{equation*}$$

                  the solution for which is $x(t) = cos t$.
                  The simplest related difference equation is
                  $$begin{equation*}
                  frac{1}{h^2}(x_{n+2}-2x_{n+1}+x_n) + x_n = 0, tag{6}
                  end{equation*}$$

                  with $x_0 = x_1 = 1$.
                  (We show how to get (6) from (5) below.)
                  There are standard techniques for solving simple recurrence relations such as (6) in closed form.
                  We find
                  $$begin{equation*}
                  x_n = frac{1}{2} left((1+i h)^{n}+(1 - i h)^{n}right).
                  end{equation*}$$

                  Note that $x_n$ is real since $x_n^* = x_n$.
                  This is the closed form for the solution a computer would find solving (6) iteratively.
                  Recall that $n=t_n/h$.
                  For small $h$,
                  $x_n = cos t_n + frac{1}{2} h t_n cos t_n + O(h^2)$,
                  so the numerical solution to (6) will well approximate $cos t_n$ for $h t_n ll 1$.
                  In the limit $hto 0$,
                  $x_n to cos t_n,$
                  as expected.



                  Derivatives and finite differences



                  Morally, a difference equation is a discrete version of a differential equation and
                  a differential equation is a continuous version of a difference equation.
                  The method of numerical integration of ODEs is essentially the rewriting of a differential equation as a difference equation which is then solved iteratively by a computer.
                  This is a big subject with many subtleties.
                  The method can also be applied to nonlinear and partial differential equations.
                  Below we give a brief dictionary between finite difference and differential operators.



                  Define the shift operator $E$ such that $E x_n = x_{n+1}$.
                  The difference operator $Delta = E-1$ then gives $Delta x_n = x_{n+1}-x_n$.
                  These operators are connected to differentiation by Taylor series.
                  Let $D x_n = x_n' = d x(t)/d t|_{t=t_n}$. Then
                  $$x_{n+1} = E x_n = sum_{k=0}^infty frac{h^k}{k!} D^k x_n
                  = e^{h D}x_n.$$

                  Thus, as an operator,
                  $$E = e^{h D},$$ and so
                  $$D = frac{1}{h} ln E = frac{1}{h} ln(1+Delta)
                  = frac{1}{h}sum_{k=1}^infty (-1)^{k+1} frac{Delta^k}{k}.$$

                  (Think of these operators as acting on polynomials, possibly of very high order.)
                  This formalism gives us a way to convert any ODE into a difference equation and vice-versa.
                  Notice that higher order derivatives can be approximated by $D^kapprox (Delta/h)^k$.
                  Thus, for example,
                  $$x'' = D^2 x rightarrow frac{1}{h^2}Delta^2x_n
                  = frac{1}{h^2}Delta(x_{n+1}-x_n)
                  = frac{1}{h^2} (x_{n+2}-2 x_{n+1} + x_n).$$



                  When using Euler's method we let $D approx Delta/h$.
                  We could just as well keep higher order terms in $Delta$ to get recursion relations with three or more terms.
                  It is a sign of the nontrivial nature of the subject that this simple change leads to numerical instabilities.
                  There are many named algorithms that do improve and generalize Euler's method, building on the ideas sketched above.
                  (See, for example, the Runge-Kutta
                  methods, a family of robust algorithms used to numerically solve linear and nonlinear differential equations.)



                  Addendum



                  This is in response to Drew N's question about seeing directly that
                  $$D = frac{1}{h}sum_{k=1}^infty (-1)^{k+1} frac{Delta^k}{k}$$
                  is the differential operator.
                  Here we show that $D t^n = n t^{n-1}$ for $n=0,1,ldots$, which shows that $D$ is the differential operator on polynomials of arbitrarily high degree.
                  (It is straightforward to extend this proof to $ninmathbb{C}$.)
                  We have
                  begin{align*}
                  D t^n &= frac{1}{h} sum_{k=1}^infty (-1)^{k+1}frac{1}{k}Delta^k t^n \
                  &= frac{1}{h} sum_{k=1}^infty (-1)^{k+1}frac{1}{k}(E-1)^k t^n \
                  &= frac{1}{h} sum_{k=1}^infty (-1)^{k+1}frac{1}{k}
                  sum_{j=0}^k{kchoose j}(-1)^{k-j}E^j t^n
                  & textrm{binomial theorem} \
                  &= frac{1}{h} sum_{k,j} (-1)^{j+1}frac{1}{k}{kchoose j}(t+j h)^n \
                  &= frac{1}{h} sum_{k,j} (-1)^{j+1}frac{1}{k}{kchoose j}
                  sum_{l=0}^n {nchoose l}j^l h^l t^{n-l}
                  & textrm{binomial theorem} \
                  &= left.frac{1}{h} sum_{k,j,l} (-1)^{j+1}frac{1}{k}
                  {kchoose j}{nchoose l}
                  (x D)^l x^j h^l t^{n-l}right|_{x=1}
                  & D = d/dx, (x D)^l = underbrace{(x D)cdots (x D)}_{l} \
                  &= left.-frac{1}{h} sum_{k,l} frac{1}{k}
                  {nchoose l} h^l t^{n-l}
                  (x D)^l sum_{j=0}^k {kchoose j}(-x)^j right|_{x=1} \
                  &= left.-frac{1}{h} sum_{k,l} frac{1}{k}
                  {nchoose l} h^l t^{n-l}
                  (x D)^l (1-x)^k right|_{x=1}
                  & textrm{binomial theorem} \
                  &= left.-frac{1}{h} sum_{l} {nchoose l} h^l t^{n-l}
                  (x D)^l sum_{k=1}^infty frac{1}{k} (1-x)^k right|_{x=1} \
                  &= left.frac{1}{h} sum_{l} {nchoose l} h^l t^{n-l}
                  (x D)^l log xright|_{x=1}
                  & textrm{series for natural logarithm} \
                  &= frac{1}{h} sum_{l=0}^n {nchoose l} h^l t^{n-l}
                  delta_{l1}
                  & textrm{see below} \
                  &= frac{1}{h} {nchoose 1} h t^{n-1} \
                  &= n t^{n-1}.
                  end{align*}

                  Note that
                  $(x D)x^j = j x^j$
                  so
                  $$(x D)^l x^j = j^l x^j.$$
                  Also
                  begin{align*}
                  (x D)^0 log x|_{x=1} &= log 1 = 0 \
                  (x D)^1 log x &= x frac{1}{x} = 1 \
                  (x D)^2 log x &= (x D)1 = 0 \
                  (x D)^3 log x &= (x D)^2 0 = 0.
                  end{align*}

                  Thus,
                  $$(x D)^l log x|_{x=1} = delta_{l1},$$
                  where $delta_{ij}$ is the Kronecker delta.






                  share|cite|improve this answer











                  $endgroup$









                  • 4




                    $begingroup$
                    Great work. Very helpful, concise, and clear. Thank you.
                    $endgroup$
                    – wesssg
                    Oct 7 '16 at 5:10










                  • $begingroup$
                    Why did you apply limit only to left side in formula (3)? I though that h should not appear at right side
                    $endgroup$
                    – LmTinyToon
                    Dec 16 '16 at 10:28










                  • $begingroup$
                    @LmTinyToon: For small $h$, the left hand side of (3) will be well-approximated by $x'(t)$. (But only as long as $x_{n+1}-x_n$ is also small.) The right hand side does not simplify in this limit and fundamentally depends on $h$. This is discussed in some detail in the last paragraph of "First order example" and in the footnote to that section. Cheers!
                    $endgroup$
                    – user26872
                    Dec 16 '16 at 21:40






                  • 1




                    $begingroup$
                    Using operator formalism you establish $D = (1/h) sum_{k=1}^infty (-1)^{k+1} Delta^{k}/k$, which I believe expands to $1/hbig[ (x_{n+1} - x_n) - (1/2) (x_{n+2} - 2x_{n+1} + x_n) + dots big]$. But I just don't intuit why this expression of $D$ must correctly give $Dx_n = x'(t_n)$. One can follow the operator formalism line by line, but is there a way to be convinced this really is $x'(t_n)$ just from the form of this expression alone?
                    $endgroup$
                    – Drew N
                    Dec 16 '18 at 4:05












                  • $begingroup$
                    @DrewN: Good question! I added an addendum to address this.
                    $endgroup$
                    – user26872
                    Jan 2 at 19:57
















                  34












                  $begingroup$

                  First order example



                  Consider the difference equation
                  $$begin{equation*}
                  x_{n+1} = a x_n,tag{1}
                  end{equation*}$$

                  where $x_0$ is given, with solution
                  $$begin{equation*}
                  x_n = a^n x_0. tag{2}
                  end{equation*}$$



                  Let $x_n = x(t_n)$, where $t_{n+1} = t_n + h$ and $t_0 = 0$, so $t_n = n h$.
                  Rewrite the difference equation (1) as
                  $$frac{x(t_{n}+h)-x(t_n)}{h} = frac{(a-1)}{h} x(t_n).$$
                  If $h$ is small, this can be approximated by the differential equation
                  $$begin{equation*}
                  x'(t) = frac{a-1}{h}x(t),tag{3}
                  end{equation*}$$

                  with solution
                  $$begin{equation*}
                  x(t) = x(0) exp left(frac{a-1}{h} tright).tag{4}
                  end{equation*}$$

                  Notice that $expleft(frac{a-1}{h} tright) = a^{n} + O(n(a-1)^2)$, so for $nll 1/(a-1)^2$ we find good agreement with (2).



                  Going in the reverse direction, from the differential equation (3) to the difference equation (1), is called Euler's method.



                  There is a subtlety when approximating a difference equation by a differential equation.
                  For this problem we require that $|a-1| ll 1$ since we need $x_{n+1}- x_n = O(h) ll 1$.
                  Otherwise, we should expect (and will find) the approximation of (1) by (3) to be poor.
                  Sometimes it is necessary to reformulate the difference equation so the difference between successive terms is small.${}^dagger$



                  ${}^dagger$ For example, suppose $a$ in (1) were large and positive.
                  Let $y_0=x_0$ and let $y_{n+1} = a^{1/p} y_n$,
                  where $p$ is some large integer so $|a^{1/p} - 1|ll 1$.
                  Note that $y_{n p} = x_n$.
                  The solution to the corresponding differential equation for $y$ will be a good approximation to $y_n$, and this in turn can be used to approximate $x_n$.



                  Second order example



                  Consider the differential equation for the harmonic oscillator
                  $$begin{equation*}
                  x''+x = 0, hspace{5ex} x(0)=1, hspace{5ex}x'(0)=0, tag{5}
                  end{equation*}$$

                  the solution for which is $x(t) = cos t$.
                  The simplest related difference equation is
                  $$begin{equation*}
                  frac{1}{h^2}(x_{n+2}-2x_{n+1}+x_n) + x_n = 0, tag{6}
                  end{equation*}$$

                  with $x_0 = x_1 = 1$.
                  (We show how to get (6) from (5) below.)
                  There are standard techniques for solving simple recurrence relations such as (6) in closed form.
                  We find
                  $$begin{equation*}
                  x_n = frac{1}{2} left((1+i h)^{n}+(1 - i h)^{n}right).
                  end{equation*}$$

                  Note that $x_n$ is real since $x_n^* = x_n$.
                  This is the closed form for the solution a computer would find solving (6) iteratively.
                  Recall that $n=t_n/h$.
                  For small $h$,
                  $x_n = cos t_n + frac{1}{2} h t_n cos t_n + O(h^2)$,
                  so the numerical solution to (6) will well approximate $cos t_n$ for $h t_n ll 1$.
                  In the limit $hto 0$,
                  $x_n to cos t_n,$
                  as expected.



                  Derivatives and finite differences



                  Morally, a difference equation is a discrete version of a differential equation and
                  a differential equation is a continuous version of a difference equation.
                  The method of numerical integration of ODEs is essentially the rewriting of a differential equation as a difference equation which is then solved iteratively by a computer.
                  This is a big subject with many subtleties.
                  The method can also be applied to nonlinear and partial differential equations.
                  Below we give a brief dictionary between finite difference and differential operators.



                  Define the shift operator $E$ such that $E x_n = x_{n+1}$.
                  The difference operator $Delta = E-1$ then gives $Delta x_n = x_{n+1}-x_n$.
                  These operators are connected to differentiation by Taylor series.
                  Let $D x_n = x_n' = d x(t)/d t|_{t=t_n}$. Then
                  $$x_{n+1} = E x_n = sum_{k=0}^infty frac{h^k}{k!} D^k x_n
                  = e^{h D}x_n.$$

                  Thus, as an operator,
                  $$E = e^{h D},$$ and so
                  $$D = frac{1}{h} ln E = frac{1}{h} ln(1+Delta)
                  = frac{1}{h}sum_{k=1}^infty (-1)^{k+1} frac{Delta^k}{k}.$$

                  (Think of these operators as acting on polynomials, possibly of very high order.)
                  This formalism gives us a way to convert any ODE into a difference equation and vice-versa.
                  Notice that higher order derivatives can be approximated by $D^kapprox (Delta/h)^k$.
                  Thus, for example,
                  $$x'' = D^2 x rightarrow frac{1}{h^2}Delta^2x_n
                  = frac{1}{h^2}Delta(x_{n+1}-x_n)
                  = frac{1}{h^2} (x_{n+2}-2 x_{n+1} + x_n).$$



                  When using Euler's method we let $D approx Delta/h$.
                  We could just as well keep higher order terms in $Delta$ to get recursion relations with three or more terms.
                  It is a sign of the nontrivial nature of the subject that this simple change leads to numerical instabilities.
                  There are many named algorithms that do improve and generalize Euler's method, building on the ideas sketched above.
                  (See, for example, the Runge-Kutta
                  methods, a family of robust algorithms used to numerically solve linear and nonlinear differential equations.)



                  Addendum



                  This is in response to Drew N's question about seeing directly that
                  $$D = frac{1}{h}sum_{k=1}^infty (-1)^{k+1} frac{Delta^k}{k}$$
                  is the differential operator.
                  Here we show that $D t^n = n t^{n-1}$ for $n=0,1,ldots$, which shows that $D$ is the differential operator on polynomials of arbitrarily high degree.
                  (It is straightforward to extend this proof to $ninmathbb{C}$.)
                  We have
                  begin{align*}
                  D t^n &= frac{1}{h} sum_{k=1}^infty (-1)^{k+1}frac{1}{k}Delta^k t^n \
                  &= frac{1}{h} sum_{k=1}^infty (-1)^{k+1}frac{1}{k}(E-1)^k t^n \
                  &= frac{1}{h} sum_{k=1}^infty (-1)^{k+1}frac{1}{k}
                  sum_{j=0}^k{kchoose j}(-1)^{k-j}E^j t^n
                  & textrm{binomial theorem} \
                  &= frac{1}{h} sum_{k,j} (-1)^{j+1}frac{1}{k}{kchoose j}(t+j h)^n \
                  &= frac{1}{h} sum_{k,j} (-1)^{j+1}frac{1}{k}{kchoose j}
                  sum_{l=0}^n {nchoose l}j^l h^l t^{n-l}
                  & textrm{binomial theorem} \
                  &= left.frac{1}{h} sum_{k,j,l} (-1)^{j+1}frac{1}{k}
                  {kchoose j}{nchoose l}
                  (x D)^l x^j h^l t^{n-l}right|_{x=1}
                  & D = d/dx, (x D)^l = underbrace{(x D)cdots (x D)}_{l} \
                  &= left.-frac{1}{h} sum_{k,l} frac{1}{k}
                  {nchoose l} h^l t^{n-l}
                  (x D)^l sum_{j=0}^k {kchoose j}(-x)^j right|_{x=1} \
                  &= left.-frac{1}{h} sum_{k,l} frac{1}{k}
                  {nchoose l} h^l t^{n-l}
                  (x D)^l (1-x)^k right|_{x=1}
                  & textrm{binomial theorem} \
                  &= left.-frac{1}{h} sum_{l} {nchoose l} h^l t^{n-l}
                  (x D)^l sum_{k=1}^infty frac{1}{k} (1-x)^k right|_{x=1} \
                  &= left.frac{1}{h} sum_{l} {nchoose l} h^l t^{n-l}
                  (x D)^l log xright|_{x=1}
                  & textrm{series for natural logarithm} \
                  &= frac{1}{h} sum_{l=0}^n {nchoose l} h^l t^{n-l}
                  delta_{l1}
                  & textrm{see below} \
                  &= frac{1}{h} {nchoose 1} h t^{n-1} \
                  &= n t^{n-1}.
                  end{align*}

                  Note that
                  $(x D)x^j = j x^j$
                  so
                  $$(x D)^l x^j = j^l x^j.$$
                  Also
                  begin{align*}
                  (x D)^0 log x|_{x=1} &= log 1 = 0 \
                  (x D)^1 log x &= x frac{1}{x} = 1 \
                  (x D)^2 log x &= (x D)1 = 0 \
                  (x D)^3 log x &= (x D)^2 0 = 0.
                  end{align*}

                  Thus,
                  $$(x D)^l log x|_{x=1} = delta_{l1},$$
                  where $delta_{ij}$ is the Kronecker delta.






                  share|cite|improve this answer











                  $endgroup$









                  • 4




                    $begingroup$
                    Great work. Very helpful, concise, and clear. Thank you.
                    $endgroup$
                    – wesssg
                    Oct 7 '16 at 5:10










                  • $begingroup$
                    Why did you apply limit only to left side in formula (3)? I though that h should not appear at right side
                    $endgroup$
                    – LmTinyToon
                    Dec 16 '16 at 10:28










                  • $begingroup$
                    @LmTinyToon: For small $h$, the left hand side of (3) will be well-approximated by $x'(t)$. (But only as long as $x_{n+1}-x_n$ is also small.) The right hand side does not simplify in this limit and fundamentally depends on $h$. This is discussed in some detail in the last paragraph of "First order example" and in the footnote to that section. Cheers!
                    $endgroup$
                    – user26872
                    Dec 16 '16 at 21:40






                  • 1




                    $begingroup$
                    Using operator formalism you establish $D = (1/h) sum_{k=1}^infty (-1)^{k+1} Delta^{k}/k$, which I believe expands to $1/hbig[ (x_{n+1} - x_n) - (1/2) (x_{n+2} - 2x_{n+1} + x_n) + dots big]$. But I just don't intuit why this expression of $D$ must correctly give $Dx_n = x'(t_n)$. One can follow the operator formalism line by line, but is there a way to be convinced this really is $x'(t_n)$ just from the form of this expression alone?
                    $endgroup$
                    – Drew N
                    Dec 16 '18 at 4:05












                  • $begingroup$
                    @DrewN: Good question! I added an addendum to address this.
                    $endgroup$
                    – user26872
                    Jan 2 at 19:57














                  34












                  34








                  34





                  $begingroup$

                  First order example



                  Consider the difference equation
                  $$begin{equation*}
                  x_{n+1} = a x_n,tag{1}
                  end{equation*}$$

                  where $x_0$ is given, with solution
                  $$begin{equation*}
                  x_n = a^n x_0. tag{2}
                  end{equation*}$$



                  Let $x_n = x(t_n)$, where $t_{n+1} = t_n + h$ and $t_0 = 0$, so $t_n = n h$.
                  Rewrite the difference equation (1) as
                  $$frac{x(t_{n}+h)-x(t_n)}{h} = frac{(a-1)}{h} x(t_n).$$
                  If $h$ is small, this can be approximated by the differential equation
                  $$begin{equation*}
                  x'(t) = frac{a-1}{h}x(t),tag{3}
                  end{equation*}$$

                  with solution
                  $$begin{equation*}
                  x(t) = x(0) exp left(frac{a-1}{h} tright).tag{4}
                  end{equation*}$$

                  Notice that $expleft(frac{a-1}{h} tright) = a^{n} + O(n(a-1)^2)$, so for $nll 1/(a-1)^2$ we find good agreement with (2).



                  Going in the reverse direction, from the differential equation (3) to the difference equation (1), is called Euler's method.



                  There is a subtlety when approximating a difference equation by a differential equation.
                  For this problem we require that $|a-1| ll 1$ since we need $x_{n+1}- x_n = O(h) ll 1$.
                  Otherwise, we should expect (and will find) the approximation of (1) by (3) to be poor.
                  Sometimes it is necessary to reformulate the difference equation so the difference between successive terms is small.${}^dagger$



                  ${}^dagger$ For example, suppose $a$ in (1) were large and positive.
                  Let $y_0=x_0$ and let $y_{n+1} = a^{1/p} y_n$,
                  where $p$ is some large integer so $|a^{1/p} - 1|ll 1$.
                  Note that $y_{n p} = x_n$.
                  The solution to the corresponding differential equation for $y$ will be a good approximation to $y_n$, and this in turn can be used to approximate $x_n$.



                  Second order example



                  Consider the differential equation for the harmonic oscillator
                  $$begin{equation*}
                  x''+x = 0, hspace{5ex} x(0)=1, hspace{5ex}x'(0)=0, tag{5}
                  end{equation*}$$

                  the solution for which is $x(t) = cos t$.
                  The simplest related difference equation is
                  $$begin{equation*}
                  frac{1}{h^2}(x_{n+2}-2x_{n+1}+x_n) + x_n = 0, tag{6}
                  end{equation*}$$

                  with $x_0 = x_1 = 1$.
                  (We show how to get (6) from (5) below.)
                  There are standard techniques for solving simple recurrence relations such as (6) in closed form.
                  We find
                  $$begin{equation*}
                  x_n = frac{1}{2} left((1+i h)^{n}+(1 - i h)^{n}right).
                  end{equation*}$$

                  Note that $x_n$ is real since $x_n^* = x_n$.
                  This is the closed form for the solution a computer would find solving (6) iteratively.
                  Recall that $n=t_n/h$.
                  For small $h$,
                  $x_n = cos t_n + frac{1}{2} h t_n cos t_n + O(h^2)$,
                  so the numerical solution to (6) will well approximate $cos t_n$ for $h t_n ll 1$.
                  In the limit $hto 0$,
                  $x_n to cos t_n,$
                  as expected.



                  Derivatives and finite differences



                  Morally, a difference equation is a discrete version of a differential equation and
                  a differential equation is a continuous version of a difference equation.
                  The method of numerical integration of ODEs is essentially the rewriting of a differential equation as a difference equation which is then solved iteratively by a computer.
                  First order example



Consider the difference equation
$$\begin{equation*}
x_{n+1} = a x_n,\tag{1}
\end{equation*}$$

where $x_0$ is given, with solution
$$\begin{equation*}
x_n = a^n x_0. \tag{2}
\end{equation*}$$



Let $x_n = x(t_n)$, where $t_{n+1} = t_n + h$ and $t_0 = 0$, so $t_n = n h$.
Rewrite the difference equation (1) as
$$\frac{x(t_{n}+h)-x(t_n)}{h} = \frac{(a-1)}{h} x(t_n).$$
If $h$ is small, this can be approximated by the differential equation
$$\begin{equation*}
x'(t) = \frac{a-1}{h}x(t),\tag{3}
\end{equation*}$$

with solution
$$\begin{equation*}
x(t) = x(0) \exp\left(\frac{a-1}{h} t\right).\tag{4}
\end{equation*}$$

Notice that $\exp\left(\frac{a-1}{h} t_n\right) = a^{n} + O(n(a-1)^2)$, so for $n\ll 1/(a-1)^2$ we find good agreement with (2).



                  Going in the reverse direction, from the differential equation (3) to the difference equation (1), is called Euler's method.
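To see Euler's method in code, here is a minimal Python sketch (the parameter values are arbitrary choices, not from the original): it discretizes (3) back into the recursion (1) and compares the iterated result with the exact solution (4).

    import math

    lam, h, x0 = -0.5, 0.01, 1.0   # ODE x' = lam*x; here lam plays the role of (a-1)/h
    a = 1 + lam * h                # Euler: x_{n+1} = x_n + h*lam*x_n = a*x_n

    x, N = x0, 1000                # iterate recursion (1) for N steps
    for _ in range(N):
        x = a * x

    t = N * h
    print(x, x0 * math.exp(lam * t))   # ~0.00665 vs ~0.00674 at t = 10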



There is a subtlety when approximating a difference equation by a differential equation.
For this problem we require that $|a-1| \ll 1$ since we need $x_{n+1}- x_n = O(h) \ll 1$.
Otherwise, we should expect (and will find) the approximation of (1) by (3) to be poor.
Sometimes it is necessary to reformulate the difference equation so the difference between successive terms is small.${}^\dagger$



${}^\dagger$ For example, suppose $a$ in (1) were large and positive.
Let $y_0=x_0$ and let $y_{n+1} = a^{1/p} y_n$,
where $p$ is some large integer so $|a^{1/p} - 1|\ll 1$.
Note that $y_{n p} = x_n$.
The solution to the corresponding differential equation for $y$ will be a good approximation to $y_n$, and this in turn can be used to approximate $x_n$.
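A minimal Python sketch of this rescaling (the values $a=5$, $p=1000$, $n=3$ are arbitrary choices): the refined recursion has $|a^{1/p}-1|\ll 1$, so its ODE approximation tracks $x_n = a^n x_0$ well.

    import math

    a, p, x0 = 5.0, 1000, 1.0     # a is not close to 1, so refine the step
    b = a ** (1.0 / p)            # y_{n+1} = b*y_n with |b - 1| << 1

    n = 3
    exact = a ** n * x0           # solution (2) of the original recursion

    # ODE approximation (4) for y at step n*p: the exponent is n*p*(b-1);
    # the step size h cancels because t_{np} = n*p*h.
    approx = x0 * math.exp(n * p * (b - 1.0))
    print(exact, approx)          # 125.0 vs ~125.5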



                  Second order example



Consider the differential equation for the harmonic oscillator
$$\begin{equation*}
x''+x = 0, \hspace{5ex} x(0)=1, \hspace{5ex} x'(0)=0, \tag{5}
\end{equation*}$$

the solution for which is $x(t) = \cos t$.
The simplest related difference equation is
$$\begin{equation*}
\frac{1}{h^2}(x_{n+2}-2x_{n+1}+x_n) + x_n = 0, \tag{6}
\end{equation*}$$

with $x_0 = x_1 = 1$.
(We show how to get (6) from (5) below.)
There are standard techniques for solving simple recurrence relations such as (6) in closed form.
We find
$$\begin{equation*}
x_n = \frac{1}{2} \left((1+i h)^{n}+(1 - i h)^{n}\right).
\end{equation*}$$

Note that $x_n$ is real since $x_n^* = x_n$.
This is the closed form for the solution a computer would find solving (6) iteratively.
Recall that $n=t_n/h$.
For small $h$,
$x_n = \cos t_n + \frac{1}{2} h t_n \cos t_n + O(h^2)$,
so the numerical solution to (6) will approximate $\cos t_n$ well for $h t_n \ll 1$.
In the limit $h\to 0$,
$x_n \to \cos t_n,$
as expected.
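Iterating (6) directly shows this; a minimal Python sketch (the step size and horizon are arbitrary choices):

    import math

    h, N = 0.01, 500              # step size and number of points
    x = [1.0, 1.0]                # initial data x_0 = x_1 = 1

    # (6) rearranged: x_{n+2} = 2*x_{n+1} - (1 + h^2)*x_n
    while len(x) < N:
        x.append(2.0 * x[-1] - (1.0 + h * h) * x[-2])

    t = (N - 1) * h               # t_n for the last point
    print(x[-1], math.cos(t))    # close, since h*t = 0.05 << 1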



                  Derivatives and finite differences



                  Morally, a difference equation is a discrete version of a differential equation and
                  a differential equation is a continuous version of a difference equation.
                  The method of numerical integration of ODEs is essentially the rewriting of a differential equation as a difference equation which is then solved iteratively by a computer.
                  This is a big subject with many subtleties.
                  The method can also be applied to nonlinear and partial differential equations.
                  Below we give a brief dictionary between finite difference and differential operators.



Define the shift operator $E$ such that $E x_n = x_{n+1}$.
The difference operator $\Delta = E-1$ then gives $\Delta x_n = x_{n+1}-x_n$.
These operators are connected to differentiation by Taylor series.
Let $D x_n = x_n' = d x(t)/d t|_{t=t_n}$. Then
$$x_{n+1} = E x_n = \sum_{k=0}^\infty \frac{h^k}{k!} D^k x_n
= e^{h D}x_n.$$

Thus, as an operator,
$$E = e^{h D},$$ and so
$$D = \frac{1}{h} \ln E = \frac{1}{h} \ln(1+\Delta)
= \frac{1}{h}\sum_{k=1}^\infty (-1)^{k+1} \frac{\Delta^k}{k}.$$

(Think of these operators as acting on polynomials, possibly of very high order.)
This formalism gives us a way to convert any ODE into a difference equation and vice-versa.
Notice that higher order derivatives can be approximated by $D^k\approx (\Delta/h)^k$.
Thus, for example,
$$x'' = D^2 x \rightarrow \frac{1}{h^2}\Delta^2 x_n
= \frac{1}{h^2}\Delta(x_{n+1}-x_n)
= \frac{1}{h^2} (x_{n+2}-2 x_{n+1} + x_n).$$
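A quick numerical check of this last formula (the test function $\sin t$ and the values of $h$ and $t$ are my own choices):

    import math

    h, t = 1e-3, 0.7
    x = math.sin                  # test function; its second derivative is -sin

    d2 = (x(t + 2*h) - 2*x(t + h) + x(t)) / h**2
    print(d2, -math.sin(t))      # agree to O(h), as expected of Delta^2/h^2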



When using Euler's method we let $D \approx \Delta/h$.
We could just as well keep higher order terms in $\Delta$ to get recursion relations with three or more terms.
It is a sign of the nontrivial nature of the subject that this simple change leads to numerical instabilities.
There are many named algorithms that improve on and generalize Euler's method, building on the ideas sketched above.
(See, for example, the Runge-Kutta methods, a family of robust algorithms used to numerically solve linear and nonlinear differential equations.)
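For reference, a minimal sketch of the classical fourth-order Runge-Kutta step in Python (the test problem is equation (3) with arbitrary parameters of my choosing):

    import math

    def rk4_step(f, t, x, h):
        """One classical RK4 step for x' = f(t, x)."""
        k1 = f(t, x)
        k2 = f(t + h/2, x + h/2 * k1)
        k3 = f(t + h/2, x + h/2 * k2)
        k4 = f(t + h, x + h * k3)
        return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    lam = 0.1                     # x' = lam*x, exact solution x0*exp(lam*t)
    f = lambda t, x: lam * x
    t, x, h = 0.0, 1.0, 0.1
    for _ in range(100):          # integrate to t = 10
        x = rk4_step(f, t, x, h)
        t += h
    print(x, math.exp(lam * 10))  # agree to roughly ten digits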



                  Addendum



This is in response to Drew N's question about seeing directly that
$$D = \frac{1}{h}\sum_{k=1}^\infty (-1)^{k+1} \frac{\Delta^k}{k}$$
is the differential operator.
Here we show that $D t^n = n t^{n-1}$ for $n=0,1,\ldots$, which shows that $D$ is the differential operator on polynomials of arbitrarily high degree.
(It is straightforward to extend this proof to $n\in\mathbb{C}$.)
We have
\begin{align*}
D t^n &= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}\Delta^k t^n \\
&= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}(E-1)^k t^n \\
&= \frac{1}{h} \sum_{k=1}^\infty (-1)^{k+1}\frac{1}{k}
\sum_{j=0}^k {k\choose j}(-1)^{k-j}E^j t^n
& \textrm{binomial theorem} \\
&= \frac{1}{h} \sum_{k,j} (-1)^{j+1}\frac{1}{k}{k\choose j}(t+j h)^n \\
&= \frac{1}{h} \sum_{k,j} (-1)^{j+1}\frac{1}{k}{k\choose j}
\sum_{l=0}^n {n\choose l}j^l h^l t^{n-l}
& \textrm{binomial theorem} \\
&= \left.\frac{1}{h} \sum_{k,j,l} (-1)^{j+1}\frac{1}{k}
{k\choose j}{n\choose l}
(x D)^l x^j h^l t^{n-l}\right|_{x=1}
& D = d/dx,\ (x D)^l = \underbrace{(x D)\cdots (x D)}_{l} \\
&= \left.-\frac{1}{h} \sum_{k,l} \frac{1}{k}
{n\choose l} h^l t^{n-l}
(x D)^l \sum_{j=0}^k {k\choose j}(-x)^j \right|_{x=1} \\
&= \left.-\frac{1}{h} \sum_{k,l} \frac{1}{k}
{n\choose l} h^l t^{n-l}
(x D)^l (1-x)^k \right|_{x=1}
& \textrm{binomial theorem} \\
&= \left.-\frac{1}{h} \sum_{l} {n\choose l} h^l t^{n-l}
(x D)^l \sum_{k=1}^\infty \frac{1}{k} (1-x)^k \right|_{x=1} \\
&= \left.\frac{1}{h} \sum_{l} {n\choose l} h^l t^{n-l}
(x D)^l \log x\right|_{x=1}
& \textrm{series for natural logarithm} \\
&= \frac{1}{h} \sum_{l=0}^n {n\choose l} h^l t^{n-l}
\delta_{l1}
& \textrm{see below} \\
&= \frac{1}{h} {n\choose 1} h t^{n-1} \\
&= n t^{n-1}.
\end{align*}

Note that
$(x D)x^j = j x^j$,
so
$$(x D)^l x^j = j^l x^j.$$
Also
\begin{align*}
(x D)^0 \log x|_{x=1} &= \log 1 = 0 \\
(x D)^1 \log x &= x \frac{1}{x} = 1 \\
(x D)^2 \log x &= (x D)1 = 0 \\
(x D)^3 \log x &= (x D)\,0 = 0.
\end{align*}

Thus,
$$(x D)^l \log x|_{x=1} = \delta_{l1},$$
where $\delta_{ij}$ is the Kronecker delta.
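The claim can also be checked in exact arithmetic; a small Python sketch (the degree, step, and evaluation point are arbitrary choices). Since $\Delta^{n+1}$ annihilates a degree-$n$ polynomial, the logarithm series terminates and the result is exact, not just approximate.

    from fractions import Fraction as F

    n, h, t = 5, F(1, 3), F(2)    # arbitrary degree, step, and point
    x = [(t + j * h) ** n for j in range(n + 2)]   # samples of t^n on the grid

    def delta(seq):               # forward difference of a sample list
        return [b - a for a, b in zip(seq, seq[1:])]

    total, d = F(0), x
    for k in range(1, n + 1):     # (1/h) * sum (-1)^(k+1) Delta^k / k
        d = delta(d)              # d now holds Delta^k applied to the samples
        total += F((-1) ** (k + 1), k) * d[0]

    print(total / h == n * t ** (n - 1))   # True: exact equality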







edited Jan 2 at 19:54
answered May 17 '12 at 21:56
– user26872
• 4
$begingroup$
Great work. Very helpful, concise, and clear. Thank you.
$endgroup$
– wesssg
Oct 7 '16 at 5:10

• $begingroup$
Why did you apply the limit only to the left side in formula (3)? I thought that $h$ should not appear on the right side.
$endgroup$
– LmTinyToon
Dec 16 '16 at 10:28

• $begingroup$
@LmTinyToon: For small $h$, the left hand side of (3) will be well-approximated by $x'(t)$. (But only as long as $x_{n+1}-x_n$ is also small.) The right hand side does not simplify in this limit and fundamentally depends on $h$. This is discussed in some detail in the last paragraph of "First order example" and in the footnote to that section. Cheers!
$endgroup$
– user26872
Dec 16 '16 at 21:40

• 1
$begingroup$
Using operator formalism you establish $D = (1/h) \sum_{k=1}^\infty (-1)^{k+1} \Delta^{k}/k$, which I believe expands to $1/h\big[ (x_{n+1} - x_n) - (1/2) (x_{n+2} - 2x_{n+1} + x_n) + \dots \big]$. But I just don't intuit why this expression of $D$ must correctly give $Dx_n = x'(t_n)$. One can follow the operator formalism line by line, but is there a way to be convinced this really is $x'(t_n)$ just from the form of this expression alone?
$endgroup$
– Drew N
Dec 16 '18 at 4:05

• $begingroup$
@DrewN: Good question! I added an addendum to address this.
$endgroup$
– user26872
Jan 2 at 19:57















                  3












                  $begingroup$

                  One example:



You can take the Laplace transform of a discrete function by expressing the target of the Laplace transform as the product of a continuous function and a sequence of shifted delta functions and summing up: $\sum_{k=0}^\infty\int_{0}^\infty f(t)\delta(t - kT)e^{-st}\,dt = \sum_{k=0}^\infty f(kT)e^{-skT}$, where $T$ is the sampling period.



The (unilateral) Z transform is defined as $\sum_{k=0}^{\infty}x[k]z^{-k}$, for a discrete time signal $x[k]$. By substituting $z = e^{sT}$ in the formula for the discretized Laplace transform, the Z transform is obtained, and the $s$ and $z$ domains are related by $s = \frac{1}{T}\log(z)$. This is obviously a nonlinear transformation; points left of the imaginary axis in the $s$ plane are mapped to the interior of the unit circle of the $z$ plane, while points to the right of the imaginary axis are mapped to the exterior. One can use the bilinear transform, $s = \frac{2(z - 1)}{T(z + 1)}$, as a first order approximation to $s = \frac{1}{T}\log(z)$.



So it's possible to (approximately) transform any equation in the $s$ domain into a corresponding equation in the $z$ domain. This is useful for deriving a difference equation, since it can be shown that the Z transform of a shift, $Z(x[k - n])$, is equal to simply $z^{-n}X(z)$. Due to this property it is often possible to go easily from the Z transform of an expression to a difference equation, simply by inspecting inverse powers of $z$ and their coefficients in the equation. In short, if the differential equation is amenable to the Laplace transform (i.e. linear time invariant), it can be quite straightforward using this method to get a difference equation representation.
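To make this concrete, here is a minimal Python sketch (the low-pass filter $H(s) = a/(s+a)$ and the parameter values are my own choices, not from the answer above). Substituting $s = \frac{2}{T}\frac{z-1}{z+1}$ into $H(s)$ and collecting powers of $z^{-1}$ gives a two-term difference equation.

    import math

    # H(z) = aT(1 + z^{-1}) / ((2 + aT) + (aT - 2) z^{-1}), so
    #   y[k] = ( a*T*(x[k] + x[k-1]) + (2 - a*T)*y[k-1] ) / (2 + a*T)
    a, T = 1.0, 0.01

    def lowpass(xs):
        ys, xprev, yprev = [], 0.0, 0.0
        for xk in xs:
            yk = (a*T*(xk + xprev) + (2 - a*T)*yprev) / (2 + a*T)
            ys.append(yk)
            xprev, yprev = xk, yk
        return ys

    # Step response should approach 1 with time constant 1/a, matching the
    # ODE y' + a*y = a*x that H(s) represents.
    ys = lowpass([1.0] * 1000)    # 10 seconds of unit step at T = 0.01
    print(ys[-1], 1 - math.exp(-a * 10.0))   # both roughly 0.99995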






$endgroup$


















answered May 16 '12 at 6:15
– Bitrex























                          3












                          $begingroup$

(Adding to user26872's answer, as this was not obvious to me, so it might help someone else going through his derivation.)

The identity

$$x_{n+1} = Ex_n = \sum\limits_{k=0}^\infty \frac{h^k}{k!}D^k x_n = e^{hD}x_n$$

is true if we consider the following. Let $x_n = x(t_n)$, and assume that $x(t)$ is differentiable around some point $t_n$; then its Taylor series can be defined as

$$x(t) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!}(t-t_n)^k.$$

If we now perform the same expansion at $t' = t_n+h$ we have

$$x(t_n+h) = \sum\limits_{k=0}^\infty \frac{x^{(k)}(t_n)}{k!}h^k = \sum\limits_{k=0}^\infty \frac{h^k D^k}{k!}x(t_n) = e^{hD} x(t_n)$$

and so

$$x_{n+1} = Ex_n \leftrightarrow x(t_n + h) = E x(t_n),$$

thus giving

$$E = e^{hD}.$$
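A quick numerical illustration of this identity (the cubic and the step are my own choices): for a degree-3 polynomial the exponential series terminates after four terms, so the shift is reproduced exactly even for a large step.

    import math

    x  = lambda t: t**3 - 2*t + 1
    dx = [x, lambda t: 3*t**2 - 2, lambda t: 6*t, lambda t: 6.0]   # x, x', x'', x'''

    t, h = 1.5, 0.7               # arbitrary point and (not small!) step
    taylor = sum(h**k * dx[k](t) / math.factorial(k) for k in range(4))
    print(taylor, x(t + h))       # identical: E = e^{hD} acting on a cubic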






$endgroup$


















answered Jan 10 '18 at 16:48
– layn























                                  2












                                  $begingroup$

First see the correspondence between discrete and continuous dynamical systems. Both are acts of a monoid on some set $X$.

For discrete dynamical systems the monoid that acts is $\mathbb{N}$. For continuous dynamical systems the monoid that acts is $\mathbb{R}$.

This action comes in the form of a left multiplication
$(.,.):\mathbb{N}\times X \rightarrow X$ or $(.,.):\mathbb{R}\times X \rightarrow X$
and is compatible with the addition structure of $\mathbb{N}$ and $\mathbb{R}$ respectively.

I.e. for all $n,m \in \mathbb{N}$ or $\mathbb{R}$ and $x \in X$ we have $(0,x) = x$ and $(n,(m,x)) = (n + m,x)$.

Now on to the story of difference and differential equations. A first order difference equation $x_{n+1} = f(x_n)$ gives a discrete dynamical system. Note that any difference equation can be converted to a system of first order difference equations (see higher order difference equations). Hence any difference equation gives a discrete dynamical system. Note that the $\mathbb{N}$-act is given by $(n,x) = f^n(x)$.

If the differential equation is sufficiently smooth, there exists a unique solution through any point ($\phi_x^t$). We use this solution as the $\mathbb{R}$-act, i.e. $(t,x) = \phi_x^t$. Hence any sufficiently smooth differential equation gives a continuous dynamical system.
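A minimal Python sketch of the two actions (the maps and names are my own choices): $\mathbb{N}$ acts by iterating a map $f$, and $\mathbb{R}$ acts by the flow of an ODE, here $x' = -x$ with flow $\phi_x^t = x e^{-t}$. Both satisfy the compatibility condition $(n,(m,x)) = (n+m,x)$.

    import math

    def discrete_act(n, x, f=lambda x: 0.5 * x + 1):
        for _ in range(n):        # (n, x) = f^n(x)
            x = f(x)
        return x

    def continuous_act(t, x):
        return x * math.exp(-t)   # (t, x) = phi_x^t for x' = -x

    x0 = 3.0
    print(discrete_act(2, discrete_act(3, x0)) == discrete_act(5, x0))    # True
    print(math.isclose(continuous_act(1.2, continuous_act(0.8, x0)),
                       continuous_act(2.0, x0)))                          # True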






$endgroup$


















answered Oct 25 '18 at 10:47
– Jens Wagemaker





























