Show that the sequence $(T_n)_{n\geq 1}$ converges in probability to the constant $2p$












Let $X_n \sim \mathrm{Bernoulli}(p)$. Let $Y_n = X_n + X_{n+1}$.
Let $T_n = \frac{1}{n}\sum_{i=1}^{n} Y_i$.
I want to show that the sequence $(T_n)_{n\geq 1}$ converges in probability to the constant $2p$.



I found that $E[T_n] = 2p$ and that $\operatorname{Var}[T_n] = 2p(1-p)\frac{2n-1}{n^2}$.
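(For reference, here is how I got the variance, assuming the $X_i$ are independent: each of $X_2, \dots, X_n$ appears in exactly two of the $Y_i$, while $X_1$ and $X_{n+1}$ appear once, so
$$\sum_{i=1}^{n} Y_i = X_1 + 2\sum_{i=2}^{n} X_i + X_{n+1}
\quad\Longrightarrow\quad
\operatorname{Var}\Big[\sum_{i=1}^{n} Y_i\Big] = p(1-p)\bigl(1 + 4(n-1) + 1\bigr) = 2p(1-p)(2n-1),$$
and dividing by $n^2$ gives the expression above.)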



My definition of convergence in probability is the following:
$$\forall \epsilon > 0 \quad \mathbb{P}(\vert T_n - 2p \vert > \epsilon) \to 0$$



I can also use the following criterion:



Convergence in probability iff $$\lim_{n\to\infty} \mathbb{E}\Big[\frac{\vert T_n - 2p\vert}{\vert T_n - 2p\vert + 1}\Big] = 0$$



Using the criterion seems like a smart approach to me, since I already know that the expected value is $2p$, but I am not sure how to proceed. Any hints?







probability convergence






edited Jan 4 at 21:57 by Davide Giraudo

asked Jan 4 at 20:30 by qcc101












  • Hint: You can try to use Chebyshev's inequality $\mathbb{P}\bigl(\lvert X - EX\rvert > \epsilon\bigr) \leq \frac{\operatorname{Var}(X)}{\epsilon^2}$. – dem0nakos, Jan 4 at 20:38

  • Can you use the law of large numbers? – Lundborg, Jan 4 at 20:54

  • @dem0nakos I didn't know that inequality, super useful. – qcc101, Jan 4 at 20:58


















2 Answers
Claim. If $\mu_n = \mathbf{E}(T_n) \to \mu$ and $\sigma_n^2 = \mathbf{V}\mathrm{ar}(T_n) \to 0$, then $T_n \to \mu$ in $\mathscr{L}^2$ and, hence, in probability too.



Proof. We have $\mathbf{E}(|T_n - \mu|^2) = \mathbf{E}(|T_n - \mu_n|^2) + 2(\mu_n - \mu)\,\mathbf{E}(T_n - \mu_n) + (\mu_n - \mu)^2 \to 0.$ Q.E.D.
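In the problem above, $\mathbf{E}(T_n) = 2p$ for every $n$ and $\mathbf{V}\mathrm{ar}(T_n) = 2p(1-p)\frac{2n-1}{n^2} \to 0$, so the claim applies directly. Equivalently, Chebyshev's inequality from the comments gives, for every $\epsilon > 0$,
$$\mathbb{P}\bigl(\lvert T_n - 2p\rvert > \epsilon\bigr) \leq \frac{2p(1-p)(2n-1)}{n^2 \epsilon^2} \to 0.$$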







answered Jan 4 at 20:39 by Will M.























Note that
$$
T_n = \frac{1}{n} \sum_{i=1}^n Y_i = \frac{1}{n} \sum_{i=1}^n (X_i + X_{i+1}) = 2 \cdot \frac{1}{n} \sum_{i=1}^n X_i + \frac{1}{n}\bigl(X_{n+1} - X_1\bigr).
$$

Now $(X_n)$ is an i.i.d. sequence of random variables with mean $E(X_i)=p$, so the law of large numbers states that
$$
2 \cdot \frac{1}{n} \sum_{i=1}^n X_i \overset{P}{\to} 2 p,
$$

and since $X_1$ and $X_{n+1}$ take values in $\{0,1\}$, clearly $\frac{1}{n}\bigl(X_{n+1} - X_1\bigr) \overset{P}{\to} 0$, thus yielding
$$
T_n \overset{P}{\to} 2 p
$$

as desired.
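If it helps to see the concentration numerically, here is a minimal NumPy sketch; the values of $p$, $\epsilon$, the sample sizes, and the number of replications are arbitrary choices made for illustration, not part of the problem:

    import numpy as np

    rng = np.random.default_rng(0)
    p, eps, reps = 0.3, 0.05, 2000   # illustrative parameters only

    for n in [10, 100, 1000, 10000]:
        # X_1, ..., X_{n+1} ~ Bernoulli(p), one row per replication
        X = rng.binomial(1, p, size=(reps, n + 1))
        # T_n = (1/n) * sum_{i=1}^n (X_i + X_{i+1})
        T = (X[:, :-1] + X[:, 1:]).mean(axis=1)
        # empirical estimate of P(|T_n - 2p| > eps)
        print(n, np.mean(np.abs(T - 2 * p) > eps))

The printed proportions should shrink toward $0$ as $n$ grows, matching the convergence in probability.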







answered Jan 4 at 20:58 by Lundborg





























