Likelihood Ratio Test Variance of Normal Distribution

Let $X_1,\ldots,X_n$ be a random sample from $N(0,\sigma_X^2)$ and let $Y_1,\ldots,Y_m$ be a random sample from $N(0,\sigma_Y^2)$. Define $\alpha := \sigma_Y^2/\sigma_X^2$. Find the level $\alpha$ LRT of $H_0 : \alpha = \alpha_0$ versus $H_1 : \alpha \ne \alpha_0$. Express the rejection region of the LRT in terms of an $F(n,m)$ random variable. (Hint: $F$ can be obtained as the ratio of scaled $\chi^2$ distributions, i.e. $F(n,m) = \frac{\chi^2_n/n}{\chi_m^2/m}$.)



First of all, I find it a little bit confusing to define $\alpha$ as $\sigma_Y^2/\sigma_X^2$. This $\alpha$ is not the same $\alpha$ as the level of the LRT, right?



Anyway, I determined that the LRT statistic is $$\lambda(X,Y) = \frac{\sup_{\sigma_X^2,\sigma_Y^2:\frac{\sigma_Y^2}{\sigma_X^2} = \alpha_0}L(\sigma_X^2\mid X)\,L(\sigma_Y^2\mid Y)}{\sup_{\sigma_X^2,\sigma_Y^2}L(\sigma_X^2\mid X)\,L(\sigma_Y^2\mid Y)}$$



Calculating where the suprema are attained and substituting gave me $$\lambda(X,Y)=\frac{(n+m)^{(n+m)/2}\,\alpha_0^{n/2}\big(\sum X_i^2\big)^{n/2}\big(\sum Y_i^2\big)^{m/2}}{n^{n/2}\,m^{m/2}\big(\alpha_0\sum X_i^2+\sum Y_i^2\big)^{(n+m)/2}}\le c$$



where $c$ still needs to be determined to ensure we have a level $\alpha$ test. However, to do so I would need to know the distribution of this monstrous expression. I know that I can rescale things a bit to get that e.g. $\sum X_i^2/\sigma_X^2$ is $\chi_n^2$-distributed, but I still do not know what happens when such a distribution is raised to some power, multiplied by something, etc.



Furthermore, it is not clear to me how I should express the rejection region using this random variable $F$, but maybe this will become clear once I know how to find the level $\alpha$ LRT. Thank you for any help in clearing things up for me.
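(A sanity check I ran on the expression above, with made-up numbers: writing $t=\alpha_0\sum X_i^2/\sum Y_i^2$, the expression collapses to $\lambda = \frac{(n+m)^{(n+m)/2}}{n^{n/2}m^{m/2}}\cdot\frac{t^{n/2}}{(1+t)^{(n+m)/2}}$, so $\lambda$ depends on the data only through $t$ — and $t=\frac{n}{m}F$ for the $F$ ratio from the hint. A small stdlib-Python sketch:

```python
# Sanity check (made-up numbers): lambda depends on the data only through
# t = alpha0 * sum(X_i^2) / sum(Y_i^2), and its maximum value is 1.
import math

def lam(sx2, sy2, n, m, alpha0):
    """The derived lambda, with sx2 = sum(X_i^2) and sy2 = sum(Y_i^2)."""
    C = (n + m) ** ((n + m) / 2) / (n ** (n / 2) * m ** (m / 2))
    return (C * alpha0 ** (n / 2) * sx2 ** (n / 2) * sy2 ** (m / 2)
            / (alpha0 * sx2 + sy2) ** ((n + m) / 2))

n, m, alpha0 = 5, 7, 2.0
a = lam(3.0, 4.0, n, m, alpha0)    # t = 2*3/4 = 1.5
b = lam(30.0, 40.0, n, m, alpha0)  # same t, different scale
assert math.isclose(a, b)          # scale-invariant: a function of t only
assert a <= 1.0                    # restricted sup / unrestricted sup <= 1
# The maximum lambda = 1 occurs at t = n/m:
assert math.isclose(lam(n / (m * alpha0), 1.0, n, m, alpha0), 1.0)
```

So $\lambda \le c$ is equivalent to a condition on $t$ alone — reject when $t$ is too small or too large — which seems to be what makes the $F$ formulation possible.)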


















  • I agree that using $\alpha$ to denote the ratio of the variances is confusing (and not appropriate) when $\alpha$ is also used for the size of the test. This is likely a typo or an oversight on the part of the author/instructor.
    – Just_to_Answer, Jan 4 at 20:25












  • Three other notes: (1) since $\sum X^2_i / \sigma^2_X$ has a $\chi^2$ distribution, it might be beneficial to simplify the likelihood ratio with that in mind; (2) if the left side of the inequality can be written as a single power, then by taking an appropriate root the power can be moved over to the other side and absorbed into a new "constant"; (3) a similar idea applies to any lingering constants on the left side.
    – Just_to_Answer, Jan 4 at 20:38
















statistics statistical-inference hypothesis-testing maximum-likelihood







asked Jan 4 at 19:35









Analysis801








1 Answer
Here is a somewhat heuristic argument without going into details of a likelihood ratio test:



Suppose $\theta=\sigma_Y^2/\sigma_X^2$, and we are to test $H_0:\theta=\theta_0$ versus $H_1:\theta=\theta_1\,(\ne \theta_0)$.



Recall that the statistics $s_1^2=\frac{1}{n}\sum\limits_{i=1}^n X_i^2$ and $s_2^2=\frac{1}{m}\sum\limits_{i=1}^m Y_i^2$ are unbiased and sufficient for $\sigma_X^2$ and $\sigma_Y^2$ respectively. Moreover, $\frac{ns_1^2}{\sigma_X^2}\sim\chi^2_n$ and $\frac{ms_2^2}{\sigma_Y^2}\sim\chi^2_m$ are independently distributed.



Then we readily have



$$F=\frac{ns_1^2/(n\sigma_X^2)}{ms_2^2/(m\sigma_Y^2)}=\frac{s_1^2}{s_2^2}\,\theta\sim F_{n,m}$$



So a test statistic for testing $H_0$ would be $$F=\frac{s_1^2}{s_2^2}\,\theta_0$$



Under $H_0$, the expected value of the observed $F$ statistic is $$E(F)=\frac{m}{m-2}\approx 1\qquad (m>2)$$



So it could be argued that the decision rule is "Reject $H_0$ if observed $F<c_1$ or observed $F>c_2$", where $c_1,c_2$ are chosen so that $$P_{H_0}(F<c_1)+P_{H_0}(F>c_2)=\alpha$$



I haven't made much progress with the LR test specifically, but I am pretty sure you would end up with a test of the above form.
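As a concrete illustration of the rule above, here is a stdlib-Python Monte Carlo sketch (sample sizes and level are made up) that uses the simpler equal-tailed choice $P_{H_0}(F<c_1)=P_{H_0}(F>c_2)=\alpha/2$, which is one way to satisfy the condition above:

```python
# Monte Carlo sketch (stdlib only; n, m, and the level are made up):
# estimate equal-tailed critical values c1, c2 of F_{n,m}, then apply the test.
import random
random.seed(0)

def chi2(df):
    # sum of df squared standard normals ~ chi-squared with df degrees of freedom
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))

n, m, level = 10, 12, 0.05
sims = sorted((chi2(n) / n) / (chi2(m) / m) for _ in range(20000))
c1 = sims[int(level / 2 * len(sims))]        # ~ 2.5% quantile of F_{n,m}
c2 = sims[int((1 - level / 2) * len(sims))]  # ~ 97.5% quantile of F_{n,m}

# One hypothetical data set generated under H0 (theta0 = 2):
theta0, sigma_x2 = 2.0, 1.0
x = [random.gauss(0.0, sigma_x2 ** 0.5) for _ in range(n)]
y = [random.gauss(0.0, (theta0 * sigma_x2) ** 0.5) for _ in range(m)]
F = (sum(v * v for v in x) / n) / (sum(v * v for v in y) / m) * theta0
reject = (F < c1) or (F > c2)  # should be False about 95% of the time
```

(In practice one would read $c_1,c_2$ from $F_{n,m}$ tables or a quantile function rather than simulate; the equal-tailed split is a convention, not the unique solution of the level condition.)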






        answered Jan 8 at 20:23









StubbornAtom





























