Clarifying the statement $f(x,y) = f_X(x) f_Y(y)$ for all $x, y$ (when $X$ and $Y$ are independent)


























The textbook A First Course in Probability by Ross defines random variables $X$ and $Y$ to be independent when the following condition is satisfied: if $A$ and $B$ are subsets of $\mathbb R$, then
$$
\tag{0} P(X \in A, Y \in B) = P(X \in A)\, P(Y \in B).
$$

(I think technically we should assume that $X$ and $Y$ are Borel, but Ross glosses over that issue.)



The textbook goes on to state that if $X$ and $Y$ are jointly continuous random variables with PDFs $f_X$ and $f_Y$ and joint PDF $f$, then
$$
\tag{1} f(x,y) = f_X(x)\, f_Y(y) \quad \text{for all } x, y.
$$

However, I think this statement is not correct because $f_X$ and $f_Y$ are only defined up to sets of measure $0$. (For example, I think you could redefine $f_X$ arbitrarily on a set of measure $0$, and you would still have a perfectly valid PDF for $X$.)
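
For concreteness, here is a minimal illustration of what I mean (the uniform distribution is just my choice of example):
$$
f_X(x) = \mathbf 1_{(0,1)}(x), \qquad
\tilde f_X(x) = \begin{cases} 7, & x = 1/2, \\ \mathbf 1_{(0,1)}(x), & \text{otherwise}. \end{cases}
$$

If $X \sim \mathrm{Unif}(0,1)$, then $\int_A f_X(x) \, dx = \int_A \tilde f_X(x) \, dx = P(X \in A)$ for every Borel set $A$, since the two functions differ only on the null set $\{1/2\}$. So $\tilde f_X$ is just as valid a PDF for $X$ as $f_X$, even though the two disagree at the point $1/2$.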



Question: So how can we state equation (1) in a way that is correct (so that we are not speaking nonsense to students) but which is also understandable to undergrads?





Further details: Let $F$ be the joint CDF for $X$ and $Y$,
and let $F_X$ and $F_Y$ be the CDFs for $X$ and $Y$ respectively.
It can be shown that equation (0) is equivalent to
$$
F(a,b) = F_X(a) F_Y(b)
$$

for all $a, b \in \mathbb R$ (for one direction, take $A = (-\infty, a]$ and $B = (-\infty, b]$ in (0)). In other words,
$$
F(a,b) = \int_{-\infty}^a f_X(x) \, dx
\int_{-\infty}^b f_Y(y) \, dy
$$

for all $a, b \in \mathbb R$.
If $f_X$ is continuous at $a$, then differentiating both sides with respect to $a$ yields
$$
\frac{\partial F}{\partial a} = f_X(a) \int_{-\infty}^b f_Y(y) \, dy.
$$

If $f_Y$ is continuous at $b$, then differentiating with respect to $b$ yields
$$
\frac{\partial^2 F}{\partial a \, \partial b}
= f_X(a)\, f_Y(b).
$$

If we knew that
$$
\tag{2} \frac{\partial^2 F}{\partial a \, \partial b} = f(a,b)
$$
then we would have established that $f(a,b) = f_X(a) f_Y(b)$.



So let's see what assumptions we need in order to conclude that $\frac{\partial^2 F}{\partial a \, \partial b} = f(a,b)$. We know that
$$
\tag{3} F(a,b) = \int_{-\infty}^a \int_{-\infty}^b f(x,y) \, dy \, dx.
$$

If the function $x \mapsto \int_{-\infty}^b f(x,y) \, dy$ is continuous
at $a$, then by the fundamental theorem of calculus, differentiating both sides of (3) with respect to $a$ yields
$$
\frac{\partial F}{\partial a} = \int_{-\infty}^b f(a,y) \, dy.
$$

If the function $y \mapsto f(a,y)$ is continuous at $b$, then differentiating with respect to $b$ yields
$$
\frac{\partial^2 F}{\partial a \, \partial b} = f(a,b).
$$
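
As a sanity check that the argument gives the expected conclusion, consider the (purely illustrative) case of independent Exponential(1) variables, where every continuity assumption above holds on $(0,\infty)^2$: there $f_X(a) = e^{-a}$, $f_Y(b) = e^{-b}$, and
$$
F(a,b) = (1 - e^{-a})(1 - e^{-b}), \qquad a, b > 0,
$$
so
$$
\frac{\partial^2 F}{\partial a \, \partial b} = e^{-a} e^{-b} = f_X(a)\, f_Y(b),
$$
which is exactly equation (1) at those points.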



So, in order to establish (2), we needed the following two assumptions:




  1. The function $x \mapsto \int_{-\infty}^b f(x,y) \, dy$ is continuous
    at $a$.

  2. The function $y \mapsto f(a,y)$ is continuous at $b$.


Question: What is a nice, simple assumption I can state that will guarantee that these two conditions are met? If $f$ is continuous at $(a,b)$, then the second assumption is satisfied. But I don't see why continuity of $f$ at $(a,b)$ would guarantee that the first condition is also satisfied.










probability






edited Nov 26 at 6:28

























asked Nov 26 at 1:28









eternalGoldenBraid









  • You could say that (1) holds for all $(x,y) \in U \times V$, for any open $U$ and $V$ for which $f_X$, $f_Y$, and $f$ are continuous on $U$, $V$, and $U \times V$ respectively. Most density functions seen in undergraduate courses are continuous, so you are not losing many examples this way.
    – kimchi lover
    Nov 26 at 2:19

  • @kimchilover Thank you, that is the kind of suggestion I'm looking for.
    – eternalGoldenBraid
    Nov 26 at 3:59

  • The correct statement is: $f(x, y) = f_X(x) f_Y(y)$ holds for almost every $x,y$ (with respect to 2-dimensional Lebesgue measure).
    – Song
    Nov 26 at 7:47













