Confidence interval of an average coefficient calculated from curve fits, with each curve fit using a...
I am fitting a model to several data sets in order to determine a coefficient from each curve fit/data set. I then calculate an average coefficient and the 95% confidence interval for this average. However, each curve fit involves a parameter (in the model equation) which itself has a 95% confidence interval. How do I propagate the 95% confidence interval of this parameter into the 95% confidence interval of the coefficient calculated from the curve fits?
Here is an example version of the model that I am curve fitting to data:
$$y=\text{erfc}\left(\frac{x}{A+B}\right)$$
$A$ is the curve fit variable and $B$ is the pre-determined parameter with a 95% confidence interval. Changing the value of $B$ changes the calculated value of $A$, so unless I'm mistaken, the confidence interval of $B$ should propagate into that of $A$. How is this done mathematically?
Edit: I found the answer to my question. It's included below.
data-analysis confidence-interval error-propagation
edited Nov 24 at 19:45
asked Nov 5 at 22:38
Cedric Eveleigh
Could I calculate $x$ with the upper and lower bounds of $K$, based on its confidence interval, and then state that the confidence interval of $x$ is between the far extremes of the confidence intervals of $x$ calculated for the upper and lower bounds of $K$? For example, calculating $x=10\pm10$ for the lower bound of $K$, $x=20\pm10$ for the upper bound of $K$, $x=15\pm10$ for the average value of $K$, and stating that $x=15\pm15$.
– Cedric Eveleigh
Nov 9 at 1:13
Just using intervals is problematic -- extremes of x are not necessarily found at the extremes of K (you would need to verify that x is a monotonic function of K), and usually x is nonuniformly distributed, but an interval can't express anything about that. My advice for a relatively simple approximation to the full Bayesian taco is to generate a random sample from a distribution over K, and see what resulting distribution of x you get.
– Robert Dodier
Nov 11 at 6:22
FYI I updated the question with different variable names.
– Cedric Eveleigh
Nov 24 at 19:52
3 Answers
Your question is a bit confusing -- if $x$ is the adjustable parameter to be optimised, and $K$ is an input (from a distribution), what is the independent variable? Nevertheless, if I assume for the sake of argument that it's there somewhere, conceptually you should fit by optimisation of a single parameter
$y = \text{erfc}(\theta)$,
then rearrange the basic error propagation (eq. 3.20 from Bevington) to extract the uncertainty in $x$ (i.e. solve for $\sigma_x$ from the following):
$\sigma_\theta^2 = \sigma_x^2 + \sigma_K^2 \pm 2\sigma_{xK}^2,$
where the correlation coefficient of the covariance term would be 1.
This assumes a normal distribution for all quantities, of course, which may not be the case. Monte Carlo sampling would be the more robust way of solving this, if you've got any computational nous.
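For anyone who wants to try the Monte Carlo route suggested at the end of this answer (and by Robert Dodier in the comments), here is a minimal Python sketch using the question's current variable names ($A$, $B$). It is not part of the original answer: the data, the value of $B$, and its assumed normal distribution are invented for illustration, and the loop propagates only the uncertainty in $B$; the fit's own uncertainty in $A$ would still have to be combined with it.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

rng = np.random.default_rng(0)

# Hypothetical inputs: reported value of B and the half-width of its 95% CI
B_mean = 2.0
B_ci95 = 0.4
B_sigma = B_ci95 / 1.96          # assume B is roughly normally distributed

# Synthetic data generated from the question's model, y = erfc(x / (A + B))
A_true = 3.0
x = np.linspace(0.1, 10.0, 50)
y = erfc(x / (A_true + B_mean)) + rng.normal(0.0, 0.01, x.size)

def model(x, A, B):
    return erfc(x / (A + B))

# Monte Carlo: draw B from its distribution, refit A each time,
# then look at the spread of the fitted A values.
A_samples = []
for _ in range(1000):
    B_draw = rng.normal(B_mean, B_sigma)
    popt, _ = curve_fit(lambda x, A: model(x, A, B_draw), x, y, p0=[1.0])
    A_samples.append(popt[0])

A_samples = np.array(A_samples)
lo, hi = np.percentile(A_samples, [2.5, 97.5])
print(f"A = {A_samples.mean():.3f}, 95% interval from B alone: ({lo:.3f}, {hi:.3f})")
```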
answered Nov 8 at 5:12
GoneAsync
This is a good question, and unfortunately there is no easy answer. The right way to go about problems like this is via Bayesian inference. In this context, that amounts to assuming that every uncertain variable has a distribution, conditional or unconditional, and then deriving the distribution over what you consider interesting, given any data at hand, and marginalizing (i.e. integrating) over the other variables you consider uninteresting. My advice is to try to state the whole Bayesian model, and then make simplifying assumptions (possibly drastic) to arrive at something tractable.
There are probably many non-Bayesian methods, which amount to specific simplifications of the Bayesian approach, making use of particular assumptions, possibly unstated. My advice is, don't resort immediately to simplifications, instead work out the whole model and consider what are reasonable and workable assumptions for your problem.
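In the notation of the (updated) question, with $A$ the coefficient of interest and $B$ the uncertain parameter, the marginalization described here can be sketched as follows. Treating $B$'s reported 95% confidence interval as defining a prior $p(B)$ is my own assumption for illustration, not something stated in this answer:
$$p(A \mid \text{data}) \propto p(A) \int p(\text{data} \mid A, B)\, p(B)\, dB$$
A 95% credible interval for $A$ is then read off directly from this posterior, with the uncertainty in $B$ folded in by the integral.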
answered Nov 8 at 17:59
Robert Dodier
I'm curious what you think about what I suggested in a comment to the question.
– Cedric Eveleigh
Nov 11 at 1:13
I found a way to accomplish the propagation of uncertainty: Combine $A$ and $B$ into a single term—say, $C$—and curve fit to determine $C$. Then, solve $C = A + B$ for $A$ and calculate the uncertainty of $A$ with the uncertainties of $C$ and $B$.
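A minimal Python sketch of this reparameterized fit, added for illustration: the data, the erfc model from the question, and the value of $B$ and its interval are invented, and the last step assumes $C$ and $B$ are independent and approximately normal so that their uncertainties combine in quadrature.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

rng = np.random.default_rng(1)

# Hypothetical pre-determined parameter and its 95% CI half-width
B = 2.0
B_ci95 = 0.4

# Synthetic data from y = erfc(x / (A + B)) with A_true = 3.0
A_true = 3.0
x = np.linspace(0.1, 10.0, 50)
y = erfc(x / (A_true + B)) + rng.normal(0.0, 0.01, x.size)

# Fit the combined term C = A + B directly
def model(x, C):
    return erfc(x / C)

popt, pcov = curve_fit(model, x, y, p0=[3.0])   # rough initial guess for C
C = popt[0]
C_ci95 = 1.96 * np.sqrt(pcov[0, 0])             # approximate 95% half-width for C

# Recover A and combine the two uncertainties in quadrature
A = C - B
A_ci95 = np.sqrt(C_ci95**2 + B_ci95**2)
print(f"A = {A:.3f} +/- {A_ci95:.3f} (95%)")
```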
answered Nov 24 at 19:49
Cedric Eveleigh