Kalman filter with missing measurement inputs
























I am a newbie to Kalman filters, but after some study, I think I understand how they work now.
For my application, I need a Kalman filter that combines the measurement inputs from two sources. In the standard Kalman filter, that is no problem at all, but it assumes that the measurement inputs from the two sensors are available at the same times. In my application, there is one new measurement from sensor 'b' for every 13 measurements from sensor 'a'. That is, 12 out of 13 times, the measurement from sensor 'b' is missing.

How would you handle that normally? Do you simply use the predicted measurement values as substitutes for the missing ones? Does that not lead to overconfidence in the missing measurements? How else can it be handled?












  • Do not use predicted measurement values. You can have two measurement matrices $H_1$ and $H_2$ that you only apply whenever you get either measurement, and in both cases apply the standard prediction. – mikkola, Jan 2 '17 at 18:24
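This comment's scheme can be sketched as follows. A toy 2-state model with made-up matrices and noise values: sensor 'a' observes position every step, sensor 'b' observes velocity only every 13th step, and each sensor gets its own $H$ and $R$ while the prediction step is always the same.

```python
import numpy as np

def predict(x, P, F, Q):
    # Standard time update: propagate the state and its covariance.
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    # Standard measurement update, using whichever H/R matches the sensor.
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

# Toy 2-state model (position, velocity). All values here are hypothetical.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Q = 0.01 * np.eye(2)
H1, R1 = np.array([[1.0, 0.0]]), np.array([[0.5]])   # sensor 'a': position
H2, R2 = np.array([[0.0, 1.0]]), np.array([[0.1]])   # sensor 'b': velocity

x, P = np.zeros(2), np.eye(2)
for k in range(26):
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, np.array([float(k)]), H1, R1)  # 'a' arrives every step
    if k % 13 == 12:                                   # 'b' present 1 time in 13
        x, P = update(x, P, np.array([1.0]), H2, R2)
```

Note that no substitute measurement is ever fabricated: on the 12 steps without a 'b' sample, the filter simply never calls `update` with `H2`.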
















kalman-filter






asked Oct 20 '14 at 18:37









fishinear

4 Answers






























Here might be a better approach (from link):

For a missing measurement, just use the last state estimate as a measurement, but set the covariance matrix of the measurement to essentially infinity. (If the system uses inverse covariance, just set the values to zero.) This would cause a Kalman filter to essentially ignore the new measurement, since the ratio of the variance of the prediction to the measurement is zero. The result will be a new prediction that maintains velocity/acceleration but whose variance will grow according to the process noise.

– BB_ML, answered Jul 8 '17 at 13:49
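A minimal sketch of the quoted trick, using a hypothetical 1-D filter; the "essentially infinite" covariance is just a very large number here:

```python
BIG = 1e12  # stands in for an "essentially infinite" measurement variance

def kf_step(x, P, z, R, F=1.0, Q=0.01, H=1.0):
    # 1-D Kalman filter: time update, then measurement update.
    x, P = F * x, F * P * F + Q
    K = P * H / (H * P * H + R)
    return x + K * (z - H * x), (1 - K * H) * P

x, P = 0.0, 1.0
x1, P1 = kf_step(x, P, z=5.0, R=0.5)  # real measurement: the gain is substantial
x2, P2 = kf_step(x, P, z=x, R=BIG)    # "missing": gain ~ 0, measurement ignored
```

With `R=BIG` the gain collapses to roughly zero, so the state is untouched and the covariance keeps growing with the process noise, exactly as the quote describes.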













  • What should the covariance matrix look like for 4 sensors, one of which has a missing measurement? – Petrus Theron, Sep 18 '18 at 15:21































You are absolutely right. If at time $t$ the measurement is missing, only the time update is computed and the measurement update must be skipped. This is the way you should handle the problem.

– Dominik, answered Jan 25 '15 at 22:06
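The skip-the-update scheme can be sketched as follows (a hypothetical 1-D random-walk model; `None` marks a missing sample):

```python
def predict(x, P, Q=0.1):
    # Time update for a random-walk model: state unchanged, uncertainty grows.
    return x, P + Q

def update(x, P, z, R=0.5):
    # Measurement update: only called when a sample actually exists.
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0
for z in [1.0, None, None, 0.8]:   # None marks a missing measurement
    x, P = predict(x, P)
    if z is not None:
        x, P = update(x, P, z)
```

During the `None` steps the covariance `P` grows, so the filter honestly reports that it knows less, rather than pretending it received data.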













  • Since I posted this question, I have tested with the Kalman filter as described, and noticed that my suspicions had been correct: it is overconfident on the missing data. To compensate, I have now implemented a double Kalman filter: one for the situation where the sample is missing, and a different one for when the sample is there. That seems to be working well, but is that a normal approach? – fishinear, Jan 26 '15 at 14:20










  • Please explain to me what exactly you mean by "overconfident on the missing data". – Dominik, Jan 27 '15 at 12:17






  • Because 12 out of 13 times the predicted value is used as the measured value, all those times the error between predicted value and measured value is zero. Therefore the Kalman filter "thinks" its predictions are really good and starts relying on them. When the real measured value then comes in, it over-reacts, because it assumes there to be zero error in that one as well. – fishinear, Jan 27 '15 at 18:31
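The failure mode described in this comment can be reproduced in a few lines (hypothetical 1-D filter and noise values). Feeding the prediction back in as a measurement shrinks the filter's stated uncertainty even though no new information arrived, whereas skipping the update lets the uncertainty grow as it should:

```python
def predict(x, P, Q=0.1):
    return x, P + Q

def update(x, P, z, R=0.5):
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

x0, P0 = 0.0, 1.0                        # same starting belief for both variants
xs, Ps = predict(x0, P0)                 # variant 1: skip the update entirely
xp, Pp = update(*predict(x0, P0), z=xs)  # variant 2: feed the prediction back as data
# Same state either way, but variant 2's covariance shrank with no new information.
```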












  • From a statistical point of view it would be the right choice to use the predicted values, because the Kalman filter gives you $E[y_t|z_1,\ldots,z_t]$, the expected value of the state at time $t$ given all measurements up to time $t$. If you have a time partition $t=1,\ldots,10$ and you want to derive an approximation for every timestep, there is no other way than taking the predicted value. If you only have the measurement at time $t_1$, you must be satisfied with $E[y_{10}|z_1]$, the prediction. What does your solution exactly look like? – Dominik, Jan 28 '15 at 9:28






  • Right now, I use one Kalman filter when the 'b' input is absent. That one is based only on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. As I said, it seems to work OK, but I'm not sure whether I am missing something. – fishinear, Jan 28 '15 at 14:44































Don't use predicted values. Just Bayes-fuse the likelihoods from each available observation into your posterior as they arrive; it doesn't matter how many there are at each step.

– charles.fox, answered Aug 18 '17 at 10:02
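One way to read this, assuming Gaussian likelihoods so that each fusion step is a standard scalar Kalman update (all values hypothetical): at every step, absorb however many observations actually arrived, one after another.

```python
def fuse(x, P, observations, R=0.5):
    # Sequentially absorb each available observation into the posterior.
    for z in observations:
        K = P / (P + R)
        x, P = x + K * (z - x), (1 - K) * P
    return x, P

x, P = 0.0, 1.0
x, P = fuse(x, P, [1.0, 1.2])  # a step where two sensors reported
x, P = fuse(x, P, [])          # a step with no data: posterior unchanged
```

For Gaussian likelihoods, fusing observations one at a time gives the same posterior as a single batched update, which is why the count per step doesn't matter.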













  • You may want to expand on what you are saying. I think you mean Bayesian data fusion? How would that combine with a Kalman filter? And using the predicted values is essential to getting accurate values in a Kalman filter, so how would Bayesian data fusion help with that? – fishinear, Aug 19 '17 at 11:29

































This is not a problem at all for a Kalman filter (KF). In a KF, you have a prediction step and an update step. At each time step $k$, you must predict your states in the prediction step; this is performed using a process model. If you do not have a measurement, you skip the update step. If you have a measurement, you perform the update step after the prediction step.

Edit: Keep in mind that in many cases the updates run at a lower frequency than the predictions (e.g. GPS/INS sensor fusion). Your problem sounds suitable for this framework.
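The prediction/update scheduling described here can be sketched as follows (hypothetical 1-D model and noise values, mimicking the question's 1-in-13 ratio between the two sensors):

```python
def predict(x, P, Q=0.05):
    # Process-model step: runs every tick, measurement or not.
    return x, P + Q

def update(x, P, z, R):
    # Measurement step: runs only when that sensor actually delivered data.
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0
for k in range(26):
    x, P = predict(x, P)
    x, P = update(x, P, z=1.0, R=0.4)   # sensor 'a': available every step
    if (k + 1) % 13 == 0:               # sensor 'b': every 13th step only
        x, P = update(x, P, z=1.1, R=0.1)
```

This mirrors GPS/INS fusion, where the inertial prediction runs at a high rate and the GPS update arrives far less often.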













  • In my experience, this approach seems to lead to overconfidence in the predicted values for sensor 'b', which is not surprising, because the predicted value is used 12 out of 13 times. In my current approach, I use one Kalman filter when the 'b' input is absent. That one is based only on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. This seems to work OK. – fishinear, Oct 6 '18 at 15:30










  • This shouldn't lead to overconfidence in the predicted values if your process and observation models are correct. If they are not, I suggest finding a better model. Additionally, the fact that you have 12 measurements of one type for 1 measurement of the other means nothing without knowing your models/error characteristics. – Ralff, Oct 6 '18 at 16:45











Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});














draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f982982%2fkalman-filter-with-missing-measurement-inputs%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























4 Answers
4






active

oldest

votes








4 Answers
4






active

oldest

votes









active

oldest

votes






active

oldest

votes









1












$begingroup$

Here might be a better approach (from link)




For a missing measurement, just use the last state estimate as a
measurement but set the covariance matrix of the measurement to
essentially infinity. (If the system uses inverse covariance just set
the values to zero.) This would cause a Kalman filter to essentially
ignore the new measurement since the ratio of the variance of the
prediction to the measurement is zero. The result will be a new
prediction that maintains velocity/acceleration but whose variance
will grow according to the process noise.







share|cite|improve this answer









$endgroup$













  • $begingroup$
    What should the covariance matrix look like for 4 sensors, one of which has a missing measurement?
    $endgroup$
    – Petrus Theron
    Sep 18 '18 at 15:21
















1












$begingroup$

Here might be a better approach (from link)




For a missing measurement, just use the last state estimate as a
measurement but set the covariance matrix of the measurement to
essentially infinity. (If the system uses inverse covariance just set
the values to zero.) This would cause a Kalman filter to essentially
ignore the new measurement since the ratio of the variance of the
prediction to the measurement is zero. The result will be a new
prediction that maintains velocity/acceleration but whose variance
will grow according to the process noise.







share|cite|improve this answer









$endgroup$













  • $begingroup$
    What should the covariance matrix look like for 4 sensors, one of which has a missing measurement?
    $endgroup$
    – Petrus Theron
    Sep 18 '18 at 15:21














1












1








1





$begingroup$

Here might be a better approach (from link)




For a missing measurement, just use the last state estimate as a
measurement but set the covariance matrix of the measurement to
essentially infinity. (If the system uses inverse covariance just set
the values to zero.) This would cause a Kalman filter to essentially
ignore the new measurement since the ratio of the variance of the
prediction to the measurement is zero. The result will be a new
prediction that maintains velocity/acceleration but whose variance
will grow according to the process noise.







share|cite|improve this answer









$endgroup$



Here might be a better approach (from link)




For a missing measurement, just use the last state estimate as a
measurement but set the covariance matrix of the measurement to
essentially infinity. (If the system uses inverse covariance just set
the values to zero.) This would cause a Kalman filter to essentially
ignore the new measurement since the ratio of the variance of the
prediction to the measurement is zero. The result will be a new
prediction that maintains velocity/acceleration but whose variance
will grow according to the process noise.








share|cite|improve this answer












share|cite|improve this answer



share|cite|improve this answer










answered Jul 8 '17 at 13:49









BB_MLBB_ML

5,98052544




5,98052544












  • $begingroup$
    What should the covariance matrix look like for 4 sensors, one of which has a missing measurement?
    $endgroup$
    – Petrus Theron
    Sep 18 '18 at 15:21


















  • $begingroup$
    What should the covariance matrix look like for 4 sensors, one of which has a missing measurement?
    $endgroup$
    – Petrus Theron
    Sep 18 '18 at 15:21
















$begingroup$
What should the covariance matrix look like for 4 sensors, one of which has a missing measurement?
$endgroup$
– Petrus Theron
Sep 18 '18 at 15:21




$begingroup$
What should the covariance matrix look like for 4 sensors, one of which has a missing measurement?
$endgroup$
– Petrus Theron
Sep 18 '18 at 15:21











0












$begingroup$

You are absolutely right. If at a time t the measurement is missing, only the time-update is computed and the measurement update must be skipped. This is the way you shold handle the problem.






share|cite|improve this answer









$endgroup$













  • $begingroup$
    Since I posted this question, I have tested with the Kalman filter as described, and noticed that my suspicions had been correct: it is overconfident on the missing data. To compensate, I have now implemented a double Kalman filter, one for the situation where the sample is missing, and a different one when the sample is there. That seems to be working well, but is that a normal approach?
    $endgroup$
    – fishinear
    Jan 26 '15 at 14:20










  • $begingroup$
    Please explain to me what you exactly mean by "overconfident on the missing data".
    $endgroup$
    – Dominik
    Jan 27 '15 at 12:17






  • 1




    $begingroup$
    Because 12 out of 13 times, the predicted value is used as measured value, all those times, the error between predicted value and measured value is zero. Therefor the Kalman filter "thinks" its predictions are really good and starts relying on them. If then the real measured value comes it, it over-reacts because it assumes there to be zero error in that one as well. Sorry if I cannot explain it very well.
    $endgroup$
    – fishinear
    Jan 27 '15 at 18:31












  • $begingroup$
    From a statistical point of view it would be the right choice to use the predicted values. Because the Kalman Filter gives you E[y_t|z_1,...,z_t], the expected value of the state at time t, given all measurements up to time t. If you have a time partition t=1,...,t=10, and you want to derive a approximation for every timestep, there is no other way than taking the predicted value. If you only have the measurement at time t_1, you must be satisfied with E[y_10|z_1], the prediction. What does you solution exactly look like?
    $endgroup$
    – Dominik
    Jan 28 '15 at 9:28






  • 1




    $begingroup$
    Right now, I use one Kalman filter when the 'b' input is absent. That one is only based on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. As I said, it seems to work OK, but I'm not sure whether I am missing something.
    $endgroup$
    – fishinear
    Jan 28 '15 at 14:44
















0












$begingroup$

You are absolutely right. If at a time t the measurement is missing, only the time-update is computed and the measurement update must be skipped. This is the way you shold handle the problem.






share|cite|improve this answer









$endgroup$













  • $begingroup$
    Since I posted this question, I have tested with the Kalman filter as described, and noticed that my suspicions had been correct: it is overconfident on the missing data. To compensate, I have now implemented a double Kalman filter, one for the situation where the sample is missing, and a different one when the sample is there. That seems to be working well, but is that a normal approach?
    $endgroup$
    – fishinear
    Jan 26 '15 at 14:20










  • $begingroup$
    Please explain to me what you exactly mean by "overconfident on the missing data".
    $endgroup$
    – Dominik
    Jan 27 '15 at 12:17






  • 1




    $begingroup$
    Because 12 out of 13 times, the predicted value is used as measured value, all those times, the error between predicted value and measured value is zero. Therefor the Kalman filter "thinks" its predictions are really good and starts relying on them. If then the real measured value comes it, it over-reacts because it assumes there to be zero error in that one as well. Sorry if I cannot explain it very well.
    $endgroup$
    – fishinear
    Jan 27 '15 at 18:31












  • $begingroup$
    From a statistical point of view it would be the right choice to use the predicted values. Because the Kalman Filter gives you E[y_t|z_1,...,z_t], the expected value of the state at time t, given all measurements up to time t. If you have a time partition t=1,...,t=10, and you want to derive a approximation for every timestep, there is no other way than taking the predicted value. If you only have the measurement at time t_1, you must be satisfied with E[y_10|z_1], the prediction. What does you solution exactly look like?
    $endgroup$
    – Dominik
    Jan 28 '15 at 9:28






  • 1




    $begingroup$
    Right now, I use one Kalman filter when the 'b' input is absent. That one is only based on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. As I said, it seems to work OK, but I'm not sure whether I am missing something.
    $endgroup$
    – fishinear
    Jan 28 '15 at 14:44














0












0








0





$begingroup$

You are absolutely right. If at a time t the measurement is missing, only the time-update is computed and the measurement update must be skipped. This is the way you shold handle the problem.






share|cite|improve this answer









$endgroup$



You are absolutely right. If at a time t the measurement is missing, only the time-update is computed and the measurement update must be skipped. This is the way you shold handle the problem.







share|cite|improve this answer












share|cite|improve this answer



share|cite|improve this answer










answered Jan 25 '15 at 22:06









DominikDominik

113




113












  • $begingroup$
    Since I posted this question, I have tested with the Kalman filter as described, and noticed that my suspicions had been correct: it is overconfident on the missing data. To compensate, I have now implemented a double Kalman filter, one for the situation where the sample is missing, and a different one when the sample is there. That seems to be working well, but is that a normal approach?
    $endgroup$
    – fishinear
    Jan 26 '15 at 14:20










  • $begingroup$
    Please explain to me what you exactly mean by "overconfident on the missing data".
    $endgroup$
    – Dominik
    Jan 27 '15 at 12:17






  • 1




    $begingroup$
    Because 12 out of 13 times, the predicted value is used as measured value, all those times, the error between predicted value and measured value is zero. Therefor the Kalman filter "thinks" its predictions are really good and starts relying on them. If then the real measured value comes it, it over-reacts because it assumes there to be zero error in that one as well. Sorry if I cannot explain it very well.
    $endgroup$
    – fishinear
    Jan 27 '15 at 18:31












  • $begingroup$
    From a statistical point of view it would be the right choice to use the predicted values. Because the Kalman Filter gives you E[y_t|z_1,...,z_t], the expected value of the state at time t, given all measurements up to time t. If you have a time partition t=1,...,t=10, and you want to derive a approximation for every timestep, there is no other way than taking the predicted value. If you only have the measurement at time t_1, you must be satisfied with E[y_10|z_1], the prediction. What does you solution exactly look like?
    $endgroup$
    – Dominik
    Jan 28 '15 at 9:28






  • 1




    $begingroup$
    Right now, I use one Kalman filter when the 'b' input is absent. That one is only based on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. As I said, it seems to work OK, but I'm not sure whether I am missing something.
    $endgroup$
    – fishinear
    Jan 28 '15 at 14:44


















  • $begingroup$
    Since I posted this question, I have tested with the Kalman filter as described, and noticed that my suspicions had been correct: it is overconfident on the missing data. To compensate, I have now implemented a double Kalman filter, one for the situation where the sample is missing, and a different one when the sample is there. That seems to be working well, but is that a normal approach?
    $endgroup$
    – fishinear
    Jan 26 '15 at 14:20










  • $begingroup$
    Please explain to me what you exactly mean by "overconfident on the missing data".
    $endgroup$
    – Dominik
    Jan 27 '15 at 12:17






  • 1




    $begingroup$
    Because 12 out of 13 times, the predicted value is used as measured value, all those times, the error between predicted value and measured value is zero. Therefor the Kalman filter "thinks" its predictions are really good and starts relying on them. If then the real measured value comes it, it over-reacts because it assumes there to be zero error in that one as well. Sorry if I cannot explain it very well.
    $endgroup$
    – fishinear
    Jan 27 '15 at 18:31












  • $begingroup$
    From a statistical point of view it would be the right choice to use the predicted values. Because the Kalman Filter gives you E[y_t|z_1,...,z_t], the expected value of the state at time t, given all measurements up to time t. If you have a time partition t=1,...,t=10, and you want to derive a approximation for every timestep, there is no other way than taking the predicted value. If you only have the measurement at time t_1, you must be satisfied with E[y_10|z_1], the prediction. What does you solution exactly look like?
    $endgroup$
    – Dominik
    Jan 28 '15 at 9:28






  • 1




    $begingroup$
    Right now, I use one Kalman filter when the 'b' input is absent. That one is only based on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. As I said, it seems to work OK, but I'm not sure whether I am missing something.
    $endgroup$
    – fishinear
    Jan 28 '15 at 14:44
















$begingroup$
Since I posted this question, I have tested with the Kalman filter as described, and noticed that my suspicions had been correct: it is overconfident on the missing data. To compensate, I have now implemented a double Kalman filter, one for the situation where the sample is missing, and a different one when the sample is there. That seems to be working well, but is that a normal approach?
$endgroup$
– fishinear
Jan 26 '15 at 14:20




$begingroup$
Since I posted this question, I have tested with the Kalman filter as described, and noticed that my suspicions had been correct: it is overconfident on the missing data. To compensate, I have now implemented a double Kalman filter, one for the situation where the sample is missing, and a different one when the sample is there. That seems to be working well, but is that a normal approach?
$endgroup$
– fishinear
Jan 26 '15 at 14:20












$begingroup$
Please explain to me what you exactly mean by "overconfident on the missing data".
$endgroup$
– Dominik
Jan 27 '15 at 12:17




$begingroup$
Please explain to me what you exactly mean by "overconfident on the missing data".
$endgroup$
– Dominik
Jan 27 '15 at 12:17




1




1




$begingroup$
Because 12 out of 13 times, the predicted value is used as measured value, all those times, the error between predicted value and measured value is zero. Therefor the Kalman filter "thinks" its predictions are really good and starts relying on them. If then the real measured value comes it, it over-reacts because it assumes there to be zero error in that one as well. Sorry if I cannot explain it very well.
$endgroup$
– fishinear
Jan 27 '15 at 18:31






$begingroup$
Because 12 out of 13 times, the predicted value is used as measured value, all those times, the error between predicted value and measured value is zero. Therefor the Kalman filter "thinks" its predictions are really good and starts relying on them. If then the real measured value comes it, it over-reacts because it assumes there to be zero error in that one as well. Sorry if I cannot explain it very well.
$endgroup$
– fishinear
Jan 27 '15 at 18:31














$begingroup$
From a statistical point of view it would be the right choice to use the predicted values. Because the Kalman Filter gives you E[y_t|z_1,...,z_t], the expected value of the state at time t, given all measurements up to time t. If you have a time partition t=1,...,t=10, and you want to derive a approximation for every timestep, there is no other way than taking the predicted value. If you only have the measurement at time t_1, you must be satisfied with E[y_10|z_1], the prediction. What does you solution exactly look like?
$endgroup$
– Dominik
Jan 28 '15 at 9:28




$begingroup$
From a statistical point of view it would be the right choice to use the predicted values. Because the Kalman Filter gives you E[y_t|z_1,...,z_t], the expected value of the state at time t, given all measurements up to time t. If you have a time partition t=1,...,t=10, and you want to derive a approximation for every timestep, there is no other way than taking the predicted value. If you only have the measurement at time t_1, you must be satisfied with E[y_10|z_1], the prediction. What does you solution exactly look like?
$endgroup$
– Dominik
Jan 28 '15 at 9:28




1




1




$begingroup$
Right now, I use one Kalman filter when the 'b' input is absent. That one is only based on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. As I said, it seems to work OK, but I'm not sure whether I am missing something.
$endgroup$
– fishinear
Jan 28 '15 at 14:44




$begingroup$
Right now, I use one Kalman filter when the 'b' input is absent. That one is only based on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. As I said, it seems to work OK, but I'm not sure whether I am missing something.
$endgroup$
– fishinear
Jan 28 '15 at 14:44











0












$begingroup$

Don't use predicted values. Just Bayes-fuse the likelihoods from each available observation into your posterior as they arrive, it doesn't matter how many there are at each step.






share|cite|improve this answer









$endgroup$













  • $begingroup$
    You may want to expand on what you are saying. I think you mean a Bayesian Data Fusion? How would that combine with a Kalman filter? And using the predicted values is essential to getting accurate values in a Kalman filter, so how would Bayesian Data Fusion help with that?
    $endgroup$
    – fishinear
    Aug 19 '17 at 11:29




























answered Aug 18 '17 at 10:02
– charles.fox

$begingroup$

This is not a problem at all for a Kalman filter (KF). A KF alternates a prediction step and an update step. At every time step $k$ you run the prediction step, which propagates the state forward using the process model. If no measurement is available, you simply skip the update step; if a measurement is available, you perform the update step after the prediction step.

Edit: Keep in mind that in many cases the updates run at a lower frequency than the predictions (e.g. GPS/INS sensor fusion). Your problem sounds well suited to this framework.

$endgroup$
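The predict-always, update-when-available loop described in this answer can be sketched as follows, with a separate measurement matrix per sensor as suggested in the comments on the question. This is a hypothetical 1-D constant-velocity example; the matrices, noise levels, and the simulated measurements are all illustrative:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])    # process model: constant velocity
Q = 0.01 * np.eye(2)                     # process noise
H_a = np.array([[1.0, 0.0]])             # sensor 'a' observes position
H_b = np.array([[0.0, 1.0]])             # sensor 'b' observes velocity
R_a, R_b = np.array([[0.5]]), np.array([[0.1]])

def predict(x, P):
    """Prediction step: always runs, measurement or not."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, R):
    """Update step: runs only when the corresponding measurement arrives."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

x, P = np.zeros(2), np.eye(2)
for k in range(26):
    x, P = predict(x, P)                               # always predict
    x, P = update(x, P, np.array([1.0 * k]), H_a, R_a) # 'a' every step
    if k % 13 == 0:                                    # 'b' once per 13 steps
        x, P = update(x, P, np.array([1.0]), H_b, R_b)
```

Between 'b' measurements the velocity uncertainty simply grows according to $Q$, so the filter never becomes overconfident about the missing sensor; nothing is substituted for the absent measurement.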













  • $begingroup$
    In my experience, this approach seems to lead to overconfidence in the predicted values for sensor 'b' - which is not surprising, because the predicted value is used 12 out of 13 times. In my current approach, I use one Kalman filter when the 'b' input is absent. That one is only based on the sensor 'a' input. Then, in the steps when a 'b' sample is present, I use another Kalman filter which takes both 'a' and 'b' into account. This seems to work OK.
    $endgroup$
    – fishinear
    Oct 6 '18 at 15:30










  • $begingroup$
    This shouldn’t lead to overconfidence in the predicted values if your process and observation models are correct. If they are not, I suggest finding a better model. Additionally, the fact that you have 12 measurements of one type for 1 measurement of the other means nothing without knowing your models/error characteristics.
    $endgroup$
    – Ralff
    Oct 6 '18 at 16:45
















answered Oct 6 '18 at 9:22
– Ralff












Thanks for contributing an answer to Mathematics Stack Exchange!

