Is it possible for the bisection method to provide "fake" zeros?











I've read about the bisection method for finding roots of a function in my numerical analysis textbook, and one question came to mind.



Given a relatively complicated function, the chances of finding the exact root (that is, a root that is completely represented in the computer's memory, with all significant figures) are very low. This means that most of the time, we will have a value for which the function is very close to, but not exactly equal to, zero.



So what would happen if the function had one root, and another value at which the function gets extremely close to zero without actually reaching it? Would the algorithm fail? And what would such a "fake" root mean; is it worth anything?



Thank you.



EDIT: here is a picture showing what I meant












real-analysis analysis numerical-methods roots

asked Dec 1 at 16:29 by FredV, edited Dec 1 at 18:12 by timtfj




















  • 3




    If you are within the tolerance you have set for the algorithm, $|f(x)|<\epsilon$, then it would count as a root. This could arise if your function is discontinuous at $x$, but you would probably make sure that isn't the case before you start.
    – Paul
    Dec 1 at 16:36








  • 1




    Are you assuming the function is continuous?
    – PyRulez
    Dec 1 at 23:10






  • 1




    the "fake" root alludes to the fact that the intermediate value theorem is not constructive.
    – Matt A Pelto
    Dec 2 at 2:50










  • The bisection method does NOT "find roots". It finds (preferably, small) intervals where the function changes sign. Ask it to "find a root" of 1/x between -1 and +2, and it will tell you there is one close to 0. That behaviour is NOT a bug in the algorithm, though it may be a bug in the mind of the user who doesn't understand what the algorithm really does!
    – alephzero
    Dec 2 at 14:26












  • Perhaps it is the same bug that is infecting users who insist on putting words in other people's mouths.
    – Matt A Pelto
    Dec 2 at 21:07















4 Answers

Accepted answer (17 votes) by Empy2, answered Dec 1 at 16:41


The bisection method only cares if the function changes sign, so it goes straight past the 'fake' root without noticing.



If the coefficients have a slight error in them, then perhaps the 'fake' root should have been a root.
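To make this concrete, here is a minimal sketch (my own illustration, not code from the answer; the function below is invented for the purpose): bisection only tracks which half-interval still brackets a sign change, so a dip that comes within about $2\cdot10^{-6}$ of zero near $x=1$ never influences the iteration, and the method homes in on the genuine sign change at $x=3$.

    # Minimal bisection sketch: only the signs of f at the endpoints and midpoint matter.
    def bisect(f, a, b, xtol=1e-12):
        fa, fb = f(a), f(b)
        assert fa * fb < 0, "endpoints must bracket a sign change"
        while b - a > xtol:
            m = 0.5 * (a + b)
            fm = f(m)
            if fm == 0.0:
                return m
            if fa * fm < 0:
                b, fb = m, fm      # the left half still brackets the sign change
            else:
                a, fa = m, fm      # the right half still brackets the sign change
        return 0.5 * (a + b)

    # Invented example: f comes within ~2e-6 of zero near x = 1 but only crosses zero at x = 3.
    f = lambda x: ((x - 1.0) ** 2 + 1e-6) * (x - 3.0)
    print(bisect(f, 0.0, 4.1))     # ~3.0; the near-zero dip at x = 1 is simply passed over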






  • 2




    This should be the official answer.
    – DavidG
    Dec 2 at 12:04


















Answer (9 votes) by LutzL, answered Dec 1 at 19:17, edited Dec 2 at 11:03

You have to be aware that the bisection method finds a point with a sign change in the values of the numerical evaluation of your function. Due to catastrophic cancellation this can give wide errors even for simple roots. Take for instance the rescaled Wilkinson polynomial $p(x)=\prod_{k=1}^{20}(x-k/10)$ in double floating point precision, after multiplying it out as $x^{20}-21x^{19}+a_{18}x^{18}+\dots+a_1x+a_0$. Around the root $x=1.9$ the numerical evaluation of a larger sampling of points gives this image



(plot omitted: numerically evaluated values of $p$ sampled near $x=1.9$)



so that depending on the initial interval the bisection method might end at any point between $1.8999$ and $1.9003$.





To put this into perspective, the scale for this situation, polynomial evaluation for $|x|\le 2$, is provided by $p(-2)=3.35367\ldots\cdot 10^9$, so that the expected accuracy bound using a machine epsilon of $2\cdot10^{-16}$ is indeed about $7\cdot10^{-7}$; the observed errors are a little smaller.
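The effect can be reproduced with a short sketch along these lines (assuming NumPy; the answer's plot may well have been generated with different tools): expand the rescaled Wilkinson polynomial into monomial coefficients and evaluate it in double precision on a fine grid around $x=1.9$. The printed values scatter in a noise band around zero with erratic signs, which is why the point the bisection bracket collapses to depends on the starting interval.

    import numpy as np

    # Monomial coefficients of p(x) = prod_{k=1}^{20} (x - k/10), computed from the roots.
    roots = np.arange(1, 21) / 10.0
    coeffs = np.poly(roots)

    # Evaluate the expanded form in double precision near the simple root x = 1.9.
    xs = np.linspace(1.8995, 1.9005, 21)
    for x, v in zip(xs, np.polyval(coeffs, xs)):
        print(f"{x:.5f}  {v:+.2e}")   # values scatter in a band around 0 with erratic signs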






  • 1




    It's strange that you get such an apparent cancellation problem with this polynomial using double precision. Mathematica with machine precision handles it pretty well, both using the built-in Product function and a manual multiplication loop: screenshot. What software did you use to make this plot?
    – Ruslan
    Dec 1 at 21:39










  • I used the monomial form, that is, computed the coefficients and evaluated from there. Using the Horner scheme reduces the width of the error band by about half.
    – LutzL
    Dec 1 at 21:46










  • Ah, right, I can reproduce it after I Expand the product.
    – Ruslan
    Dec 1 at 21:48






  • 1




    If someone knows so little about numerical analysis that they want to evaluate Wilkinson's polynomial by multiplying it out first, they deserve whatever nonsense answers they get IMO. "Garbage in, garbage out" applies to algorithms as well as data!
    – alephzero
    Dec 2 at 14:21








  • 4




    @alephzero : I think you missed the meaning of the Wilkinson polynomial as an example of a polynomial with a seemingly "nice" root set that behaves "badly" for numerical root-finding algorithms. In general, if one knows the complete factorization of a polynomial, there is no need to apply root-finding algorithms, as one can directly read off the roots.
    – LutzL
    Dec 2 at 14:25




















Answer (3 votes) by Ross Millikan, answered Dec 1 at 17:01, edited Dec 2 at 1:03 by Mutantoe

As long as you evaluate $f\left(\frac{a+b}2\right)$ to something greater than zero, the method will work fine. It will replace $a$ with $\frac{a+b}2$ as the left end of the interval. The usual termination criterion for bisection and other bracketing approaches is the length of the interval, not the function value. It would keep going, knowing that there is a root somewhere in the interval because the function values at the endpoints have opposite signs.
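A hedged sketch of the stopping rules being contrasted here (illustrative code, not the poster's): terminating on the bracket width only certifies that a sign change lies inside an interval of length at most xtol, while an optional tolerance on $|f|$ would accept any point where the computed value merely looks small.

    # Bisection with the usual interval-length stop; the optional |f| tolerance is included
    # only to contrast the two criteria discussed in this answer and the comments.
    def bisect(f, a, b, xtol=1e-12, ftol=None):
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while b - a > xtol:
            m = 0.5 * (a + b)
            fm = f(m)
            if ftol is not None and abs(fm) < ftol:
                return m               # function-value test: may accept a "fake" zero
            if fa * fm <= 0:
                b, fb = m, fm          # the sign change lies in [a, m]
            else:
                a, fa = m, fm          # the sign change lies in [m, b]
        return 0.5 * (a + b)           # a sign change is certified inside [a, b], width <= xtol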






  • In my example the computer will think that the very first estimation is a root, while it's actually not, so does it mean, as stated by @Empy2, that the value should have been a root, and that the function actually has 2 roots on the interval?
    – FredV
    Dec 1 at 17:07






  • 2




    If you are using a function value test for a root and the computed value is within it, it is a root to you. That is what you said when you set the test limit. In that case this function has one double and one single root in the interval. As I said, the implementations I have seen test on the length of the interval as that is the uncertainty in the position of the root. As long as the computed value is not exactly $0$ the method will replace one endpoint or the other and continue. Getting a small non-zero value is no problem at all.
    – Ross Millikan
    Dec 1 at 17:13


















Answer (3 votes) by Matt A Pelto, answered Dec 2 at 2:00, edited Dec 2 at 22:33

Suppose we are utilizing a computer that has $N$-bit precision ($N\geq4$). If we take any integer $n > N$ and we apply the bisection algorithm to the function $f$ defined on $[0,1]$ by $f(x)=\left(x-\frac12\right)^2\left(x-\frac34\right)-2^{-n}$, then the algorithm will output $x=\frac12$ as the point where $f$ has a root. This is because $f(0)=-\frac3{16}-2^{-n}<0$, $f(1)=\frac1{16}-2^{-n}>0$, and our computer isn't capable of distinguishing $f\left(\frac12\right)=-2^{-n}$ from zero.



Here is a plot of our function $f$ if we took $n=10$: (plot omitted)



Notwithstanding the output from applying the bisection algorithm to $f$ on $[0,1]$, we can see that the only zero of $f$ in $[0,1]$ lies somewhere between $\frac34$ and $1$.
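For concreteness, here is a sketch of this scenario (my own emulation, not the answer's code): exact rational arithmetic stands in for the idealized machine, and the $N$-bit limit is modeled by declaring any magnitude below $2^{-N}$ indistinguishable from zero; the values of $N$ and $n$ below are arbitrary choices with $n>N$. The very first midpoint is then reported as a root.

    from fractions import Fraction

    N = 40                                    # modeled machine precision (assumed value)
    n = 50                                    # n > N, as in the construction above
    f = lambda x: (x - Fraction(1, 2)) ** 2 * (x - Fraction(3, 4)) - Fraction(1, 2 ** n)

    def machine_sign(v):
        # The emulated machine cannot distinguish anything below 2^-N from zero.
        return 0 if abs(v) < Fraction(1, 2 ** N) else (1 if v > 0 else -1)

    a, b = Fraction(0), Fraction(1)
    for _ in range(200):                      # plenty of iterations for this illustration
        m = (a + b) / 2
        s = machine_sign(f(m))
        if s == 0:
            break                             # the machine "sees" f(m) = 0 here
        if machine_sign(f(a)) * s < 0:
            b = m
        else:
            a = m
    print(float(m))                           # 0.5, although the true zero lies in (3/4, 1)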





As for the meaning of such a "fake" root, I would say it alludes to the fact that the Intermediate Value Theorem is equivalent to the nonconstructive proposition known as the lesser limited principle of omniscience.



Define a binary sequence $\{a_n\}_{n=1}^\infty$ by
\begin{equation}a_n=\begin{cases}0 &\text{ iff either } 2k+3 \text{ is the sum of 3 primes for all } k\leq n \text{ or there exists } k<n \text{ s.t. } a_k=1 \\ 1&\text{ otherwise.}\end{cases}\end{equation}
Define $a=\sum_{n=1}^\infty a_n2^{-n}$, and apply the bisection algorithm to the function $f$ defined on $[0,1]$ by $f(x)=\left(x-\frac12\right)^2\left(x-\frac34\right)-a$. As long as our computation is limited to some finite precision, the algorithm will output $x=\frac12$ as a root of $f$. This output is correct (which I take to mean that it is either identical to or approximately close to a root) if and only if the odd Goldbach conjecture is true.



The way that the binary sequence $\{a_n\}_{n=1}^\infty$ is defined is meant to invoke the limited principle of omniscience, a nonconstructive principle which implies the lesser limited principle of omniscience.





Disclaimer (in response to the valid concerns raised by Euro Micelli): My "answer" is not trying to provide affirmation of the question in the title, as I would say the answer to the question posed in the title is "not yes". I will note that even arbitrary precision is still subject to available memory and computation time (as far as I know). I figure we have two sides of the same coin: the bisection method is not constructive, and neither is the definition of the function $f$ on $[0,1]$. Indeed there are ways to preclude such a false output, and my response has only considered the algorithm underlying the proof of the Intermediate Value Theorem in the most classical and basic setting. I respond to questions on this forum by trying to provide the question asker with some insight and perspective to the best of my knowledge given the contents of their initial post.






  • 2




    The mild issue I have with this explanation is that it unwittingly sidesteps the question being asked and answers a subtly different one. The proposed function is being pathologically constructed so that the suggested computer cannot avoid the near zero by the use of any conceivable numerical algorithm. Here, we are effectively blaming the algorithm for the limitations of the chosen computer to attack the given problem. Precision limitation is of course a critical consideration for numerical analysis, but this example doesn’t illustrate a limitation of the Bisection method specifically.
    – Euro Micelli
    Dec 2 at 3:43












  • As long as the precision is somehow limited, we can find a positive integer $n$ so that the algorithm stops by outputting the "fake" root at $x=\frac12$ when applied to the function $f(x)=\left(x-\frac12\right)^2\left(x-\frac34\right)-2^{-n}$ on $[0,1]$. Of course for a given function $f$ we can always have better precision so that the algorithm is not duped by some $\hat{x}$ where $f$ is "close" to zero but not identically zero. I am merely noting that for a given precision limitation we can always find a function such that the algorithm fails when applied to the particular function.
    – Matt A Pelto
    Dec 2 at 5:15












