Binary Representation of the Collatz Conjecture (Mathematics Stack Exchange)
What is the benefit of looking at the binary representation of the Collatz conjecture? I know that it makes the computation easier, because there is really only one arithmetic operation involved: multiplying by 3. Then we simply shift the resulting binary number to the right whenever its least significant bit (LSB) is 0.
Does this offer a faster algorithm (complexity-wise)?
I know that the normal algorithm for Collatz doesn't have a proven upper bound in Big-O notation (it's not known to be analysable).
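For concreteness, here is a minimal sketch of the step function described above using only bit operations (Python, chosen arbitrarily since the question names no language; `collatz_steps` is a hypothetical helper name):

```python
def collatz_steps(n: int) -> int:
    """Count Collatz steps from n down to 1, using only shifts and adds.

    Even n: n & 1 == 0, so shift right (divide by 2).
    Odd n:  compute 3n + 1 as (n << 1) + n + 1, i.e. shift-and-add.
    """
    steps = 0
    while n != 1:
        if n & 1 == 0:            # LSB is 0: n is even
            n >>= 1               # divide by 2
        else:                     # LSB is 1: n is odd
            n = (n << 1) + n + 1  # 3n + 1 without an explicit multiply
        steps += 1
    return steps

print(collatz_steps(27))  # 27 is a classic long trajectory: 111 steps
```

Note that this only changes constant factors: multiplying a $k$-bit number by 3 is still $\Theta(k)$ work, so the binary view does not by itself yield a better asymptotic bound.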
algorithms binary conjectures
asked Apr 8 '15 at 8:12 – alkabary
2 Answers
It is natural to consider (and analyze) the Collatz map not as an operation on numbers but on strings. The most obvious candidates are the strings representing the numbers in bases $2$, $3$, and $6$. In base $6$ the Collatz map works like a cellular automaton, i.e., it can be computed for the different digits "in parallel". The reason for this is the following: $\div\,2$ and $\times\,3$ are the same in base $6$; furthermore, the resulting carries have no long-range effects in this base. (Note that for multiplications or divisions in some base you usually have to work from the left or from the right, and cannot decide on a particular digit until all digits to the left or to the right of it have been worked out and the remaining carry is determined.)
edited Nov 22 at 10:44 · answered Apr 8 '15 at 8:58 – Christian Blatter
Wondering: what do we mean by "carries" in the sense that "carries have no long-range effects in this base"?
– Jonathan Rayner
Nov 22 at 10:35
@JonathanRayner: See my edit. Hope it's clearer now.
– Christian Blatter
Nov 22 at 10:46
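The "no long-range carries" point can be checked concretely for the $\times 3$ direction (division by 2 behaves analogously from the other end). The sketch below (Python, chosen arbitrarily; `times3_digit` is a hypothetical helper name) verifies that when multiplying by 3 in base 6, the carry leaving each digit position depends only on that digit, never on the incoming carry, so each output digit is determined by a two-digit window:

```python
# Claim: in a base-6 multiplication by 3, the carry out of each digit
# position depends only on that digit, not on the incoming carry.
# (Carries are at most 2, since 3 * 5 = 15 = 2 * 6 + 3.)
for d in range(6):
    carries_out = {(3 * d + c_in) // 6 for c_in in range(3)}
    assert len(carries_out) == 1   # same carry out regardless of carry in

# Hence digit i of 3n depends only on digits i and i-1 of n:
def times3_digit(d_i: int, d_right: int) -> int:
    """Digit i of 3n in base 6, from digits i and i-1 of n."""
    carry_in = (3 * d_right) // 6  # determined by the right neighbour alone
    return (3 * d_i + carry_in) % 6

# Sanity check against ordinary arithmetic.
for n in range(1, 500):
    digits, m = [], n
    while m:
        digits.append(m % 6)       # little-endian base-6 digits of n
        m //= 6
    digits.append(0)               # leading zero slot for the top carry
    out = [times3_digit(d, digits[i - 1] if i else 0)
           for i, d in enumerate(digits)]
    assert sum(d * 6 ** i for i, d in enumerate(out)) == 3 * n
print("x3 in base 6 is a local (two-digit window) rule")
```

This locality is exactly what makes the cellular-automaton ("in parallel") view possible: no digit has to wait for a carry chain to resolve.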
I don't know much about the second part of the question (speed of the algorithm in terms of complexity), but one advantage of the binary representation is that your "win" condition is clearly represented:
"Does every natural number eventually reach the binary string $100dots 0$? "
We can go a step or two further. Notice that if the string $n = 100dots 0$ appears, then the previous step was a string (of length one less than $n$) of the form
begin{align}
n-1 = 11dots1
end{align}
Multiplying by 3 in binary is the same as multiplying by the string $11$, which is the same as multiplying by $10 + 1$ and so we notice that
begin{align}
n-1 &= 11dots1\
implies frac{n-1}{3} &= 1010dots 101
end{align}
where I have been lazy about indicating the length of the strings. We can then go further by padding with an arbitrary number of zeros:
begin{align}
2^k(frac{n-1}{3}) = 1010dots10100dots00
end{align}
which means that the next step after this replaces the trailing $100dots00$ part by $011dots11$:
begin{align}
2^k(frac{n-1}{3}) - 1 = 1010dots10011dots11
end{align}
and thereafter things get more complicated. One might argue that these patterns don't particularly get us far, but at the very least they don't appear as clearly in base 10, so perhaps there's some value in investigating binary.
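These string patterns are easy to check numerically. A small sketch (Python, chosen arbitrarily since the post names no language) verifies them for $n = 2^{2m}$; the exponent must be even so that $3$ divides $n - 1$:

```python
# Verify the claimed binary string patterns for n = 2**(2m).
for m in range(2, 8):
    n = 2 ** (2 * m)
    assert bin(n)[2:] == "1" + "0" * (2 * m)        # n     = 100...0
    assert bin(n - 1)[2:] == "1" * (2 * m)          # n - 1 = 11...1
    pred = (n - 1) // 3                             # the odd predecessor
    assert 3 * pred + 1 == n                        # one odd Collatz step back
    assert bin(pred)[2:] == "10" * (m - 1) + "1"    # pred  = 1010...101
    k = 5                                           # arbitrary zero padding
    padded = 2 ** k * pred                          # 1010...10100...0
    assert bin(padded - 1)[2:] == "10" * (m - 1) + "0" + "1" * k
print("all string patterns verified")
```

For example, with $m = 3$: $n = 4096$, $n - 1 = 4095 = 111111111111_2$, and $(n-1)/3 = 1365$... wait, for the length-$2m$ case $n = 64$, $n - 1 = 63 = 111111_2$, and $63/3 = 21 = 10101_2$, matching the $1010\dots101$ pattern.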
I like the idea of pursuing base 6 as mentioned in Christian Blatter's answer, although one disadvantage is that the price of simplifying our "step-by-step" computations is that our "win" condition becomes more obscured: the end goal is to reach the numbers $2^k$, and in base 6 their string representations don't follow a particularly "nice" pattern (or at least I don't think they do; maybe I'm wrong?).
Perhaps roughly: base 6 may be advantageous in seeking to disprove the conjecture, while base 2 may be advantageous in seeking to prove the conjecture.
(A lot of what I wrote was inspired by/confirmed by this post.)
edited Nov 23 at 13:44 · answered Nov 22 at 12:56 – Jonathan Rayner