Efficient way to count the number of ways to select 3 numbers from a given list such that their bitwise AND equals 0


























Suppose a list A contains non-negative integers smaller than $2^8$.



E.g. A = {4, 9, 6, 1, 15, 8, 3, 5, 18, 7}.



I want to count the number of ways of selecting 3 members of A such that their bitwise AND equals 0.



An example of 3 members of A whose bitwise AND equals 0 would be {4, 6, 9}:



Binary of 4 = 0100
Binary of 6 = 0110
Binary of 9 = 1001

Therefore 4 & 6 & 9 = 0.



For the example above, exhaustive search finds 82 ways. The exhaustive search uses 3 nested loops, so the running time is $O(n^3)$. Example of the brute-force approach in Python:



counts = 0
A = [4, 9, 6, 1, 15, 8, 3, 5, 18, 7]

for i in range(0, len(A)):
    for j in range(i + 1, len(A)):
        for k in range(j + 1, len(A)):
            if A[i] & A[j] & A[k] == 0:
                counts += 1

print(counts)  # prints 82


I want to know whether it is possible to do better than exhaustive search, i.e. better than $O(n^3)$. If the condition were instead that the 3 members sum to a certain value, we could store values in a hash table and run 2 loops to improve the bound, but I don't see how the same approach works for bitwise AND.



Edit: Noble Mushtak pointed out that, by keeping separate bits in different hash tables, I can achieve a running time of $O(n^2)$. Now I wonder: is this the best possible bound for this problem?










combinatorics algorithms combinations computational-complexity






asked Dec 27 '18 at 23:50, last edited Dec 28 '18 at 0:54 – HCN108












  • You can skip the k loop when A[i] & A[j] == 0 and do counts += len(A) - j - 1 instead.
    – Daniel Mathias, Dec 28 '18 at 4:16


















1 Answer

Just like addition, AND is an associative operation, so I don't understand your argument against hash tables for this problem. Using a table worked fine for me:



A = [4, 9, 6, 1, 15, 8, 3, 5, 18, 7]
counts = 0
# Here, i is the index of the middle number in the triplet we are going to pick
for i in range(len(A)):
    # storage[l] represents the number of pairs such that
    # A[j] & A[i] == l for some j < i.
    # Thus, j is the index of the first number in the triplet we are going to pick.
    storage = [0]*(1 << 8)
    for j in range(i):
        storage[A[j] & A[i]] += 1
    # Finally, k is the index of the last number in the triplet.
    for k in range(i+1, len(A)):
        # If l & A[k] is 0, then we found some pairs for which
        # A[j] & A[i] & A[k] == 0 holds true,
        # so add the number of pairs (i.e. storage[l]) to counts.
        for l in range(len(storage)):
            if l & A[k] == 0: counts += storage[l]
print(counts)  # Outputs 82


Here, the complexity is $O(n^2 2^b)$, where $b$ is the number of bits, but since $b=8$, I don't think the $2^b$ is really that much of an issue.





Also, we can reduce the complexity even further to $O(n^2+n2^b)$ by using a two-dimensional table.



import copy

A = [4, 9, 6, 1, 15, 8, 3, 5, 18, 7]
counts = 0
# storage[i][l] represents the number of pairs with A[j] & A[j2] == l
# where j < j2 <= i.
storage = [[0]*(1 << 8)]
# Again, i and j are the indexes of
# the middle and first numbers, respectively, of the triplet.
for i in range(len(A)-1):
    # Add all the pairs A[j] & A[i] to storage[i]:
    for j in range(i):
        storage[i][A[j] & A[i]] += 1
    # Then, copy all of the pairs from storage[i] to storage[i+1],
    # since storage[i] is a cumulative table of
    # all the pairwise ANDs up to and including index i.
    storage.append(copy.deepcopy(storage[-1]))
# Again, k is the index of the last element in the triplet.
for k in range(1, len(A)):
    # If l & A[k] is 0, then add all of the pairs before k
    # (in other words, up until k-1) whose AND value was l.
    # According to our storage array,
    # the number of pairs is storage[k-1][l].
    for l in range(len(storage[k-1])):  # iterate over all 2^8 possible AND values
        if l & A[k] == 0: counts += storage[k-1][l]
print(counts)  # Outputs 82





answered Dec 28 '18 at 0:06, last edited Dec 28 '18 at 17:30 – Noble Mushtak
  • If I read that correctly, you are comparing l & A[k] == 0 256 times for each A[k]. That's far more than HCN's code.
    – Daniel Mathias, Dec 28 '18 at 4:12










  • @DanielMathias I think you're not reading my code correctly. By looking at the for loops, you can clearly see that my complexity is $O(n^2 2^b)$ while OP's complexity is $O(n^3)$. However, by your logic, my complexity would be $O(n^3 2^b)$, which is clearly not the case.
    – Noble Mushtak, Dec 28 '18 at 12:43










  • My point is that your running time will be greater because $n$ is likely to be much less than $2^b$. Your code is therefore less efficient. To see this, count the number of times your code executes counts += storage[l] when storage[l] is $0$.
    – Daniel Mathias, Dec 28 '18 at 13:15










  • In this particular case, Mushtak's code isn't efficient, but it is asymptotically faster if we assume the number of bits is constant and the length of A can grow larger than $2^b$; that is actually what I was looking for, as I didn't know the trick of treating separate bits in bitwise operators. I also understand Daniel's point that $n$ cannot be bigger than $2^b$ when there are only $2^b$ unique numbers, so this approach doesn't yield a better running time in real applications. But again, I was confused about the question in the first place.
    – HCN108, Dec 28 '18 at 16:22












  • @DanielMathias I see your point now. I made the algorithm a little better by reducing it to $O(n2^b)$, which is now significantly faster if $n$ is about $2^b$.
    – Noble Mushtak, Dec 28 '18 at 17:11










