Find every possible pair of numbers that sum up to n























I have the following code, which finds every possible pair of numbers that sum up to a number n:



lst = [int(input()) for i in range(int(input()))]
num = int(input())  # Amount to be matched
cnt = 0
lst = list(filter(lambda i: i <= num, lst))  # Remove numbers greater than `num`
for i in lst:
    for j in lst:
        if i + j == num:
            cnt += 1
print(int(cnt / 2))


For example, if I enter



5 #How many numbers
1 < #Start of number input
4 <
5
7
1 < #End of number input
5 #Amount to be matched


It will return 2 because there are two pairs whose sum equals 5: (1, 4) and (4, 1) (the numbers I marked with <).



The problem is that the complexity is O(n²), which runs slowly on large inputs. Is there a way to make this run faster?



Another example:



10 #How many numbers
46 #Start of number input
35
27
45
16
0 <
30 <
30 <
45
37 #End of number input
30 #Amount to be matched


The pairs are (0, 30) and (0, 30), so it returns 2.










python algorithm k-sum

asked 2 days ago by phwt; edited yesterday by 200_success
• If you store them in a set you can, for each number entered, find if amount - number is in the set. If you ask the user to enter the amount BEFORE entering all the numbers, you can do it in real time, so that by the moment the user enters the last number you'll already know all the appropriate pairs. – bipll, 2 days ago

• @bipll Will try, thanks! – phwt, 2 days ago

• @phwt - that's true only if no numbers appear more than once. There's a bit more work involved in the general case - see my answer. – Toby Speight, 2 days ago

• Are the numbers in the list all non-negative integers? Do you know anything about the maximum value of those integers? – Eric Lippert, 2 days ago
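For reference, a literal sketch of bipll's streaming idea (my own code, not from the thread; as Toby Speight notes above, a plain set undercounts when values repeat):

n = int(input())        # how many numbers
num = int(input())      # target, asked for BEFORE the numbers
seen = set()
cnt = 0
for _ in range(n):
    x = int(input())
    if num - x in seen:  # some earlier value completes a pair
        cnt += 1
    seen.add(x)
print(cnt)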















4 Answers

















Answer (accepted, 10 votes) – Toby Speight










Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.



You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership, we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.Counter, which has amortized O(1) insertion and lookup), then we come down to O(n).



If we have a Counter, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).



We could do with some automated tests. Consider importing the doctest module and using it. Some good test cases to include:



1, [] → 0
1, [1] → 0
1, [0, 1] → 1
0, [-1, 1] → 1
0, [0, 1] → 0
4, [1, 4, 3, 0] → 2
4, [1, 1, 3, 3] → 4
4, [2, 2, 2, 2] → 6
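A minimal sketch of that Counter-based approach (the count_pairs name and wrapper are my own; the doctest encodes the cases above):

from collections import Counter

def count_pairs(target, numbers):
    """Count index pairs i < j with numbers[i] + numbers[j] == target.

    >>> [count_pairs(t, ns) for t, ns in [
    ...     (1, []), (1, [1]), (1, [0, 1]), (0, [-1, 1]),
    ...     (0, [0, 1]), (4, [1, 4, 3, 0]), (4, [1, 1, 3, 3]), (4, [2, 2, 2, 2])]]
    [0, 0, 1, 1, 0, 2, 4, 6]
    """
    counts = Counter(numbers)
    pairs = 0
    for value, count in counts.items():
        complement = target - value
        if complement == value:
            # special case: value is exactly half the target;
            # count copies of it form count*(count-1)/2 pairs
            pairs += count * (count - 1) // 2
        elif complement > value:
            # visit each unordered {value, complement} pair once;
            # a Counter returns 0 for a missing key
            pairs += count * counts[complement]
    return pairs

if __name__ == "__main__":
    import doctest
    doctest.testmod()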





Answer (6 votes) – Eric Lippert













So far every solution given has been O(n²) or O(n log n), but there is an O(n) solution, which is sketched as follows:




• Get your input into a list, as you have done. Obviously this is O(n).

    • Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.

    • Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).

    • Now we take our target, call it sum and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key is k. We compute sum-k. Now there are two cases.


      • Case 1: if sum-k == k then check the map to see if the value associated with k is 2 or greater. If it is, then we have a pair (k, k). Output it.

      • Case 2: if sum-k is not k then check the map to see if sum-k is in the map. If it is, then we have a pair (k, sum-k).



    • The enumeration enumerates at most n keys, and each step is O(1), so this step is also O(n)

    • And we're done, with total cost O(n).


    Now, can you implement this solution?
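One possible Python rendering of this sketch (my own code, not part of the answer; a dict plays the role of the map, and each unordered pair is reported once):

def pairs_summing_to(total, items):
    counts = {}
    for item in items:                  # O(n): build the item -> count map
        counts[item] = counts.get(item, 0) + 1
    pairs = []
    for k in counts:                    # O(n): enumerate at most n keys
        other = total - k
        if other == k:
            if counts[k] >= 2:          # Case 1: need two copies of k
                pairs.append((k, k))
        elif other > k and other in counts:
            pairs.append((k, other))    # Case 2: distinct partner is present
    return pairs

# e.g. pairs_summing_to(30, [46, 35, 27, 45, 16, 0, 30, 30, 45, 37]) == [(0, 30)]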






• I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n). – ChatterOne, 2 days ago

• @ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs. – Eric Lippert, 2 days ago

• @ChatterOne: I note that the sorted-key dictionary types and immutable dictionary types are typically balanced binary trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries. – Eric Lippert, 2 days ago

• @EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of the Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks. – Brian, 2 days ago

• @Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure. – Eric Lippert, 2 days ago


















Answer (2 votes) – Reinderien













    Other minor suggestions:




    • Don't leave your input()s blank. Pass a prompt so that the user knows what they're entering.

• The first time you initialize lst, it doesn't need to be in memory; it can be left as a generator (parens instead of brackets).

    • The second time you initialize lst, it does not need to be mutable, so make it a tuple instead of a list.
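A sketch of the input-reading code with those three suggestions applied (the prompt wording is my own; note that a lazy generator is only consumed when tuple() runs, so the target is read first here to keep the read order well defined):

n = int(input("How many numbers? "))
num = int(input("Amount to be matched? "))
nums = (int(input("Number: ")) for _ in range(n))  # generator: nothing stored yet
lst = tuple(i for i in nums if i <= num)           # consumed here, into a tuple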






Answer (2 votes) – David













You can use a sorting algorithm to first sort the list; this can be done (in many ways) with a complexity of O(n log n).
Once the list is sorted (smallest to largest, for example), the problem can be solved with complexity O(n) as follows:



lst.sort()          # assumes the input list is named lst, the target num
cnt = 0
head = 0
tail = len(lst) - 1
while head < tail:
    pair_sum = lst[head] + lst[tail]   # avoid shadowing built-ins list/sum
    if pair_sum == num:
        cnt += 1
        head += 1
        tail -= 1
    elif pair_sum > num:
        tail -= 1
    else:
        head += 1


This results in an overall complexity of O(n log n).



Note: this runs under the assumption that all elements are unique. This can easily be fixed, depending on how you want to handle cases with duplicate elements.






• Since the sample given by the original poster indicates that the elements are not unique, you do need to handle duplicates. – Eric Lippert, 2 days ago

• There is a linear time solution, which I've given in my answer. See if you can find it! – Eric Lippert, 2 days ago

• @EricLippert With an ideal map implementation there is a linear time solution. Unfortunately Python's map does not guarantee constant-time reads or writes; the worst-case time for reads and writes is linear with respect to the size of the map, which makes the worst-case total time for an algorithm using maps O(n²). – David, 18 hours ago










