Ivan

Daily Coding Problem #1

For quite some time I found myself procrastinating after getting home from work, until a few days ago, when I stumbled upon the Daily Coding Problem (DCP) and decided to give it a shot.
The code is in C#.

How does it work?

The folks at DCP send you, every day, a problem that was asked at a top company, for you to solve. The premium membership also offers the opportunity to verify your solution.

Problem #1

This problem was recently asked by Google.

Given a list of numbers and a number k, return whether any two numbers from the list add up to k.

For example, given [10, 15, 3, 7] and k of 17, return true since 10 + 7 is 17.

Bonus: Can you do this in one pass?

My solution

using System;
using System.Collections.Generic;
using System.Linq;

namespace Problem01
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var numbers = Console.ReadLine()
                .Split(' ')
                .Select(int.Parse)
                .ToList();

            var k = int.Parse(Console.ReadLine());

            var result = TwoNumbersEqual(numbers, k);
            Console.WriteLine(result);
        }

        public static bool TwoNumbersEqual(List<int> numbers, int k)
        {
            var set = new HashSet<int>();

            foreach (var number in numbers)
            {
                var remaining = k - number;

                if (set.Contains(remaining))
                {
                    return true;
                }

                set.Add(number);
            }

            return false;
        }
    }
}

Explanation

The naive solution would be two nested for loops, checking whether any combination of two numbers adds up to the k we want.
But because I wanted the bonus tag, after some thought I realized that you can iterate over the input once and, for each item, compute the difference between k and that item. You then check whether you have already seen that difference before, and the HashSet data structure allows constant-time lookups (because it is a hash table).
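For contrast, here is a minimal sketch of the naive two-loop version mentioned above (a hypothetical TwoNumbersEqualNaive helper that would slot into the Program class; it is not part of the posted solution):

public static bool TwoNumbersEqualNaive(List<int> numbers, int k)
{
    // Try every pair (i, j) with i < j: O(n^2) comparisons.
    for (var i = 0; i < numbers.Count; i++)
    {
        for (var j = i + 1; j < numbers.Count; j++)
        {
            if (numbers[i] + numbers[j] == k)
            {
                return true;
            }
        }
    }

    return false;
}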

I will try to do the daily problem every day, but sometimes life gets in the way :)
Solutions are posted on my GitHub account.

Feel free to leave a like and let me know in the comments if I should continue to post my solutions.

If you have any better solution in mind, by all means share it, so we can learn from each other.

Top comments (58)

Raunak Ramakrishnan

One pass in Python using built-in list and set types

def sum_exists(l,k):
    s = set()
    for x in l:
        if k-x in s:
            return True
        s.add(x)
    return False
Vincent Dedo

Can I suggest you use more explicit variable names?

def sum_exists(numbers, target):
    numbers_seen = set()
    for number in numbers:
        if target - number in numbers_seen:
            return True
        numbers_seen.add(number)
    return False

I prefer a more concise implementation, although it's probably not O(N):

def sum_exists(numbers, target):
    differences = {target - number for number in numbers}
    return bool(differences.intersection(numbers))
Muhammed H. Alkan

This should be the shortest version of your code:

def sum_exists(ns, t):
    return bool({t - n for n in ns} & set(ns))
Vincent Dedo

I've just realised that my algorithm is wrong; it would fail for something like 10 and [5]. Guess I need to put a bit more thought into my comments.

mæghith | Ramón FSM

ATM I see 2 proposals for JS; here's another one, taking advantage of the fact that JS arrays are sparse:

function twoNumbersEqual(inList, k) {
    const lookup = [];

    for (const item of inList) {
        if (lookup[k - item]) {
            return true;
        }
        lookup[item] = k - item;
    }
    return false;
}

console.log(twoNumbersEqual([10, 15, 3, 7], 17));

No need for fancy hash structures (because arrays in JavaScript are already hashes).

Serhii Sakal

Yes, you could, but if you look into the lookup array and imagine k = -1, you will create an array with a lot of empty slots:

[empty × 3, 52, empty × 3, 48, empty × 2, 45, empty × 4, 40]

lookup.length === 16

Now, for example, if the input is an array with big numbers, then lookup.length will be greater than the maximum number in inList.

mæghith | Ramón FSM

So… you are saying that having a very big sequence of empty space is a problem.

Can you explain why that is a problem?

Serhii Sakal

It's not such a big problem now, it's just my opinion, but as my friend suggested, it's better to use new Map or new Set anyway, for better performance.

jsperf.com/hashtable-vs-array

Tiago Romero Garcia

Here is my O(N) solution using Set:

function doTwoNumbersSumUpToK(list, k) {
    const visitedNumbers = new Set();
    for (let number of list) {
        visitedNumbers.add(number);
        if (visitedNumbers.has(k-number)) {
            return true;
        }
    }
    return false;
}
Adi biton

You should move the line

visitedNumbers.add(number);

after the if clause (otherwise sums like 4 + 4 = 8 will be counted).

A comparison I made of Set vs. hash table:
jsperf.com/add-to-k

Minja Davidović

There was actually an article recently that touched on the subject mentioned by Serhii Sakal: dev.to/_codingblocks/when-is-an-ar...
I was wondering: why not just use an object instead of an array?
Let's take a simple example, where you have something like this:

const a = [1, 2];
a[1000] = 1000;

console.log(a);

and look at the size of a.
Even with your example, the created lookup array is bigger than the given array, so you're using more memory than needed.

Phaust94

Hash lookup is O(n) due to collisions. You cannot assume no collisions, so it's O(n²).

Martin Häusler

That's only partially true. If we assume:

  • a proper hash function (for integers their own value is a very good hash code) and

  • a hash set implementation that resizes once a certain percentage of buckets have been filled (which all standard collection libraries do)

...then not only is the cost of a hash collision amortized, but also the cost of rehashing in case the hash set runs out of space. There's a whole lot of theory behind this (look up "amortized complexity analysis"), but generally speaking, provided a proper implementation of both the set collection itself and the hash function of the objects to be stored, hash set lookups are always amortized O(1).

Phaust94

I think you might be misunderstanding how hash tables work.
The idea is that there is a "bounded" table for hashes. Hence, the hashes of ints are not the ints themselves, but the ints modulo some number. Because of that, it might be the case that the numbers in the given list are all divisible by that number, so they all fall into the same hash bucket, the bucket becomes a simple linked list, and lookup is O(n). Moreover, there is no hash function that deals with this problem, unless you assume infinite storage space LOL

Martin Häusler

No, I understand hash tables perfectly fine. Every bucket in a hash table can be implemented as a list, that's true (however, alternatives do exist as well). That list has O(n) complexity for "contains", that's true as well. However, each hash table implementation worth its salt will perform a procedure called "rehashing".

For example, let's say you have a hash table with 10 buckets. Now entries 1 through 7 are inserted, no problem. If you have a decent hash function, you will end up with 7 buckets of size 1 and 3 empty buckets. Collisions MAY occur, but if your hash function is any good they will be extremely rare. Now we have a 70% fill ratio in our table. Entry #8 wants to be inserted. Instead of just putting it in (risking more collisions), we create a NEW list of buckets which is larger than the old one (typically twice as large), so now we have 20 buckets. We reinsert (rehash) our first 7 entries into the new 20-bucket table, then insert entry #8. We continue inserting until we are again 70% full, then we resize to 40 buckets, and so on. No infinite storage needed. The amortized cost of contains is O(1), and O(1) for insertion as well.

I studied all of that long enough, but you don't have to take my word for it - just check it on wikipedia or Stackoverflow.
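To make that resize-and-rehash procedure concrete, here is a toy C# sketch of a chained hash set for ints that doubles its bucket count once it is roughly 70% full. It is purely illustrative and not tied to how any particular standard library implements it:

using System.Collections.Generic;

// Toy chained hash set for ints, only to illustrate resize-and-rehash.
public class ToyIntHashSet
{
    private List<int>[] buckets = new List<int>[10];
    private int count;

    public bool Contains(int value)
    {
        var bucket = buckets[Index(value)];
        return bucket != null && bucket.Contains(value);
    }

    public void Add(int value)
    {
        if (Contains(value)) return;

        // Grow once the fill ratio passes ~70%, so buckets stay short
        // and the average lookup cost stays O(1).
        if (count + 1 > buckets.Length * 0.7)
        {
            Resize(buckets.Length * 2);
        }

        var i = Index(value);
        if (buckets[i] == null)
        {
            buckets[i] = new List<int>();
        }
        buckets[i].Add(value);
        count++;
    }

    private void Resize(int newBucketCount)
    {
        var old = buckets;
        buckets = new List<int>[newBucketCount];
        count = 0;

        // Reinsert (rehash) every existing entry into the new, larger table.
        foreach (var bucket in old)
        {
            if (bucket == null) continue;
            foreach (var value in bucket)
            {
                Add(value);
            }
        }
    }

    private int Index(int value) =>
        (value.GetHashCode() & int.MaxValue) % buckets.Length;
}

With a reasonably uniform distribution each Add and Contains is amortized O(1); as the thread below points out, adversarial inputs that all land in the same bucket can still degrade this to O(n) per operation.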

danstur

Question: What happens if the hash of the values being inserted happens to be the same for all values? (Hint: Remember that you have to first check if the value already exists in the set before you can add it).

Is amortized cost for n inserts still O(n) in that scenario?

Or an even easier "proof": if you're doing open hashing, as most implementations do, a hashset that contains values with the same hash degrades to a simple linked list. What is the amount of time needed to insert n values into a linked list if you have to check for duplicates first?

The amortized performance you learned for hashsets in school only works under the assumption of a reasonably well distributed hash function - in the worst case it degrades.

Martin Häusler

If all values deliver the same hash code then I'd argue that you have a pretty useless hash function... as I said, all of the argumentation I provided is based on the assumption of a proper hash function.

No, you don't have to check first if something is in a hash set before you insert it. Simply look up the bucket and the position where the element would be, then put it there if it's absent, or don't if it's already there. If your buckets are of size 0, 1 or 2 (as they should be), then the algorithm is O(1).

Bad hash codes will result in bad hash performance. No rocket science. If you take an object from a standard library, be it a number, a string, you name it - they are bound to have proper hash functions with good distributions. If a real-world hash set degenerates into a list (which is possible), I would question the programmer - not the theory.

danstur

The problem is that even the best hash function will always have pathological cases. In this case with integers the problem isn't the hash function of the integer, but the necessary transformation to fit into the set, which always requires some kind of modulo computation.

If hash mod bucket_size is always the same number, you have a problem. If you know the algorithm used to convert the hash to the bucket number, the starting size, and the growth algorithm of the set, you can always pick your numbers in such a way that you'll get collisions. This has nothing to do with whether the hash algorithm of the data type is bad or good (integers have a pretty perfect naive hash algorithm, after all).

"No, you don't have to check first if something is in a hash set before you insert it. Simply look up the bucket and the position where the element would be, then put it there if absent, or don't if it's not there."
Good explanation of how you check to see if the element exists - that's exactly how you'd do it. And since we've already established that for certain inputs all values will be in the same bucket, this degenerates to O(N) in the worst case.

Consequently the worst-case performance here is O(n²) and not O(n). That's simply how hashsets work. Is that important? It depends on your use case. If you aren't worrying about an attacker, or the input is outside the control of any attacker, you're good. If not, you've just opened yourself up to a classic DDoS attack by not paying attention to those asterisks next to the complexity numbers.

And before you disregard this as purely theory: This issue has plagued web frameworks and their handling of POST form data amongst many other examples I can think of. You can read bugs.python.org/issue13703 for one classic example.

Martin Häusler

Hold your horses. We went from a discussion with somebody who clearly was unfamiliar with hashing techniques to malicious attack vectors as an argument to prove a generally accepted theory wrong? Good grief, that's a stretch. Sorry, this is getting ridiculous - I just wanted to clarify a misunderstanding, that's all. If we all explained things this way, we would have an army of students running about, firmly believing that the common access complexity for a hash set is O(n). I highly doubt that this is useful to anybody.

danstur

To each their own. I find "Hashsets have O(1) amortized lookup and insert performance assuming a relatively uniform hash distribution, otherwise O(n)" a perfectly reasonable explanation even if it is marginally more complex than "Hashsets always have O(1) amortized lookup and insert performance". I doubt that most students would fail to understand the first explanation.

It's also a valuable lesson to students to teach them that you have to specify whether you mean best case, average or worst case performance when analysing algorithms.

Juan Ignacio Cuiule

I have come up with this

sumPair :: Num a => (a, a) -> a
sumPair (x, y) = x + y

isMoreThanOnce :: Eq a => a -> [a] -> Bool
isMoreThanOnce x xs = (>1).length.(filter (==x)) $ xs

createPairs :: (Num a, Eq a) => [a] -> a -> [(a, a)]
createPairs xs x
  | isMoreThanOnce x xs = tuples.addX.whithoutX $ xs
  | otherwise = tuples.whithoutX $ xs
  where whithoutX = filter (/=x)
        tuples = map ((,) x)
        addX = (x:)

createAllPairs :: (Num a, Eq a) => [a] -> [(a, a)]
createAllPairs xs = concatMap (createPairs xs) xs

problem :: (Num a, Eq a) => a -> [a] -> Bool
problem k xs = elem k.map sumPair.createAllPairs $ xs
Casey Webb

Here's another solution in Haskell

import qualified Data.Set as Set

solve :: Integral a => a -> [a] -> Bool
solve = solve' Set.empty

solve' :: Integral a => Set.Set a -> a -> [a] -> Bool
solve' _ _ [] = False
solve' possibleAddends targetSum (x:xs)
  | Set.member x possibleAddends = True
  | otherwise = solve' (Set.insert addend possibleAddends) targetSum xs
  where addend = targetSum - x
Will Adams

In JavaScript you could do

function doTheyAddTo(a, k) {
    for (const n of a) {
        if (a.includes(k -= n)) {
            return true
        }
    }
    return false
}

doTheyAddTo([10, 15, 3, 7], 17)
Serhii Sakal

Looks good, but when you use includes, is it O(log n)?

jsperf.com/hashtable-vs-includes

Casey Webb

Two problems I can find.

The first is that I don't think you want to do k-=n, rather just k-n. As written, this will only work with pairs with the first number in the array.

Second, you need to limit the includes check to the remainder of the array, or you will fail in cases such as doTheyAddTo([5], 10), because 5 will be considered as a valid addend of itself.

To fix the latter, refactor into a for (let i = 0, n; i < a.length; n = a[i++]) loop to get access to the index, or use the much more readable find, e.g...

function doTheAddTo(a, k){
  a.find((n, i) => {
    // because we're in a closure instead of a loop iteration,
    // we also don't have to worry about prematurely returning false
    // and can instead just return the value of our check
    return a.includes(k-n, i+1)
  })
}

Note, using find instead of reduce or filter will maintain the short circuit you had.

Will Adams

Your function does not return anything though. Even if you do return the value of the initial find it won't be a Boolean.

Casey Webb

Whoops, too much REPL, not enough real code. Should be return typeof a.find(...) !== 'undefined'.

Casey Webb

If you enjoy this, you may also like Project Euler

Jay

Rust solution:

use std::collections::HashSet;

fn two_add_upto(list: &[i32], k: i32) -> bool {
    let mut set = HashSet::with_capacity(list.len());

    for i in list.iter() {
        let diff = k - i;
        if set.contains(&diff) {
            return true;
        }
        set.insert(i);
    }
    false
}

I made a pre-sized set to avoid the reallocation and copying of data. This allocates the O(N) space requirement up front.

Pavitra Golchha

Probably the shortest

const twoNumbersEqual = (a, k) => !!a.filter(x => a.includes(k-x)).length

twoNumbersEqual([10, 15, 3, 7], 17) // == true
twoNumbersEqual([10, 13, 4, 7], 17) // == true
twoNumbersEqual([10, 15, 3, 9], 17) // == false
Casey Webb

slight improvement with a short circuit (find vs filter):

const twoNumbersEqual = (a, k) => !!a.find(x => a.includes(k-x))
Casey Webb

I just realized both of these solutions fail in the case of twoNumbersEqual([5], 10). This can be fixed by passing the index to includes, as such...

(a, k) => !!a.find((x, i) => a.includes(k-x, i+1))

This should also theoretically be more performant.

Keith

The best way to do it in one pass would be: as you iterate through the array, store into a check array the number that would pair with the current item to total k, and then for each subsequent number check whether it already exists in the check array before storing its pair.
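If I read that description correctly, it is the same complement-tracking idea as the post's solution; a rough C# sketch of it (names are mine, and I'm using a HashSet where the comment says "check array") could be:

public static bool SumExistsOnePass(List<int> numbers, int k)
{
    // For each item, remember the value that would pair with it to reach k,
    // and check whether the current item was itself needed by an earlier one.
    var needed = new HashSet<int>();

    foreach (var number in numbers)
    {
        if (needed.Contains(number))
        {
            return true;
        }

        needed.Add(k - number);
    }

    return false;
}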

 
Raunak Ramakrishnan

This page lists the complexities of the major data structures in Python. Sets and dicts have a similar implementation. The lookup is O(1) in the average case; the hash function can lead to O(n) lookups in pathological cases.
Also, amortized O(n) is possible when dicts/sets are resized while adding new elements.

PS: For this specific problem, we do not need to use a set, as an array will do, i.e. the keys are integers. So if the range of numbers is limited, the array index provides a good hash function.
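As a rough illustration of that PS (my sketch, assuming all values are non-negative and bounded by a known maxValue), the set can be swapped for a plain boolean array indexed by value:

public static bool SumExistsBounded(List<int> numbers, int k, int maxValue)
{
    // "seen[v]" plays the role of the hash set, valid only for 0 <= v <= maxValue.
    var seen = new bool[maxValue + 1];

    foreach (var number in numbers)
    {
        var remaining = k - number;

        if (remaining >= 0 && remaining <= maxValue && seen[remaining])
        {
            return true;
        }

        if (number >= 0 && number <= maxValue)
        {
            seen[number] = true;
        }
    }

    return false;
}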

Ivan

I do not think there is an upper bound on the numbers, and they are also not consecutive, so using an array like a hashset is not useful in this particular situation.

Raunak Ramakrishnan

Yes. I was not sure of that constraint. In that case, set is better.

Bjorn Huntemann

This solution is, unfortunately, not correct. Let's say we have the set {1,2,3,10,16} with k=20. Clearly, there are no two numbers that sum up to k. Your code, and all the ones I've seen in the comments, would return true, because on the 3rd iteration (number=10) we find an entry in the set that matches, but it is the number itself.
Also, while your solution is O(n), I'm pretty sure it does two passes. If I remember correctly, building a hash set is itself O(n).

Ivan

In the given example, it would not return true, because the current number is added to the set after the check.
Also, I tried it and it returns false.
I am not sure about the cost of building the hashset; I could also use a dictionary, but that would be wasted space.

Bjorn Huntemann

Right, I am sorry, I misread. I thought you had built the hash set before you started the comparison pass, not during it. On closer inspection, good solution.

Nishant Kumar
// Input [10, 5, 7, 3] k=17

function findSumPair(numbs, sum) {
  if (!(numbs instanceof Array)) throw new Error("Invalid Numbers Input");

  const pairSet = new Set();
  let pair = undefined;

  numbs.some(item => {
    const temp = sum - item;
    if (temp > 0 && pairSet.has(temp)) {
      pair = [temp, item];
      return Boolean(pair);
    } else {
      pairSet.add(item);
    }
  });

  return pair;
}

console.log(findSumPair([10, 5, 7, 3], 17));
Serhii Sakal

JavaScript:
returning all possible matches:

function Problem01(arr, sum) {
  const hashTable = {};
  const res = [];

  arr.forEach(n => 
              !hashTable[n]
                ? hashTable[sum - n] = n
                : res.push([n, hashTable[n]])
             );

  return res;
}
(Carl) Esben Poulsen

I would sort the list and create a break condition in the innermost loop. When a sum bigger than k is observed, there is no point in continuing to test that number of the outer loop.

Sander in 't Veld

If we sort the list first, I can do you one better: have two iterators/indices, i at the start and j at the end. If the sum of the pointees is k, return true. If it is less, increment i. If it is more, decrement j. If i and j meet (and the sum did not equal k), return false. This is O(n).

Unfortunately sorting the list is O(n log(n)).
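A quick C# sketch of that two-pointer approach (my wording of the idea above, sorting a copy of the input first):

public static bool TwoNumbersEqualSorted(List<int> numbers, int k)
{
    // O(n log n) for the sort, then a single O(n) inward scan.
    var sorted = new List<int>(numbers);
    sorted.Sort();

    var i = 0;
    var j = sorted.Count - 1;

    while (i < j)
    {
        var sum = sorted[i] + sorted[j];

        if (sum == k) return true;
        if (sum < k) i++;
        else j--;
    }

    return false;
}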

(Carl) Esben Poulsen

Good point. I forgot that sorting is expensive

Simone Natalini

I don't know if it suits the problem, given that it'll output true/false for each item in the HashSet, but here's my take on the problem:

static void SumExists(HashSet<int> values, int k)
{
    foreach (var item in values)
    {
        Console.WriteLine(values.Count(x => x + item == k) > 0);
    }
}

I know Count() may add unneeded complexity, and it may not be one pass.
I'm looking for feedback here.