Solving the Blind 75 (1) - Two Sum
- tags: #Algorithms & Data Structures
- published
- reading time: 5 minutes
One way that I like to practice my skills as a developer is by solving coding problems. A famous post on Blind contains a list designed to teach developers the most important concepts and techniques in as few problems as possible. This list has come to be known as the Blind 75. In a series of blog posts, I will solve all of these problems and document my thinking and solutions. To start things off, I tackle the Two Sum problem.
Problem Statement
The Two Sum problem provides us with two inputs: an array of integers and a target integer. Given this information, we are asked to return the indices of the two numbers that add up to the target. We are told that each input has exactly one solution and that the same element may not be used more than once. The order in which we return the indices does not matter.
Let’s look at some examples to make this a little clearer:
First example
Input: numbers = [0, 1, 6, 3], target = 4
Output: [1, 3]
numbers[1] = 1 and numbers[3] = 3, which add up to the target, 4
Second example
Input: numbers = [5, 10, 9, 0], target = 15
Output: [0, 1]
numbers[0] = 5 and numbers[1] = 10, which add up to the target, 15
First Solution: Brute Force
The natural first reaction to this problem is to try a brute force approach. This means that we look at the first element and check whether any other element in the array, when added to it, matches the target. If not, we move on to the next element and check the remaining elements again. We repeat this process until we find a solution. Implemented in Java, this looks something like this:
public int[] twoSum(int[] numbers, int target) {
    // We look at every element in the numbers array
    for (int i = 0; i < numbers.length; i++) {
        // And for each element check the remaining elements
        // to see if we can add them and match the target
        for (int j = i + 1; j < numbers.length; j++) {
            // If the sum matches the target, we return the indices of the elements
            if (target == numbers[i] + numbers[j]) {
                return new int[] { i, j };
            }
        }
    }
    return null;
}
This solution has an $O(n^2)$ time complexity. The worst case for our algorithm is when the indices of the solution are the second-to-last and last positions in the array. To find the solution, we then have to check every element in the array except the last one, which takes $O(n)$ time. For each of these elements, we also have to check all the remaining elements in the array, which again takes $O(n)$ time. This means that the total time complexity is $O(n^2)$.
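To make this count concrete, with $n$ the length of the array: element $i$ is compared against the $n - i - 1$ elements that follow it, so the total number of comparisons is

$$\sum_{i=0}^{n-1} (n - i - 1) = \frac{n(n-1)}{2} = O(n^2).$$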
This solution has an $O(1)$ space complexity. The space that our solution requires does not grow as the provided array gets bigger, and we don't introduce an extra data structure. This means that we consider the space complexity to be constant.
More Efficient Solution: Using a HashMap
Our first solution gives us a baseline to work with. We now know that we can solve this problem in $O(n^2)$ time. The question then becomes: can we do better?
The key to finding a better solution is reversing our thinking. Instead of looking at an element and then checking all remaining elements for a match, we can look at the elements one by one and ask ourselves: have I already seen the element that, together with the current one, adds up to the target? This way, we only have to go through all the elements once, even in the worst case.
To do this, we have to keep track of the elements that we have already seen. A HashMap is an ideal data structure for this, because it offers average-case $O(1)$ time complexity for both storing and retrieving elements. Let's implement this in Java and see what the approach looks like:
import java.util.HashMap;

public int[] twoSum(int[] numbers, int target) {
    // Maps each value we have seen so far to its index
    HashMap<Integer, Integer> numsMap = new HashMap<>();
    for (int i = 0; i < numbers.length; i++) {
        // The value we still need to reach the target
        int complement = target - numbers[i];
        // If we have already seen the complement, we are done
        if (numsMap.containsKey(complement)) {
            return new int[] { numsMap.get(complement), i };
        } else {
            // Otherwise, remember the current value and its index
            numsMap.put(numbers[i], i);
        }
    }
    return null;
}
This solution has an $O(n)$ time complexity. The worst case scenario is still that the solution's indices are the second-to-last and last positions, but this time we only have to go through all the elements once to find the solution.
This solution has an $O(n)$ space complexity. The HashMap requires extra space and has to store $(n - 1)$ elements when the solution indices are at the end of the array.
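As a quick sanity check, here is a minimal, hypothetical test harness for the HashMap solution, using the two examples from the problem statement. The class TwoSumDemo and its main method are my own scaffolding, and the method is made static here so it can be called directly:

import java.util.Arrays;
import java.util.HashMap;

public class TwoSumDemo {
    // The HashMap-based solution from above, made static for this demo
    static int[] twoSum(int[] numbers, int target) {
        HashMap<Integer, Integer> numsMap = new HashMap<>();
        for (int i = 0; i < numbers.length; i++) {
            int complement = target - numbers[i];
            if (numsMap.containsKey(complement)) {
                return new int[] { numsMap.get(complement), i };
            }
            numsMap.put(numbers[i], i);
        }
        return null;
    }

    public static void main(String[] args) {
        // First example: prints [1, 3]
        System.out.println(Arrays.toString(twoSum(new int[] { 0, 1, 6, 3 }, 4)));
        // Second example: prints [0, 1]
        System.out.println(Arrays.toString(twoSum(new int[] { 5, 10, 9, 0 }, 15)));
    }
}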
Conclusion
We started by implementing a brute force solution with an $O(n^2)$ time complexity and an $O(1)$ space complexity. Our second solution uses a HashMap and has an $O(n)$ time and space complexity. It requires a little more space, but is a lot more efficient when it comes to running time. It is therefore the superior solution.