Anatomy of problem-solving
July 14, 2019 · 12 minutes reading time · Development, Go
We are software developers and solving problems is what we do. But how do we do it? And what is a problem? What are the constraints?
We have to first understand the problem in order to be able to solve it. While this seems obvious, it often does not happen. We see so many flawed solutions because a problem or requirement was not understood correctly. Spending time to really understand the problem may not sound productive - we could be coding already. But the price to pay for the wrong start will be high.
Understand
Most of the time, we solve someone else's problems. We call her the customer. In most cases she is not a software developer, so a problem is translated from the business domain into the technical domain, where it is to be solved. A lot can be lost during this process. Be aware that the customer normally knows the business domain better than you, so she can give you the most valuable feedback. Talk to her as often as you can and explain the problem back to her. At the same time, pay attention that all use cases are covered. Customers tend to concentrate on the most common aspects. When you hear "Normally, we do it like this", it is a sure sign that not all cases are covered.
Don’t start coding before at least the following questions are answered with a “yes”:
- is the specification clear enough?
- do you know what goals should be reached?
- can you prioritize the elements of the solution?
- do you understand the constraints?
- do you understand the data?
Plan
Coming up with a plan requires good ideas and analytic skills. This is where it gets tricky: there is no one-size-fits-all recipe for a working plan. But here are a few hints:
- use your knowledge from prior experiences
- how have others solved similar problems?
- use divide and conquer to break down a problem into a set of smaller ones
- simplify if possible; generalization/abstraction can be simplification
- don’t try to optimize prematurely
- if you do not make progress, try to work backwards from the output (solution) to the data (problem)
- if you’re stuck, don’t rush it!
Execute
You have failed if you build a tall tower and it collapses when you put in the last stone - or, in fact, any stone. That's why you constantly have to make sure that what you have built so far can carry the weight of the rest, and more. Unit tests, integration tests, system tests, and end-to-end tests to the rescue - but mostly unit tests.
Pay attention that your foundations are strong. If you build on shaky ground, the project might be doomed from the start. Is there something you have to evaluate first, like concepts or libraries, in order to know the risks involved in using them?
- work top-down
Start with the overall structure/architecture before digging into the details.
- prioritize the most important aspects and the ones with the highest risks
That way you run into the hard problems first. If you have to change course, only a limited amount of work is lost.
- break and organize dependencies
Have you read Clean Code by Robert C. Martin (462 pages, from 2008)? Decouple as much as you can; it will make testing much easier.
- put fragments together as early as possible
The earlier all developers working on one project get feedback on how their components behave within the system, the better.
- write unit tests as if your life depends on it
Because … it might. Bugs that are found in production can have the most severe consequences. Management that fails to understand that and does not enforce testing - e.g. because there supposedly is not enough time - is incompetent.
- treat documentation as a first-class citizen
As a developer, don't write documentation that repeats what you can already see in the code. Focus on describing intent instead. There will always be some overlap, but at least try hard.
- make your code easy to read and understand
We spend an order of magnitude more time reading code than writing it. So please, absolutely no code golf. You want to easily understand your own code many weeks after you wrote it. The same is true for someone else who's reading it for the first time.
“If I had more time, I would have written a shorter letter”.
– Blaise Pascal
(also attributed to Mark Twain)
That's really a good summary of the last point: put more effort into writing something, so that you don't steal your reader's time. If you can help him understand more quickly or easily, by all means, do it!
A small example
The following artificial example will cover only a small, selected sample from the topics mentioned above. It has a very narrow scope, but even then I cannot cover every aspect without making this post overly long. So I won’t. All code will be in Go. Here it goes:
You have a vector of non-negative integer values sorted in ascending order and a positive integer value d not greater than 20. The vector might contain duplicate values and contains no more than 10,000 elements. No element is greater than 20,000.

How many tuples (a, b, c) of 3 elements from this vector can be formed where c - b == b - a == d? This implies that a < b < c.

Example: if the distance d is 2 and the vector is [1, 2, 3, 4, 5, 6], the tuples (1, 3, 5) and (2, 4, 6) satisfy the requirement. These two tuples are the only ones, so the answer would be 2.
See the HackerRank problem Beautiful Triplets for details. Duplicate values are allowed here, which is (officially) not allowed in the specification on HackerRank - but they break their own specs by having at least one test case that uses duplicates and checks for them (test case 10 for the Go language).
Understand?
The specification seems easy enough, and since it has been on HackerRank for a couple of years, they should have had all the time needed for adding missing information, right? But still, is there anything that is unclear to you? Check the questions from the checklist above. The nature of the questions is not only technical.
Plan?
Since only basic data types and calculations are involved, you devise a plan to solve the problem head on:
1. initialize a counter to 0
2. from the left of the vector, take the next element (with value e; initially the first one); go to step 6 if there is no next element
3. scan the rest of the vector for e + d; if it is not found, continue with step 2
4. from the position where you found e + d, scan the rest of the vector for e + 2*d; if it is not found, continue with step 2
5. we have a match: increment the counter by 1 and go to step 2
6. counter holds the answer
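The plan above can be sketched directly in Go. This is a minimal sketch, not the post's actual implementation; the function name countTripletsNaive is mine:

```go
package main

import "fmt"

// countTripletsNaive follows the plan literally: for each element e,
// scan the rest of the vector for e+d, then continue scanning for
// e+2*d, and count one match per element e.
func countTripletsNaive(d int, a []int) int {
	counter := 0
	for i, e := range a {
		for j := i + 1; j < len(a); j++ {
			if a[j] != e+d {
				continue // keep looking for e+d
			}
			// found e+d at position j, now look for e+2*d
			for k := j + 1; k < len(a); k++ {
				if a[k] == e+2*d {
					counter++ // we have a match
					break
				}
			}
			break // only the first occurrence of e+d is considered
		}
	}
	return counter
}

func main() {
	fmt.Println(countTripletsNaive(2, []int{1, 2, 3, 4, 5, 6})) // prints 2
}
```

For the example from the problem statement this yields 2, as expected.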
Execute?
You implement the solution from the plan and follow all the points:
- break and organize dependencies: all functions are pure functions except for input and output (side effects). But you still make them easily testable by using interfaces where possible.
- write unit tests: you have reached 100% code coverage except for the main function that wires everything together - including the specific values for input and output locations.
- treat documentation as a first-class citizen: you document every function and add the occasional additional comment in the body of some functions.
- make your code easy to read and understand: where necessary you resist the urge to use single-letter variable names, even though this is considered idiomatic in Go. You even do a code review with a colleague to check whether he understands the code and has suggestions for improvements.
What happens next
Before merging the code into the develop branch in the Git repository, you recognize that you've missed a test for duplicate values.
You write it, it fails.
Of course it fails, you have not included it in the plan!
What has to change so that the test passes?
Ok, you get it: while searching for e + d and e + 2*d you should not stop at the first find; you have to count how often each of them occurs (initially you had not thought about that).

Example: [1, 3, 3, 5, 5] with d == 2 should lead to the answer 4 - accounting for every possible combination of the elements.

So in steps 3 and 4 you establish a counter for e + d (call it countD1) and one for e + 2*d (countD2). In step 5 you increment the tuple counter by the product of the two counters: counter += countD1 * countD2, thereby accounting for all possibilities.
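The revised steps can be sketched like this - again a sketch with names of my choosing, still using the straightforward scanning approach:

```go
package main

import "fmt"

// countTripletsRevised counts how often e+d and e+2*d occur in the
// rest of the vector instead of stopping at the first find, and adds
// the product of both counts for every element e.
func countTripletsRevised(d int, a []int) int {
	counter := 0
	for i, e := range a {
		countD1, countD2 := 0, 0
		for j := i + 1; j < len(a); j++ {
			switch a[j] {
			case e + d:
				countD1++
			case e + 2*d:
				countD2++
			}
		}
		// all combinations of the two distances
		counter += countD1 * countD2
	}
	return counter
}

func main() {
	fmt.Println(countTripletsRevised(2, []int{1, 3, 3, 5, 5})) // prints 4
}
```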
You change the implementation according to the revised plan, and the test passes. Well, time to commit again and merge into develop.
Some time passes and you do other work. Eventually, develop is merged into master by the lead developer.
A little later a release branch is created, and off it goes to production.
Not long after, a bug report comes in: the program hangs.
Because your software uses metrics, your team can quickly identify the culprit: it’s your implementation from above.
What? All tests were OK and you cannot believe that it can hang.
After further investigation, your team finds out that someone was feeding a vector of 9,800 elements to your code. It did not hang; it simply did not provide the answer within a couple of minutes. Only 9,800 elements, and it takes that much time?
That's the point where you start remembering Big O notation, which describes an upper bound for the performance/complexity of an algorithm. After looking at your code, you recognize three nested loops:
- the first loop iterates over the vector, taking each element e
- the second loop iterates over the rest of the vector trying to find e + d; it stops if an element is greater than e + d
- the third loop iterates over the rest of the vector that's left after e + d is found, trying to find e + 2*d (it stops, too)
You now quickly realize that your solution has an upper bound of $\mathcal{O}\left(n^{3}\right)$. For a vector of 9,800 elements that's approx. $10^{12}$ operations. Wow!
Rethinking the solution
Even though your solution works in theory, it is unusable for all but small vectors. You have to come up with another solution! So you start thinking.
If you have an element e, can you quickly find out whether the elements e + d and e + 2*d are present in the vector? Not if you only have the vector itself. But if you had a dictionary where you could look up e + d and e + 2*d, it would be easy.
So you decide to break the problem down into two parts:
- create a dictionary (hash map), with each element as the key and how often it occurs in the vector as the value
- iterate over the vector (current element is e), look up how often e + d and e + 2*d each occur, multiply the two counts to account for all combinations, and add up the products
This solution requires two passes over the vector: one to build the dictionary and another to calculate the final count. After the experience with your first solution, you want to make sure that you have a really good algorithm. You think a little harder.
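The two-pass idea can be sketched like this (a sketch; the function name is mine):

```go
package main

import "fmt"

// countTripletsTwoPass builds a frequency dictionary in a first pass,
// then looks up e+d and e+2*d for every element e in a second pass.
func countTripletsTwoPass(d int, a []int) int {
	// first pass: how often does each value occur?
	frequency := make(map[int]int)
	for _, n := range a {
		frequency[n]++
	}
	// second pass: for each e, multiply the counts of e+d and e+2*d
	counter := 0
	for _, e := range a {
		counter += frequency[e+d] * frequency[e+2*d]
	}
	return counter
}

func main() {
	fmt.Println(countTripletsTwoPass(2, []int{1, 3, 3, 5, 5})) // prints 4
}
```

A missing key yields 0 in a Go map lookup, so elements without both partners contribute nothing to the sum.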
It occurs to you that you can solve the problem with only one pass over the vector. How?
Since the vector is sorted in ascending order, you have already processed e and e + d by the time you reach e + 2*d. So you decide to look backward: for each element e, you look up e - d and e - 2*d instead, multiply their frequency counts, add the product to the total, and then increment the frequency count of e in the dictionary. Thereby the dictionary is created while you iterate over the vector, not in a separate step.
Both new solutions have a complexity of $\mathcal{O}\left(n\right)$.
Final solution
After implementing all necessary changes and adding tests with generated vectors of up to 10,000 elements, you merge the following code into develop. That was two years ago, and you have not heard about it since.
But today, you decide to revisit it while “on-boarding” a new developer for the team.
He does not understand the code immediately, but it does not take that long either.
So you give him his first task: improve the code/the documentation so that it would have been easier for him.
He just went through the experience and should come up with some ideas, right?
```go
package main

import (
	"fmt"
	"io"
	"os"
)

// beautifulTriplets calculates the number of 3-element tuples in array a
// where each element has the same distance d from the middle element.
// The array a is sorted in ascending order.
func beautifulTriplets(d int, a []int) int {
	countTriplets := 0
	frequency := make(map[int]int)
	for _, n := range a {
		countD1, hasD1 := frequency[n-d]
		countD2, hasD2 := frequency[n-2*d]
		if hasD1 && hasD2 {
			// all combinations if there're duplicates
			countTriplets += countD1 * countD2
		}
		frequency[n] += 1
	}
	return countTriplets
}

func main() {
	in, out := os.Stdin, os.Stdout
	d, a := readInput(in)
	result := beautifulTriplets(d, a)
	writeOutput(out, result)
}

// readInput reads the inputs from the reader.
// It returns the distance d and an array sorted in ascending order.
func readInput(reader io.Reader) (int, []int) {
	var n, d int
	fmt.Fscan(reader, &n, &d)
	a := make([]int, n)
	for i := 0; i < n; i++ {
		fmt.Fscan(reader, &a[i])
	}
	return d, a
}

// writeOutput writes the result to the writer.
func writeOutput(writer io.Writer, result int) {
	fmt.Fprintln(writer, result)
}
```
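Since the post insists on unit tests, a minimal table-driven test for beautifulTriplets could look like the following. This is a sketch: in a real project the test file would live next to the code in the same package, so the copy of the function at the top would not be needed.

```go
package main

import "testing"

// beautifulTriplets is copied from the post so this sketch compiles
// on its own; alongside the real code, this copy is unnecessary.
func beautifulTriplets(d int, a []int) int {
	countTriplets := 0
	frequency := make(map[int]int)
	for _, n := range a {
		countD1, hasD1 := frequency[n-d]
		countD2, hasD2 := frequency[n-2*d]
		if hasD1 && hasD2 {
			countTriplets += countD1 * countD2
		}
		frequency[n] += 1
	}
	return countTriplets
}

// TestBeautifulTriplets checks the examples from the post, including
// the duplicate-value case the first plan missed.
func TestBeautifulTriplets(t *testing.T) {
	cases := []struct {
		name string
		d    int
		a    []int
		want int
	}{
		{"no duplicates", 2, []int{1, 2, 3, 4, 5, 6}, 2},
		{"duplicates", 2, []int{1, 3, 3, 5, 5}, 4},
		{"no triplets", 1, []int{1, 3, 5}, 0},
	}
	for _, c := range cases {
		if got := beautifulTriplets(c.d, c.a); got != c.want {
			t.Errorf("%s: got %d, want %d", c.name, got, c.want)
		}
	}
}
```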
We all want to write good software, but sometimes that's not enough. Even if you know how to do it and want to do it, your success still depends on factors you cannot control. Within those limits you should do the best work possible. Nobody can honestly expect what's not possible. I will end this post with a quote found on Twitter.