What do you normally do before coding a hard algorithm? Write the tests directly? Draw a diagram first to work out what to test? What kind of diagrams or notes do you use?
And do you follow the same steps for easier algorithms, or do you skip some of them?
Top comments (6)
I rarely encounter challenges as straightforward as "a hard algorithm".
If it's adding functionality or fixing an issue in existing parts of the code, it usually starts with reading through any existing code I might touch, then ideally sketching out some functionality, followed by writing tests to reflect that functionality, and then writing the feature one test at a time.
How do you sketch? Like UML diagrams, or just something you yourself understand?
If I am not sure about the implementation, I open a text editor and type out the requirements that need to be met. Like bullet points, but more free form. Then I consider each bullet point, and I start to type the considerations for it nested underneath the point. I go back over each one several times and update them as I think of things. Sometimes this exercise just helps clarify my thinking on what to do, and I never actually look at it again. Other times I will go through it as I implement. The code rarely ends up structured exactly like the text file, but it helps.
If the problem at hand is more conceptual, I am likely to draw boxes and lines on a piece of paper or whiteboard. One thing I like to do is for every box, I draw speech bubbles off of it to represent each concern that needs to be addressed by that piece. For instance, if I represent an API as a box I might have bubbles coming off it for: Cross-Origin Resource Sharing (CORS), JWT auth, SSL cert mgmt, logging, secret mgmt, config, throttling. And then I will continue to draw bubbles off those bubbles to represent underlying concerns for those aspects. Often I will even draw branching chains of bubbles to represent possible ways to implement something, and bubbles off each implementation representing the trade-offs. That helps me decide which trade-offs are more favorable for a particular piece of the design.
When I'm getting into something challenging, my first goal is to understand the code that's already there. I sometimes write tests just to experiment and see what the code does under specific circumstances, but usually I just read it thoroughly and make sure I understand it. (This does not cover the circumstance where you're dealing with horrible legacy code with no tests, no documentation, and no institutional knowledge beyond the desperate certainty that it must not break. That's out of scope for today.)
I draw a lot of diagrams (mostly just boxes and arrows on paper) to help me understand the code, or to help me clarify how I plan to implement something. (Mostly I focus on how the data moves through the system, as that's usually more illuminating than looking directly at how the code operates. The code is the how, the data is the why.)
When I'm starting to code, I'll write one or two basic test cases to clarify my thinking and give me an easy way to run things in a debugger, then add more tests as I get closer to a finished product.
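For what it's worth, here is a minimal sketch of what those first couple of tests tend to look like for me, assuming pytest and an invented merge_intervals() problem (everything here is illustrative, not code from this thread):

```python
# Hypothetical sketch: two starter tests for an imagined merge_intervals()
# function, written mainly to pin down expected behaviour and to give the
# debugger an easy entry point. Assumes pytest; all names are illustrative.

def merge_intervals(intervals):
    """First rough cut of the function under test; it grows test by test."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous interval, so extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


def test_merges_overlapping_intervals():
    assert merge_intervals([(1, 3), (2, 6)]) == [(1, 6)]


def test_leaves_disjoint_intervals_alone():
    assert merge_intervals([(1, 2), (4, 5)]) == [(1, 2), (4, 5)]


if __name__ == "__main__":
    # Handy when stepping through in a debugger instead of running pytest.
    test_merges_overlapping_intervals()
    test_leaves_disjoint_intervals_alone()
```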
First I take a look at Knuth, The Art of Computer Programming.
Then I take a look at Cormen, Introduction to Algorithms.
If that doesn't answer my questions, I hope timing some alternatives will, so I go for tests as soon as I have rough implementations ready.
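A rough sketch of what that timing step might look like, using the standard-library timeit module; the two sort variants below are just stand-ins for whatever alternative implementations came out of the books:

```python
# Sketch: time two candidate implementations once both pass the same tests.
# The functions here are placeholders; timeit and random are standard library.
import random
import timeit


def builtin_sort(data):
    return sorted(data)


def insertion_sort(data):
    out = list(data)
    for i in range(1, len(out)):
        key = out[i]
        j = i - 1
        while j >= 0 and out[j] > key:
            out[j + 1] = out[j]
            j -= 1
        out[j + 1] = key
    return out


data = [random.random() for _ in range(2_000)]
for fn in (builtin_sort, insertion_sort):
    seconds = timeit.timeit(lambda: fn(data), number=20)
    print(f"{fn.__name__}: {seconds:.3f}s for 20 runs")
```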
I'm not convinced that notes or UML can predict the actual state of the program at any given point during runtime (unless they're a formal specification, in which case it is known formally), so good debugging and inspection facilities matter more to me than extensive documentation of algorithms under development. Once the code is stable, documentation should explain it, though.
I describe the problem in plain language or I draw boxes with arrows on paper. This is also the point where I should do more TDD than I actually do...
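If I did follow through on the TDD part, the first step would look something like this minimal sketch (pytest assumed, names invented): write the failing test first, then just enough code to make it pass.

```python
# Minimal TDD-style sketch. The test is written before the implementation
# exists; the function below is added only after watching the test fail.
# Names are hypothetical, not from this thread.

def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


# Step two: just enough code to make the test pass.
def slugify(text):
    return "-".join(text.lower().split())
```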