DEV Community

Discussion on: Solve this simple problem with TDD

Jon Sullivan

I think the tricky thing about this particular thought experiment is the 'arbitrary' part - setting up a proper test generally requires being specific about the test case and having a tangible case to test. This problem subverts that by stating that the arrays (in this context) can be infinitely deep.

One approach that might actually help is to split this problem into two layers. The first layer is identifying two arrays that need to be merged (i.e. one is contained within the other). The second layer is actually combining the two arrays. That may seem a bit arbitrary, but splitting these two concepts will actually help us test.

Let's get more specific. Let's say that ResponsibilityA's job is simply to determine, given an array as our scope, that we've found an array within it that needs to be decomposed. For the sake of a tangible example, this could be foo = [1, 2, [bar]], and ResponsibilityA(foo) would determine that foo needs [bar] decomposed at position 2. That's it. It doesn't care what bar itself is or what it contains.

Now, with that basis, we can define ResponsibilityB as the decomposer. Its job is also fairly simple: do a single de-nesting. Again for tangibility, if we have baz = [1, 2, 3] and we call ResponsibilityB(baz), it ought to return 1, 2, 3. This may seem like a small distinction since all we did was remove the square brackets, but the difference is that it turned baz from a single object into three objects. If this seems hard to grasp from a practical standpoint, that's because it is 🙂 Most languages don't support a function that can return an arbitrary number of objects, so you can either think of this as returning an enumerable / enumeration, or abstract the entire concept to JavaScript's spread operator: ...baz
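
To make that concrete, here's a minimal JavaScript sketch of what ResponsibilityB might look like (the function name and the convention of returning the pieces as an array for the caller to spread are my own assumptions, not part of the original problem):

```javascript
// ResponsibilityB: a single de-nesting. Given an array, hand back its
// elements so the caller can splice/spread them into place. Nested arrays
// inside it are left untouched - that's ResponsibilityA's concern.
function responsibilityB(arr) {
  return [...arr]; // stands in for "return N objects" / the spread idea
}

responsibilityB([1, 2, 3]);   // => [1, 2, 3]   (three elements for the caller to splice in)
responsibilityB([1, [2], 3]); // => [1, [2], 3] (the inner [2] is left alone)
```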

The last step to making these two machines work together is setting up the structure of ResponsibilityA a little more. Let's say that as ResponsibilityA crawls through an array, once it finds a sub-array that needs to be decomposed, it calls out to ResponsibilityB to decompose that sub-array in real time and in place, then re-assesses that index before moving forward (there's a code sketch of this loop right after the walk-through). To put that into a visual, let's say foo = [9, 9, [foo, bar, [x, y, z]], 9]. ResponsibilityA(foo) would then begin crawling:

  • Element at position 0 is 9 which is not an array, move forward
  • Element at position 1 is 9 which is not an array, move forward
  • Element at position 2 is [foo, bar, [x, y, z]] which is an array:
    • Call ResponsibilityB and pass [foo, bar, [x, y, z]]
      • ResponsibilityB returns foo, bar, [x, y, z]
    • ResponsibilityA replaces element at position 2 with the returned value
    • ResponsibilityA augments loop or iterations to re-assess element at position 2 next
    • (for mental debugging purposes, foo is now [9, 9, foo, bar, [x, y, z], 9])
  • Element at position 2 is foo which is not an array, move forward
  • Element at position 3 is bar which is not an array, move forward
  • Element at position 4 is [x, y, z] which is an array:
    • Call ResponsibilityB and pass [x, y, z]
      • ResponsibilityB returns x, y, z
    • ResponsibilityA replaces element at position 4 with the returned value
    • ResponsibilityA augments loop or iterations to re-assess element at position 4 next
    • (for mental debugging purposes, foo is now [9, 9, foo, bar, x, y, z, 9])
  • Element at position 4 is x which is not an array, move forward
  • Element at position 5 is y which is not an array, move forward
  • Element at position 6 is z which is not an array, move forward
  • Element at position 7 is 9 which is not an array, move forward
  • Index == Length; complete; return foo ([9, 9, foo, bar, x, y, z, 9])
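
For concreteness, here's a minimal JavaScript sketch of that crawl (the names are mine; responsibilityB is the one-level de-nester sketched earlier, and I pass it in as a parameter so it can be mocked later). This is just the shape the walk-through describes, not the only way to wire the two together:

```javascript
// ResponsibilityA: crawl the array; when the current element is itself an
// array, ask the de-nester for its pieces, splice them in place, and
// re-assess the same index before moving forward.
function responsibilityA(arr, denest = responsibilityB) {
  let i = 0;
  while (i < arr.length) {
    if (Array.isArray(arr[i])) {
      const pieces = denest(arr[i]); // single de-nesting
      arr.splice(i, 1, ...pieces);   // replace in place
      // don't advance i: the first replaced element may itself be an array
    } else {
      i++;
    }
  }
  return arr;
}

responsibilityA([9, 9, ['foo', 'bar', ['x', 'y', 'z']], 9]);
// => [9, 9, 'foo', 'bar', 'x', 'y', 'z', 9]
```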

So what's the point of splitting these two concepts? Each one is individually testable! As I mentioned above, the "arbitrary" bit of the problem statement prevents us from writing a full test for the problem since you can't write a test with arbitrary data. That would be infinite. What we can do is write a test for each of the individual responsibilities above to make sure they work independently, then just a simple test case to prove that they work together (the above walk-through is a perfect 'simple test case to prove that they work together', so we'll use it below).

(For the math geeks out there, this process is effectively similar to constructing a math proof on the basis of induction 🤓)

So let's do it. ResponsibilityA is a single-layer-deep concern, meaning that if you call ResponsibilityA([1, 2, [foo, [bar]]]), it will recognize that [foo, [bar]] needs to be decomposed at position 2, but it will not dig further into that array to also determine that [bar] needs to be decomposed too. Cool? Let's write a test then. In order to cover the cases of a nested array being found at the first, middle, and last position, let's wrap this into a single test of inputs and outputs:

If I give ResponsibilityA an argument of [[1, 2], [[a], b], [#, $]], it should identify that array decompositions need to occur at positions 0, 1, and 2. How do we test that? We mock ResponsibilityB and expect it to receive a call with argument [1, 2]. Let's also mock it to return foo instead of 1, 2 so we can prove that the "replace in-place" bit is working too. So overall, we will expect ResponsibilityB to:

  • #1 (mentioned above) Receive a call with argument [1, 2] and mock return foo
  • #2 Receive a call with argument [[a], b] and mocked return [bar], baz
  • #3 Receive a call with argument [bar] and mocked return bar
  • #4 Receive a call with argument [#, $], and we'll mock it to return qux.

If we run that test, we can rely on the expectations of ResponsibilityB receiving those four calls with those specific arguments, and the mocked returns should guarantee that ResponsibilityA ultimately returns a final product of [foo, bar, baz, qux] (a sketch of this test in code follows the list below). That's a perfectly valid test of ResponsibilityA that proves that it is

  • Identifying sub-arrays
  • Calling to ResponsibilityB and passing the sub-arrays as it encounters them (indicated by receiving call #3 after getting response #2 [instead of jumping straight from #2 to #4])
  • Replacing the sub-array object at that index in-place with whatever ResponsibilityB gives back
  • Re-assessing the index of the sub-array it passed to ResponsibilityB (since #2 above sends back an array in the first position) after it does the replace-in-place
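
As a sketch of what that could look like in code (assuming a Jest-style runner and that responsibilityA takes its collaborator as an injectable parameter, as in the sketch above; none of that is mandated by the original problem):

```javascript
test('responsibilityA splices sub-arrays in place and re-assesses the index', () => {
  const denest = jest.fn()
    .mockReturnValueOnce(['foo'])           // #1: for [1, 2]
    .mockReturnValueOnce([['bar'], 'baz'])  // #2: for [['a'], 'b']
    .mockReturnValueOnce(['bar'])           // #3: for ['bar'] (the re-assessed index)
    .mockReturnValueOnce(['qux']);          // #4: for ['#', '$']

  const result = responsibilityA([[1, 2], [['a'], 'b'], ['#', '$']], denest);

  expect(denest).toHaveBeenNthCalledWith(1, [1, 2]);
  expect(denest).toHaveBeenNthCalledWith(2, [['a'], 'b']);
  expect(denest).toHaveBeenNthCalledWith(3, ['bar']);
  expect(denest).toHaveBeenNthCalledWith(4, ['#', '$']);
  expect(result).toEqual(['foo', 'bar', 'baz', 'qux']);
});
```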

That's awesome. That test proves that ResponsibilityA does exactly what it's intended to do. Now we just need to test that ResponsibilityB actually does what it's supposed to.

For the sake of brevity, let's just say that if I pass ResponsibilityB [1, 2, 3], it should return 1, 2, 3, and if I pass it [1, [2], 3], it should return 1, [2], 3 (it just takes off the outer brackets).
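
In code, again as a hedged Jest-style sketch using the convention that the pieces come back as an array for the caller to spread:

```javascript
test('responsibilityB does a single de-nesting only', () => {
  expect(responsibilityB([1, 2, 3])).toEqual([1, 2, 3]);
  // the inner [2] stays nested - peeling it is a later pass's job
  expect(responsibilityB([1, [2], 3])).toEqual([1, [2], 3]);
});
```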

Since I've proven that ResponsibilityA correctly identifies and replaces sub-arrays in place (then re-assesses the same index) but doesn't itself determine how to de-nest an array, and I've proven that ResponsibilityB can de-nest a single array, putting those together does indeed prove that de-nesting at arbitrary length and depth is achieved. If that's hard to understand, that's totally okay! Induction is a really tough concept to wrap your head around. We're effectively proving that each responsibility works on its own for arbitrary input, and therefore putting them together will work for arbitrary input too.

Technically we ought to also have at least one test that tests the two things actually working together, so without mocking what ResponsibilityB should expect and mocking what it will return, we can just write a simple test that says:

If I pass [9, 9, [foo, bar, [x, y, z]], 9] to ResponsibilityA, it should return [9, 9, foo, bar, x, y, z, 9]. Same example from the walk-through above, and the idea is the same: prove both responsibilities individually, then use a simple test to prove that they work together, and you've proven the whole.
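
A sketch of that integration test, in the same assumed style:

```javascript
test('responsibilityA and responsibilityB flatten arbitrary nesting together', () => {
  const input = [9, 9, ['foo', 'bar', ['x', 'y', 'z']], 9];
  expect(responsibilityA(input)).toEqual([9, 9, 'foo', 'bar', 'x', 'y', 'z', 9]);
});
```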

I hope that makes sense.. sorry it turned into an essay!!

-
JonSullivanDev

Also thanks for the inspiration; might turn this into a full blog post

michelemauro • Edited

Well Jon, this is more a proof than an implementation. While technically correct, this approach will fail, however, if you have an infinite (i.e. unknown in advance) array, or a stream.
What you call "responsibilities" I call "the different cases when inspecting the next element". My first attempt didn't yield the lazy solution I wanted, however; I should try again and see whether the same subdivision you envision emerges from that approach or not.

The length and depth of your answer, however, raises a question: what really is a test? Is the goal of TDD to prove a piece of code "correct", or just to show that "it works"?

You do prove that your approach is "correct" (for finite length arrays); but is this a "unit test"? Would you call this TDD?

M Bellucci

Thanks, Jon, for taking the time to think through such a detailed solution, but the core of my question is TDD.
I wonder if this problem can be solved in a series of micro red-green-refactor cycles (roughly 30 seconds each).
I don't want a solution; I want you to try it so I can hear your thoughts.
I'm questioning the use of TDD, not the problem.

Jon Sullivan

Well, forgive me for being a rather wordy fellow, but the tail end of my solution does outline the very specific red/green tests you could write to TDD the problem... I just gave a lot of foundational theory as the basis for why I chose those tests ;)

You can TDD anything given the right mindset :D

@michelemauro I didn't read streams or infinite lists as being part of the parameters of the problem, but the same solution could be adjusted slightly (really just in ResponsibilityB) to handle infinite sequences and/or streams pretty readily.
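
For what it's worth, here's one hedged way laziness could look in JavaScript, using a generator (my own sketch of a different shape, not the A/B split from above, and nothing in the thread prescribes it): it yields leaves one at a time, so it can consume a stream or an unbounded iterable without ever holding the whole input.

```javascript
// Lazy flatten: yields leaf values one at a time, so the source can be a
// stream or an unbounded iterable. Nested arrays are recursed into lazily.
function* flattenLazy(iterable) {
  for (const item of iterable) {
    if (Array.isArray(item)) {
      yield* flattenLazy(item);
    } else {
      yield item;
    }
  }
}

[...flattenLazy([9, 9, ['foo', ['x', 'y']], 9])]; // => [9, 9, 'foo', 'x', 'y', 9]
```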

Anyway, all around good conversation guys - cheers 👍🏻

Thread Thread
 
M Bellucci

Ok, as soon as I have some free time I'll try to follow your tests and report back on my experience.
Thank you!

Thread Thread
 
M Bellucci

Reading your whole answer again: you are describing an algorithm that possibly solves the problem, and you identify two sub-problems that can be tested individually.

This is a valid technique for solving a problem, but it is totally the opposite of TDD.

In TDD you don't know the solution in advance: as you resolve each micro-test with the minimum amount of code to satisfy it, you discover the algorithm.
So you don't know the algorithm in advance.