Tuesday, September 26, 2017

Why TDD?


Why do we care whether people call what they're doing TDD or BDD, and why do we care whether they actually do it?

Saying It


When people mislabel what they’re doing and refer to it as “BDD” or “TDD” or “Scrum” or “Agile” when it isn't, it screws up all the conversations that follow until we manage to unravel that they’re really just doing automated testing, or iterations, or what-have-you.

Clarity in a conversation is worth as much as clarity in code, and maybe more.

Someone told me that they were doing TDD, and we were well into the conversation before I realized that everything I had said for several minutes had been totally misconstrued.

What they thought was TDD was “holding a testing sprint before release.”

So, the value in crisp terminology is improved communication.

What is the value proposition for misusing terms?

I guess some people — even in the agile world — don’t know that TDD and BDD are processes, not artifacts or tools; and that neither is just another name for “automated testing.”

You’d think we all know better, but no.


Why does TDD beat just writing tests afterward with a test-coverage tool?


It doesn't have to be TDD, you know. Cope says he can get equivalent results using DbC (Design by Contract), and Cope isn’t given to making such statements lightly. So there are different ways to get similar results (it seems).
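
For readers who haven't met it, DbC here means contract-style preconditions and postconditions checked at the boundaries of a routine. The sketch below is only a generic illustration in Python (the withdraw function is invented for the example, and this is not a claim about how Cope himself works): violated contracts fail fast and loudly, which is where the overlap with test-driven feedback comes from.

    # A minimal contract-style sketch: preconditions and postconditions are
    # asserted at the function boundary, so a violation fails immediately.
    def withdraw(balance, amount):
        assert amount > 0, "precondition: amount must be positive"
        assert amount <= balance, "precondition: cannot overdraw"
        new_balance = balance - amount
        assert new_balance >= 0, "postcondition: balance never goes negative"
        return new_balance

    print(withdraw(100, 30))   # 70; withdraw(100, 200) would trip a precondition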

If you find other ways of achieving the same goals (or over-achieving by also solving other goals) then, by all means, teach me what you learn.

Of course, you can use test coverage tools with TDD too, so the coverage tool is really a wash. There is nothing about TDD that forbids measurement.

Let's stick to TDD v. Test-After for a moment:

  • It brings testing across the RW/BS line. Test-after tends to live on the BS side, even though we like to think it doesn’t.
  • It begins with the developer as a user of a class/function and ends up with the developer as an implementer (see the sketch just after this list).
  • It keeps the code runnable at all times, since you can't have the code "up on blocks" for hours at a time and still run the tests.
  • The feedback loops are hella fast. You know that everything was fine 10 seconds ago and you’ve only written two lines of code since, so when it breaks you know exactly where to look.
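
To make the "developer as user first" point and the fast feedback loop concrete, here is a minimal sketch of one TDD cycle. It is illustrative only: the function parse_duration and its behavior are invented for the example, and any test runner (or a plain script, as here) will do.

    # Step 1 (chronologically): write the test as a *user* of a function that
    # does not exist yet, so the call site gets to shape the interface.
    def test_parse_duration_handles_minutes_and_seconds():
        assert parse_duration("2:30") == 150

    # Step 2: watch it fail, then write the least code that passes.
    def parse_duration(text):
        minutes, seconds = text.split(":")
        return int(minutes) * 60 + int(seconds)

    # Step 3: run it again. The whole loop takes seconds, not hours.
    if __name__ == "__main__":
        test_parse_duration_handles_minutes_and_seconds()
        print("ok")

Ten seconds later the next test goes in, and the code never spends long in a broken state.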


Now compare the test-after path. Typically, the code is written (as an implementer) so it has an inside-out API. You get the algorithm right, and then you expose the variables via some function calls and make sure that it does its job. This can be complicated in some cases, so the code that I'm typing at 14:30 may be going into a file that was last run yesterday. Or the day before. If I'm writing it all in one go, I don't have to keep it runnable. After all, I only need it to work when it's done.
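
To picture what "inside-out" means here, consider this hypothetical sketch (the RateLimiter class and its methods are invented for illustration): the interface exposes the bookkeeping the implementer needed while getting the algorithm right, rather than the question a caller actually wants answered.

    # A hypothetical "inside-out" interface: internal state and bookkeeping
    # operations are exposed directly to every caller.
    class RateLimiter:
        def __init__(self, limit):
            self.limit = limit
            self.window_start = 0.0   # internal bookkeeping
            self.count = 0            # more bookkeeping

        def set_window_start(self, t):
            self.window_start = t

        def bump_count(self):
            self.count += 1

        def over_limit(self):
            return self.count > self.limit

A test written first, from the caller's side, would more likely have demanded a single question like allow(now) and kept the bookkeeping hidden.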

And then it’s done. Well, you know, other than tests.

The code could conceivably be shipped, and the author has checked and double-checked it on the fly. We've had the debugger out to fix some problems, so we know it works.

Still, the Powers That Be say there have to be automated tests. So the work immediately takes on an emotional foot-dragging, obligatory feel, but we're professionals, so we push through…

 … and then we see that to test some variation, we would have to change the code. That’s one thing when it’s being written, but dammit the code is done and I don’t want to change it just to make the tests pass; how much do I need this test? This is another source of emotional foot-dragging and obligation. Maybe we push through that and change the code or write a complicated test.

… and writing the test we see that it’s a pain to pass all these parameters. It’s a little ticklish because there are 3 integers in a row in the parameter list, so it’s easy to get the wrong value in the wrong place. OTOH, the code is done, and changing the interface will mean changing the finished code and all the tests. Emotional foot-dragging kicks in. Can we just leave it the way it is? Note that in TDD we would face this decision repeatedly, increasing the likelihood that we would revise the interface for safety's sake.
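
Here is a hypothetical sketch of that parameter-ordering hazard and the kind of change that repeated TDD-style pressure tends to produce (the schedule_job names and the keyword-only variant are invented for illustration):

    # Three integers in a row: nothing stops a caller from swapping them.
    def schedule_job(priority, retries, timeout_seconds):
        return {"priority": priority, "retries": retries, "timeout": timeout_seconds}

    schedule_job(30, 3, 5)   # timeout passed as priority, and it still "works"

    # Writing that call over and over in tests pushes the interface toward
    # something harder to misuse, e.g. keyword-only arguments:
    def schedule_job_safe(*, priority, retries, timeout_seconds):
        return {"priority": priority, "retries": retries, "timeout": timeout_seconds}

    schedule_job_safe(priority=5, retries=3, timeout_seconds=30)

Faced with that friction once, after the code is "done," it is easy to shrug and keep the risky signature; faced with it on every red-green cycle, changing it becomes the path of least resistance.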

There are other points of friction. While there should logically be NO DIFFERENCE in doing test-after instead of test-before, it somehow just never turns out that way.

The only thing that test-after ensures is that there are tests, which is one of the less interesting side-effects of TDD. TDD isn’t about writing tests; it’s about driving the development of code in a special way, with the side-effect that there is pretty good test coverage at the end.

Don't take my word for it.


Anyone is welcome to try an experiment.

If you are not sure that TDD makes sense, or that it's effective, then try doing TDD for two weeks and test-after for two weeks, and evaluate the quality of the code, the quality of the tests, and the quality of the experience.

  • Is the non-TDD way as safe? 
  • Is the code as frequently runnable and integrated?
  • Is the feedback as fast? 
  • Does it result in as good a code interface? 
  • Does it feel like the testing is a natural part of the work flow? 
  • Does it lend to refactoring as easily? 
  • Is the code just as good (accurate, simple, clear)?
  • What other benefits do you get from the non-TDD method? 

And don't just limit that to test-after. There are probably better ways than TDD, and we'll find them by moving forward. Likely the answer is not "just don't do it" but doing something even simpler, more valuable, and more profound in its place.

FIND THAT BETTER WAY!

And then come here and post a link to what you've learned. I'm all ears!
