The critical flaw in this post is that Gary is not refuting what DHH said.
Gary makes this claim:
> You finally get to see what's really going on. David's tests run in a few minutes, and he's fine with that.
> I'm not fine with that. A lot of other people are not fine with that.
But what DHH actually said is this:
> You might think, well, that's pretty fast for a whole suite, but I still wouldn't want to wait 80 seconds every time I make a single change to my model, and want to test that. Of course not! Why on earth would you run your entire test harness for every single line change in a particular model? If you have so little confidence in the locality of your changes, the tests are indeed telling you that the system has overly high coupling.
and this:
> These days I can run the entire test suite for our Person model — 52 cases, 111 assertions — in just under 4 seconds from start to finish. Plenty fast enough for a great feedback cycle!
Using a workflow like Gary's, there's an argument to be made that 4 seconds is not acceptable, which is why we want single test files that can run in a few milliseconds.
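For scale, here's what a millisecond-class test file looks like (a hypothetical sketch, not code from either post; the `Person` class here is a stand-in, not DHH's actual Rails model). With no application boot, the run is dominated by interpreter startup:

```ruby
# person_test.rb -- a hypothetical standalone unit test. It requires
# only minitest and the class under test, with no framework boot, so
# `ruby person_test.rb` finishes in milliseconds rather than seconds.
require "minitest/autorun"

# A stand-in for the class under test.
class Person
  def initialize(name)
    @name = name
  end

  def greeting
    "Hello, #{@name}!"
  end
end

class PersonTest < Minitest::Test
  def test_greeting
    assert_equal "Hello, Alice!", Person.new("Alice").greeting
  end
end
```

The design point is isolation: because the file loads nothing but the code it exercises, its runtime can't be inflated by framework initialization.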
However, that's not the only possible way of running tests, and the gap between 4 seconds and 300ms for the feedback you're actually interested in is nothing like the gap between 300ms and "a few minutes".
For a post that calls DHH out on a strawman, this is in itself a great example of one.
Yes, I focus on my per-file runtime in the post, and I mention David's suite runtime in one sentence at the beginning. They are not meant to be compared. David's file runtime is four seconds. This is unacceptable to me. This is unacceptable to other people who replied to your tweets. It would double the length of my high-speed TDD loop, making those portions of my TDD process take twice as long.
Yes, it would've been clearer for me to specifically address both suite runtimes and both unit runtimes. You know what else would've been clearer? All of the 2,000 words or so that I deleted from that post while I was editing it down into its final form. This is just how writing works. I don't think that it's misleading as written.
Of course, I've already told you, on Twitter, exactly my reasons for rejecting both four-minute suites and four-second test files. They're not in the post, but you know the reasons. You know that I wasn't selectively attacking a subset of his argument, because you know that I do have an answer for test file runtime. And yet, for some reason, here we are!
(For anyone reading this later, the tweets in question are gone. Lately I've been deleting all replies, as well as trivial non-replies, for Reasons.)