testing-concept-stages-of-testing

Testing, like QA, is a practical activity. Real-world considerations play heavily into the drive towards what we call mature tests. The idea, proposed by some, that you should delete tests that haven't failed in a year isn't motivated by the ossification those tests impose on your codebase, but by the precious time they take to run.

The stage of the test might also be driven by the stage of the software it's being written to test.

I feel like there should be a concept of "test maturity". It's comparable to product maturity: during QA we don't put as much effort into older parts of the app, since they're more mature and less likely to break. It's also like change impact analysis. The difference between the QA process, where you focus on the things most likely to have been impacted by changes (because you realize it's impossible to test everything), and automated testing is that you don't really notice the automated test time creeping up over time (this comes up towards the middle of talk-how-to-stop-hating-your-test-suite). You accrete new tests and eventually you have a behemoth, and you're forced to do focused testing while developing. (Also see testing-concept-test-scaling)

When thinking about maturity, it's helpful to keep in mind Michael Feathers' framing of legacy code: "Legacy code is untested code."

We don't attach a value judgement to maturity; we simply want to recognize that it makes sense to do things differently at different stages of software development.

The QA comparison is useful because it's a process whose inability to scale is much more obvious than it is with automated testing.

paper-why-most-unit-testing-is-waste#test-has-never-failed

"Most programmers want to "hear" the "information" that their program component works. So when they wrote their first function for this project three years ago they wrote a unit test for it. The test has never failed."

 

There is scaffolding (mocked isolated tests, private-method tests) that it can help to move through, in the same way that it helps to follow a series of granular steps while performing a refactoring. (Talk about how Martin Fowler's refactoring process seemed like a very slow, overly process-ified process until you realize that you're never confused and never end up in long debugging sessions.)
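
One concrete form of that scaffolding, sketched against a hypothetical PriceCalculator class: a spec that reaches into a private method with `send` while you feel your way toward the public behaviour, written with the intent of deleting it later.

```ruby
# A throwaway scaffolding spec (PriceCalculator and its #discount_rate
# method are hypothetical): it pokes a private method via #send while
# the public interface is still taking shape. Delete it once the
# behaviour is covered through the public API.
RSpec.describe PriceCalculator do
  it "applies the bulk discount rate above 100 units" do
    calculator = PriceCalculator.new
    expect(calculator.send(:discount_rate, 100)).to eq(0.1)
  end
end
```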

Fast contexts and multiple assertions per test become an important performance optimization as the test suite grows, but for new features you should start with one assertion per test, full test isolation, and focused runs to aid development speed. These one-assertion-per-test specs capture the requirements very well (RSpec can be run in documentation mode, which is extremely helpful during development), and they are easier to debug during the churn of development.
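
A minimal sketch of that early-stage style, assuming a hypothetical Cart class; run with `rspec --format documentation`, the output reads like a requirements list ("Cart #add increments the item count", and so on).

```ruby
# Early-stage style: one expectation per example, fully isolated setup,
# run focused while developing the feature.
RSpec.describe Cart do
  describe "#add" do
    it "increments the item count" do
      cart = Cart.new
      cart.add(:apple)
      expect(cart.count).to eq(1)
    end

    it "records the added item" do
      cart = Cart.new
      cart.add(:apple)
      expect(cart.items).to include(:apple)
    end
  end
end
```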

DRY the tests up once you find the right abstraction, but don't be looking for the right abstraction while just trying to get the test to work.

The later-stage speed optimizations (fast contexts, etc.) are a concession to the realities of computer speed.
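
By contrast, a hedged sketch of that later-stage consolidation for the same hypothetical Cart: setup is shared across the context and related expectations are grouped, trading isolation for runtime.

```ruby
# Later-stage style: shared setup runs once for the context (a "fast
# context"), and related expectations are folded into one example.
RSpec.describe Cart do
  describe "#add" do
    before(:all) do
      @cart = Cart.new
      @cart.add(:apple)
    end

    it "tracks the added item and the count" do
      aggregate_failures do
        expect(@cart.count).to eq(1)
        expect(@cart.items).to include(:apple)
      end
    end
  end
end
```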

Even so-called tautological tests, which ostensibly test only themselves and later get in the way of refactoring, serve to create the little "islands of safety" that Michael Feathers refers to in his book "Working Effectively with Legacy Code". There are times, early in development, when those little islands of safety are a godsend. They tend to arise while you're in the inner loop of outside-in development, and they help you complete that loop. The whole point of completing the inner loop is to complete the outer loop. Once you complete the outer loop, you should revisit which of the inner-loop tests (the unit tests) still serve a purpose now that you also have outer-loop coverage. You don't need testing-concept-redundant-coverage, and should actually look to avoid it.
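
A hedged illustration of that redundancy, with hypothetical checkout code and a Rails-style request spec: once the outer-loop spec is green, the inner-loop spec below it duplicates that coverage and becomes a deletion candidate.

```ruby
# Outer-loop coverage: exercises the behaviour end to end.
RSpec.describe "checkout", type: :request do
  it "shows the order total" do
    post "/checkout", params: { items: ["apple"] }
    expect(response.body).to include("Total: $1.00")
  end
end

# Inner-loop scaffolding that helped get the outer loop working.
# With the request spec above in place, this is redundant coverage
# and a candidate for deletion.
RSpec.describe Cart do
  it "totals the items" do
    cart = Cart.new
    cart.add(:apple)
    expect(cart.total).to eq(1.00)
  end
end
```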

paper-why-most-unit-testing-is-waste#get-rid-of-unit-tests-that-are-tested-by-system-tests

"get rid of unit tests that duplicate what system tests already do"

 
talk-therapeutic-refactoring#exobrain

"Refactoring makes you smarter. Refactoring basically gives you an exobrain. So you offload a bunch of those little details that under normal circumstances go into working memory into your tests. Once you start refactoring you start reclaiming your brain."

 

Stages of the tests

testing-concept-red-green-refactor

Stages of the tests - the actual stages

mocks-vs-stubs#avdi-only-reason-to-mock

"I'll also point out that as far as I'm concerned, the only... and I mean only reason I use mocks is to guide design during TDD. If I'm not writing tests first, there is very little point writing mocks. I believe I am consistent with the authors of GOOS (and inventors/popularizers of mocking) in this view." - Avdi Grimm somewhere on parley, I believe.

 

Stages of the developer learning testing

You have to pass through these stages. There are no shortcuts. Just because you know the stages exist doesn't mean that you can simply skip to the final stage.

An idea to add to this list later: as you become more experienced, you need less granular validation of your assumptions. This pushes you towards higher-level testing... if something goes wrong at a lower level you won't be flummoxed for long, and may not even need to break out a debugger. A lot of experience plus fast, integrated tests can work wonders. Speed is critical because you will still introduce bugs, and if you run your tests quickly thereafter, you will have an idea of what introduced them even without very granular tests.

podcast-bike-shed-episode-23-rust#tdd-style-change

Derek: I think when I first started doing test-driven development I was very much line-by-line, and now I take bigger steps.

 
blog-post-interview-with-donald-knuth#unit-testing-to-feel-around

the idea of immediate compilation and 'unit tests' appeals to me only rarely, when I'm feeling my way in a totally unknown environment and need feedback about what works and what doesn't.

 
book-rails-4-test-prescriptions#slow-down

"In practice, the more complicated the problem is and the less I feel I understand the solution, the more purist I get, taking slow steps."

 

Stage 1 - You haven't heard of it

You don't hear about this stage too much anymore. There is pretty broad awareness of testing.

Stage 2 - You've heard of it, but haven't used it

Frankly, it's hard to know where to start with testing. Everything seems foreign.

Stage 3 - TATFT!

You use it and it has saved you, and now you view it as the end-all-be-all and want to apply it everywhere, and only with the "best practices".

Even the best people do this: witness Gary Bernhardt's testing-concept-overmocking

Stage 4 - Burned and a little wary

You've gotten burned by slow tests, brittle tests, or too many tests and are starting to have a nuanced understanding of when, where, and what to test.

Stage 5 - Questioning the best practices dogma and being introspective about your process

You start to question the dogma around testing (DHH's post about TDD, blog-post-beyond-tdd; see the quote from person-j-b-rainsberger). Are you back to square one, or just enlightened?

I'd argue that you're not back to square one: square one is not testing out of ignorance or apathy, rather than as a deliberate strategy (testing-concept-no-tests).

tweet-dhh-testing-level-up#level-up

"If your TL;DR of my talk and post on TDD was "great, I don't have to write tests!", your comprehension skills are inadequate. Level up."

 

At this stage you strive to achieve the right equilibrium between testing and not testing. You can't just skip directly to seeking this equilibrium, because you won't yet have the skills to assess what the proper equilibrium is.

The point is that regardless of the level you're currently at, you need to be very introspective about your process so you can identify what works for you. But you certainly need to know the pros and cons of testing before being dismissive of it as an idea.

This is the least comfortable stage to be in, because you're continually questioning your assumptions, and you're very open to everything you know being wrong.

blog-post-beyond-tdd#j-b-rainsberger-quote-from-comments

"Practicing TDD encouraged me to do things in a certain sequence and style: write only a few lines at a time, get a simple example working by hardcoding some data, remove duplication mercilessly. I now do those things whether I test-drive my code or don't. Even so, I tend to prefer to test-drive, most of the time.

I used to stop myself from writing code without test-driving it. I no longer do. As with Liz, with Bob, with others, that comes from my myriad-plus-hours of practice.

I have taught people the first rule in Liz's bullet list for years: when in doubt, write the test. I call this a Novice Rule (Dreyfus Model) and teach it that way: "The Novice Rule is 'if you're not sure, then write the test; you need the practice.'"

 
podcast-rr-184-what-we-know-about-software-development#tdd-no-quality-or-speed-difference

Now, in the same book where we report on that, there's a meta-study compiling all of the evidence that we had in 2010 about test-driven development. And it turns out that on balance, there is no evidence that it has any impact up or down on the quality of software or the speed with which it's produced.

 
podcast-rr-269-testing#write-a-test-then-throw-it-away-hard-to-get-used-to

At 35: SAM: I was saying there's a really interesting idea, the idea that you can write a test and then throw it away. It took me so long to get comfortable with that.

 
podcast-rr-269-testing#you-can-write-tests-afterwards-if-you-know-tdd

At 16:30 Jessica: "I tend to write in a very testable style even when I'm not writing tests. Which saves my butt when I go back and want to add tests later."

 

Referring Pages

codedtested testing-concept-fast-contexts

People

person-ben-orenstein person-avdi-grimm person-uncle-bob-martin