As I have also outlined in destroy-all-software: in Destroy All Software #42, after writing a suite of tests, Gary does by hand what heckle does automatically. He changes each conditional to its opposite, and also forces it to both true and false, to verify that the tests fail.
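A minimal sketch of what that manual mutation pass looks like (the method and conditional here are hypothetical, not from the episode):

def discount_for(user)
  if user.premium?   # mutations to try by hand, one at a time:
                     #   if !user.premium?   -- flip the conditional
                     #   if true             -- force it true
                     #   if false            -- force it false
    0.2
  else
    0.0
  end
end

If the suite stays green under any of those mutations, a test is missing or too loose.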
I've started to do this too, but unlike Gary's approach, I try to do that conditional checking both manually and programmatically. I will find a way to DRY up the code as much as possible so that I can really exercise the one thing that should make the difference, and prove that the code is actually working correctly for that reason.
An extremely simple example might be:
describe "team control affecting attributes used" do
it "uses the extended attributes when team has full control" do
TeamSetting.set(team.id, :has_full_control, true)
expect_uses_the_extended_attributes
end
it "does not use the extended attributes when team does not have full control" do
TeamSetting.set(team.id, :has_full_control, false)
expect_does_not_use_the_extended_attributes
end
end
I exercise both the positive and negative cases, and keep the code in the spec that causes the change in outcome as small and explicit as possible.
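And for the programmatic half: a minimal sketch of one way the same pair of cases could be generated from a table, reusing the `team` and expect_* helpers assumed above:

describe "team control affecting attributes used" do
  {
    true  => :expect_uses_the_extended_attributes,
    false => :expect_does_not_use_the_extended_attributes,
  }.each do |has_full_control, expectation|
    it "handles has_full_control = #{has_full_control}" do
      TeamSetting.set(team.id, :has_full_control, has_full_control)
      send(expectation) # only the setting and the expected outcome vary
    end
  end
end

The table makes the single varying input impossible to miss, at the cost of slightly more indirect spec code.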
This could also be part of the reason why people like to do the explicit testing-concept-red-green-refactor... forcing the red before writing the code.
We do this systematically to ensure that we are not falling prey to confirmation-bias, as described in article-quick-puzzle-to-test-your-problem-solving-skills: "We're much more likely to think about positive situations than negative ones, about why something might go right than wrong and about questions to which the answer is yes, not no." Making a habit of this type of testing forces you to think about how to make the test fail, and by extension, about exactly why it passes.
This is another area where testing-concept-fast-tests matter. You'll be more likely to do this if it doesn't take more than a few seconds to try the negative case.