At 8:30 he says that isolated tests are about design. He doesn't mention speed at all, even though I think speed is one of their big advantages.
To cover all the scenarios with integration tests, you need a tremendous number of them.
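To put rough numbers on it (my own illustration, not figures from the talk): with three layers that have, say, 5, 4, and 3 branches each, covering every end-to-end combination takes 5 × 4 × 3 = 60 integrated tests, while isolated tests need only 5 + 4 + 3 = 12, plus a few contract tests at the seams. Add one more branch anywhere and the integrated count grows multiplicatively; the isolated count grows by one.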
"program to interfaces"
cost of change curve is a fallacy (but also sort of true)
Basically this talk is the opposite of this blog post, which I commented on:
The combinatorial explosion which results from integrated tests means that you'll never be able to test everything. Happy paths are, as you say, the only reasonable thing to test at this level. It's easy to fall into the trap of identifying an edge case via integrated tests and deciding to add a regression test to cover that case, deluding yourself into believing that you have better coverage than before, whereas really you've covered only one more of a virtually limitless number of combinations.
Integrated tests provide no interface guidance. Going to an extreme, even if you had excellent acceptance test coverage, the code could still be a mess under the covers. I wonder how much this matters if you're already an ace coder and don't need the design pressure that communication and contract tests give you.
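For my own notes, here is roughly what I understand those two kinds of tests to be, reusing the hypothetical PaymentGateway/Checkout sketch above and assuming JUnit 5 and Mockito (names and code are mine, not the talk's):

    import static org.junit.jupiter.api.Assertions.*;
    import static org.mockito.Mockito.*;

    import org.junit.jupiter.api.Test;

    // Collaboration test: does Checkout ask its gateway the right question?
    class CheckoutCollaborationTest {
        @Test
        void chargesTheGatewayForTheOrderAmount() {
            PaymentGateway gateway = mock(PaymentGateway.class);
            when(gateway.charge("tok_fake", 500L)).thenReturn(true);

            assertEquals("PAID", new Checkout(gateway).placeOrder("tok_fake", 500L));
            verify(gateway).charge("tok_fake", 500L); // the right question was asked
        }
    }

    // Contract test: can every implementation of the interface answer that question?
    // Each real or fake implementation gets a subclass that supplies itself here.
    abstract class PaymentGatewayContractTest {
        protected abstract PaymentGateway createGateway();

        @Test
        void answersWithADefiniteYesOrNo() {
            assertDoesNotThrow(() -> createGateway().charge("tok_fake", 500L));
        }
    }

The contract tests are what keep the stubbed answers in the collaboration tests honest: if no real gateway can behave the way the mock was told to, the collaboration test was checking against a lie.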
(in fact, J.B. Rainsberger, the talk's author, himself said this:
"Practicing TDD encouraged me to do things in a certain sequence and style: write only a few lines at a time, get a simple example working by hardcoding some data, remove duplication mercilessly. I now do those things whether I test-drive my code or don't. Even so, I tend to prefer to test-drive, most of the time.
I used to stop myself from writing code without test-driving it. I no longer do. As with Liz, with Bob, with others, that comes from my myriad-plus-hours of practice.
I have taught people the first rule in Liz's bullet list for years: when in doubt, write the test. I call this a Novice Rule (Dreyfus Model) and teach it that way: "The Novice Rule is 'if you're not sure, then write the test; you need the practice.'")