talk-simulation-testing

https://www.youtube.com/watch?v=N5HyVUPuU0E

From the description of the video:

Testing is not about proving a system is correct. It's a search problem. We look for paths through state space that result in errors. Unit testing explores a tiny subset of possible pathways. Even 100% unit test coverage doesn't guarantee 100% state space coverage.

Scripted automated tests are only as rigorous as the imagination of your most devious tester.

We can improve our results with controlled randomness. We simulate inputs to the system under test, using randomness to broaden our search of the state space. At the same time, we control the randomness to ensure our tests are repeatable and that we can verify when bugs are fixed.
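A minimal sketch of controlled randomness in Python (the action names are hypothetical, not from the talk): seeding a private RNG makes the "random" input stream fully repeatable, so a failing run can be reproduced and re-run after a fix.

```python
import random

def generate_actions(seed, n=10):
    """Generate a repeatable stream of simulated user actions.

    The seed fully determines the sequence, so the same seed can be
    replayed to reproduce a failure and to verify a fix.
    (Hypothetical action names, for illustration only.)
    """
    rng = random.Random(seed)  # private RNG: independent of global state
    actions = ["login", "browse", "add_to_cart", "checkout", "logout"]
    return [rng.choice(actions) for _ in range(n)]

# Same seed, same "random" script:
assert generate_actions(42) == generate_actions(42)
```

Running with many different seeds broadens the search of the state space, while each individual seed remains a deterministic, repeatable test.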

This talk will introduce the structure of simulation testing as a general technique. We will briefly discuss the Simulant open source framework as an instantiation of these ideas.


At 6:30 simulation testing is a subset of property-based testing

At 7:45 famous inputs to property-based testing frameworks; for integers these are 1, -1, 0, int max, and int min
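A sketch of how a generator might mix those famous inputs with random ones. The values 1, -1, 0, int max, and int min come from the talk; the 64-bit bounds are an assumption for illustration (Python's own integers are unbounded).

```python
import random

# "Famous" inputs that property-based frameworks try first, because
# bugs cluster at boundaries. 64-bit bounds are assumed here.
INT_MAX = 2**63 - 1
INT_MIN = -(2**63)
FAMOUS_INTS = [1, -1, 0, INT_MAX, INT_MIN]

def int_inputs(rng, n):
    """Yield the famous values first, then random ones to widen the search."""
    yield from FAMOUS_INTS
    for _ in range(n - len(FAMOUS_INTS)):
        yield rng.randint(INT_MIN, INT_MAX)

values = list(int_inputs(random.Random(0), 20))
assert values[:5] == FAMOUS_INTS
```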

At 10:15 when we move out to the scale of full systems, that's simulation (property-based testing applied at the system level)

At 10:45 generate a repeatable script of inputs that feeds in the non-determinism
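One way to sketch that idea (the event shape is invented here, not Simulant's actual format): do all the random generation up front, producing a concrete script that can be saved, replayed deterministically, and attached to a failure report.

```python
import json
import random

def make_script(seed, n=5):
    """Pre-generate all non-deterministic inputs as a concrete script.

    The randomness happens up front; a test run just replays the script,
    so the run itself is deterministic and repeatable.
    (Hypothetical event shape, for illustration only.)
    """
    rng = random.Random(seed)
    return [
        {"t": i, "user": rng.randrange(3), "op": rng.choice(["read", "write"])}
        for i in range(n)
    ]

def replay(script, system):
    """Feed the scripted inputs to the system under test, in order."""
    for event in script:
        system(event)

script = make_script(seed=7)
assert script == make_script(seed=7)   # same seed, same script
saved = json.dumps(script)             # serializable: attach to a report
assert json.loads(saved) == script
```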

At 11:30 he references talk-testing-distributed-systems-with-simulations (which was in the same year at Strangeloop)

At 13:30 it's a search problem: you are searching the state space for problems

At 30:50 they record a lot of information about each test, but they write that information directly to the local disk and only update the actual database with the run information at the end. That avoids slowing the test down by contacting the database throughout the run.
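A rough sketch of that recording pattern in Python (the `upload` callback stands in for whatever database client is actually used; the class and event shape are invented here):

```python
import json
import os
import tempfile

class RunRecorder:
    """Buffer per-test observations on local disk; upload once at the end.

    Appending locally keeps the hot path fast; the database is only
    contacted after the run completes, in one bulk write.
    """
    def __init__(self, path):
        self.path = path
        self._fh = open(path, "w")

    def record(self, event):
        self._fh.write(json.dumps(event) + "\n")  # cheap local append

    def finish(self, upload):
        self._fh.close()
        with open(self.path) as fh:
            events = [json.loads(line) for line in fh]
        upload(events)  # single bulk write to the real database
        return events

# Usage: record during the run, flush once afterwards.
path = os.path.join(tempfile.mkdtemp(), "run.log")
rec = RunRecorder(path)
rec.record({"step": 1, "ok": True})
rec.record({"step": 2, "ok": False})
stored = []
rec.finish(stored.extend)  # stand-in for the real DB client
assert stored == [{"step": 1, "ok": True}, {"step": 2, "ok": False}]
```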

At 33:00 the validations are just queries over all of the data generated by the test run. When a validation fails it just stores a data structure that describes the failure and (I'm guessing) the parameters that led to it. That can then be used to create console output, graphs, or whatever.
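A sketch of a validation-as-query in Python (the event shape and check name are invented for illustration): it returns data structures describing failures rather than raising, so downstream tooling can render them however it likes.

```python
def validate_no_negative_balances(events):
    """A validation is just a query over the recorded run data.

    Returns a data structure per failure, including the event that
    led to it, instead of raising. (Hypothetical event shape.)
    """
    return [
        {"check": "no-negative-balance", "event": e}
        for e in events
        if e.get("balance", 0) < 0
    ]

events = [{"user": 1, "balance": 10}, {"user": 2, "balance": -5}]
failures = validate_no_negative_balances(events)
assert len(failures) == 1
assert failures[0]["event"]["user"] == 2
```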

At 36:00 when you run simulation tests you need to decide how long to run them for. Unlike other tests they can run for a short or a long time; it depends on how you weigh quick results against confidence. One approach is a shorter suite that developers run before checkin, and a longer suite that runs offline in CI.

At 36:35 he calls the thing that speeds up the clock for simulation a clock multiplier. It allows you to turn up the intensity of your test.
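A minimal clock-multiplier sketch in Python (an assumption about the mechanism, not Simulant's implementation): simulated time advances faster than wall time by the multiplier, so the same scripted activity runs at higher intensity.

```python
import time

class SimClock:
    """Simulation clock with a multiplier (illustrative sketch).

    A multiplier of 60 compresses an hour of simulated activity into
    one minute of wall-clock time, turning up the test's intensity.
    """
    def __init__(self, multiplier=1.0):
        self.multiplier = multiplier
        self._start = time.monotonic()

    def now(self):
        """Simulated seconds elapsed since the clock started."""
        return (time.monotonic() - self._start) * self.multiplier

    def sleep(self, sim_seconds):
        """Block until `sim_seconds` of simulated time have passed."""
        time.sleep(sim_seconds / self.multiplier)

clock = SimClock(multiplier=60.0)
clock.sleep(6.0)  # 6 simulated seconds, about 0.1s of wall time
```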

Referring Pages

data-architecture-glossary tag-simulation testing-concept-simulation-testing testing-concept-famous-inputs

People

person-michael-nygard