Discussion about this post

Philip Heath:

There is an important aspect of this that I rarely, if ever, see talked about: the thing we are testing places a ceiling on how effective our testing efforts can be. Have you ever heard the complaint, "I can't run my tests locally because I need a working version of X in order to test the business logic," where X is some external dependency such as a database or message queue? Software that suffers from this problem lacks testability, and that lack imposes limits no testing technique, tool, or strategy can overcome. The code has to be written in a way that clearly separates business logic from implementation details such as those just mentioned.
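To make that separation concrete, here is a minimal sketch. Everything in it is hypothetical (the `Order` domain, the `OrderRepository` abstraction, the discount rule); the point is only that the business logic depends on an interface, so locally it can be tested against an in-memory fake instead of a real database.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Order:
    order_id: str
    total: float

class OrderRepository(Protocol):
    """Abstraction over storage. Production might back this with a
    database; the business logic never needs to know."""
    def get(self, order_id: str) -> Order: ...
    def save(self, order: Order) -> None: ...

def apply_discount(repo: OrderRepository, order_id: str, percent: float) -> Order:
    """Pure business logic: no connection strings, no SQL, no queues."""
    order = repo.get(order_id)
    order.total = round(order.total * (1 - percent / 100), 2)
    repo.save(order)
    return order

class InMemoryOrders:
    """A dict-backed fake is enough to test the rule locally."""
    def __init__(self) -> None:
        self._orders: dict[str, Order] = {}
    def get(self, order_id: str) -> Order:
        return self._orders[order_id]
    def save(self, order: Order) -> None:
        self._orders[order.order_id] = order

repo = InMemoryOrders()
repo.save(Order("A1", 100.0))
assert apply_discount(repo, "A1", 10).total == 90.0
```

Because `apply_discount` only sees the `OrderRepository` protocol, the "I need a working X" complaint disappears for this slice of the code.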

I would also add a nuance to brittle tests. They happen because the setup, verify, and teardown steps of tests are tightly coupled to how the code is implemented - often the UI is the only way to create test data. But what if developers treated testability as a first-class requirement, met by providing a dedicated testing API that encapsulates the details of setup, verify, and teardown away from the tests themselves? If being testable is a core part of the software, then the testing API is maintained as the implementation details change. With this kind of software, tests are resilient: a failure means a requirement is no longer working as expected, rather than the more likely explanation today, which is that the test is just flaky.

Wouldn't it be great to test software that has high testability?

