Friday, March 30, 2012

Why Performance Matters for Testing

While speed certainly matters in a production/e-commerce environment, I think it is also a significant factor for unit testing and especially TDD. My tests haven't been tuned and certainly aren't the fastest in the world, but my frameworks test themselves in around 3 seconds, using at present pretty much exactly 1000 tests.

That level of performance makes testing qualitatively different from having tests that run in 7 minutes (down from 15 after the serious performance tuning described in the article), or in 5 minutes down from 10. It means running the unit tests can be a normal part of the edit-compile-run cycle, rather than a separate activity, supporting the idea that the tests are simply an intrinsic part of the code.
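As a minimal sketch of what that looks like in practice, the build step below refuses to finish unless the suite passes, so the tests run on every compile rather than as a separate chore. The "make all" target and the ./runtests binary are assumptions for illustration, not the actual build setup behind this post.

    import subprocess
    import sys

    def run(cmd):
        # Echo the command, then run it and return its exit status.
        print("$", " ".join(cmd))
        return subprocess.call(cmd)

    def build_and_test():
        # Compile first; bail out immediately on a broken build.
        if run(["make", "all"]) != 0:
            sys.exit("build failed")
        # A ~3 second suite is cheap enough to run on every single build.
        if run(["./runtests"]) != 0:
            sys.exit("tests failed")
        print("build + tests OK")

    if __name__ == "__main__":
        build_and_test()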

These are not lightweight tests: they set up and tear down PostScript interpreters, for example, and run PDF and PostScript documents through text extraction engines. They do, however, run almost exclusively in memory, and mostly in a compiled language.
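As a rough illustration of that fixture style, here is a toy unittest sketch in the same spirit: a heavyweight-looking interpreter fixture that is built and torn down per test, yet lives entirely in memory. The InMemoryInterpreter class is hypothetical, not code from the frameworks described above.

    import unittest

    class InMemoryInterpreter:
        """Stand-in for a heavyweight fixture such as a PostScript
        interpreter; built entirely in memory so setup stays cheap."""
        def __init__(self):
            self.stack = []

        def run(self, program):
            # Toy stack machine: integer tokens are pushed,
            # 'add' pops two values and pushes their sum.
            for token in program.split():
                if token == "add":
                    b, a = self.stack.pop(), self.stack.pop()
                    self.stack.append(a + b)
                else:
                    self.stack.append(int(token))
            return self.stack[-1]

    class InterpreterTest(unittest.TestCase):
        def setUp(self):
            # A fresh interpreter per test: full setup/teardown,
            # but no disk or network, so it stays fast.
            self.interp = InMemoryInterpreter()

        def test_addition(self):
            self.assertEqual(self.interp.run("3 4 add"), 7)

    if __name__ == "__main__":
        unittest.main()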

Even 3 seconds is still too long a delay, though: feedback should be instantaneous to give a true interactive programming experience, or at least not get in the way of that experience. I saw a nice approach to this at the Hasso Plattner Institute, which used code coverage analysis to interactively run tests sorted by relevance to the code being edited (in Smalltalk). A simpler approach might be to just run the unit tests in the background while editing.
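A hedged sketch of the coverage-based selection idea: given a precomputed map from each test to the source files it covers (in practice produced by a coverage tool such as coverage.py), the tests touching the edited file are run first, most narrowly focused first. The contents of coverage_map and the file names are invented for illustration.

    # Map from test name to the set of source files it exercises,
    # as recorded by a prior coverage run (data here is made up).
    coverage_map = {
        "test_lexer":   {"lexer.py"},
        "test_parser":  {"lexer.py", "parser.py"},
        "test_codegen": {"parser.py", "codegen.py"},
    }

    def tests_by_relevance(edited_file):
        """Return tests that touch the edited file, most specific first."""
        relevant = [t for t, files in coverage_map.items()
                    if edited_file in files]
        # Fewer covered files => more focused on this file => run earlier.
        return sorted(relevant, key=lambda t: len(coverage_map[t]))

    print(tests_by_relevance("parser.py"))  # ['test_parser', 'test_codegen']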

1 comment:

  1. Holger Hoffstätte, March 30, 2012 at 2:06 PM

    Some guy called Kent Beck had this idea too: http://junitmax.com/ :-)
    Unfortunately I don't think it's used much; behind-the-scenes "magic" still seems to "scare" many people. Also, many people's "unit" tests are actually functional/integration/whatever tests and often don't even run on the dev's machine - been there, seen that: people just commit random crap and let the CI server "sort it out". Anti-social behaviour at its best...
