
Wikipedia:Lua unit testing

From Wikipedia, the free encyclopedia

This essay describes issues in testing units of Lua script as used in Wikipedia articles, or in support of analysis of Wikipedia operations.

Overview of integration and unit testing


"Unit testing" is related to the larger "integration testing" and to "acceptance testing", which confirms that a product meets the customer's wishes; not every possible option must work, per "YAGNI" (You Aren't Gonna Need It), only those exercised when the module is in actual use by real "customers". After a major upgrade, the whole subsystem typically goes through "regression testing" to ensure that major features still work. If the product will be in wide-scale, "industrial strength" use, it should also undergo "stress testing" to confirm that it can handle a heavy workload, such as testing Lua-based cites against a huge article containing 450 citations or so.

Tactics for unit tests: Returning to unit testing proper, one tactic is self-testing within units, such as a debug-mode switch that runs extra checks inside a unit, along with the usual "parameter validation" which catches and reports improper data sent into the unit. Often, however, there is an external "test harness" of specially written unit-testing routines which call the separate units repeatedly, feeding in data and comparing the results. For screen-oriented output, the unit-test harness often captures the output directly, as if reading the screen data, to check for improper results.
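
As a minimal sketch of these tactics (the module and function names here are hypothetical, not any actual Wikipedia module), a Lua unit might combine parameter validation and a debug-mode switch with a tiny external harness that feeds in data and compares results:

```lua
local p = {}

p.debugMode = false  -- switch on to run extra self-checks inside the unit

-- Hypothetical unit: formats a page count, validating its parameter first.
function p.formatCount(count)
    -- Parameter validation: catch and report improper data sent in.
    if type(count) ~= 'number' or count < 0 then
        error('formatCount: expected a non-negative number, got ' ..
              tostring(count), 2)
    end
    local result = string.format('%d page%s', count, count == 1 and '' or 's')
    if p.debugMode then
        -- Extra self-test in debug mode: output must mention the count.
        assert(result:find(tostring(count), 1, true),
               'formatCount: self-check failed for ' .. count)
    end
    return result
end

-- External test harness: call the unit repeatedly, feeding in data
-- and comparing each result against the expected value.
local cases = {
    { input = 0, expected = '0 pages' },
    { input = 1, expected = '1 page'  },
    { input = 7, expected = '7 pages' },
}
for _, case in ipairs(cases) do
    local got = p.formatCount(case.input)
    assert(got == case.expected,
           string.format('input %d: expected %q, got %q',
                         case.input, case.expected, got))
end

return p
```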

Reporting results from units: Major problems are typically found through two aspects: data values, and logical control flow (in execution through multiple units). To handle the potential complexity, each unit can issue "trace statements" during operation which indicate the logical flow from unit to unit, and also show the values of variables along the way, in case one is accidentally clobbered, such as by misusing one variable to store results intended for another. The traced information is often written to a huge "log file" (or a set of them), because too many details change to view live on-screen, so the log file is re-examined at the end of an extensive run. In some cases, an operational error triggered by a rare combination of parameters is only detected in log files collected over days or weeks, when the unusual combination finally recurs and its details are logged.
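
A minimal sketch of the trace-statement tactic, using hypothetical unit names: trace lines accumulate in a table and are dumped at the end of the run (a Scribunto module could send them to mw.log() instead, so they appear under the edit-preview page):

```lua
local trace = {}  -- accumulates trace lines for the whole run

local function log(unit, message)
    trace[#trace + 1] = string.format('[%s] %s', unit, message)
end

local function unitB(y)
    log('unitB', 'entered with y = ' .. tostring(y))
    return y + 1
end

local function unitA(x)
    log('unitA', 'entered with x = ' .. tostring(x))
    local y = x * 2  -- if y were clobbered here, the next trace line would show it
    log('unitA', 'passing y = ' .. tostring(y) .. ' to unitB')
    return unitB(y)
end

unitA(21)

-- Re-examine the collected trace at the end of the run; print() works in
-- standalone Lua, while a Scribunto module would use mw.log() instead.
print(table.concat(trace, '\n'))
```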

Error detection and reporting in units: Overall, in unit testing, each unit should be checked to ensure that it logs its data properly into the log file (on Wikipedia, often an article's edit-preview page) or maintenance categories, and that it writes its debug trace statements correctly, as nothing is worse than logging the wrong data into a multi-day file, only to be misled (or delayed) by errors in the error reporting itself. The emphasis on testing each unit before integration testing is part of "test-first development", where "a stitch in time saves nine": an error caught early, inside a unit, never generates really bizarre results downstream after being passed into other units (unless those units also check for invalid data, as in "black box" engineering). Where these tactics are not followed, it might take months of trial-and-error changes until problems are pinpointed, or, more likely, the bugs will remain in the system for years, with manual workarounds to compensate, because no one can see how to fix the problems once errors inside a unit have impacted a larger, complex system. -Wikid77 (talk) 08:45, 15 March 2013 (UTC)
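
As a final sketch (again with hypothetical names), a harness can test the error reporting itself: pcall() captures a unit's error message so the test can confirm that the message names the invalid value, before that message is trusted in a multi-day log:

```lua
-- Hypothetical unit: converts a string to an integer year, rejecting bad input.
local function parseYear(s)
    local year = tonumber(s)
    if year == nil or year ~= math.floor(year) then
        error('parseYear: invalid year "' .. tostring(s) .. '"', 2)
    end
    return year
end

-- Good input must pass through unchanged.
assert(parseYear('2013') == 2013)

-- Bad input must fail, and the reported message must name the bad value;
-- pcall() captures the error instead of aborting the test run.
local ok, err = pcall(parseYear, 'March')
assert(not ok, 'parseYear accepted invalid input')
assert(err:find('March', 1, true), 'error message does not name the bad value')
```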

See also
