Template talk:Convert/testcases/sigfig

From Wikipedia, the free encyclopedia

Where are the actual test cases for the live template?


I wanted to add a failing test case, but I cannot find any actual test cases here.

The failing test case is {{convert|1|to|3|m|ftin|abbr=on|sigfig=1}}, which should give "1 to 3 m (3 to 10 ft)" but currently gives "1 to 3 m (3 ft 3 in to 9 ft 10 in)". It's a bug related to significant figures, so I looked here. I tried looking in other places too, but I couldn't find the actual (and preferably automated) test cases.

By actual test cases I mean ones stating the input, such as "{{convert|1|to|3|m|ftin|abbr=on|sigfig=1}}", together with the literal expected output as a plain-text string, such as "1 to 3 m (3 to 10 ft)".

By automated test cases I mean that each test case automatically gets a green check mark (or similar) if it passes, or a red cross (or similar) if it fails, together with a total failure count, so you get a quick overview of how many failures there currently are (as in the https://wiki.riteme.site/wiki/Module:UnitTests framework; example: https://wiki.riteme.site/wiki/Module_talk:Urltowiki/testcases).
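For illustration, a test page built on that framework might look something like the following. This is only a sketch under assumptions: it assumes the Module:UnitTests framework as found on the English Wikipedia (whose preprocess_equals method renders wikitext and compares it to an expected string), and the test function name test_sigfig_range is my own invention, not an existing page.

```lua
-- Hypothetical sketch of a Module:UnitTests-based test page for {{convert}}.
-- Assumes the English Wikipedia's Module:UnitTests framework.
local p = require('Module:UnitTests')

function p:test_sigfig_range()
    -- preprocess_equals(wikitext, expected) renders the wikitext and
    -- checks that the result equals the expected plain-text output.
    -- This is the failing case described above: currently the template
    -- produces "1 to 3 m (3 ft 3 in to 9 ft 10 in)" instead.
    self:preprocess_equals(
        '{{convert|1|to|3|m|ftin|abbr=on|sigfig=1}}',
        '1 to 3 m (3 to 10 ft)')
end

return p
```

Invoking such a page from its talk page (via {{#invoke:…|run_tests}}) would then render the familiar table of green check marks and red crosses.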

Where are these actual test cases for the {{convert}} template (or module) placed?

I would think they should be at https://wiki.riteme.site/wiki/Module:Convert/testcases, but that just redirects to https://wiki.riteme.site/wiki/Template:Convert/testcases#Sandbox_testcases, which does refer to automated test cases for four sandboxes (such as https://wiki.riteme.site/wiki/Template_talk:Convert/testcases/sandbox1), but not to any test cases for the actual live template.

Where are the automated test cases for the live template? I would expect there to be an authoritative master test suite for the live template, showing what output is currently expected and, when the template falls short, how it currently fails to deliver that expected output.

If there deliberately is no test suite for the live template, because a suite that keeps failing until a bug is fixed is considered undesirable (it would make it harder to notice when a new feature introduces a regression), then perhaps the tests that are known to fail could be placed in a separate section for known bugs.

--Jhertel (talk) 12:08, 5 June 2020 (UTC)