Textpattern CMS support forum
Tag-based unit testing
There’s been a GitHub issue about unit testing open for a while now. While doing this on the admin side and catching Ajax stuff requires more thought than conventional unit testing (as outlined in the comments), there’s nothing to stop us doing tag unit testing on a fresh install right now.
All it really needs(!) is someone to come up with a single article or Form (or series thereof) that contains every tag in it, in as many combinations as possible. This includes:
- Single tags.
- Container tags.
- Tag nesting.
- Core attributes.
- Custom fields (this will unfortunately require changes to a stock install, as CFs will need to be assigned to an article to test).
- Shortcodes and short-tags.
- CSS and other content (files, images, links) that will also require known-content to be present in the DB.
The notion is:
- Install latest dev/beta/whatever.
- Load the test article(s) and Form(s).
- View the test article(s) on an empty Page template (so we don’t get any unexpected page furniture output).
- Grab the HTML produced.
- Compare it to a “known good” hunk of HTML obtained from a previous, verified run of that same article.
If the two chunks of HTML match, no regressions have been introduced. The new article and its output then become the new baseline. Next time we change any core tags, we run the test again and compare the output to check we haven’t introduced any bugs.
If we add new tags then we need to generate a new baseline (write more test cases in our big article) once we’ve verified that the output is as we expect.
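The compare-and-promote loop described above could be sketched like this (the file names and the source of the rendered HTML are hypothetical placeholders, not anything Textpattern provides):

```python
import difflib
from pathlib import Path

def check_against_baseline(rendered_html: str, baseline_path: Path) -> bool:
    """Compare freshly rendered HTML to the 'known good' baseline.

    Returns True if no regression was found. On a first run (no baseline
    file yet) the rendered output is promoted to become the new baseline.
    """
    if not baseline_path.exists():
        # First verified run becomes the gold master.
        baseline_path.write_text(rendered_html)
        return True
    baseline = baseline_path.read_text()
    if baseline == rendered_html:
        return True
    # Show where the output drifted from the gold master.
    diff = difflib.unified_diff(
        baseline.splitlines(), rendered_html.splitlines(),
        fromfile="baseline", tofile="current", lineterm="",
    )
    print("\n".join(diff))
    return False
```

Only after manually verifying that new output is correct would the baseline file be overwritten to form the next gold master.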
Steve (netcarver) and I did this via a YAML file when we were testing Textile. We had a set of 80+ Textile tests that were run through a simple bootstrap script that loaded up a Textile instance, called ->textileThis() on all the tests, and captured the output.
It then compared the results to our gold master HTML, flagged any discrepancies (colour coded green, amber or red based on the severity of the violation), and showed the differences between what we got and what we expected. It also displayed a little table at the bottom summarising the number of tests that passed and failed.
Armed with this, we could be sure that any changes to the parser, surrounding functions, or tag code didn’t inadvertently break anything else.
I’d love to do something similar for Textpattern tags.
This, I understand, is an uphill task for one person and the barrage of tags we have. But as a community, surely we could come up with a series of tests and collate them somewhere, then compile them into a single resource and build a tiny test suite around it. Anyone could then download that periodically – perhaps it also includes a database with the necessary tests and content in it – and verify that everything works as it should.
Let’s say this was desirable (and I think it is), how would we proceed? Where should we collect these tests and package them up?
Perhaps it would be nice to put them in a dedicated directory in the main dev GitHub repo that is excluded when we make a release? Or in the .github directory where some of our other tools reside? Or a separate repo entirely, just for testing, with pre-packaged content? Open to ideas on the best way to proceed.
The point is, having this resource available would be a huge benefit to verify our core wranglings. The fact it only tests the public-facing code is fine right now – we can worry about the admin-side later.
It doesn’t even have to be a full Txp installation. We could build a simple command-line wrapper that loads up a parser and runs the tests, then does a diff against the gold master. Having it run in a browser is a nice-to-have because we could prettify it.
Maybe it could also run in “benchmark mode” to output the time it took, or repeat the tests N times – handy for optimization work, as we could use it to detect where the bottlenecks were.
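The benchmark idea could be as simple as timing N repetitions of whatever render step the harness uses (render_fn here is a stand-in for the real parser call, which doesn’t exist yet):

```python
import time

def benchmark(render_fn, runs: int = 100) -> dict:
    """Run the test render N times and report total and average wall time."""
    start = time.perf_counter()
    for _ in range(runs):
        render_fn()
    elapsed = time.perf_counter() - start
    return {"runs": runs, "total_s": elapsed, "avg_ms": elapsed / runs * 1000}
```

Comparing the per-run average before and after a parser change would show whether an optimization actually helped.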
I think we should start doing this. Step 1 is to figure out how to build it or to set up a test environment with known content that is accessible to everyone. Ideas welcome.
Then I’d like to call upon everyone in the community to please submit tag tests – as many as you can think of – that we can add to a huge list that can be used to test and verify the operation of Txp. Everything from the simplest tags with no attributes to more complex tags that stretch the parser. If each tag test could be submitted with its expected HTML output, we could marry the two and start to compile an ultimate tag testing suite.
Whaddya reckon? Thanks in advance for any input.
Re: Tag-based unit testing
One way to do this and keep it compartmentalised would be to introduce a Form for each test/tag. Each Form could have its tags first and its expected output in a second block in the same Form.
If we put all such Forms in a single group (say, Section or something) and confined shortcodes to other Form types (e.g. Article), then we could simply write a Page template to load up all ‘Section’ Forms and run through them one by one, executing the first half and comparing it to the known-good HTML in the second half of each Form, spitting out the results of the comparison.
Would that work?
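That two-halves-per-Form idea could be sketched as follows (the separator string is an arbitrary assumption – a real Form could use any agreed marker – and the tag-execution step is stubbed out):

```python
SEPARATOR = "<!-- expect -->"  # hypothetical marker between the two halves

def run_form(form_body: str, execute_tags) -> bool:
    """Split a Form into its tag half and its expected-HTML half,
    execute the first half, and compare it with the second."""
    tag_half, expected = form_body.split(SEPARATOR, 1)
    actual = execute_tags(tag_half.strip())
    return actual == expected.strip()
```

A Page template looping over all ‘Section’ Forms would just call this once per Form and tally the booleans.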
EDIT: if the test suite was a separate repo, people could make pull requests to add or amend tests and expected output, which could then be compiled into the final repo using a CI hook ready for anyone to download and run… maybe?
Last edited by Bloke (2018-04-02 09:26:34)
Re: Tag-based unit testing
As of right now, we have a shiny new Unit Testing framework available for the Textpattern tag language only.
Please visit the official unit-test repo. I’ll add some usage instructions there in due course but here’s the lowdown:
- Clone the repo and merge it with a (preferably empty) Textpattern installation of your choosing. This will add/overwrite the testing files (images currently) and tag-test theme in the setup directory.
- Install Textpattern as normal.
- In a section of your choosing (either a new one or an existing one), assign the tag-test theme.
- Visit that section in the front end of your site to run any tests you’ve set up.
If you are including this repo in an existing installation, you can copy the textpattern/setup/themes/tag-test directory to your /themes/ directory and import the theme from the Textpattern back-end, OR just change your Theme directory pref to point to textpattern/setup/themes.
Tests are defined (currently) as forms of type ‘Section’. It’d be nice to use a dedicated type, but as it’s not possible to create these on the fly (yet) we’ll make do with that type for now.
A test comprises a YAML document in the following format. The spacing is vital: no spaces before a test name, two spaces before a directive, and four spaces before input/output content. The pipe means multi-line content follows:

Name of test for info purposes:
  input: |
    Some <txp:.. att1="value" att2="value" boolean /> etc
  expect: |
    Expected HTML output
Name of another test for info purposes:
  input: |
    A different <txp:.. att1="value" att2="value" boolean /> etc
  expect: |
    Expected HTML output of second test
...
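Those indentation rules are strict enough that the structure can be recovered with a small stdlib-only parser – a sketch for illustration only, not the framework’s actual loader:

```python
def parse_tests(doc: str) -> dict:
    """Parse the flat test format: unindented test names, two-space
    directives ("input: |", "expect: |"), four-space content lines."""
    tests = {}
    name = key = None
    buf = []

    def flush():
        nonlocal buf
        if name is not None and key is not None:
            tests[name][key] = "\n".join(buf)
        buf = []

    for line in doc.splitlines():
        if not line.strip():
            continue                      # skip blank lines
        if line.startswith("    "):       # four spaces: block content
            buf.append(line[4:])
        elif line.startswith("  "):       # two spaces: a directive
            flush()
            key = line.split(":", 1)[0].strip()
        else:                             # no indent: a new test name
            flush()
            name = line.rstrip().rstrip(":")
            tests[name] = {}
            key = None
    flush()
    return tests
```

The result maps each test name to its input and expect strings, ready for a runner to execute and compare.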
In the repo so far are two test Forms: one for images and one for site tags, each containing a couple of individual tests. When you visit the section, it grabs all Forms of type ‘Section’ and runs through them one by one, executing the tags and comparing the output to the expected output.
Run an individual test, or a list of tests, by adding ?name=test-form-name1,test-form-name2,... to the URL; it will execute just those Forms in the order you specify. Otherwise, it’ll run them all one after the other. It reports successes and failures as it goes (and if a test fails, it shows a rudimentary diff) and gives a final report at the end on the number of tests passed/failed.
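The ?name= filtering amounts to splitting the parameter and preserving the requested order – a sketch, with made-up Form names:

```python
from typing import Optional

def select_tests(all_forms: dict, name_param: Optional[str]) -> list:
    """Return (name, body) pairs: the requested Forms in the order given,
    or every Form when no ?name= parameter is supplied."""
    if not name_param:
        return list(all_forms.items())
    wanted = [n.strip() for n in name_param.split(",") if n.strip()]
    return [(n, all_forms[n]) for n in wanted if n in all_forms]
```

Unknown names are silently skipped here; a real runner might prefer to flag them.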
The output needs tidying up and it might be nice to display a list of Forms it’s found so you can click on them to execute them. Pull requests are welcome :)
The idea is that we can build up a gold set of tests that we can run from time to time (or even automate) that are staged on a known-good database and filesystem with known content. Then, when we update the codebase during dev, we can push the new core files to the repo, log in to execute any upgrade, and run the tests again. This will help us to feel more confident that we’ve not introduced any backwards-incompatible changes to the front-end processing.
From time to time, as the tag language evolves, new tags are added, and attributes are changed, we’ll add or amend tests. And as new features come online we might alter the content.
The idea is that we’ll also make available the database .sql file so you can import it into your environment as a baseline and run your own tests there if you wish.
Now, this is where you come in.
Please, if you have any cool tags that would stretch the system – or even simple ones that do things like set variables and output stuff – submit them here or via Pull Request so they can be added to the test suite. The more tests we have, the more confident we can be that things work.
What we need:
- A name for the test
- The input (tag, HTML, etc)
- The expected output (HTML, primarily)
- Any resources (content) or particular setup (sections, categories, prefs, etc) that need to be set in order for the test to succeed
Tests don’t have to always work. Remember that testing tags under error conditions is still valid, so if a test outputs some failure or is expected to emit an error then by all means submit it and the expected (error) output so we can check that things work out of bounds. Examples might be tests that use incorrect values, or bogus attributes that we expect to throw “unknown attribute” warnings, etc.
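For instance, an out-of-bounds test might look like this (the tag and attribute are illustrative only, and the real warning markup should be captured from an actual run rather than guessed):

```yaml
Unknown attribute triggers a warning:
  input: |
    <txp:site_name bogus="whatever" />
  expect: |
    (paste the real warning HTML here, captured via ?show=output)
```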
One thing that might lob a spanner in the works is translations. For now, let’s stick with English, please, but I’m conscious that we do need to supply a mechanism in the test files to cater for different language output. Watch this space. I’ll also see if we can bolster the test YAML format to cater for things such as setting prefs specific to a test (or globally), or adding/setting content, so that a test can set up its own environment. That then lends itself to more automated testing.
Next steps also include a way to execute the tests via the CLI so we can script the hell out of this and run it as part of our CI pipeline. Any help with this in the form of PRs or ideas gratefully appreciated.
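Whatever shape the CLI wrapper eventually takes, the key contract for a CI pipeline is a non-zero exit code on any failure. A sketch of that outer shell, with test discovery and execution stubbed out:

```python
import sys

def run_suite(results: dict) -> int:
    """Print a summary and return a CI-friendly exit code.

    `results` maps test names to booleans (True = passed).
    """
    failed = [name for name, ok in results.items() if not ok]
    print(f"{len(results) - len(failed)} passed, {len(failed)} failed")
    for name in failed:
        print(f"FAIL: {name}")
    return 1 if failed else 0

if __name__ == "__main__":
    # Hypothetical results; a real runner would execute the test Forms.
    sys.exit(run_suite({"images": True, "site": True}))
```

CI systems interpret the exit status directly, so the pipeline step fails exactly when any test does.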
Thank you in advance for any tests and assistance you can come up with. The more the merrier.
Last edited by Bloke (2021-03-17 14:02:34)
Re: Tag-based unit testing
I’ve just added some more defensive code to the framework, so if you supply invalid YAML it’ll try to detect that and show ‘invalid test’ instead of silently ignoring it.
Also, to aid debugging, you can add a URL parameter ?show= with any of these four values (comma-separate them for sanity if you use more than one):
- input: To show the test input read from the YAML doc.
- expect: To show the expected output of the test, as read from the YAML doc.
- output: To show the actual HTML of the test after parsing.
- parsed: To show the actual parsed output, rendered as Txp sees it.
Hopefully those will also help when designing tests, as you can supply a test without expected output. The test will fail in that situation, but if you supply ?show=output in the URL it will display what the output of the tag is. If that works and you’re satisfied it displays as intended, you can copy and paste that output into the expect: | rule so it can be used as expected output.
Re: Tag-based unit testing
New feature: ability to add/edit prefs, alter (on-page) sections, and switch language (provided it’s installed).
See the README for details and examples. It’s a bit of a cheat right now because it only changes local copies of global variables so they can be restored after each set of tests runs (i.e. on each change of Form). This means they have limited use, due to various caching features of certain tags, which may well bypass local copies of globals.
But they may still prove useful until such time as we bolster the framework to allow permanent changes to the database at runtime. That’ll be done through some extension of the new env: system. Stay tuned.