The project is maintained on GitHub and uses the well-known "nvie" branching workflow. As of version 1.0, version numbers follow semantic versioning while still complying with PEP 440. See the note on semantic versioning in that document.
Please use the GitHub project to submit any requests.
As per the proposed workflow, each feature should be implemented on a dedicated feature branch. The same applies to pull requests (even for proposed bugfixes): keeping each pull request on its own branch makes merging much easier.
Important files/directories:
```
test.pl                # Generates "storable" files.
tests/
 |
 +-- resources/        # Contains the generated "storable" files.
 |
 +-- results/          # Contains the "expected" Python values.
 |
 +-- test.py           # Finds and runs tests.
```
Unit tests are organised in the following way:
- (Only needed to test new data formats.) The file `test.pl` generates test data. It writes the generated files into `tests/resources`, adding subfolders for the used architecture and storable version.
- For each test case generated by `test.pl`, the corresponding "expected" value is contained in the folder `tests/results`. The file has the same name as the storable file, except for a different extension (see the example layout below).
- The test module `tests/test.py` contains the testing code. It searches for "storable" files and reconciles the conversion result with the content of the "expected" Python files. This part is dynamic and need not be touched when adding new tests.
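For illustration, a resulting layout might look something like this (the architecture, version number, and file names are purely hypothetical placeholders, not actual test data):

```
tests/
 +-- resources/
 |    +-- x86_64-linux/                    # architecture subfolder
 |         +-- 2.11/                       # storable version subfolder
 |              +-- nested_hash.storable   # generated by test.pl
 +-- results/
      +-- nested_hash.py                   # matching "expected" Python value
```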
In most cases the only thing needed is to create/modify the "expected" Python
files in `tests/results`. Such a module only needs to provide a variable named
`result` in its global namespace; the actual result from python-storable
will be compared against that value. Looking at existing files may help.
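For example, a minimal "expected" file could look like the sketch below (the file name and the value itself are purely illustrative; real files mirror whatever `test.pl` generated):

```python
# tests/results/nested_hash.py  (hypothetical example)
# python-storable's conversion result is compared against this value.
result = {
    'name': 'example',
    'values': [1, 2, 3],
    'nested': {'key': 'value'},
}
```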
Optionally, for more complex comparison cases, the module can contain a
function called `is_equal` taking two arguments (the storable result and the
"expected" value). The function should return a boolean, with `True`
meaning that both values are considered equal.
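As a sketch, such a module might normalise floating-point noise before comparing; the file name and tolerance below are assumptions for illustration only:

```python
# tests/results/float_values.py  (hypothetical example)
result = {'pi': 3.14159, 'e': 2.71828}

def is_equal(converted, expected):
    # Consider the values equal if every key matches and the floats
    # only differ by rounding noise.
    if set(converted) != set(expected):
        return False
    return all(abs(converted[k] - expected[k]) < 1e-6 for k in expected)
```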
I highly recommend using pytest to run the tests, for several reasons:
- Unit tests are dynamically generated and there is no runner included. pytest comes with its own, really useful runner.
- Depending on the bug/error, the console output can become barely readable due to the sheer number of tests. pytest helps here with better stdout handling and `--exitfirst`.
- pytest can easily limit the executed tests using `-k`.
Example command (assuming a virtualenv in `./env`):

```
./env/bin/pytest tests/test.py
```
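During debugging, the options mentioned above can be combined with the same runner; the test name pattern here is only a placeholder:

```
./env/bin/pytest tests/test.py -k nested_hash --exitfirst
```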