
Conversation

@sfraczek (Contributor) commented Dec 9, 2025

Problem

Make it easier to review whether failed natlab tests failed only because of a cleanup timeout or other teardown problems.

Solution

Log "TEST CASE PASSED" when natlab tests pass.
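
Roughly, the end of each test body gets the line; a minimal sketch (the real natlab tests and their cleanup are more involved than this):

def test_example():
    try:
        # ... test body: assertions, client interactions, etc. ...
        print("TEST CASE PASSED")  # reached only if the body ran to completion
    finally:
        # cleanup stage that may time out; a failure here no longer hides a passing body
        pass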

☑️ Definition of Done checklist

  • Commit history is clean (requirements)
  • README.md is updated
  • Functionality is covered by unit or integration tests

@mathiaspeters (Contributor) left a comment


I like the goal but I don't like this way of implementing it. There should be a way with pytest to execute code before and/or after a test runs. And I think doing it with some pytest harness thing gives us another benefit: we might be able to include the name of the test in the log output so that if a test has a lot of output, you don't have to scroll up to find out which test just finished

@gytsto
Copy link
Contributor

gytsto commented Dec 10, 2025

> I like the goal but I don't like this way of implementing it. There should be a way with pytest to execute code before and/or after a test runs. And I think doing it with some pytest harness thing gives us another benefit: we might be able to include the name of the test in the log output so that if a test has a lot of output, you don't have to scroll up to find out which test just finished

100% agreed, I'd suggest using something like pytest_sessionstart / pytest_sessionfinish
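
For reference, a minimal sketch of those hooks in a conftest.py (note that they run once per session, not per test; the log messages here are only illustrative):

# conftest.py
import logging

logger = logging.getLogger(__name__)

def pytest_sessionstart(session):
    # Runs once, before collection and before any test executes.
    logger.info("NATLAB TEST SESSION STARTED")

def pytest_sessionfinish(session, exitstatus):
    # Runs once, after the whole run (including teardown) has finished.
    logger.info("NATLAB TEST SESSION FINISHED (exit status %s)", exitstatus)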

@LukasPukenis (Collaborator)

@mathiaspeters @gytsto I suggested this solution.

In NatLab we observe timeouts (180 s) when Client executes its finally.

It takes time to check whether the timeout happened in the test code itself or in the cleanup stage. None of the obvious solutions look clean or quick to implement. As a compromise we agreed to just log a line.

@tomaszklak (Contributor)

> @mathiaspeters @gytsto I suggested this solution.
>
> In NatLab we observe timeouts (180 s) when Client executes its finally.
>
> It takes time to check whether the timeout happened in the test code itself or in the cleanup stage. None of the obvious solutions look clean or quick to implement. As a compromise we agreed to just log a line.

I don't like how this is implemented, and I'm pretty sure we will constantly forget to add this line. If you have investigated this in depth and you are sure that there is no better way than to manually add those logs to each test, then I have one requirement: in the test running script on GitLab, add post-processing of the test log so that the overall test job fails if not all tests have emitted this log. This way, if we forget to add this line, we will not be able to merge such a change.
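
A rough sketch of such a check, assuming the job writes a junit XML report (pytest --junitxml=report.xml) and a plain-text test log; the file paths and the exact marker handling here are assumptions, not the project's actual CI setup:

# check_passed_markers.py (hypothetical helper run by the GitLab job)
import sys
import xml.etree.ElementTree as ET

LOG_FILE = "natlab_test.log"   # assumed path to the captured test output
JUNIT_XML = "report.xml"       # assumed junit report from pytest --junitxml
MARKER = "TEST CASE PASSED"

def count_passed_tests(junit_path):
    # A testcase element without failure/error/skipped children counts as passed.
    root = ET.parse(junit_path).getroot()
    return sum(
        all(case.find(tag) is None for tag in ("failure", "error", "skipped"))
        for case in root.iter("testcase")
    )

def count_markers(log_path):
    with open(log_path, encoding="utf-8") as f:
        return sum(MARKER in line for line in f)

if __name__ == "__main__":
    passed = count_passed_tests(JUNIT_XML)
    markers = count_markers(LOG_FILE)
    if markers < passed:
        print(f"{markers} '{MARKER}' lines for {passed} passed tests - "
              "some tests are missing the log line", file=sys.stderr)
        sys.exit(1)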

@LukasPukenis (Collaborator)

> In the test running script on GitLab, add post-processing of the test log so that the overall test job fails if not all tests have emitted this log. This way, if we forget to add this line, we will not be able to merge such a change.

That's a sensible requirement 👍

@mathiaspeters (Contributor)

Did you try using pytest_runtest_logreport? It's executed after each individual test and gets a TestReport object with which you can print that line like:

if report.when == "call" and report.outcome == "passed":
    # do your logging

Or maybe you already tried that?
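
For completeness, a sketch of how that hook could look in a conftest.py; the marker text and logger name are made up here, while the hook and the TestReport fields are pytest's:

# conftest.py
import logging

logger = logging.getLogger("natlab")

def pytest_runtest_logreport(report):
    # Called for each phase of every test (setup / call / teardown).
    # Reacting only to the "call" phase means a later cleanup timeout
    # cannot hide the fact that the test body itself passed.
    if report.when == "call" and report.outcome == "passed":
        logger.info("TEST CASE PASSED: %s", report.nodeid)

Using report.nodeid would also cover the earlier point about including the test name in the output.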

@LukasPukenis (Collaborator)

When a natlab test case fails, it's useful to quickly identify whether the failure is in the test body or in the cleanup. A cleanup failure is a legit failure; we just want to steer the effort in the right direction without checking the Python test case code and the logs.

There are other options, such as checking for "Cleanup complete" messages in the logs of all the clients involved in the test case. Finding all of them would point to the test body failing (this logic is tightly coupled with Client class usage).

I find the printing approach very explicit and useful at the end of the test body, even though the implementation is not as graceful.

@mathiaspeters (Contributor)

I get the problem now. Let's go with this solution and the extra check that Tomasz suggested.

The nat-lab tests would fail in cleanup due to a timeout.
It was hard to figure out whether the test logic succeeded.
With this print it will be clear that the test case succeeded.
@sfraczek force-pushed the log-natlab-test-passed branch from 5dda42d to f5b8f1b on December 17, 2025 13:34
@sfraczek changed the title from "log when natlab test passes" to "[LLT-6912] log when natlab test passes" on Dec 17, 2025
@sfraczek (Contributor, Author)

There is an idea to rewrite the tests to use fixtures, and then @mathiaspeters's idea would probably work?

@mathiaspeters (Contributor)

If it moves the execution of the finally block for our Clients to the pytest teardown phase, then yes, it should work.
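
A generic sketch of that shape; DummyClient stands in for the real natlab Client, whose API is not shown here:

import pytest

class DummyClient:
    def connect(self):
        pass

    def cleanup(self):
        # In the real tests this is the work currently done in the finally block.
        pass

@pytest.fixture
def client():
    c = DummyClient()
    c.connect()
    yield c       # the test body runs here, reported as the "call" phase
    c.cleanup()   # runs in pytest's teardown phase, so a timeout here is
                  # reported separately from the test body outcome

def test_example(client):
    # With cleanup moved to fixture teardown, the "call" outcome (and hooks
    # like pytest_runtest_logreport) reflect only the test body.
    assert client is not None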

@stalowyjez (Contributor)

@mathiaspeters @sfraczek I think that would be a valuable refactor. It makes this task much bigger, but if there is nothing urgent now, maybe it's worth giving it a shot?

@sfraczek (Contributor, Author) commented Jan 7, 2026

> @mathiaspeters @sfraczek I think that would be a valuable refactor. It makes this task much bigger, but if there is nothing urgent now, maybe it's worth giving it a shot?

OK, but first I will try to make this solution pass, and then next week I can see what comes out of adding fixtures.
