sift_client.util.test_results¶
Test Results Utilities.
This module provides utilities for working with test results.
Context Managers¶
- ReportContext - Context manager for a new TestReport.
- NewStep - Context manager to create a new step in a test report.
Example¶
```python
client = SiftClient(api_key=api_key, grpc_url=grpc_url, rest_url=rest_url)
with ReportContext(client, name="Example Report") as rc:
    with rc.new_step(name="Setup") as step:
        controller_setup(step)
    with rc.new_step(name="Example Step", description=desc) as parent_step:
        cmd_interface.cmd("ec1", "rtv.cmd", 75.0)
        sleep(0.01)
        with parent_step.substep(name="Substep 1", description="Measure position") as substep:
            ec = "ec1"
            pos_channel = "rtv.pos"
            pos = tlm.read(ec, pos_channel)
            result = substep.measure(name=f"{ec}.{pos_channel}", value=pos, bounds={"min": 74.9, "max": 75.1})
            # Returning the result is optional for other uses; either way the
            # step and its parents are updated correctly, i.e. marked failed
            # if the measurement fails.
```
Manually Updating Underlying Report¶
You can also manually update the underlying report or steps by accessing the context manager's attributes.
```python
with ReportContext(client, name="Example Report") as rc:
    with rc.new_step(name="Example Step") as step:
        if not conditions:
            step.update({"status": TestStatus.SKIPPED})
        else:
            step.measure(name="Example Measurement", value=test_value, bounds={"min": -1, "max": 10})
    rc.report.update({"run_id": run_id})
```
For a larger class or script, consider creating the context in a setup method and passing it to the test functions.
```python
def main(self):
    self.sift_client = SiftClient(api_key=api_key, grpc_url=grpc_url, rest_url=rest_url)
    with ReportContext(self.sift_client, name="Test Class", description="Test Class") as rc:
        setup(rc)
        test_one(rc)
        test_two(rc)
        teardown(rc)
    cleanup()
```
Pytest Fixtures¶
The report context and steps can also be accessed in pytest by importing the `report_context` and `step` fixtures.
How to use:¶
- These fixtures are set to autouse and will automatically create a report and steps for each test function.
- If you want each module (file) to be marked as a step, with each test as a substep, import the `module_substep` fixture as well.
- The `report_context` fixture requires a fixture `sift_client` returning a `SiftClient` instance to be passed in.

Note for FedRAMP users: `report_context` will log test results to a temp file to avoid API calls during test execution. If this is a shared environment, you can disable logging by passing `--sift-test-results-log-file=false`.
Configuration¶
Import the `pytest_addoption` function to add configuration options for Test Results to the command line, or add the options to your pyproject.toml file (https://docs.pytest.org/en/stable/reference/customize.html#configuration). If omitted, the default values described below are used.
- Git metadata: Include git metadata (repo, branch, commit) in the test results. Default is True. You can disable it by passing `--no-sift-test-results-git-metadata`.
- Log file: Write test results to a file. This happens automatically, but you can specify a particular log file by passing `--sift-test-results-log-file=<path>` or disable logging by passing `--sift-test-results-log-file=false`.
- Check connection: Pass `--sift-test-results-check-connection` (off by default) to make the `report_context`, `step`, and `module_substep` fixtures no-op when the Sift client has no connection to the server. Requires a `client_has_connection` fixture to be available.
Example at top of your test file or in your conftest.py file:¶
```python
import os

import pytest

from sift_client import SiftClient  # adjust to your actual import path if it differs
from sift_client.util.test_results import *

@pytest.fixture(scope="session")
def sift_client() -> SiftClient:
    grpc_url = os.getenv("SIFT_GRPC_URI", "localhost:50051")
    rest_url = os.getenv("SIFT_REST_URI", "localhost:8080")
    api_key = os.getenv("SIFT_API_KEY", "")
    return SiftClient(api_key=api_key, grpc_url=grpc_url, rest_url=rest_url)
```
Then in your test file:¶
```python
# Because step was already imported with autouse=True, this test will
# automatically get a step created for it.
def test_no_includes():
    assert condition, "Example failure"

# Passing the fixtures to the test function allows you to take measurements
# or create substeps.
def test_example(report_context, step):
    # This adds a measurement to the current step for this function.
    step.measure(name="Example Measurement", value=test_string_value, bounds="expected_string_value")
    with report_context.new_step(name="Example Step") as substep:
        example_measurement = tlm.read(channel_name)
        substep.measure(name="Substep Measurement", value=example_measurement, bounds={"min": 74.9, "max": 75.1})
```
| MODULE | DESCRIPTION |
|---|---|
| `bounds` | |
| `context_manager` | |
| `pytest_util` | |
| CLASS | DESCRIPTION |
|---|---|
| `NewStep` | Context manager to create a new step in a test report. See usage example in `__init__.py`. |
| `ReportContext` | Context manager for a new TestReport. See usage example in `__init__.py`. |

| FUNCTION | DESCRIPTION |
|---|---|
| `client_has_connection` | Check if the SiftClient has a connection to the Sift server. |
| `module_substep` | Create a step per module. |
| `pytest_addoption` | Register Sift-specific command-line options. |
| `pytest_runtest_makereport` | Import this hook to capture any AssertionErrors that occur during the test. If not included, assert failures in a test will not automatically fail the step. |
| `report_context` | Create a report context for the session. |
| `step` | Create an outer step for the function. |
NewStep¶

```python
NewStep(
    report_context: ReportContext,
    name: str,
    description: str | None = None,
    assertion_as_fail_not_error: bool = True,
)
```

Bases: AbstractContextManager

Context manager to create a new step in a test report. See usage example in `__init__.py`.
Initialize a new step context.
| PARAMETER | DESCRIPTION |
|---|---|
| `report_context` | The report context to create the step in. TYPE: `ReportContext` |
| `name` | The name of the step. TYPE: `str` |
| `description` | The description of the step. TYPE: `str \| None` |
| `assertion_as_fail_not_error` | Mark steps with assertion errors as failed instead of error+traceback (some users want assertions to work as simple failures, especially when using pytest). TYPE: `bool` |
| METHOD | DESCRIPTION |
|---|---|
| `measure` | Measure a value and return the result. |
| `measure_all` | Ensure that all values in a list are within bounds and return the result. Records measurements for all values outside the bounds. |
| `measure_avg` | Calculate the average of a list of values, measure the average against given bounds, and return the result. |
| `report_outcome` | Report an outcome from some action or measurement. Creates a substep that is pass/fail with the optional reason as the description. |
| `substep` | Alias to return a new step context manager from the current step. The ReportContext will manage nesting of steps. |
| `update_step_from_result` | Update the step based on its substeps and whether there was an exception while executing the step. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `assertion_as_fail_not_error` | TYPE: `bool` |
| `client` | TYPE: `SiftClient` |
| `current_step` | TYPE: `TestStep \| None` |
| `report_context` | TYPE: `ReportContext` |
assertion_as_fail_not_error (class-attribute, instance-attribute)¶

current_step (class-attribute, instance-attribute)¶

```python
current_step: TestStep | None = create_step(name, description)
```
measure¶

```python
measure(
    *,
    name: str,
    value: float | str | bool | int,
    bounds: dict[str, float] | NumericBounds | str | None = None,
    timestamp: datetime | None = None,
    unit: str | None = None,
) -> bool
```
Measure a value and return the result.
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The name of the measurement. TYPE: `str` |
| `value` | The value of the measurement. TYPE: `float \| str \| bool \| int` |
| `bounds` | [Optional] The bounds to compare the value to. TYPE: `dict[str, float] \| NumericBounds \| str \| None` |
| `timestamp` | [Optional] The timestamp of the measurement. Defaults to the current time. TYPE: `datetime \| None` |
| `unit` | [Optional] The unit of the measurement. TYPE: `str \| None` |

Returns: The result of the measurement.
measure_all¶

```python
measure_all(
    *,
    name: str,
    values: list[float | int] | NDArray[float64] | Series,
    bounds: dict[str, float] | NumericBounds,
    timestamp: datetime | None = None,
    unit: str | None = None,
) -> bool
```
Ensure that all values in a list are within bounds and return the result. Records measurements for all values outside the bounds.
Note: Measurements will only be recorded for values outside the bounds. To record measurements for all values, just call measure for each value.
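The behavior described in the note can be illustrated with a pure-Python sketch of the bounds check. This is an illustration of the documented behavior only, not the library's implementation, and `all_within_bounds` is a hypothetical helper name:

```python
def all_within_bounds(values, bounds):
    """Return (ok, out_of_bounds): ok is True when every value lies within
    [bounds['min'], bounds['max']]. out_of_bounds lists the offending values,
    mirroring how measure_all records measurements only for those values."""
    lo = bounds.get("min", float("-inf"))
    hi = bounds.get("max", float("inf"))
    out_of_bounds = [v for v in values if not (lo <= v <= hi)]
    return not out_of_bounds, out_of_bounds

ok, offenders = all_within_bounds([74.95, 75.0, 75.3], {"min": 74.9, "max": 75.1})
# ok is False; only 75.3 is treated as out of bounds
```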
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The name of the measurement. TYPE: `str` |
| `values` | The list of values to measure. TYPE: `list[float \| int] \| NDArray[float64] \| Series` |
| `bounds` | The bounds to compare the values to. TYPE: `dict[str, float] \| NumericBounds` |
| `timestamp` | [Optional] The timestamp of the measurement. Defaults to the current time. TYPE: `datetime \| None` |
| `unit` | [Optional] The unit of the measurement. TYPE: `str \| None` |

Returns: True if all values are within the bounds, False otherwise.
measure_avg¶

```python
measure_avg(
    *,
    name: str,
    values: list[float | int] | NDArray[float64] | Series,
    bounds: dict[str, float] | NumericBounds,
    timestamp: datetime | None = None,
    unit: str | None = None,
) -> bool
```
Calculate the average of a list of values, measure the average against given bounds, and return the result.
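The averaging check can be sketched in plain Python. This is illustrative only (not the library's implementation), and `avg_within_bounds` is a hypothetical helper name:

```python
from statistics import mean

def avg_within_bounds(values, bounds):
    """Average the values, then compare the single averaged measurement
    against the bounds, as measure_avg does."""
    avg = mean(values)
    return bounds["min"] <= avg <= bounds["max"]

avg_within_bounds([74.8, 75.0, 75.2], {"min": 74.9, "max": 75.1})
# True: individual values stray outside the bounds, but their average does not
```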
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The name of the measurement. TYPE: `str` |
| `values` | The list of values to measure the average of. TYPE: `list[float \| int] \| NDArray[float64] \| Series` |
| `bounds` | The bounds to compare the average to. TYPE: `dict[str, float] \| NumericBounds` |
| `timestamp` | [Optional] The timestamp of the measurement. Defaults to the current time. TYPE: `datetime \| None` |
| `unit` | [Optional] The unit of the measurement. TYPE: `str \| None` |

Returns: True if the average of the values is within the bounds, False otherwise.
report_outcome¶
Report an outcome from some action or measurement. Creates a substep that is pass/fail with the optional reason as the description.
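The pass-through return value is what makes inline use possible. A sketch of that contract with a stand-in object (`FakeStep` is hypothetical, not the real NewStep class):

```python
class FakeStep:
    """Stand-in illustrating report_outcome's documented contract:
    record a pass/fail substep, then return the given result unchanged."""

    def __init__(self):
        self.substeps = []

    def report_outcome(self, *, name, result, reason=None):
        # Create a substep record that is pass/fail, with the optional
        # reason as its description, and pass the result back through.
        self.substeps.append({"name": name, "passed": result, "description": reason})
        return result

step = FakeStep()
# Because the result is returned, the call can be used inline:
if not step.report_outcome(name="Pump primed", result=True, reason="pressure nominal"):
    raise RuntimeError("setup failed; abort")
```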
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The name of the substep. TYPE: `str` |
| `result` | True if the action or measurement passed, False otherwise. TYPE: `bool` |
| `reason` | [Optional] The context to include in the description of the substep. TYPE: `str \| None` |

Returns: The given result, so the function can be used inline.
substep¶

```python
substep(
    name: str, description: str | None = None
) -> NewStep
```
Alias to return a new step context manager from the current step. The ReportContext will manage nesting of steps.
update_step_from_result¶

```python
update_step_from_result(
    exc: type[Exception] | None,
    exc_value: Exception | None,
    tb: TracebackException | None,
) -> bool
```
Update the step based on its substeps and whether there was an exception while executing the step.
| PARAMETER | DESCRIPTION |
|---|---|
| `exc` | The class of Exception that was raised. TYPE: `type[Exception] \| None` |
| `exc_value` | The exception value. TYPE: `Exception \| None` |
| `tb` | The traceback object. TYPE: `TracebackException \| None` |

Returns: False if the step failed or errored, True otherwise.
ReportContext¶

```python
ReportContext(
    client: SiftClient,
    name: str,
    test_system_name: str | None = None,
    system_operator: str | None = None,
    test_case: str | None = None,
    log_file: str | Path | bool | None = None,
    include_git_metadata: bool = False,
)
```

Bases: AbstractContextManager

Context manager for a new TestReport. See usage example in `__init__.py`.
Initialize a new report context.
| PARAMETER | DESCRIPTION |
|---|---|
| `client` | The Sift client to use to create the report. TYPE: `SiftClient` |
| `name` | The name of the report. TYPE: `str` |
| `test_system_name` | The name of the test system. Will default to the hostname if not provided. TYPE: `str \| None` |
| `system_operator` | The operator of the test system. Will default to the current user if not provided. TYPE: `str \| None` |
| `test_case` | The name of the test case. Will default to the basename of the file containing the test if not provided. TYPE: `str \| None` |
| `log_file` | If True, create a temp log file. If a path, use that path. All create/update operations will be logged to this file. TYPE: `str \| Path \| bool \| None` |
| `include_git_metadata` | If True, include git metadata in the report. TYPE: `bool` |
| METHOD | DESCRIPTION |
|---|---|
| `create_step` | Create a new step in the report context. |
| `exit_step` | Exit a step and update the report context. |
| `get_next_step_path` | Get the next step path for the current depth. |
| `new_step` | Alias to return a new step context manager from this report context. Use create_step for actually creating a TestStep in the current context. |
| `record_step_outcome` | Report a failure to the report context. |
| `resolve_and_propagate_step_result` | Resolve the result of a step and propagate the result to the parent step if it failed. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| `any_failures` | TYPE: `bool` |
| `client` | TYPE: `SiftClient` |
| `log_file` | TYPE: |
| `open_step_results` | TYPE: |
| `report` | TYPE: `TestReport` |
| `step_is_open` | TYPE: `bool` |
| `step_number_at_depth` | TYPE: |
| `step_stack` | TYPE: |
create_step¶

```python
create_step(
    name: str, description: str | None = None
) -> TestStep
```
Create a new step in the report context.
| PARAMETER | DESCRIPTION |
|---|---|
| `name` | The name of the step. TYPE: `str` |
| `description` | The description of the step. TYPE: `str \| None` |

| RETURNS | DESCRIPTION |
|---|---|
| `TestStep` | The created step. |
new_step¶

```python
new_step(
    name: str,
    description: str | None = None,
    assertion_as_fail_not_error: bool = True,
) -> NewStep
```
Alias to return a new step context manager from this report context. Use create_step for actually creating a TestStep in the current context.
client_has_connection¶
Check if the SiftClient has a connection to the Sift server.
Can be used to skip tests that require a connection to the Sift server, and is consulted by the Sift fixtures when `--sift-test-results-check-connection` is set.
module_substep¶

```python
module_substep(
    report_context: ReportContext | None,
    request: FixtureRequest,
    pytestconfig: Config,
) -> Generator[NewStep | None, None, None]
```
Create a step per module.
No-ops when `--sift-test-results-check-connection` is set and the client has no connection (or when the session-scoped `report_context` resolved to None).
pytest_addoption¶
Register Sift-specific command-line options.
pytest_runtest_makereport¶
You should import this hook to capture any AssertionErrors that occur during the test. If not included, any assert failures in a test will not automatically fail the step.
report_context¶

```python
report_context(
    sift_client: SiftClient,
    request: FixtureRequest,
    pytestconfig: Config,
) -> Generator[ReportContext | None, None, None]
```
Create a report context for the session.
The log file destination is controlled by `--sift-test-results-log-file`. Defaults to a temp file when not set.
When `--sift-test-results-check-connection` is passed, this fixture will no-op (yield None) if the Sift client has no connection to the server. That mode requires a `client_has_connection` fixture to be available in the session.
step¶

```python
step(
    report_context: ReportContext | None,
    request: FixtureRequest,
    pytestconfig: Config,
) -> Generator[NewStep | None, None, None]
```
Create an outer step for the function.
No-ops when `--sift-test-results-check-connection` is set and the client has no connection (or when the session-scoped `report_context` resolved to None).