sift_client.util.test_results
¶
Test Results Utilities.
This module provides utilities for working with test results.
Context Managers¶
ReportContext - Context manager for a new TestReport.
NewStep - Context manager to create a new step in a test report.
Example¶
client = SiftClient(api_key=api_key, grpc_url=grpc_url, rest_url=rest_url)
with ReportContext(client, name="Example Report") as rc:
    with rc.new_step(name="Setup") as step:
        controller_setup(step)
    with rc.new_step(name="Example Step", description=desc) as parent_step:
        cmd_interface.cmd("ec1", "rtv.cmd", 75.0)
        sleep(0.01)
        with parent_step.substep(name="Substep 1", description="Measure position") as substep:
            ec = "ec1"
            pos_channel = "rtv.pos"
            pos = tlm.read(ec, pos_channel)
            result = substep.measure(name=f"{ec}.{pos_channel}", value=pos, bounds={"min": 74.9, "max": 75.1})
            return result  # Optional for other uses; the step and its parents are updated correctly either way, i.e. marked failed if the measurement fails.
Manually Updating the Underlying Report¶
You can also manually update the underlying report or steps by accessing the context manager's attributes.
with ReportContext(client, name="Example Report") as rc:
    with rc.new_step(name="Example Step") as step:
        if not conditions:
            step.update({"status": TestStatus.SKIPPED})
        else:
            step.measure(name="Example Measurement", value=test_value, bounds={"min": -1, "max": 10})
    rc.report.update({"run_id": run_id})
For a larger class or script, consider creating the context in a setup method and passing it to the test functions.
def main(self):
    self.sift_client = SiftClient(api_key=api_key, grpc_url=grpc_url, rest_url=rest_url)
    with ReportContext(self.sift_client, name="Test Class", description="Test Class") as rc:
        setup(rc)
        test_one(rc)
        test_two(rc)
        teardown(rc)
    cleanup()
Pytest Plugin¶
The pytest plugin lives at sift_client.pytest_plugin. Opt in
from your conftest.py:
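The plugin path above is given by this module; the opt-in line below is a sketch of the standard pytest `pytest_plugins` mechanism, not a snippet from the sift_client docs themselves:

```python
# conftest.py — opt in to the Sift test-results pytest plugin
pytest_plugins = ["sift_client.pytest_plugin"]
```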
The plugin ships an autouse session-scoped report_context fixture (one
TestReport per session), an autouse function-scoped step fixture, and an
optional module_substep fixture. It also registers a default sift_client
fixture that reads SIFT_API_KEY, SIFT_GRPC_URI, and SIFT_REST_URI from
the environment. Override it by defining your own sift_client fixture in
your conftest.
Note for FedRAMP users: report_context logs test results to a temp file to avoid API calls during test execution. If this is a shared environment, you can disable logging by passing --sift-test-results-log-file=false.
Configuration¶
CLI options registered by the plugin:
- --sift-test-results-log-file: Path to write the JSONL log file. true (default) auto-creates a temp file; false/none disables logging; a path writes to that location.
- --no-sift-test-results-git-metadata: Exclude git metadata (repo, branch, commit) from the test report. Included by default.
- --sift-test-results-check-connection: Make report_context, step, and module_substep no-op when the client has no connection. Requires a client_has_connection fixture (the plugin ships a default).
To disable the plugin for a single run:
pytest -p no:sift_client.pytest_plugin
| MODULE | DESCRIPTION |
|---|---|
| bounds | |
| context_manager | |
| CLASS | DESCRIPTION |
|---|---|
| NewStep | Context manager to create a new step in a test report. See usage example in __init__.py. |
| ReportContext | Context manager for a new TestReport. See usage example in __init__.py. |
NewStep
¶
NewStep(
report_context: ReportContext,
name: str,
description: str | None = None,
assertion_as_fail_not_error: bool = True,
metadata: dict[str, str | float | bool] | None = None,
)
Bases: AbstractContextManager
Context manager to create a new step in a test report. See usage example in __init__.py.
Initialize a new step context.
| PARAMETER | DESCRIPTION |
|---|---|
| report_context | The report context to create the step in. TYPE: ReportContext |
| name | The name of the step. TYPE: str |
| description | The description of the step. TYPE: str \| None DEFAULT: None |
| assertion_as_fail_not_error | Mark steps with assertion errors as failed instead of error+traceback (some users want assertions to work as simple failures, especially when using pytest). TYPE: bool DEFAULT: True |
| metadata | [Optional] Structured key/value metadata to attach to the step. TYPE: dict[str, str \| float \| bool] \| None DEFAULT: None |
| METHOD | DESCRIPTION |
|---|---|
| measure | Measure a value and return the result. |
| measure_all | Ensure that all values in a list are within bounds and return the result. Records measurements for all values outside the bounds. |
| measure_avg | Calculate the average of a list of values, measure the average against given bounds, and return the result. |
| report_outcome | Report an outcome from some action or measurement. Creates a substep that is pass/fail with the optional reason as the description. |
| substep | Alias to return a new step context manager from the current step. The ReportContext will manage nesting of steps. |
| update_step_from_result | Update the step based on its substeps and if there was an exception while executing the step. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| assertion_as_fail_not_error | TYPE: bool |
| client | TYPE: SiftClient |
| current_step | TYPE: TestStep \| None |
| report_context | TYPE: ReportContext |
assertion_as_fail_not_error
class-attribute
instance-attribute
¶
current_step
class-attribute
instance-attribute
¶
current_step: TestStep | None = create_step(
name, description, metadata=metadata
)
measure
¶
measure(
*,
name: str,
value: float | str | bool | int,
bounds: dict[str, float]
| NumericBounds
| str
| None = None,
timestamp: datetime | None = None,
unit: str | None = None,
description: str | None = None,
metadata: dict[str, str | float | bool] | None = None,
channel_names: list[str] | list[Channel] | None = None,
) -> bool
Measure a value and return the result.
| PARAMETER | DESCRIPTION |
|---|---|
| name | The name of the measurement. TYPE: str |
| value | The value of the measurement. TYPE: float \| str \| bool \| int |
| bounds | [Optional] The bounds to compare the value to. TYPE: dict[str, float] \| NumericBounds \| str \| None DEFAULT: None |
| timestamp | [Optional] The timestamp of the measurement. Defaults to the current time. TYPE: datetime \| None DEFAULT: None |
| unit | [Optional] The unit of the measurement. TYPE: str \| None DEFAULT: None |
| description | [Optional] Notes about the measurement. Server caps at 2000 characters; longer strings are truncated with a warning. TYPE: str \| None DEFAULT: None |
| metadata | [Optional] Structured key/value metadata to attach to the measurement. For metadata shared across measurements, prefer the TYPE: dict[str, str \| float \| bool] \| None DEFAULT: None |
| channel_names | [Optional] Sift channel names or Channel objects. TYPE: list[str] \| list[Channel] \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| bool | The result of the measurement. |
measure_all
¶
measure_all(
*,
name: str,
values: list[float | int] | NDArray[float64] | Series,
bounds: dict[str, float] | NumericBounds,
timestamp: datetime | None = None,
unit: str | None = None,
description: str | None = None,
metadata: dict[str, str | float | bool] | None = None,
channel_names: list[str] | list[Channel] | None = None,
) -> bool
Ensure that all values in a list are within bounds and return the result. Records measurements for all values outside the bounds.
Note: Measurements will only be recorded for values outside the bounds. To record measurements for all values, just call measure for each value.
| PARAMETER | DESCRIPTION |
|---|---|
| name | The name of the measurement. TYPE: str |
| values | The list of values to measure. TYPE: list[float \| int] \| NDArray[float64] \| Series |
| bounds | The bounds to compare the values to. TYPE: dict[str, float] \| NumericBounds |
| timestamp | [Optional] The timestamp of the measurement. Defaults to the current time. TYPE: datetime \| None DEFAULT: None |
| unit | [Optional] The unit of the measurement. TYPE: str \| None DEFAULT: None |
| description | [Optional] Notes attached to each out-of-bounds measurement. Server caps at 2000 characters; longer strings are truncated with a warning. TYPE: str \| None DEFAULT: None |
| metadata | [Optional] Structured key/value metadata for each out-of-bounds measurement. TYPE: dict[str, str \| float \| bool] \| None DEFAULT: None |
| channel_names | [Optional] Sift channel names or Channel objects. TYPE: list[str] \| list[Channel] \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| bool | True if all values are within the bounds, False otherwise. |
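The all-or-nothing semantics described above (the overall result passes only if every value is within bounds, and only out-of-bounds values are recorded) can be sketched in plain Python. This is an illustrative stand-in, not the sift_client implementation; the function name and the returned list are assumptions:

```python
def measure_all_sketch(values, bounds):
    """Illustrative sketch of measure_all's pass/fail semantics.

    Returns (all_in_bounds, out_of_bounds_values) for min/max bounds.
    Only the out-of-bounds values would be recorded as measurements.
    """
    out_of_bounds = [
        v for v in values
        if not (bounds["min"] <= v <= bounds["max"])
    ]
    return len(out_of_bounds) == 0, out_of_bounds


# One value (75.2) exceeds the max bound, so the overall result is False
# and only that value would be recorded.
ok, recorded = measure_all_sketch([74.95, 75.0, 75.2], {"min": 74.9, "max": 75.1})
```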
measure_avg
¶
measure_avg(
*,
name: str,
values: list[float | int] | NDArray[float64] | Series,
bounds: dict[str, float] | NumericBounds,
timestamp: datetime | None = None,
unit: str | None = None,
description: str | None = None,
metadata: dict[str, str | float | bool] | None = None,
channel_names: list[str] | list[Channel] | None = None,
) -> bool
Calculate the average of a list of values, measure the average against given bounds, and return the result.
| PARAMETER | DESCRIPTION |
|---|---|
| name | The name of the measurement. TYPE: str |
| values | The list of values to measure the average of. TYPE: list[float \| int] \| NDArray[float64] \| Series |
| bounds | The bounds to compare the average to. TYPE: dict[str, float] \| NumericBounds |
| timestamp | [Optional] The timestamp of the measurement. Defaults to the current time. TYPE: datetime \| None DEFAULT: None |
| unit | [Optional] The unit of the measurement. TYPE: str \| None DEFAULT: None |
| description | [Optional] Notes about the measurement. Server caps at 2000 characters; longer strings are truncated with a warning. TYPE: str \| None DEFAULT: None |
| metadata | [Optional] Structured key/value metadata to attach to the measurement. TYPE: dict[str, str \| float \| bool] \| None DEFAULT: None |
| channel_names | [Optional] Sift channel names or Channel objects. TYPE: list[str] \| list[Channel] \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| bool | True if the average of the values is within the bounds, False otherwise. |
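The averaging behavior described above reduces to a single bounds check on the mean. A minimal stdlib sketch, assuming simple min/max bounds (the function name is illustrative, not part of the sift_client API):

```python
from statistics import mean


def measure_avg_sketch(values, bounds):
    """Illustrative: average the values, then compare the average to min/max bounds."""
    avg = mean(values)
    return bounds["min"] <= avg <= bounds["max"]
```

Unlike measure_all, a list with individual outliers can still pass, as long as the average lands inside the bounds.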
report_outcome
¶
Report an outcome from some action or measurement. Creates a substep that is pass/fail with the optional reason as the description.
| PARAMETER | DESCRIPTION |
|---|---|
| name | The name of the substep. TYPE: str |
| result | True if the action or measurement passed, False otherwise. TYPE: bool |
| reason | [Optional] The context to include in the description of the substep. TYPE: str \| None |

| RETURNS | DESCRIPTION |
|---|---|
| bool | The given result, so the function can be used inline. |
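The pass-through return value is what makes the inline usage work: record the outcome and branch on it in one expression. A hedged sketch of that pattern (the function below is a stand-in that only mimics the return behavior; it is not the real report_outcome):

```python
def report_outcome_sketch(name, result, reason=None):
    """Illustrative: record a pass/fail outcome, then return the result unchanged."""
    status = "PASSED" if result else "FAILED"
    note = f" ({reason})" if reason else ""
    print(f"{name}: {status}{note}")  # stand-in for creating the pass/fail substep
    return result


# Inline usage: the recorded result doubles as the branch condition.
if not report_outcome_sketch("valve check", False, reason="timeout"):
    pass  # handle the failure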
substep
¶
substep(
name: str,
description: str | None = None,
metadata: dict[str, str | float | bool] | None = None,
) -> NewStep
Alias to return a new step context manager from the current step. The ReportContext will manage nesting of steps.
update_step_from_result
¶
update_step_from_result(
exc: type[Exception] | None,
exc_value: Exception | None,
tb: TracebackException | None,
) -> bool
Update the step based on its substeps and if there was an exception while executing the step.
| PARAMETER | DESCRIPTION |
|---|---|
| exc | The class of Exception that was raised. TYPE: type[Exception] \| None |
| exc_value | The exception value. TYPE: Exception \| None |
| tb | The traceback object. TYPE: TracebackException \| None |

| RETURNS | DESCRIPTION |
|---|---|
| bool | False if the step failed or errored, True otherwise. |
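The resolution rule described above can be sketched in plain Python: a step resolves to False when an exception occurred or any substep failed, and True otherwise. This is an illustrative sketch, not the actual implementation; in the real class, assertion_as_fail_not_error only changes whether an AssertionError is reported as a failure or an error, not the boolean result:

```python
def resolve_step_result(substep_results, exc_type):
    """Illustrative sketch of how a step's final boolean result resolves.

    substep_results: list of booleans, one per substep.
    exc_type: the exception class raised while executing the step, or None.
    """
    if exc_type is not None:
        # Any exception during the step means the step did not pass,
        # whether it is reported as "failed" or "errored".
        return False
    # With no exception, the step passes only if every substep passed.
    # (Assumption: a step with no substeps and no exception passes.)
    return all(substep_results)
```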
ReportContext
¶
ReportContext(
client: SiftClient,
name: str,
test_system_name: str | None = None,
system_operator: str | None = None,
test_case: str | None = None,
log_file: str | Path | bool | None = None,
include_git_metadata: bool = False,
)
Bases: AbstractContextManager
Context manager for a new TestReport. See usage example in __init__.py.
Initialize a new report context.
| PARAMETER | DESCRIPTION |
|---|---|
| client | The Sift client to use to create the report. TYPE: SiftClient |
| name | The name of the report. TYPE: str |
| test_system_name | The name of the test system. Defaults to the hostname if not provided. TYPE: str \| None DEFAULT: None |
| system_operator | The operator of the test system. Defaults to the current user if not provided. TYPE: str \| None DEFAULT: None |
| test_case | The name of the test case. Defaults to the basename of the file containing the test if not provided. TYPE: str \| None DEFAULT: None |
| log_file | If True, create a temp log file. If a path, use that path. All create/update operations will be logged to this file. TYPE: str \| Path \| bool \| None DEFAULT: None |
| include_git_metadata | If True, include git metadata in the report. TYPE: bool DEFAULT: False |
| METHOD | DESCRIPTION |
|---|---|
| create_step | Create a new step in the report context. |
| exit_step | Exit a step and update the report context. |
| get_next_step_path | Get the next step path for the current depth. |
| new_step | Alias to return a new step context manager from this report context. Use create_step for actually creating a TestStep in the current context. |
| record_step_outcome | Report a failure to the report context. |
| resolve_and_propagate_step_result | Resolve the result of a step and propagate the result to the parent step if it failed. |
| ATTRIBUTE | DESCRIPTION |
|---|---|
| any_failures | |
| client | TYPE: SiftClient |
| log_file | |
| open_step_results | |
| report | TYPE: TestReport |
| step_is_open | |
| step_number_at_depth | |
| step_stack | |
create_step
¶
create_step(
name: str,
description: str | None = None,
metadata: dict[str, str | float | bool] | None = None,
) -> TestStep
Create a new step in the report context.
| PARAMETER | DESCRIPTION |
|---|---|
| name | The name of the step. TYPE: str |
| description | The description of the step. TYPE: str \| None DEFAULT: None |
| metadata | [Optional] Structured key/value metadata to attach to the step. For metadata shared across every step in a report, prefer the TYPE: dict[str, str \| float \| bool] \| None DEFAULT: None |

| RETURNS | DESCRIPTION |
|---|---|
| TestStep | The created step. |
new_step
¶
new_step(
name: str,
description: str | None = None,
assertion_as_fail_not_error: bool = True,
metadata: dict[str, str | float | bool] | None = None,
) -> NewStep
Alias to return a new step context manager from this report context. Use create_step for actually creating a TestStep in the current context.