# Creating Custom Integrations
QLTY Framework uses a lifecycle-based integration system. Integrations extend the
Integration base class and override hooks that run at specific points during
test execution. A central registry dispatches lifecycle events to all enabled
integrations.
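In outline, the pattern looks like this. This is a minimal sketch, not the framework's actual code; the registry class name and its internals are assumptions here, and the hook signatures follow the examples later in this document:

```python
class Integration:
    """Base class: every hook is a no-op, so subclasses override only what they need."""

    required = False  # hard-dependency flag, see "Required Integrations" below

    def on_run_start(self):
        pass

    def on_run_end(self, test_results, test_run_id, elapsed_time,
                   log_path=None, context=None):
        pass


class IntegrationRegistry:
    """Hypothetical central registry: holds enabled integrations and
    fans lifecycle events out to each of them."""

    def __init__(self):
        self._integrations = []

    def register(self, integration):
        self._integrations.append(integration)

    def dispatch(self, hook_name, *args, **kwargs):
        # Call the named hook on every registered integration.
        for integration in list(self._integrations):
            getattr(integration, hook_name)(*args, **kwargs)
```

A subclass that overrides only `on_run_start` still works, because the base class supplies no-op defaults for everything else.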
## Lifecycle Hooks
| Hook | When it runs |
|---|---|
| `on_run_start()` | Before tests execute. Use for config validation and connection checks. If this raises an exception, the integration is deregistered and its other hooks will not be called. Tests still run, unless the integration is marked as `required`. |
| *(per-test hook)* | After each individual test completes. |
| `on_run_end()` | After all tests complete. Use for reporting results to external systems. |
## Required Integrations
Set `required = True` on an integration to make it a hard dependency. If a
required integration’s `on_run_start()` raises, the entire test run is aborted
instead of just deregistering the integration.
Use this for pre-flight checks that must pass before tests can safely run — for example, verifying API access needed for test data cleanup:
```python
import requests

class ApiCheckIntegration(Integration):
    required = True

    def on_run_start(self):
        response = requests.post(f"{base_url}/api/login", ...)
        response.raise_for_status()
```
## Project-Level Custom Integrations
Test projects can register their own integrations without modifying framework
code. Add a `CUSTOM_INTEGRATIONS` list to the project’s `settings.py`:

```python
# settings.py
CUSTOM_INTEGRATIONS = [
    'integrations.my_module.MyIntegration',
]
```
Entries are dotted-path strings (`module.path.ClassName`) that are resolved at
registration time. This avoids circular imports, since `settings.py` is loaded
before integration modules. You can also pass pre-built instances if there is no
circular-import concern.
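The resolution step can be sketched with the standard library's `importlib`. This is an illustration of the dotted-path mechanism, not the framework's actual resolver; the function name `resolve_integration` is an assumption:

```python
import importlib


def resolve_integration(entry):
    """Resolve one CUSTOM_INTEGRATIONS entry (hypothetical helper).

    Dotted-path strings ('module.path.ClassName') are imported lazily at
    registration time; pre-built instances pass through unchanged.
    """
    if not isinstance(entry, str):
        return entry  # already an instance, no import needed
    module_path, _, class_name = entry.rpartition('.')
    module = importlib.import_module(module_path)
    return getattr(module, class_name)()  # instantiate the resolved class
```

Because the string is only imported when the registry processes it, the integration module can itself `import settings` without creating a cycle.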
## Framework-Level Integrations
For integrations that ship with the framework (Slack, TestRail, etc.), follow these steps:
### Create the integration class
Create a new file in `qlty/classes/integrations/`:

```python
# qlty/classes/integrations/example_integration.py
from qlty.classes.integrations.base_integration import Integration
from qlty.utilities.utils import setup_logger
import settings

logger = setup_logger(__name__, settings.DEBUG_LEVEL)


class ExampleIntegration(Integration):
    def __init__(self):
        self.api_key = settings.EXAMPLE['API_KEY']

    def on_run_start(self):
        """Validate credentials before tests run."""
        logger.info('Validating Example integration...')
        # Make a test API call, raise on failure
        ...

    def on_run_end(self, test_results, test_run_id, elapsed_time,
                   log_path=None, context=None):
        """Report results after tests complete."""
        from qlty.classes.core.test_runner_utils import TestRunnerUtils

        totals = TestRunnerUtils.get_testrun_totals(test_results)
        logger.info('Reporting {} results to Example'.format(
            totals['total_testcases']))
        # Post results to external service
        ...
```
Only override the hooks you need. The base class provides no-op defaults.
### Add a config flag
In `qlty/config.py`:

```python
#: Enable Example integration
EXAMPLE_INTEGRATION = False
```
### Add a CLI argument
In `qlty/utilities/argument_parser.py`, add a new argument that sets the flag:

```python
parser.add_argument('-x', '--example', action='store_true',
                    help='Enable Example integration')
```
Then, in the argument-processing logic, set the config flag:

```python
if args.example:
    config.EXAMPLE_INTEGRATION = True
```
### Register the integration
In `qlty/qlty_tests.py`, add to `_register_integrations()`:

```python
if config.EXAMPLE_INTEGRATION:
    from qlty.classes.integrations.example_integration import ExampleIntegration
    registry.register(ExampleIntegration())
```
### Add settings
In the test project’s `settings.py`:

```python
EXAMPLE = {
    'API_KEY': os.getenv('EXAMPLE_API_KEY'),
}
```
## Error Handling
- `on_run_start`: If validation raises and `required = False` (the default), the integration is deregistered and tests proceed normally. If `required = True`, the entire test run is aborted.
- `on_run_end`: If one integration fails, the error is logged and the remaining integrations still run.
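Both policies can be sketched as two dispatch loops. This is illustrative only: the function names are assumptions, and the `on_run_end` signature is simplified relative to the full hook shown above:

```python
import logging

logger = logging.getLogger(__name__)


def dispatch_run_start(integrations):
    """on_run_start failures: abort if required, otherwise deregister."""
    for integration in list(integrations):
        try:
            integration.on_run_start()
        except Exception:
            if getattr(integration, 'required', False):
                raise  # hard dependency: abort the whole run
            logger.exception('Deregistering %s', type(integration).__name__)
            integrations.remove(integration)


def dispatch_run_end(integrations, test_results):
    """on_run_end failures: log and keep going so other reporters still run."""
    for integration in integrations:
        try:
            integration.on_run_end(test_results)
        except Exception:
            logger.exception('%s failed in on_run_end', type(integration).__name__)
```

The asymmetry is deliberate: a broken pre-flight check may invalidate the run, but a broken reporter should never stop the other reporters from receiving results.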