From 28f31ba09814590b2023c96b9f8af7187cd8e3e7 Mon Sep 17 00:00:00 2001
From: Denys Kuchma
Date: Mon, 4 May 2026 12:34:05 +0300
Subject: [PATCH] fix img type source

---
 src/content/docs/project/runs/running-automated-tests.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/src/content/docs/project/runs/running-automated-tests.md b/src/content/docs/project/runs/running-automated-tests.md
index 47774782..c42a43cc 100644
--- a/src/content/docs/project/runs/running-automated-tests.md
+++ b/src/content/docs/project/runs/running-automated-tests.md
@@ -121,7 +121,7 @@ When you enable reporting for tests running in parallel, you might end with multi
 
 In this case multiple independent launches will report data to the report matched by the same Run title.
 
-
+![Shared run strategy](./images/image-12.png)
 
 Pick a unique name for this run and use the `TESTOMATIO_SHARED_RUN=1` environment variable to enable the shared report:
 
@@ -155,7 +155,7 @@ npx @testomatio/reporter run ""
 
 Under the hood, `@testomatio/reporter run` creates a new empty run and passes its ID as an environment variable into all spawned processes. So no matter how many parallel processes are started, they will report to a single Run report.
 
-
+![Reporter run strategy](./images/image-10.png)
 
 However, this might not work in all cases.
 
@@ -165,7 +165,7 @@ However, this might not work in all cases.
 
 In this case you create a run, receive its ID, and manually close it after all runs are finished.
 
-
+![Manual run strategy](./images/image-9.png)
 
 Create a run via `@testomatio/reporter start`:
 
@@ -191,7 +191,7 @@ If you have a complex pipeline, you can start a Run on stage #1, execute tests
 
 Sometimes, during test automation, unexpected issues may arise, or a test can be stopped for various reasons.
 
-
+![Terminated test run](./images/terminated-test-run.png)
 
 For example, during the execution of a problematic test case, a gateway becomes unresponsive due to a server issue. This issue was unforeseen and not within the control of the testing team. Testomat.io detects the problem and initiates termination of the problematic test case. The custom timeout you defined (minimum 30 minutes) comes into play: if the test case does not complete within this time frame, it is terminated automatically.
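The shared-run flow described in the patched page can be sketched as a CI shell fragment. This is a hedged sketch: only `TESTOMATIO_SHARED_RUN=1` comes from the text itself; the `TESTOMATIO` API-key variable, the `TESTOMATIO_TITLE` run-title variable, and the Playwright shard commands are illustrative assumptions to be checked against the reporter docs.

```shell
# Hypothetical CI fragment: two parallel launches merge into one run.
# TESTOMATIO / TESTOMATIO_TITLE are assumed reporter variables; the
# title must be identical in every launch so results match one Run.
export TESTOMATIO=tstmt_your_api_key        # placeholder API key
export TESTOMATIO_TITLE="Nightly regression" # example unique run name
export TESTOMATIO_SHARED_RUN=1               # enable shared reporting

npx playwright test --shard=1/2 &
npx playwright test --shard=2/2 &
wait   # both shards report into the run matched by the shared title
```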
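The `@testomatio/reporter run` wrapper takes the test command as a quoted argument, as the page shows. A minimal sketch, assuming a Playwright test command (any runner command could stand in its place):

```shell
# The wrapper first creates a new empty run, then passes its ID as an
# environment variable into the spawned process, so however many
# parallel workers the runner forks, all report to the single Run.
npx @testomatio/reporter run "npx playwright test"
```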
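For the manual strategy, a hedged sketch of a multi-stage pipeline, assuming `@testomatio/reporter start` prints the new run ID to stdout and that `TESTOMATIO_RUN` is the variable the reporter reads for an existing run ID (verify both assumptions against the reporter documentation):

```shell
# Stage 1: create the run and capture its ID (output format assumed).
RUN_ID=$(npx @testomatio/reporter start)

# Stages 2..N: every stage reports into the same run via its ID.
TESTOMATIO_RUN="$RUN_ID" npx playwright test

# Final stage: close the run manually once all stages have finished,
# as the page describes.
```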