
SUNRISE Safety Assurance Framework

Inputs to the SAF

Five arrows provide input to the SAF components from the ODD, behaviour, external requirements, and test objectives. The information carried by this input layer includes the ODD description, requirements, the system under test, variables to be measured during test execution, pass/fail criteria for successful test execution, and monitoring requirements.

At all five arrows, the description of the ODD of the SUT is considered. The ODD description includes ranges of relevant attributes, such as the maximum vehicle design speed and rainfall intensity. The ODD description may adopt an inclusive approach, i.e., describing what is in the ODD, an exclusive approach, i.e., describing what is not part of the ODD, or a combination of the two. It would be highly beneficial if the ODD description were standardized and machine-readable; for formatting the ODD, it is further suggested to follow the ODD definition format specified in ISO 34503 (2023). The scenario creation process can use the ODD description to create scenarios that fall within the described ODD. The concretizing test scenarios and associating test objectives component uses the ODD description to generate the test cases needed for the safety assurance of the ADS or CCAM system, including the description of the output needed to analyse the results. The test evaluation uses the ODD description to determine whether the ADS or CCAM system operated safely within its ODD in the specific test. The system evaluation performs the same check, but at the system level. The in-service monitoring and reporting component uses the ODD description to verify whether the system is operating within its ODD.
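To make the idea of a standardized, machine-readable ODD description more concrete, the minimal Python sketch below encodes inclusive attribute ranges and exclusive conditions, and checks whether an observed state lies within the ODD. The attribute names, ranges, and class structure are illustrative assumptions and do not reproduce the ISO 34503 format.

```python
from dataclasses import dataclass, field

@dataclass
class AttributeRange:
    """Closed range for one ODD attribute (inclusive approach)."""
    name: str
    minimum: float
    maximum: float

    def contains(self, value: float) -> bool:
        return self.minimum <= value <= self.maximum

@dataclass
class OddDescription:
    """Illustrative ODD: inclusive attribute ranges plus exclusive conditions."""
    included: list[AttributeRange] = field(default_factory=list)
    excluded_conditions: set[str] = field(default_factory=set)

    def within_odd(self, state: dict[str, float], conditions: set[str]) -> bool:
        # Inclusive check: every described attribute must lie in its range.
        in_range = all(
            attr.contains(state[attr.name])
            for attr in self.included
            if attr.name in state
        )
        # Exclusive check: none of the excluded conditions may be present.
        return in_range and not (conditions & self.excluded_conditions)

# Hypothetical ODD: design speed up to 60 km/h, rainfall up to 10 mm/h,
# with tunnels explicitly excluded.
odd = OddDescription(
    included=[AttributeRange("speed_kmh", 0.0, 60.0),
              AttributeRange("rainfall_mmh", 0.0, 10.0)],
    excluded_conditions={"tunnel"},
)
print(odd.within_odd({"speed_kmh": 45.0, "rainfall_mmh": 2.0}, set()))  # True
```

A structure like this could serve scenario creation (sampling within the ranges), test evaluation (checking a single test), and in-service monitoring (checking the operating state continuously), which is why a single standardized format is so valuable.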

In addition to the ODD description, the scenario creation, concretizing test scenarios and associating test objectives, system evaluation, and in-service monitoring and reporting processes also consider the requirements of the ADS or CCAM system. These requirements should reflect the required behavioural competences, (external) regulations, rules of the road, safety objectives, and standards and best practices. The requirements can be a source for creating scenarios, which is why they are considered part of the first interface. Furthermore, it is important that the process of concretizing test scenarios and associating test objectives considers the requirements, as the goal of the SAF is to assure that the requirements are met. Note that this process also establishes the means to measure compliance with the requirements for the test cases. Not all requirements can be formulated as test validation criteria, which is why the requirements can also be communicated to the system evaluation. For example, requirements such as “achieve a 5-star Euro NCAP rating” or “fail at most x% of all cases” are not specific to a scenario or test case, but apply only at the overall level for use in the system evaluation. Lastly, the in-service monitoring and reporting process needs the requirements to check whether system-level requirements remain satisfied over the lifetime of the system. Note that the requirements can differ considerably from system to system, so formalizing them may be challenging; at a minimum, a standardized format would be preferred.
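As a rough illustration of why some requirements are routed to the system evaluation rather than to individual test cases, the sketch below tags each requirement with its evaluation scope. The class names and example requirements are assumptions made for illustration, and the x% is left symbolic, as in the text.

```python
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    TEST_CASE = "test_case"  # checkable per executed test case
    SYSTEM = "system"        # only meaningful across all results

@dataclass
class Requirement:
    identifier: str
    description: str
    scope: Scope

# Hypothetical requirements mirroring the examples in the text.
requirements = [
    Requirement("REQ-001", "Keep lateral deviation below a set bound", Scope.TEST_CASE),
    Requirement("REQ-002", "Achieve a 5-star Euro NCAP rating", Scope.SYSTEM),
    Requirement("REQ-003", "Fail at most x% of all test cases", Scope.SYSTEM),
]

# Test-case-scoped requirements feed the test evaluation; system-scoped
# requirements are routed to the system evaluation instead.
per_test_case = [r for r in requirements if r.scope is Scope.TEST_CASE]
per_system = [r for r in requirements if r.scope is Scope.SYSTEM]
```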

The System under Test (SUT) is the main subject of the test cases, so it must be provided to the concretizing test scenarios and associating test objectives process. The SUT can also be a source for creating scenarios, e.g., scenarios created using knowledge of the system architecture and fault analysis techniques such as systems-theoretic process analysis (STPA). The SUT can be, for example, a physical prototype or a mathematical model of the actual system.
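The following sketch shows one possible way to abstract over different SUT realizations, so that the same test case specification can target either a mathematical model or a physical prototype. The interface and class names are hypothetical, not part of the SAF.

```python
from abc import ABC, abstractmethod

class SystemUnderTest(ABC):
    """Common interface so test cases need not know how the SUT is realized."""

    @abstractmethod
    def observe(self) -> dict[str, float]:
        """Return current values of the measured variables."""

class SimulationModelSut(SystemUnderTest):
    """Mathematical model of the system, stepped inside a simulator."""
    def observe(self) -> dict[str, float]:
        return {"speed_kmh": 42.0}  # placeholder: would query the simulator

class PrototypeSut(SystemUnderTest):
    """Physical prototype instrumented on a test track."""
    def observe(self) -> dict[str, float]:
        return {"speed_kmh": 41.7}  # placeholder: would read the data logger
```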

If some variables need to be measured during test execution, in addition to the variables needed to verify the test objectives, this information can be provided to the concretizing test scenarios and associating test objectives process.
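A minimal sketch of how such additional measurement variables might be handed over together with the test case specification is shown below; the structure and variable names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementSpec:
    """Variables to record during test execution."""
    objective_variables: set[str]  # needed to verify the test objectives
    additional_variables: set[str] = field(default_factory=set)

    def all_variables(self) -> set[str]:
        return self.objective_variables | self.additional_variables

# Hypothetical spec: the test objectives need ego speed and time gap, and the
# requester additionally asks for the steering-wheel angle to be logged.
spec = MeasurementSpec({"ego_speed", "time_gap"}, {"steering_angle"})
print(spec.all_variables())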

Additionally, pass/fail criteria for successful test execution have to be defined in the concretizing test scenarios and associating test objectives process, in order to identify whether test cases have been executed successfully. For example, if there are certain tolerances on speed values or lateral path deviations, these shall be included. When a test case is executed outside the pass/fail criteria, its execution has to be deemed unsuccessful from an execution point of view. These examples relate to real-world and test-track test allocation; other criteria may apply in virtual test environments (e.g., when the simulation output contains errors).
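To illustrate how such execution pass/fail criteria could be checked, the sketch below validates a recorded test run against tolerance bands on speed and lateral path deviation. The tolerance values and signal names are assumptions, not values prescribed by the SAF.

```python
from dataclasses import dataclass

@dataclass
class Tolerance:
    """Allowed deviation band for one signal during test execution."""
    signal: str
    max_abs_deviation: float

def execution_successful(log: dict[str, list[float]],
                         nominal: dict[str, list[float]],
                         tolerances: list[Tolerance]) -> bool:
    """A run counts as successfully executed only if every signal stayed in band."""
    for tol in tolerances:
        deviations = (abs(a - n)
                      for a, n in zip(log[tol.signal], nominal[tol.signal]))
        if any(d > tol.max_abs_deviation for d in deviations):
            return False  # executed outside the pass/fail criteria
    return True

# Hypothetical tolerances: +/-2 km/h on speed, +/-0.2 m lateral path deviation.
tolerances = [Tolerance("speed_kmh", 2.0), Tolerance("lateral_m", 0.2)]
log = {"speed_kmh": [49.0, 50.5], "lateral_m": [0.05, 0.10]}
nominal = {"speed_kmh": [50.0, 50.0], "lateral_m": [0.00, 0.00]}
print(execution_successful(log, nominal, tolerances))  # True
```

Checking execution validity separately from the test objectives keeps an unsuccessfully executed run from being misread as a passed or failed test of the system itself.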

For all inputs, it is assumed that simulation models (other than the SUT) and the simulation platform are part of the “Execute” component and are, therefore, not provided externally. Hence, they are not part of the listed interfaces.
