Satisfying SIL Requirements: Ensuring Functional Safety of E/E/PE Safety-Related Systems

Safety functions are increasingly being carried out by electrical, electronic, or programmable electronic (E/E/PE) systems. These systems are usually complex, making it impossible in practice to fully determine every failure mode or to test all possible behavior. This makes safety performance difficult to predict, although testing remains essential. The challenge is to design the system in such a way as to prevent dangerous failures or to control them when they arise.

Safety is one of the key issues of today’s and tomorrow’s electrical/electronic/programmable electronic safety-related systems. New functionalities increasingly touch the domain of safety engineering. Each function that is required to keep a risk at an accepted level is called a safety function. To achieve functional safety these functions need to fulfill safety function requirements (what the function does) and safety integrity requirements (the likelihood of a function behaving in a satisfactory manner). Future development and integration of the functionalities containing safety functions will further strengthen the need to have safe system development processes and to provide evidence that all reasonable safety objectives are satisfied.

With the trend toward increasing complexity, software content, and mechatronic implementation, the risks of systematic failures and random hardware failures are rising. IEC 61508 provides guidance for reducing these risks to a tolerable level through practicable requirements and processes.

The purpose of this document is to detail how development testing can help software development teams meet the requirements of particular SILs. It first introduces the concept of SIL as defined by the IEC 61508 standard. Next, it describes an integrated development testing solution for automating best practices in software development and testing. Finally, it presents how a development testing platform can be used to fully or partially satisfy software development process requirements for particular SILs.

Safety Integrity Levels

Safety Integrity Level (SIL)—as defined by the IEC 61508 standard—is one of four discrete levels (SIL 1 to SIL 4), each corresponding to a range of target likelihoods of dangerous failure for a given safety function. Each safety function in a safety-related system needs to have an appropriate safety integrity level assigned. An E/E/PE safety-related system will usually implement more than one safety function. If the safety integrity requirements for these safety functions differ, then unless there is sufficient independence of implementation between them, the requirements applicable to the highest relevant safety integrity level shall apply to the entire E/E/PE safety-related system.

According to IEC 61508, the safety integrity level for a given function is evaluated based on either the average probability of failure to perform its design function on demand (for a low demand mode of operation) or on the probability of a dangerous failure per hour (for a high demand or continuous mode of operation).
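
For example, IEC 61508-1 defines the target failure measures for each level: for a safety function operating in low demand mode, an average probability of dangerous failure on demand between 10^-4 and 10^-3 corresponds to SIL 3 (and between 10^-5 and 10^-4 to SIL 4); for high demand or continuous mode, a probability of dangerous failure per hour between 10^-8 and 10^-7 corresponds to SIL 3.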

The IEC 61508 standard specifies the requirements for achieving each safety integrity level. These requirements are more rigorous at higher levels of safety integrity in order to achieve the required lower likelihood of dangerous failures.

How Development Testing Helps Achieve SIL Safety Requirements

Development testing is best executed on a development testing platform, which automates a broad range of best practices proven to improve software development team productivity and software quality. Development testing facilitates:

  • Static analysis – static code analysis, data flow static analysis, and metrics analysis
  • Peer code review process automation – preparation, notification, and tracking
  • Unit testing – unit test creation, execution, optimization, and maintenance
  • Runtime error detection – memory access errors, leaks, corruptions, and more

This provides teams a practical way to prevent, expose, and correct errors in order to ensure that their code works as expected. To promote rapid remediation, each problem detected is prioritized based on configurable severity assignments, automatically assigned to the developer who wrote the related code, and distributed to his or her IDE with direct links to the problematic code and a description of how to fix it.

Enforce Your Programming Policies with Static Analysis

A properly implemented coding policy can eliminate entire classes of programming errors by establishing preventive coding conventions. Organizations should use a development testing platform to enforce a coding standards policy specific to their needs. Software engineers should statically analyze code to check compliance with such a policy.

Static analysis works by checking code against a database of rules that define how code should be written. These rules help identify potential defects, enforce best coding practices, and improve code maintainability and reusability. A development testing platform should also enforce standard API usage and prevent the recurrence of application-specific defects after a single instance has been found.
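
As an illustration, the following hypothetical C fragment (not taken from any particular rule set) contains the kind of constructs that coding-standard rules commonly flag, such as an implicit narrowing conversion and multiple exit points from a function:

    #include <stdint.h>

    /* Hypothetical example of constructs a rule checker would report. */
    int16_t scale_reading(int32_t raw, int32_t factor)
    {
        if (factor == 0)
        {
            return -1;           /* rule violation: multiple exit points in a function */
        }
        return raw / factor;     /* rule violation: implicit int32_t-to-int16_t narrowing conversion */
    }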

Interprocedural static analysis simulates feasible application execution paths—which may cross multiple functions and files—and determines whether these paths could trigger specific categories of runtime bugs. A development testing platform should be able to detect the use of uninitialized or invalid memory, null pointer dereferencing, array and buffer overflows, division by zero, memory and resource leaks, and various flavors of dead code. The ability to expose bugs without executing code is especially valuable for embedded code, where detailed runtime analysis for such errors is often not effective or possible.
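
For example, an interprocedural analysis can follow a possibly NULL return value across a function boundary to the point where it is dereferenced. The fragment below is a hypothetical sketch of the kind of defect such analysis exposes without executing the code:

    #include <stdlib.h>
    #include <string.h>

    static char *make_buffer(size_t size)
    {
        return (char *)malloc(size);   /* may return NULL on allocation failure */
    }

    void store_reading(const char *reading)
    {
        char *buf = make_buffer(64u);
        strcpy(buf, reading);          /* possible NULL pointer dereference reported by
                                          flow analysis; also a potential buffer overflow
                                          if the input exceeds 63 characters */
    }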

Automated Code Review

Code review is widely regarded as one of the most effective ways to uncover code defects. Unfortunately, many organizations underutilize code review because of the extensive effort it is thought to require. Two common code review workflows are:

  • Post-commit code review. Code changes are identified automatically in the source repository via custom source control interfaces, and code review tasks are created based on a pre-set mapping of changed code to reviewers.
  • Pre-commit code review. Users initiate a code review from the desktop, either by selecting a set of files to distribute for review or by automatically identifying all locally changed source code.

Organizations should use a development testing platform to automate team code reviews following static analysis. The one-two punch of static analysis and automated code review virtually eliminates the need for line-by-line review because the team’s coding policy is monitored automatically. By the time code is submitted for review, violations have already been identified and cleaned up. Reviews can then focus on examining algorithms, reviewing design, and searching for subtle errors that automated tools cannot detect.

Monitor the Application for Memory Problems

Application memory monitoring is the best-known development testing practice for eliminating serious memory-related bugs with zero false positives. It works by constantly monitoring the application for certain classes of problems—such as memory leaks, null pointer dereferences, uninitialized memory, and buffer overflows—with results visible immediately after the testing session finishes. A development testing platform will instrument the application (add special instructions to the code for monitoring purposes) and analyze it while standard functional tests are executed.
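
As a simple illustration, runtime instrumentation would catch problems like those in the following hypothetical fragment the first time a functional test exercises it:

    #include <stdlib.h>

    void record_samples(const int *samples, int count)
    {
        int i;
        int *copy = (int *)malloc((size_t)count * sizeof(int));

        for (i = 0; i <= count; i++)   /* off-by-one: the last iteration reads and writes
                                          one element past the end of the buffers */
        {
            copy[i] = samples[i];
        }
        /* 'copy' is never freed, so the monitoring run also reports a memory leak */
    }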

If your development testing platform collects coverage metrics, you can check the coverage report to see which parts of the application were exercised and fine-tune the set of regression unit tests (complementary to functional testing). Runtime error detection allows you to:

  • Identify complex memory-related problems through simple functional testing—for example, memory leaks, null pointer dereferences, uninitialized memory, and buffer overflows
  • Collect code coverage from application runs
  • Increase the accuracy of testing results by executing the monitored application in a real target environment
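
For instance, a coverage report can reveal an error-handling branch that the functional tests never reach; the sketch below uses a hypothetical routine to show the kind of gap that would then be closed with a dedicated regression unit test:

    /* Hypothetical routine: "happy path" functional tests rarely pass
       out-of-range values, so the error branch shows up as uncovered in
       the branch/MC/DC coverage report until a unit test is added for it. */
    int clamp_percentage(int value)
    {
        if ((value < 0) || (value > 100))
        {
            return -1;    /* error branch that needs a dedicated regression test */
        }
        return value;
    }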

SIL Requirements

The following tables describe how a development testing platform supports the software development lifecycle methods required for safety functions to achieve a given SIL. The information presented here covers the SIL-related verification and testing processes; please refer to the standard and consult functional safety experts for clarification of any requirements defined by IEC 61508. The following markers are used in the tables below:

  • R – functionalities matching methods recommended by the IEC 61508 standard
  • HR – functionalities matching methods highly recommended by the IEC 61508 standard

Development Testing Platform Capability descriptions contain references to the appropriate techniques/measures as defined in IEC 61508-3, Annexes A and B; for example, (Table A.3:1) references IEC 61508-3, Table A.3, Technique 1.

Coding Standards Compliance – Static Analysis

| Development Testing Platform Capability | SIL 1 | SIL 2 | SIL 3 | SIL 4 |
| --- | --- | --- | --- | --- |
| Coding standards compliance module – general | | | | |
| Using a static analysis tool for the C programming language (Table A.3:1) | HR | HR | HR | HR |
| Enforcement of specific coding standards (Table A.4:5) | R | HR | HR | HR |
| Using static analysis (Table A.9:3) | R | HR | HR | HR |
| Analysis types | | | | |
| Using code metrics (e.g. function size, function parameter counts, etc.) to enforce structured programming (Table A.4:6) | HR | HR | HR | HR |
| Enforcement of industry-recognized coding standards rule sets, such as MISRA C/C++, JSF, HIS source code metrics, etc. (Table B.1:1) | HR | HR | HR | HR |
| Enforcement of specific coding conventions (Table B.1:1) | HR | HR | HR | HR |
| Enforcement of specific formatting conventions (Table B.1:1) | HR | HR | HR | HR |
| Using code metrics (e.g. cyclomatic complexity, essential complexity, etc.) to enforce low complexity of the code (Table A.9:5, Table A.10:3) | R | R | R | R |
| Using coding standards to avoid common failures (Table A.10:5) | | R | HR | HR |
| Using coding standards to enforce using only a subset of the language, e.g. to avoid unsafe constructions (Table A.3:3) | | | HR | HR |
| Specific coding standards guidelines | | | | |
| Finding multiple exit points in functions (Table B.9:4) | HR | HR | HR | HR |
| Finding implicit conversions to enforce strong typing (Table A.3:2) | HR | HR | HR | HR |
| Using source code metrics to reduce the software module size (Table B.9:1) | R | HR | HR | HR |
| Finding unconditional jumps (Table B.1:7) | R | HR | HR | HR |
| Reporting unsafe usage of dynamic objects (Table B.1:2) | R | HR | HR | HR |
| Enforcement of information hiding / encapsulation (Table B.9:2) | | R | HR | HR |
| Enforcement of defensive implementation techniques – for example, checking the return value of malloc, checking the error code value returned by called functions, etc. (Table A.4:3) | | R | HR | HR |
| Reporting unsafe usage of dynamic variables (Table B.1:3a) | | R | HR | HR |
| Reporting recursive functions (Table B.1:6) | | R | HR | HR |
| Reporting unsafe pointer usage (Table B.1:5) | | R | HR | HR |
| Enforcement of failure assertion programming (Table A.2:3a) | R | R | R | HR |
| Using source code metrics to limit the parameter number in functions (Table B.9:3) | | | HR | HR |

Note that working in the C language without specific coding standards and static analysis tools is explicitly not recommended for SIL 3 and SIL 4.
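
To make two of the techniques above concrete, the following sketch (hypothetical code, not tied to any particular tool) combines defensive implementation, i.e. checking the value returned by malloc and propagating an error code (Table A.4:3), with failure assertion programming (Table A.2:3a):

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    #define LOG_CAPACITY 32u

    typedef enum { STATUS_OK = 0, STATUS_NO_MEMORY = 1 } status_t;

    /* Defensive implementation: the malloc result is checked and an error
       code is returned instead of letting a NULL pointer propagate. */
    status_t create_log(double **log_out)
    {
        double *log = (double *)malloc(LOG_CAPACITY * sizeof(double));
        if (log == NULL)
        {
            *log_out = NULL;
            return STATUS_NO_MEMORY;
        }
        (void)memset(log, 0, LOG_CAPACITY * sizeof(double));
        *log_out = log;
        return STATUS_OK;
    }

    /* Failure assertion programming: preconditions are stated explicitly
       so violations are detected close to their cause during testing. */
    void record_value(double *log, unsigned int index, double value)
    {
        assert(log != NULL);
        assert(index < LOG_CAPACITY);
        log[index] = value;
    }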

Static Data and Execution Flow Analysis

| Development Testing Platform Capability | SIL 1 | SIL 2 | SIL 3 | SIL 4 |
| --- | --- | --- | --- | --- |
| Flow analysis – general | | | | |
| Using static flow analysis (Table B.8:4) | R | HR | HR | HR |
| Using flow control analysis (Table B.8:3) | R | HR | HR | HR |
| Analyzing source code using an abstract representation of possible values for the variables using data flow diagrams (Table B.8:8) | R | R | HR | HR |
| Flow analysis – specific rule examples | | | | |
| Reporting erroneous pointer issues (Table B.1:5) | | R | HR | HR |

Unit Testing

| Development Testing Platform Capability | SIL 1 | SIL 2 | SIL 3 | SIL 4 |
| --- | --- | --- | --- | --- |
| Unit testing – general | | | | |
| Unit test execution (Table A.5:4, Table A.7:3) | HR | HR | HR | HR |
| Automatic unit test generation | | | | |
| Automatic unit test generation using boundary values (Table B.2:1, Table B.3:3) | R | HR | HR | HR |
| Using factory functions to prepare sets of input parameter values for automatic unit test generation (Table B.2:4) | R | R | R | HR |
| Automatic unit test generation using random input combinations (Table A.5:1) | | R | R | HR |
| Test management | | | | |
| Using user-defined tests to test specific atomic cases of the given requirement (Table A.5:4, Table A.7:3) | HR | HR | HR | HR |
| Using Data Sources to efficiently provide multiple inputs for functionally equivalent atomic cases of the given requirement (Table A.5:4, Table A.7:3) | R | R | R | R |
| Using Test Case Explorer for managing test cases and reviewing test case status (Table A.5:2) | R | HR | HR | HR |
| Function stubs | | | | |
| Using stubs to control the flow of the executed tests as specified in the given requirement (Table A.5:4) | HR | HR | HR | HR |
| Using function stubs to substitute the user interface for automatic unit test execution (Table A.5:6) | R | R | HR | HR |
| Using stubs to provide fault conditions in tests (Table B.2:2) | R | R | R | R |
| Coverage | | | | |
| Analyzing statement, branch, and MC/DC code coverage for structure testing (Table B.2:6) | R | R | HR | HR |

The development testing platform should:

  • Run the unit tests in both instrumented and non-instrumented mode—for example, to show that coverage instrumentation does not affect the test results.
  • Execute unit tests in the production environment on a target device or on a simulator.
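
As a minimal, tool-agnostic sketch (the function and stub names are hypothetical), a unit test for a module that reads a sensor might replace the hardware access with a stub and exercise boundary values of the valid input range:

    #include <assert.h>
    #include <stdio.h>

    /* Unit under test (normally in its own source file): converts a raw
       12-bit ADC reading into a temperature, rejecting out-of-range input. */
    extern int read_adc(void);                    /* hardware access, stubbed below */

    int read_temperature(int *temperature_out)
    {
        int raw = read_adc();
        if (raw < 0 || raw > 4095)
        {
            return -1;                            /* defensive range check */
        }
        *temperature_out = (raw * 200) / 4096 - 50;
        return 0;
    }

    /* Stub controlling the flow of the test (substitutes the hardware). */
    static int stub_adc_value;
    int read_adc(void) { return stub_adc_value; }

    int main(void)
    {
        int t;

        /* Boundary values of the valid input range. */
        stub_adc_value = 0;     assert(read_temperature(&t) == 0 && t == -50);
        stub_adc_value = 4095;  assert(read_temperature(&t) == 0 && t == 149);

        /* Fault condition injected through the stub. */
        stub_adc_value = -1;    assert(read_temperature(&t) == -1);

        (void)printf("unit tests passed\n");
        return 0;
    }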

Application Monitoring

| Development Testing Platform Capability | SIL 1 | SIL 2 | SIL 3 | SIL 4 |
| --- | --- | --- | --- | --- |
| Application monitoring module – general | | | | |
| Monitoring of the running application, reporting detected runtime problems (Table A.9:4) | R | HR | HR | HR |
| Coverage module | | | | |
| Analyzing statement, branch, and MC/DC code coverage for structure testing (Table B.2:6) | R | R | HR | HR |

Summary

Development testing helps software development teams fully or partially satisfy the software development process requirements defined for all Safety Integrity Levels by the IEC 61508 standard. A broad range of analysis types—including coding standards compliance analysis, data and control flow analysis, unit testing, application monitoring, and automated peer code review—together with configurable, detailed test reports significantly facilitates the work required for the software verification process.