In Parts 1 and 2 of this series on Safety Requirements Specification (SRS), we introduced the challenges, proposed staged development of the SRS as a remedy, and discussed classification and traceability of requirements. In this final part we look at what is normally ‘not said’ in SRS documents. These untold requirements are just as important as what is ‘said,’ and they are equally critical to delivering safety. We also discuss in-depth inspection of the SIS application program and explain how it can help meet the untold requirements.

5. What Was Not Said: The Untold Requirements in an SRS
Safety Requirements Specifications (SRS) for Safety Instrumented Systems (SIS) typically focus on what the system must do, rather than what it must not do. For instance, a typical positive requirement might state: “A close command must be initiated to the valve if the tank pressure exceeds the 98 bar threshold.”
However, it’s much less common to find statements like: “No alarms should stay active on the operator screen when the plant is in ‘normal’ operation mode,”
or “No reset command should be initiated to energize the trip relay while the pump is in an emergency shutdown state.”
These are negative requirements, which define actions or conditions that must not occur under certain circumstances. In contrast, positive requirements define what should happen. Negative requirements are common in fields like software development—e.g., “The program must not enter an infinite loop”—but are often overlooked in process industry SRS documents.
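The distinction can be made executable. Below is a minimal Python sketch encoding the two quoted requirements as separate checks; the function names (`close_cmd`, `reset_allowed`) are illustrative only, not taken from any standard or product:

```python
def close_cmd(tank_pressure_bar: float) -> bool:
    """Positive requirement: initiate a close command above the 98 bar threshold."""
    return tank_pressure_bar > 98.0

def reset_allowed(pump_in_esd: bool) -> bool:
    """Negative requirement: never issue a reset while the pump is in
    an emergency shutdown (ESD) state."""
    return not pump_in_esd

# The positive check alone says nothing about the forbidden behaviour:
assert close_cmd(99.0) is True
assert close_cmd(97.0) is False
# The negative requirement is a standalone check, not the inverse of the first:
assert reset_allowed(pump_in_esd=True) is False
assert reset_allowed(pump_in_esd=False) is True
```

Note that neither function can be derived from the other: the negative requirement constrains a different command (reset) under a different condition (ESD state), which is exactly why it must be stated separately.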
This oversight stems from the assumption that failure cases are well known to system suppliers and therefore don’t need to be specified. However, that assumption is not universally valid:
- Not all system faults stem from standard components; some are project-specific.
- Project engineers may have diverse backgrounds and may not foresee all failure conditions.
When negative requirements are omitted, systematic failures can remain hidden until testing, commissioning, or even real-world operation. Negative requirements are not just the inverse of positive ones—they are independent, standalone requirements that define conditions to prevent.
To improve SRS completeness and reliability, all foreseeable failure scenarios—both positive and negative—should be reviewed and, where necessary, formally included. Methods like Failure Modes and Effects Analysis (FMEA) can help identify these scenarios. A progressive SRS development process also ensures that such requirements are captured as the design matures.
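As a sketch of how an FMEA-style review might flag candidate negative requirements, the toy sweep below cross-checks a failure-mode list against the SRS clauses that address each mode; every component name, failure mode, and clause ID here is hypothetical:

```python
# Hypothetical FMEA-style coverage sweep: list failure modes per component
# and flag any mode that no SRS requirement (positive or negative) addresses.
failure_modes = {
    "pressure transmitter": ["stuck-low", "stuck-high", "drift"],
    "trip relay": ["fails to de-energise", "spurious de-energise"],
}
covered = {  # illustrative mapping from SRS clauses to the modes they address
    "stuck-high": "SRS-12 (trip on high reading)",
    "fails to de-energise": "SRS-27 (proof-test interval)",
}

gaps = [(component, mode)
        for component, modes in failure_modes.items()
        for mode in modes
        if mode not in covered]

for component, mode in gaps:
    # each gap is a candidate for a new (often negative) requirement
    print(f"uncovered: {component} / {mode}")
```

Even a review this simple makes the omissions explicit: a stuck-low transmitter or a spuriously de-energising relay produces no trace in the SRS until someone writes a requirement for it.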
6. Dangerous Omissions: The Risk of Unrequired Failure Scenarios
The SRS is the foundation of any SIS lifecycle. It underpins design, development, verification, and validation. The greatest risk lies in missing requirements—particularly negative ones—that never make it into the design or validation phases.
Sources of systematic faults include:
1. Component-level hardware failures
2. Architectural-level hardware integration faults
3. Embedded software bugs
4. System-level software issues (e.g., OS errors)
5. Application-level software (SIS AP) faults
Among these, application-level faults (Item 5) pose the highest risk. Software is more modifiable and complex than hardware, increasing the chances of undocumented failure modes.
Standard verification and validation processes are limited because they only assess conformance to the SRS—which may not include negative requirements. Where adding such requirements isn’t practical, a deep exploratory inspection of the SIS AP is essential.
Techniques to Identify Hidden System Faults
Two methods stand out:
- Failure Mode Reasoning (FMR):
FMR analyzes the actual SIS application logic to identify how faulty inputs or parameters may propagate and cause undesired states. This logic-based analysis goes beyond written requirements and reveals systemic risks.
- Automated Simulation Testing:
This method simulates SIS behavior by feeding dynamic test cases into the system’s software interface. Unlike manual testing, which often stops at the SRS boundary, automated tools can explore a broader range of scenarios and uncover edge-case failures.
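A minimal sketch of the simulation idea: random scan inputs are replayed against a toy trip latch and each result is checked against the negative requirement quoted earlier. The logic below, including its flaw (reset is honoured even during an ESD state), is invented purely for illustration:

```python
import random

def scan(tripped: bool, pressure: float, reset: bool, esd: bool) -> bool:
    """One scan of a toy trip latch. Deliberate flaw for illustration:
    the reset command is honoured even while the pump is in its
    emergency-shutdown (esd) state."""
    if pressure > 98.0:
        return True       # positive requirement: trip on high pressure
    if reset:
        return False      # flaw: no interlock against the esd state
    return tripped

random.seed(42)           # reproducible test-case generation
violations = 0
for _ in range(1_000):
    pressure = random.uniform(0.0, 98.0)   # pressure back below threshold
    reset = random.random() < 0.5
    new_state = scan(tripped=True, pressure=pressure, reset=reset, esd=True)
    # negative requirement: no reset may de-energise the trip relay
    # while the pump is in an emergency-shutdown state
    if reset and not new_state:
        violations += 1

print(violations)   # nonzero: the simulator exposes the unstated failure case
```

Because the flaw violates no positive requirement, a test campaign that stops at the SRS boundary would never generate these cases; the random driver finds them immediately.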
Both techniques—FMR and automated test generation—can reveal failure scenarios not documented in the SRS. However, they may require custom tool development, as commercial solutions for such advanced analysis are still limited.
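FMR itself operates on the real application logic, but its backward style of reasoning can be hinted at with a toy 2oo2 trip function: starting from the undesired state "trip wrongly absent under demand," enumerate which single input fault modes could produce it. The fault list and logic here are illustrative only:

```python
# Toy 2oo2 voting logic: both pressure channels must read high to trip.
def trip(p1_high: bool, p2_high: bool) -> bool:
    return p1_high and p2_high

# Hypothetical single-fault modes and the input values they force.
FAULT_MODES = {
    "p1 stuck-low": {"p1_high": False},
    "p2 stuck-low": {"p2_high": False},
}

# Demand present: both pressures genuinely high. Reason backward:
# which single faults mask the trip (a dangerous undetected failure)?
masking = []
for name, forced_inputs in FAULT_MODES.items():
    inputs = {"p1_high": True, "p2_high": True}
    inputs.update(forced_inputs)   # inject the fault
    if not trip(**inputs):         # trip wrongly absent under demand
        masking.append(name)

print(masking)   # every single stuck-low fault masks a 2oo2 trip
```

The point of the sketch is the direction of the analysis: it starts from a dangerous output state and works back to input fault modes, none of which need to appear in the SRS for the method to find them.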
7. Conclusion: Toward a More Resilient SRS
This 3-part article explored SRS challenges and proposed actionable best practices to minimize systematic failures in safety-critical systems. While these recommendations are not one-size-fits-all, they can significantly reduce risk and improve project outcomes.
Key Takeaways:
- Classify and trace all requirements
- Distinguish normative vs. informative content
- Include negative requirements where applicable
- Clarify roles and responsibilities in requirement definition
- Develop the SRS progressively throughout the design
- Use exploratory techniques such as FMR or simulation testing alongside standard verification
By incorporating both positive and negative requirements into the SRS—and supplementing the process with in-depth software inspection—you can enhance the completeness, clarity, and robustness of safety-critical systems.
References
- AS/NZS ISO/IEC/IEEE 15288: System and software engineering – System life cycle processes. ISO (2015).
- ISO/IEC/IEEE 12207: System and software engineering – Software life cycle processes. ISO (2017).
- CENELEC: EN 50126: Railway Applications – The Specification and Demonstration of Reliability, Availability, Maintainability and Safety (RAMS) (2017).
- Hirshorn, S.R., Voss, L.D., Bromley, L.K.: NASA Systems Engineering Handbook. NASA/Langley Research Center, Hampton (2017).
- IBM: Engineering Requirements Management DOORS – Overview of DOORS. https://www.ibm.com/docs/en/engineering-lifecycle-management-suite/doors/9.7.0?topic=overview-doors, Accessed Mar-2025.
- IEC: IEC 61508-4: Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 4: Definitions and abbreviations (2010).
- IEC: IEC 61511-1: Functional safety – Safety instrumented systems for the process industry sector – Part 1: Framework, definitions, system, hardware and application programming requirements (2016).
- IEC: IEC 60812: Failure Mode and Effects Analysis (FMEA and FMECA). IEC (2018).
- INCOSE: Guide to Writing Requirements. INCOSE (2023).
- INCOSE, Wiley: INCOSE Systems Engineering Handbook: A Guide for System Life Cycle Processes and Activities. John Wiley & Sons, Incorporated, New York (2015).
- Jahanian, H.: Failure mode reasoning in safety-critical programs. Ph.D. thesis, Macquarie University.
- Jahanian, H., Parker, D., Zeller, M., McIver, A., Papadopoulos, Y.: Failure Mode Reasoning in Model Based Safety Analysis. In: 7th International Symposium on Model-Based Safety and Assessment (2020).
Note: This article is an adapted version of parts of an original paper shared on arXiv (2503.13958), and it is also listed on our Resources page: Resources – Slima.