A case study on safety engineers using AI tools to speed up software release approvals
From The Embassy of Good Science
What is this about?
This case study examines how safety engineers in the automotive industry use AI tools to accelerate safety analyses. Companies developing autonomous and assisted driving systems must comply with strict safety standards such as ISO 26262 and ISO 21448 (SOTIF), which require analyses that identify hazards, unsafe control actions, and potential risks using methods like System-Theoretic Process Analysis (STPA). NIT is developing an AI-assisted tool that suggests control actions, identifies failure modes, and detects gaps in safety analyses, helping engineers complete safety assessments faster while maintaining compliance with these standards.
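One way such a tool can "detect gaps" is coverage checking: STPA defines four categories in which a control action can be unsafe (not provided, provided and causing a hazard, provided too early/too late/in the wrong order, stopped too soon or applied too long), so any control action not yet assessed against every category is a gap. The sketch below illustrates this idea only; the function and example control actions are hypothetical, not part of NIT's actual tool.

```python
# Hypothetical sketch of STPA coverage-gap detection.
# The four unsafe-control-action (UCA) categories are taken from STPA;
# everything else here is an illustrative assumption.
UCA_CATEGORIES = (
    "not provided",
    "provided causes hazard",
    "too early / too late / wrong order",
    "stopped too soon / applied too long",
)

def find_coverage_gaps(control_actions, assessed):
    """Return (action, category) pairs not yet covered by the analysis.

    control_actions: iterable of control-action names.
    assessed: set of (action, category) pairs already analysed.
    """
    return [
        (action, category)
        for action in control_actions
        for category in UCA_CATEGORIES
        if (action, category) not in assessed
    ]

# Example: two control actions, only one pair assessed so far.
actions = ["apply brakes", "release steering override"]
done = {("apply brakes", "not provided")}
gaps = find_coverage_gaps(actions, done)
print(len(gaps))  # 7 of the 8 action/category combinations still need review
```

A real tool would draw control actions from the system's control structure and track assessments in the safety case database; the point here is only that gap detection can be a mechanical completeness check that still leaves the judgment about each pair to a human engineer.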
Why is this important?
This case is important because safety analyses determine whether vehicles with autonomous and assisted driving systems are safe for road users. As automotive companies release frequent software updates through CI/CD pipelines, there is increasing pressure to speed up approval processes. AI-assisted tools can reduce analysis time and support engineers in identifying hazards. However, they also raise concerns about overreliance on automation, potential complacency among safety engineers, and the risk of unsafe software releases if human oversight is reduced.
For whom is this important?
This case is relevant to safety engineers who use AI-assisted analysis tools, to automotive companies that release software updates for autonomous and assisted driving systems, and to the road users whose safety depends on the quality of these analyses.
What are the best practices?
Best practices include maintaining strong human oversight when using AI-assisted safety analysis tools. Safety engineers should critically evaluate AI-generated suggestions rather than accepting them automatically. Developing algorithmic literacy helps engineers understand AI limitations, scenario coverage, and potential failure modes. Parallel manual analyses can validate AI-supported results and confirm their reliability. Documenting why each AI recommendation is accepted, modified, or rejected is also essential, and continuous monitoring of AI tool performance helps ensure compliance with safety standards and preserves the integrity of safety assessments.
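The documentation practice above can be made concrete as a simple review record: each AI suggestion is logged together with the engineer's decision and rationale, and a release is gated until every record carries a human sign-off. This is a minimal sketch of that idea; the record fields, example findings, and gating function are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical review record for one AI-generated suggestion.
@dataclass
class ReviewRecord:
    suggestion: str          # AI-generated finding, e.g. a failure mode
    decision: str            # "accepted", "modified", or "rejected"
    rationale: str           # why the engineer made this call
    reviewer: Optional[str]  # engineer who signed off; None = unreviewed

def ready_for_release(records):
    """Gate the release until every record has a reviewer and a rationale."""
    return all(r.reviewer and r.rationale for r in records)

# Example: two suggestions, both reviewed and documented.
records = [
    ReviewRecord("brake command omitted at low speed", "accepted",
                 "matches an identified hazard", "engineer-a"),
    ReviewRecord("steering override issued too late", "rejected",
                 "scenario outside the operational design domain", "engineer-b"),
]
print(ready_for_release(records))  # True: every record is signed off
```

Requiring a rationale even for accepted suggestions counters complacency: the engineer must articulate why the AI output is correct, not merely approve it, and the resulting log gives auditors a trace of human oversight.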
