IMechE is driving forward a programme, championed by its 132nd president, Carolyn Griffiths, and co-sponsored by the Royal Academy of Engineering and the Parliamentary Advisory Council for Transport Safety, to establish better practices for safely integrating autonomous systems. During her time as chief inspector of the Rail Accident Investigation Branch, Griffiths saw her fair share of accidents caused by inadequate understanding and anticipation of human performance.
She said: “We must step up our understanding of the range of human performance, how it can change over time or due to other factors, in an automated environment. We are overdue a systematic framework of protocols and principles that ensure we engage new understanding. Engineers must recognise that complex systems require people and technologies to work together, contributing to the performance of the systems themselves, as well as the impact of system design on behaviour.”
To this end, Griffiths organised the 2024 Thomas Hawksley Lecture by Professor Sarah Sharples, chief scientific adviser at the Department for Transport. Feedback from the event confirmed that this area needs more concerted attention from the engineering community, so Griffiths and Sharples convened a roundtable of 20 of the UK’s top experts to discuss these challenges, focused on four areas. What follows is a summary of each challenge and a high-level overview of the key takeaways.
1 How should the engineering design process change with the adoption of AI in transport?
The behaviour of autonomous and AI-enabled systems poses challenges due to the complexity of the systems themselves and their ability to evolve and learn, as well as their operation within a complex and highly variable system of systems. Operator performance will also fluctuate due to natural factors such as health, ageing and ambient conditions. Interactions between the driver, other vehicles, the environment and remote decision-makers produce an effectively infinite set of circumstances for autonomous systems to process and react to.
Iterative design is key to building up the systems’ capability to manage this complexity, as initially unforeseen interactions and circumstances emerge in testing. This more agile design process is not yet prevalent, but it would provide designers with a wealth of additional data to form the basis of assurance for systems that include elements of AI.
A threshold must be agreed upon to define what is “safe enough”. That threshold will differ between modes, reflecting the variation in degrees of freedom across them. Professor Roderick Muttram, who chaired the discussion, noted: “The improvements made in UK mainline rail mean that the last driver error-caused fatality was at Ladbroke Grove in 1999, over 25 years ago. Any system for controlling trains must achieve a similar level of integrity.”
Realistically, adopting a threshold is likely to mean regulation will exclude from deployment some technology capabilities for which there is less assurance data, but this compromise is imperative to help build trust. Moreover, these considerations should feature in design and validation processes, defined as part of the regulated approvals process.
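To make the idea of a quantified threshold concrete, here is a minimal sketch of one statistical approach used in safety engineering: deriving an upper confidence bound on a failure rate from accumulated operating experience and comparing it against a target. The target rate, exposure figures and function names below are purely illustrative assumptions, not figures from any regulated approvals process.

```python
# Illustrative sketch only: judging whether operational evidence supports a
# hypothetical "safe enough" threshold. Not a regulatory method.
from scipy.stats import chi2

def failure_rate_upper_bound(failures: int, exposure_hours: float,
                             confidence: float = 0.95) -> float:
    """One-sided upper confidence bound on a Poisson failure rate
    (events per operating hour), via the exact chi-squared method."""
    return chi2.ppf(confidence, 2 * (failures + 1)) / (2 * exposure_hours)

# Hypothetical numbers: zero at-fault fatal events over 5 million operating
# hours, judged against a hypothetical target of 1e-6 events per hour.
bound = failure_rate_upper_bound(failures=0, exposure_hours=5_000_000)
print(f"95% upper bound: {bound:.2e} events/hour")
print("meets threshold" if bound <= 1e-6 else "insufficient evidence")
```

The point of the sketch is how demanding such evidence is: even with no observed events, millions of hours of exposure are needed before the statistics alone can support a claim at this (hypothetical) level.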
2 How do we develop a means of safety assurance for transport systems that incorporate AI?
Many AI technologies are intended to change and improve over time, learning through exposure to different experiences and circumstances. This presents a challenge for assurance and regulation: how do we regulate a technology that evolves after its efficacy has been assessed?
Machine learning and AI-enabled systems can self-learn from data collected in operations, but this self-learning should take place in a controlled environment away from production, so that changes can be validated and verified before being deployed on vehicles in live environments.
This need for assurance will limit the rate at which evolving systems can change and constrain some of the benefits of ‘continuously deployed AI’, but it is a necessary trade-off to make systems safer.
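As a rough illustration of this trade-off, the sketch below models the controlled release cycle described above: a candidate model retrained offline must pass a validation gate before it replaces the assured version running in the fleet. All names and numbers here (CandidateModel, SAFETY_THRESHOLD, the pass rates) are hypothetical placeholders, not any real framework.

```python
# Sketch of a gated "learn offline, validate, then deploy" release cycle.
from dataclasses import dataclass

@dataclass
class CandidateModel:
    version: str
    scenario_pass_rate: float  # fraction of validation scenarios handled safely

SAFETY_THRESHOLD = 0.999  # hypothetical minimum pass rate for release

def validate(candidate: CandidateModel, baseline: CandidateModel) -> bool:
    """Release gate: the candidate must meet the absolute threshold AND
    must not regress against the currently deployed baseline."""
    return (candidate.scenario_pass_rate >= SAFETY_THRESHOLD
            and candidate.scenario_pass_rate >= baseline.scenario_pass_rate)

def release_cycle(candidate: CandidateModel,
                  baseline: CandidateModel) -> CandidateModel:
    """Only a validated candidate replaces the deployed baseline; otherwise
    the fleet keeps running the previously assured version."""
    if validate(candidate, baseline):
        print(f"Deploying {candidate.version}")
        return candidate
    print(f"Rejecting {candidate.version}; fleet stays on {baseline.version}")
    return baseline

deployed = CandidateModel("v1.0", 0.9991)
candidate = CandidateModel("v1.1", 0.9995)  # retrained offline on fleet data
deployed = release_cycle(candidate, deployed)
```

The gate makes the constraint explicit: assurance throughput, not training throughput, sets the pace at which the deployed system is allowed to change.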
There is also the question of transparency into both the algorithms at the heart of these systems and the performance of AI models within strict intellectual property frameworks. Safety can currently only be assured through an understanding of what is going on in the decision-making process. Yet the use of opaque technologies, which even their designers cannot fully explain, invites the so-called ‘fallacy of explainability’, i.e. the belief that, because humans can explain their decisions, AI-based systems should be able to produce equally compelling explanations.
A culture where manufacturers work with regulators, extending what is required by the legislation that established air, rail and marine investigation branches, will allow accidents and aberrant performance to be investigated without threatening intellectual property. This open collaboration is essential for safety.
3 How should training evolve to enable engineering and human factors professions to work effectively with AI?
As the technology underpinning safety-critical and safety-related systems evolves, so must the competencies of designers, operators and, to some extent, the public when it comes to products on the market.
The obvious answer is that existing further and higher education courses must evolve with contemporary challenges in mind. There is agreement that teaching young engineers systems thinking, and generally making courses more interdisciplinary, is desirable. However, there is already great pressure on syllabi; at some point, adding more content to undergraduate courses means dispensing with other material. Incorporating these principles into education may take time, and the skills gap risks widening further, but there is an imperative for change to reflect the technology now in widespread use.
Much of the emphasis, at least in the short to medium term, must be placed on continuing professional development (CPD), both to upskill experienced engineers and to meet the needs of early-career professionals taking their first steps in industry. CPD cannot replace education as the process for embedding core competencies in engineers, but it can bridge a gap in workforce capability while the technology becomes more prevalent, and it can ensure professionals remain competent as technology evolves.
Professor John McDermid, director of the Centre for Assuring Autonomy at the University of York, had this to say: “Our experience is that CPD must be grounded in practical issues that are relevant to the domain, e.g. rail or maritime. We see real willingness to engage at the engineering level, but it is also important to provide CPD to policy and decision-makers.”
In the UK, a clear case must be made for regulators, the Engineering Council and professional engineering institutions to embed human factors into both education and CPD, and likewise human factors professionals must be equipped with a robust understanding of the design process.
4 What is needed to support multidisciplinary teams to maximise the impact of AI on transport safety, effectiveness and resilience?
The emphasis on interdisciplinary thinking must go beyond traditional branches of engineering to include human factors. Incorporating human factors understanding should be mandated, not ‘left to chance’. Professionals at the interface between engineering and human factors should understand both how to identify and mitigate the risks, and how to capitalise on the opportunities, of integrating human performance, automation and AI.
Frequently, human factors concerns are thought to be taken care of simply by the presence of a specialist, rather than being considered at each stage of the design process. There is a risk that businesses only see the true value of human factors when damage is done, either to their bottom line or after a serious accident. Instead, engineers must take responsibility for considering the principles of human factors throughout the design process – embracing human factors expertise rather than seeing it as a box to tick.
Emphasis must be placed on shared experiences and a common language within multidisciplinary teams to change the culture and enable the development of hybrid skillsets. Engineers throughout the value chain must buy into a set of principles underpinned by an all-encompassing appreciation of the relation between engineering, human factors and safety.
Summary
Work continues within this programme to define the problems described above and how they can be resolved. The resulting roadmap will build on the many existing sources of research and opinion, and on the expertise of those engaged in this workstream.
What is clear already is that these challenges will not be overcome individually, in one mode of transport or in one discipline of engineering. Problems emerging from the deployment of new technology in a system of systems require the input of experts drawn from across those systems.