Designed to speed up the development and integration of batteries and accelerate the transition to electric vehicles (EVs), the product was developed by London-headquartered Monolith.
Speaking at the Battery Show North America yesterday (14 September), CEO and founder Dr Richard Ahlfeld said that battery testing has become a “significant bottleneck” to the timely launch of new EVs, compounded by escalating demand and intense competition around range and charging times.
“Engineers perform battery tests across thousands of channels, generating terabytes of data per week. They’re running out of test stands and don’t know what optimal tests to run, and certainly don’t have the ability to learn from this vast amount of data as quickly as they need,” he said.
“This is where AI comes in. Through the ability to learn from data, test engineers can understand behaviour characteristics that are so complex that, without the right tools, they are incredibly difficult to decipher. AI software that learns from real-world test data is a reliable and effective means of solving the intractable physics of batteries that current simulation and test planning tools don’t efficiently solve.
“The promise of AI, therefore, is simple: test plan optimisation that offers greater R&D efficiency and faster time-to-market. For the electric car industry, this means speeding the development and integration of batteries, and, for customers, a faster and safer transition to electric vehicles.”
Earlier this year, Monolith released a new product based on its machine learning approach: the Next Test Recommender (NTR). Built on a “robust active learning algorithm”, the NTR recommends which validation tests to run during the development of hard-to-model products such as batteries and fuel cells. Using the algorithm, engineers can reduce testing by up to 70%, the company said.
“In one fuel cell use case, an engineer trying to configure a fan to provide optimal cooling for all driving conditions had a test plan for this highly complex application that included running a series of 129 tests,” a Monolith announcement said.
“When this test plan was inserted into Monolith software, it returned a ranked list of what tests should be carried out first. Out of 129 tests, the platform recommended the last test – number 129 – should actually be among the first five to run, and that 60 tests were sufficient to characterise the full performance of the fan, representing a 53% reduction in testing.”
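The announcement does not describe Monolith’s algorithm, but the ranking behaviour in the fan example can be illustrated with a simple active-learning heuristic. The sketch below uses greedy max-min distance selection as a stand-in for uncertainty sampling: the next test scheduled is the one farthest (in the test-condition space) from everything already run, so each test adds the most new information. The `rank_tests` function and the operating-point data are hypothetical illustrations, not Monolith’s method.

```python
# Hedged sketch of active-learning-style test ranking.
# NOT Monolith's algorithm: a max-min-distance heuristic stands in for a
# proper model-uncertainty criterion, purely to show how a fixed test plan
# can be reordered so the most informative tests run first.

def rank_tests(candidates, seed_index=0):
    """Return candidate indices in a recommended execution order."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    remaining = list(range(len(candidates)))
    order = [remaining.pop(seed_index)]     # start from an arbitrary seed test
    while remaining:
        # Schedule next the candidate farthest from all tests already chosen,
        # i.e. the point about which the current data says the least.
        best = max(remaining,
                   key=lambda i: min(dist(candidates[i], candidates[j])
                                     for j in order))
        remaining.remove(best)
        order.append(best)
    return order

# Hypothetical fan operating points as (normalised speed, load) pairs.
plan = [(0.0, 0.0), (1.0, 1.0), (0.1, 0.0), (0.9, 1.0), (0.5, 0.5)]
order = rank_tests(plan)
print(order)  # extreme corners and the centre come first; near-duplicates last
```

Under a heuristic like this, a test sitting at the end of the engineer’s original plan can easily be promoted to the front if it probes an unexplored region, which mirrors the announcement’s point about test number 129 landing in the first five.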
Content published by Professional Engineering does not necessarily represent the views of the Institution of Mechanical Engineers.