When working with digital twins, industrial companies face a problem rarely discussed in public or in the media: the results of virtual tests with digital twins often diverge from physical tests by as much as 30-40%. Why is that? Andrey Labutin, head of the IT department at JSC V.A. Degtyarev Plant, discusses one possible reason in his series of publications.
When practice doesn’t support theory
Imagine we already have everything we need to build a digital twin, namely:
- a supercomputer for conducting multifactorial multiload virtual tests;
- a ready-made digital object (twin) designed to replace physical testing with virtual;
- correct proven mathematical calculation models for all types of virtual tests that we want to conduct with our digital twin;
- software that can overlay a grid model with a given cell size on the digital object for load calculations and other types of analysis;
- software that can apply our mathematical calculation models to the load grid of the digital object.
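To make the grid-overlay step concrete, here is a minimal sketch (our own illustration, not any specific engineering package): a shape, reduced here to a 2D predicate, is voxelized by marking which grid cells of a given size its geometry occupies; the finer the cell, the more faithful the load grid.

```python
# Illustrative sketch only: overlaying a regular load grid of a given cell
# size on a digital object, here reduced to a 2D shape predicate.

def grid_cells(inside, width, height, cell):
    """Return the set of (i, j) cells whose centers lie inside the shape."""
    cells = set()
    nx, ny = int(width / cell), int(height / cell)
    for i in range(nx):
        for j in range(ny):
            cx, cy = (i + 0.5) * cell, (j + 0.5) * cell  # cell center
            if inside(cx, cy):
                cells.add((i, j))
    return cells

# Example: a 10 x 4 rectangular plate.
plate = lambda x, y: x <= 10 and y <= 4
print(len(grid_cells(plate, 10, 4, 1.0)))   # 40 cells
print(len(grid_cells(plate, 10, 4, 0.5)))   # 160 cells: finer grid, same shape
```

The calculation models from the next list item are then evaluated per occupied cell.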
We calculate and we get certain results.
We then take a physical prototype and run the same tests in real life. Unexpectedly, we get only 70% convergence with the virtual results, where the expected figure was 95% or more.
We take the second physical prototype, perform the same manipulations, and get as little as 60% convergence.
We go back to the virtual prototype, adjust the dimensions and magnitude of force application along with other factors of influence, but the convergence doesn’t improve. We adjust the size of load grid cells and other parameters, but the results remain disappointing.
The saddest thing is that the convergence of different physical prototypes is not a stable value. This means the problem is neither the calculation models nor the digital twin; it’s the physical objects.
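The convergence percentages above can be quantified in more than one way; the article does not fix a formula, so the sketch below assumes a simple metric: 100% minus the mean relative deviation of physical measurements from the digital twin’s predictions.

```python
# One possible convergence metric (an assumption; the text does not define
# the exact formula): 100% minus the mean relative deviation of physical
# measurements from the digital twin's predictions.

def convergence_pct(virtual, physical):
    devs = [abs(p - v) / abs(v) for v, p in zip(virtual, physical)]
    return 100.0 * (1.0 - sum(devs) / len(devs))

virtual = [100.0, 200.0, 50.0]   # digital-twin predictions (arbitrary units)
proto_1 = [ 70.0, 140.0, 35.0]   # first physical prototype
proto_2 = [ 60.0, 120.0, 30.0]   # second physical prototype

print(round(convergence_pct(virtual, proto_1), 1))  # 70.0
print(round(convergence_pct(virtual, proto_2), 1))  # 60.0
```

Note that the metric itself is stable; it is the physical measurements feeding it that drift from prototype to prototype.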
After painstaking measurements and verification that the actual processing of the physical prototypes complies with the design and the technical process, we conclude that we need to check the quality of the materials the prototypes are made of.
The calculation model was based on reference data, including the properties of the materials used in designing our product and its digital twin. The discrepancies between the virtual and physical test data are caused by nothing less than the discrepancies between the materials’ reference data and the actual properties of the materials in a particular physical prototype.
Moreover, the calculations' discrepancy of prototypes made fr om materials that came in one batch means that materials within one batch, even one sheet, bar, pipe, etc. may have different properties.
Imagine your virtual tests involved bending the structure. To visualize it, picture a chair whose legs are made of steel tubes. The force is applied to the tip of the rear leg at an angle of 20 degrees. The tubes for all four legs were cut from one longer tube. Yet all four legs show different physical testing results, each deviating differently from the single stable calculated value of the digital twin.
The reason may be:
- the steel tube was manufactured according to specifications, but with deviations in the steel’s chemical composition;
- the tube’s wall thickness varied along its length beyond the limits allowed by the national standard.
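The second cause is easy to quantify with textbook beam theory. Treating a leg as a cantilevered tube, the tip deflection is F*L**3 / (3*E*I), with the tube’s second moment of area I = pi*(D**4 - d**4)/64 and inner diameter d = D - 2*t. All the numbers below are assumed purely for illustration.

```python
import math

# Illustrative numbers only: how a wall-thickness deviation in a steel tube
# propagates into a bending result. Cantilever tip deflection:
#   delta = F * L**3 / (3 * E * I),  I = pi * (D**4 - d**4) / 64,  d = D - 2*t

def tip_deflection(F, L, E, D, t):
    d = D - 2 * t                        # inner diameter, m
    I = math.pi * (D**4 - d**4) / 64     # second moment of area, m^4
    return F * L**3 / (3 * E * I)        # deflection, m

F, L, E, D = 200.0, 0.45, 210e9, 0.025   # N, m, Pa (steel), m -- assumed
nominal = tip_deflection(F, L, E, D, t=0.0020)  # 2.0 mm wall (reference data)
thin    = tip_deflection(F, L, E, D, t=0.0016)  # 1.6 mm wall (real tube)

# A modest, local thickness deviation shifts the computed deflection by ~19%:
print(f"{(thin / nominal - 1) * 100:.1f}%")  # 19.0%
```

The force and elastic modulus cancel in the ratio: the deviation comes entirely from the geometry the reference data failed to describe.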
What do we actually know about the materials used for making specific products? What materials data can we include in our calculations?
Design vs Digital Model vs Digital Twin
To design a product in its static form or calculate its maximum load, it was often enough to use data from reference books based on the national standard, for example, a design materials manual. Even the transition to a digital design model didn’t change the approach to checking the structure’s operability and loads.
However, the transition from the digital model to the digital twin, i.e. an attempt to make the virtual design behave more like the real product, requires amending the approach not only to structural elements, but also to what they are made of, how, and in what state.
Moreover, the transition to the digital twin and virtual testing requires introducing a new variable into design evaluation: TIME. Earlier, in most calculations, time was only considered as part of other limit values, for example: service life, wear limit, corrosion resistance, or the number of cycles until replacement.
To illustrate, let’s slightly change the testing conditions for the chair with steel legs. Say the force is applied to the rear leg at angles from 0 to 20 degrees over time t and back from 20 to 0 degrees over time t/2; in other words, a person is tilting the chair.
That’s when we get a multidimensional dataset, with the load depending on the tilt angle, on time, and on the distance from the tip of the leg to any given point along it.
Even if we neglect the last dimension of this dataset (the distance from the tip of the leg to any point along it), we still have the bending force depending on the leg’s tilt angle over time.
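The tilt cycle just described can be sampled into exactly this kind of time-dependent dataset. The load formula below is a deliberate placeholder (the cross-leg component of a fixed force); a real calculation model would be far richer.

```python
import math

# Sketch of the chair example's time-dependent dataset: the tilt angle rises
# 0 -> 20 degrees over time t and returns 20 -> 0 over t/2.

def tilt_angle(time, t=3.0, max_deg=20.0):
    """Piecewise-linear tilt cycle: up over [0, t], down over [t, 1.5 * t]."""
    if time <= t:
        return max_deg * time / t
    return max_deg * (1.0 - (time - t) / (t / 2))

def bending_load(angle_deg, F=300.0):
    # Placeholder load model: component of F acting across the leg.
    return F * math.sin(math.radians(angle_deg))

# Sample the cycle into (time, angle, load) rows: a slice of the
# multidimensional dataset described in the text.
samples = [(s * 0.5, tilt_angle(s * 0.5), bending_load(tilt_angle(s * 0.5)))
           for s in range(10)]   # 0.0 s .. 4.5 s
```

Adding back the neglected dimension (position along the leg) turns each row into a whole profile, which is what makes the dataset grow so quickly.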
Can you ignore this data?
In a nutshell, YOU CAN! But then you will very likely have discrepancies between virtual and physical tests, with the level of discrepancies unpredictably varying from test to test.
Sooner or later the question will arise: how much can you trust digital twins, and to what degree are they actually twins of the physical prototypes? 60%? 80%? Yesterday it was 80%, but today we have a different supplier, so perhaps the number will be 60%?
Will you be satisfied with a car brake system warranty document that says, “car brake system virtual testing confirmed that the braking distance on dry asphalt with hot tires when driving 100 km/h is ~ 20 meters, with possible deviations of ±30-40% in the physical vehicle you are purchasing”?
Perhaps the question should be articulated differently: how can you account for material properties to bring the digital twin as close as possible to the physical prototypes?
The first step here is to develop rules for DMP use.
What is DMP?
DMP stands for Digital Material Passport. Other possible names (synonyms) include:
- DCP — Digital Circularity Passport;
- DCCP — Digital Cradle-to-Cradle Passport;
- DPP — Digital Product Passport.
A DMP is a single repository of electronic documents describing:
- product properties;
- components used;
- test and diagnostics results;
- defects, failures and repairs across the entire chain of manufacturing cooperation;
- storage and operation conditions;
- disassembly, destruction or recycling conditions.
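The document list above maps naturally onto a structured record. The field names in this minimal sketch are our own assumptions; as discussed below, no unified DMP data format has been approved yet.

```python
# Minimal sketch of a DMP record mirroring the document list above.
# Field names are assumptions: no unified DMP format has been approved.
from dataclasses import dataclass

@dataclass
class DigitalMaterialPassport:
    product_id: str
    properties: dict          # measured product/material properties
    components: list          # components used (possibly nested DMPs)
    test_results: list        # test and diagnostics results
    defect_history: list      # defects, failures, repairs across the supply chain
    storage_conditions: dict  # storage and operation conditions
    end_of_life: dict         # disassembly, destruction or recycling conditions

dmp = DigitalMaterialPassport(
    product_id="tube-batch-0042",                 # hypothetical identifier
    properties={"wall_thickness_mm": 2.0, "yield_strength_MPa": 355},
    components=[],
    test_results=[{"test": "bend", "deflection_mm": 1.9}],
    defect_history=[],
    storage_conditions={"max_humidity_pct": 60},
    end_of_life={"recyclable": True},
)
```

With measured (not reference) properties stored per batch, a digital twin could pull `properties` for the exact material a prototype was cut from.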
A number of countries across the world have been developing DMP and DPP standards for decades. However, the goals those standards were meant to serve had little to do with digital twins and virtual testing.
A good overview of the international regulatory framework for BIM and DMP in the construction field was prepared back in 2016 by A.V. Skvortsov, PhD in Technical Sciences, Professor at Tomsk State University and CEO of IndorSoft LLC (Tomsk).
In 2020, an analysis of DMP use and application in the European market, funded by European Union Horizon grant no. 642384, resulted in the release of Materials Passports — Best Practice, prepared by Matthias Heinrich and Werner Lang.
Prototype testing and standard development in the DMP field are carried out in:
- the USA, where COBie originated; this basic specification was carefully elaborated by the US Department of Defense for military units;
- China and the Republic of Korea, which started with COBie and are now developing their own sets of standards;
- 15 European countries, with Germany leading the development.
Several countries have a legally binding obligation to generate electronic BIM with (at least partial) DMP support for construction projects exceeding a certain cost. For example:
- Germany — over €10 million,
- Denmark — over €2.7 million,
- South Korea — over $50 million.
Large Russian manufacturers, including companies exporting products to Europe and the USA, do not appear to be doing any serious work in this area.
A representative of ASCON (Russia) confirmed that the company is not involved in, and has not heard of, any such standards-development project. They have received only one commission from a customer to produce a DPP according to that customer’s internal regulations (ASCON report at the ITOPK conference, Kaluga, 2020).
Today, in Europe in the field of DMP:
- Neither the data generation rules, nor the data content (mandatory, recommended, general) have been approved.
- No unified format for generating, storing, transferring, and processing of data has been approved.
Materials Passports — Best Practice emerged as a collection of accumulated but disconnected European knowledge in the field of DMP, an attempt to systematize it and distill it into an optimal unified standard, at least in terms of data structure and uniform naming of elements.
A group of IT experts at JSC V.A. Degtyarev Plant took the best European practices as a basis to develop a DMP template structure and format that could be applied at the Plant and integrated into its information systems, given the automation level achieved.
However, a supplier survey (including foreign suppliers) found none ready to generate and provide digital passports for the products they supply, so the implementation of this template has been postponed.
Still, forecasting the most probable evolution of this DMP template along current IT industry trends (should DMP be recognized as a crucial digital twin element) shows that it would not only improve the quality of digital twins and virtual testing, but also spawn new digital services in the International Patent Classification and Public Key Infrastructure markets.
Find out more on this topic in our other materials. To be continued…