PREDICTIVE METHOD BASED UPON MACHINE LEARNING FOR THE DEVELOPMENT OF COMPOSITES FOR TIRE TREAD COMPOUNDS

Information

  • Patent Application
  • Publication Number: 20240006033
  • Date Filed: November 30, 2021
  • Date Published: January 04, 2024
Abstract
The present invention refers to a computer-implemented predictive method based upon machine learning for the development of composites for tire tread compounds. The method comprises the following steps: providing a raw data database, namely, a dataset consisting of recipes for already existing composites and of the corresponding known dynamic properties, to be used as a reference; normalizing the data contained in the raw data database according to an iterative procedure; pre-processing the normalized data by means of Data Mining in order to eliminate aberrant data and to add new fictitious ingredients relating to specific categories of actual ingredients; training an algorithm based upon automatic learning by means of the pre-processed data; and applying said trained algorithm to a set of experimental data that are representative of the recipe of the composite to be tested, for the prediction of the dynamic properties of said composite to be tested.
Description
DESCRIPTION

The present invention refers to a predictive method for the dynamic properties of rubber compounds that is based upon machine learning, and is therefore to be implemented by means of an electronic computer, for the development of composites for tire tread compounds.


BACKGROUND

The present invention is in the tire manufacturing sector, in particular with reference to the determination of the composition of those rubber compounds used for manufacturing tire treads.


The dynamic properties of these rubber compounds (for example tan δ and E′) play a key role in determining tire performance, especially those that relate to energy dissipation (for example wet braking, dry braking and the Coefficient of Rolling Resistance (RRC)).


These properties are ensured by the characteristics of the recipes used for the composites, in particular in terms of the ingredients, the quantity thereof and the particular synergies that are established between two or more thereof.


Commonly, the correct formulation of the recipes used for composites must go through several validation steps in the laboratory in order to first find the right technological package and then optimize the formulation by means of progressive fine-tuning until the objective is fully achieved.


Each of these iterative experimental campaigns leads, from the product point of view, to an increase in lead-times and costs in developing the product (time to market) and, from the data point of view, to the generation of a database with intrinsic variability due to random noise within the measurements made during the various test campaigns.


The expectations for the performance of the product, in relation to energy dissipation (for example wet braking, dry braking and RRC), are determined by evaluating the dynamic properties of the rubber compound (for example tan δ and E′). Such an evaluation requires extensive laboratory testing in order to validate the composite, consuming time and resources.


The object of the present invention is, therefore, to solve those problems left unresolved by the prior art, by providing a process as defined in claim 1.


In particular, an object of the present invention is that of simulating laboratory tests, in order to provide an accurate estimate of some of the significant dynamic properties of composites for the production of rubber compounds for tires without the need to perform any physical tests. These properties are, for example, tan δ and E′, which represent key parameters to determine the performance of those products that are associated with energy dissipation (for example wet braking, dry braking and RRC).


Further characteristics of the present invention are defined in the corresponding dependent claims.


Moreover, the use of a software tool that can predict the behavior of composites, and therefore tire performance, allows for:

    • a significant reduction in recurring costs (raw materials, labor, etc.);
    • optimized execution capacity and quality of laboratory tests (making it possible to allocate manpower to other activities);
    • shorter time to market for new products;
    • increased predictive precision in relation to known methodologies.


Other clear advantages over the prior art, together with the characteristics and usages of the present invention, will become clear from the following detailed description of the preferred embodiments thereof, given purely by way of a non-limiting example.





BRIEF DESCRIPTION OF THE FIGURES

Reference will be made to the drawings in the attached figures, wherein:



FIGS. 1A, 1B, 1C show, by way of example, the process of the present invention;



FIG. 2 shows a scatter plot of the original tan δ values versus the expected tan δ values;



FIG. 3 represents the “connections”, i.e., the possibilities of reducing the variability, between the various experimental sessions;



FIG. 4 shows two graphs representing the quality of the prediction with and without the application of physical constraints.





DETAILED DESCRIPTION OF POSSIBLE EMBODIMENTS OF THE INVENTION

The present invention will be described below with reference to the above figures.


A methodology will therefore be described for the prediction of the dynamic properties (for example tan δ and E′) of composites that can be used for the production of rubber compounds for tires.


In general terms, the process involves the following procedure:

    • generation of a raw data database, namely, a dataset consisting of recipes for already existing composites and of corresponding known dynamic properties (N experimental sessions, each containing tests on M_N compounds);
    • a procedure for the iterative normalization of the data contained in the raw data database;
    • the pre-processing of the normalized data by means of Data Mining;
    • the training and application of an algorithm based upon automatic learning (machine learning, for example an artificial neural network).
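
Purely by way of a non-limiting illustration, the data flow of this procedure can be sketched as follows; every name is a placeholder, and each step is expanded in the sketches accompanying the corresponding sections below.

```python
# Bird's-eye sketch of the described pipeline. All implementations below are
# placeholders; concrete sketches of each step accompany the later sections.

def build_raw_database(sessions):
    # Step 1: N experimental sessions, each with tests on M_N compounds.
    return sessions

def iterative_normalization(dataset):
    # Step 2: normalize each session against shared reference recipes.
    return dataset

def preprocess_with_data_mining(dataset):
    # Step 3: remove aberrant data and add PCA-based fictitious ingredients.
    return dataset

def train_model(dataset):
    # Step 4: fit a machine-learning model, e.g., an artificial neural network.
    return lambda recipe: {"tan_delta": None, "E_prime": None}  # placeholder

sessions = []  # recipes and measured dynamic properties would be loaded here
model = train_model(preprocess_with_data_mining(
    iterative_normalization(build_raw_database(sessions))))
prediction = model({"NR": 80, "BR": 20})  # recipe of the composite to be tested
```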


In particular, after a step of training the model using the dataset contained in the normalized and pre-processed database as described above, it is possible to predict the dynamic properties of the composite with greater precision than by applying an algorithm directly to the raw data in the database.


In fact, in this way, it is possible to drastically reduce the effect, on the predictive precision, of database noise and the intrinsic variability of the data.


In fact, the procedure for the iterative normalization of the data, which operates upon data relating to multiple laboratory tests on the same reference species (at least one of the M_N compounds present in each of the N experimental sessions), aims to reduce the intrinsic experimental variability. Indeed, each repeated test, performed during specific experimental sessions, is used to estimate the rate of variability due to those specific experimental conditions.


Furthermore, the pre-processing procedure (Data Mining) is used to improve the accuracy of predictions by creating new features, removing aberrant data and performing a principal component analysis (PCA).


Finally, the machine learning algorithm, for example an artificial neural network (ANN), predicts some of the main dynamic properties of the composites under examination, for example, as already indicated, the tan δ versus temperature trend and the E′ versus temperature trend.



FIGS. 1A, 1B, 1C show, by way of example, the process of the present invention.


Theoretical Background

During dynamic mechanical analysis (DMA), a sinusoidal force (stress σ) is applied to a material and the resulting displacement (deformation) is measured. If the material is perfectly elastic, the resulting deformation and the stress are perfectly in phase. If the material is a purely viscous fluid, a 90-degree phase lag of the deformation with respect to the stress is observed.


Viscoelastic polymers have intermediate characteristics whereby some phase delays occur during DMA tests. In this context:

    • E′ is the storage modulus, measuring the stored energy, i.e., the elastic portion;
    • tan δ is the loss factor, measuring the energy dissipated as heat, i.e., the viscous portion.
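
By way of a purely numerical illustration of these definitions (not part of the claimed method, and with invented example values), the storage and loss components can be computed from the stress and strain amplitudes and the measured phase lag δ:

```python
import math

# Illustrative DMA calculation: a sinusoidal stress of amplitude sigma_0
# produces a strain response of amplitude eps_0 lagging by a phase angle delta.
sigma_0 = 2.0e6             # stress amplitude [Pa] (invented example value)
eps_0 = 0.01                # strain amplitude [-] (invented example value)
delta = math.radians(12.0)  # phase lag of the deformation behind the stress

E_star = sigma_0 / eps_0                # magnitude of the complex modulus [Pa]
E_storage = E_star * math.cos(delta)    # E': elastic (stored) portion
E_loss = E_star * math.sin(delta)       # E'': viscous (dissipated) portion
tan_delta = E_loss / E_storage          # tan δ = E''/E'

print(f"E'  = {E_storage / 1e6:.2f} MPa")
print(f"E'' = {E_loss / 1e6:.2f} MPa")
print(f"tan δ = {tan_delta:.3f}")       # equals tan(delta) by construction
```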


The results are validated by comparing the values of tan δ and E′, as predicted by the developed algorithm, with those known experimentally for a plurality of new experimental recipes, which were obviously not used to feed the neural network (ANN) during the training step. FIG. 2 shows the scatter plot of the original tan δ values versus the predicted tan δ values, by way of example, for both the training set and the test set. As can be seen, both scatter plots are characterized by a high value of R² (>0.99).
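
The patent does not disclose the network architecture or the software used; purely as an illustration of this validation scheme, a generic feed-forward regressor can be trained on recipe vectors and scored with R² on held-out recipes. Here scikit-learn and synthetic stand-in data are assumptions made only for the sake of a self-contained example:

```python
import numpy as np
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are recipes (ingredient quantities in phr),
# targets are tan δ values; the real dataset comes from laboratory sessions.
X = rng.uniform(0, 100, size=(200, 8))
y = X @ rng.uniform(-0.002, 0.004, size=8) + 0.2  # fictitious relation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Scatter-plot-style validation: predicted versus original values, scored by R².
print("train R² =", r2_score(y_train, model.predict(X_train)))
print("test  R² =", r2_score(y_test, model.predict(X_test)))
```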


It should be noted that, according to the invention, important preprocessing steps are performed before the ANN algorithm training step: more specifically, the aforementioned data normalization procedure and the preprocessing step by means of data mining.


Iterative Data Normalization Procedure

This normalization procedure has demonstrated the most significant improvement. In this type of application, due to the repeated experimental sessions, it is usually possible to observe high variability as regards the target properties. Indeed, some recipes are often repeated in several experimental sessions and their target properties sometimes demonstrate significant differences. By investigating all of the N experimental sessions performed, it is possible to find different recipes, amongst the M_N possible recipes, that can be used to reduce this variability in relation to the experimental sessions.


The normalization is carried out on each experimental session by referring to those physical properties of the recipe that are common to the various experimental sessions. If such a recipe cannot be used to normalize some of the experimental sessions, insofar as it is not included in them, a new recipe will be selected, in such a way that it is included in at least one already normalized experimental session and in those experimental sessions still to be normalized. By means of this selection it will be possible to iteratively extend and apply the normalization to new experimental sessions.



FIG. 3 shows a representation of the “connections”, namely the possibilities of reducing the variability by means of common formulations (recipes), between the various experimental sessions. The spots represent the experimental sessions, while the lines represent the “connections”, i.e., the methods of normalization of the experimental sessions by means of the reference compounds/formulations (recipes). The graph represents all possible ways to “connect” (i.e., normalize) the experimental sessions, and thus to reduce the variability thereof. As can be understood by looking at the proposed graph, each experimental session can be linked to many other sessions. Such a procedure can therefore be performed iteratively in order to reduce the variability in as many experimental sessions as possible.


From a mathematical point of view, these connections can be made in many ways and therefore different normalization procedures can be used.


According to the invention, each target property (for example, tan δ and E′) is divided by the corresponding property of the recipe used as a reference in the experimental session.


From an operational point of view, the iterative normalization procedure is performed as follows:

    • 1. All experimental sessions that contain the recipe FMR (Most Repeated Formulation), namely the recipe that is most repeated in the dataset, are selected;
    • 2. The physical properties of all the formulations included in all of the experimental sessions, selected in the previous point, are normalized by referring to the corresponding properties of the recipe FMR;
    • 3. Each normalized experimental session SS_Normalized is connected, according to the graph of FIG. 3, by means of a recipe FC (Common Formulation), to a non-normalized experimental session SS_NotNormalized, therefore:
      • a. The physical properties of the recipe FC included in SS_NotNormalized are normalized by taking as reference the physical properties of FC included in SS_Normalized;
      • b. The physical properties of all of the recipes included in SS_NotNormalized are normalized by taking as reference those physical properties of FC included in SS_NotNormalized (which has already been pre-normalized at point a);
    • 4. The procedure described in point 3 is applied iteratively to all the experimental sessions according to the graph of FIG. 3 (see the sketch below).
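
A minimal sketch of this iterative procedure follows, assuming (as an illustrative data layout, not the patent's actual one) that each experimental session is a mapping from recipe name to a measured property value:

```python
from copy import deepcopy

def normalize_sessions(sessions):
    """Iteratively normalize experimental sessions against shared recipes."""
    sessions = deepcopy(sessions)

    # Step 1: find the most repeated recipe (FMR) across all sessions.
    counts = {}
    for data in sessions.values():
        for recipe in data:
            counts[recipe] = counts.get(recipe, 0) + 1
    fmr = max(counts, key=counts.get)

    # Step 2: normalize every session containing the FMR by its FMR value.
    normalized = set()
    for name, data in sessions.items():
        if fmr in data:
            ref = data[fmr]
            sessions[name] = {r: v / ref for r, v in data.items()}
            normalized.add(name)

    # Steps 3-4: iteratively extend the normalization through a common
    # recipe (FC) shared with an already-normalized session.
    changed = True
    while changed:
        changed = False
        for name in sessions:
            if name in normalized:
                continue
            for done in list(normalized):
                common = set(sessions[name]) & set(sessions[done])
                if common:
                    fc = next(iter(common))
                    # 3a-3b: rescale the whole session so that its FC value
                    # matches the FC value of the normalized session.
                    ref = sessions[name][fc] / sessions[done][fc]
                    sessions[name] = {r: v / ref
                                      for r, v in sessions[name].items()}
                    normalized.add(name)
                    changed = True
                    break
    return sessions

# Illustrative usage: S1 and S2 share recipe "B" (the FMR); S3 connects to S2
# through the common recipe "C".
raw = {"S1": {"A": 2.0, "B": 4.0}, "S2": {"B": 8.0, "C": 6.0}, "S3": {"C": 1.5}}
print(normalize_sessions(raw))
```

After normalization, the reference recipe takes the value 1 in every session that contains it, so that the sessions become directly comparable.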


It is important to highlight that, according to the invention and contrary to what happens in the known art, data normalization is not applied to the data set as a whole. The normalization procedure is applied in a specific and targeted way to each experimental session, and is designed to make each individual experimental session comparable to the others that together form the data set as a whole. This objective is achieved by reducing the variability in relation to the experimental session. This means that a practice generally discouraged in the known art, insofar as it introduces harmful non-linearities, namely the normalization of different data sets in different ways, is according to the invention deliberately used and exploited in order to achieve the desired results, by means of iterative normalizations that are determined according to the connections of the graph in FIG. 3.


The normalization procedure can be described as:








$$\bar{y}_{i,j,k} = \frac{y_{i,j,k}}{y_{i,\,\mathrm{ref}_i,\,k}}$$

wherein: i stands for the i-th experimental session, j stands for the j-th example belonging to the specific experimental session, k stands for the k-th target property, ref_i indicates the reference example of the i-th experimental session, and $\bar{y}_{i,j,k}$ stands for the normalized value of y.


The following Table 1 shows the difference, in terms of accuracy, between performing the data normalization procedure and not performing it.


Here, accuracy is defined as the percentage of recipes that show a percentage prediction error lower than the target percentage error. The E′ value prediction model showed an increase in accuracy greater than 30% by virtue of the application of the data normalization procedure (see the DELTA column), while the tan δ value prediction model showed an increase in accuracy greater than 11%.
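
In code form, this accuracy metric is a direct transcription of the definition above (the function name is illustrative):

```python
def accuracy_within_target(y_true, y_pred, target_error):
    """Percentage of recipes whose relative prediction error is below
    the target percentage error."""
    within = [abs(p - t) / abs(t) < target_error
              for t, p in zip(y_true, y_pred)]
    return 100.0 * sum(within) / len(within)

# Example with invented values; the E' rows of Table 1 use a 12% target error.
print(accuracy_within_target([1.0, 2.0, 4.0], [1.05, 2.5, 4.2], 0.12))  # 66.67
```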











TABLE 1

Accuracy (% of population within the target error)

Target         % Target     Without          With
property       error        normalization    normalization    DELTA
E′ @ 0°        Err < 12%    28.57%           68.57%           40.00%
E′ @ 30°       Err < 12%    34.28%           65.71%           31.43%
E′ @ 60°       Err < 12%    37.14%           68.58%           31.44%
tan δ @ 0°     Err < 7%     71.43%           82.86%           11.43%
tan δ @ 30°    Err < 7%     68.57%           80.00%           11.43%
tan δ @ 60°    Err < 7%     57.14%           77.14%           20.00%






This table shows the predictive precision of E′ and tan δ with the aim of highlighting the impact of the data normalization procedure. Normalized data processing improves the predictive performance of each individual target property. Interestingly, the normalization procedure introduces a 40% improvement in the prediction accuracy of E′ @ 0° (from 28.57% accuracy without normalization to 68.57% accuracy with normalized data).


Pre-Processing Using Data Mining

The accuracy of the prediction is greatly improved when a correct data mining operation (iterative normalization, aberrant data removal, PCA) is performed on the experimental dataset used to build the algorithm during the “training step”. Indeed, PCA is able to remove those ingredients that do not affect the target properties (for example tan δ and E′) from the recipes of the training dataset and to add new fictitious ingredients, created specifically in order to emphasize the informative content of the dataset.


The term informative contribution of a feature (and therefore, by extension, of the dataset) refers to the extent to which its effect upon the physical properties being predicted is well interpreted by the model, also in relation to its quantity and its interaction with the other ingredients. For example, an increase of 2 MPa in one of the properties in question, following the increase or decrease of a certain ingredient, is a relationship that, if properly interpreted by the model, constitutes a positive informative contribution.


The anomalous data removal procedure is designed to take into account both each individual experimental session on its own and all of the various experimental sessions jointly. This dual nature of the procedure makes it possible to take full advantage of every single session.
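
The patent text does not specify the statistical criterion used for this screening; purely as one plausible sketch of the dual within-session and whole-dataset screening, a simple z-score rule could look as follows (the rule and threshold are assumptions, not the invention's actual criterion):

```python
import numpy as np

def remove_aberrant(values_by_session, z_max=3.0):
    """Drop measurements that are outliers either within their own session
    or with respect to the dataset as a whole (illustrative z-score rule)."""
    all_values = np.concatenate([np.asarray(v, dtype=float)
                                 for v in values_by_session.values()])
    g_mean, g_std = all_values.mean(), all_values.std()

    cleaned = {}
    for session, values in values_by_session.items():
        v = np.asarray(values, dtype=float)
        keep = np.ones(len(v), dtype=bool)
        if v.std() > 0:                    # within-session screening
            keep &= np.abs(v - v.mean()) / v.std() < z_max
        if g_std > 0:                      # whole-dataset screening
            keep &= np.abs(v - g_mean) / g_std < z_max
        cleaned[session] = v[keep]
    return cleaned
```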


In order to add new fictitious ingredients, with the aim of facilitating the subsequent creation of predictive models, the original ingredients have been divided into certain categories, i.e., polymers, fillers, accelerators, etc. PCA was then applied to each ingredient category in order to estimate a new fictitious ingredient that could enhance the informative content of that particular ingredient category. In this context, therefore, a fictitious ingredient can be defined as a linear combination of the actual ingredients, as supplied to the PCA, such that it emphasizes the informative contribution of that specific category of ingredients. This linear combination therefore combines the informative contributions of the initial ingredients. From this it follows that the informative contribution made by the fictitious ingredient summarizes and amplifies the informative contribution of the initial ingredients. Finally, for each category of ingredients, the fictitious ingredients determined in this way have been added to the input list (i.e., the ingredients) that the prediction algorithm has the task of processing and, therefore, both the original informative contributions and those amplified in the fictitious ingredients are subject to analysis.
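
A minimal sketch of this construction follows, assuming recipes are stored in a pandas DataFrame with one column per ingredient and a mapping from each category to its ingredient columns; scikit-learn's PCA is used only for convenience, and all names and quantities are invented:

```python
import pandas as pd
from sklearn.decomposition import PCA

def add_fictitious_ingredients(recipes, categories):
    """For each ingredient category, append the first principal component of
    the category's columns as a new 'fictitious ingredient' feature."""
    recipes = recipes.copy()
    for category, columns in categories.items():
        # The fictitious ingredient is a linear combination of the actual
        # ingredients of the category, chosen by PCA to maximize variance.
        pca = PCA(n_components=1)
        recipes[f"fict_{category}"] = pca.fit_transform(recipes[columns]).ravel()
    return recipes

# Illustrative usage with made-up recipes (quantities in phr):
recipes = pd.DataFrame({
    "NR": [80, 70, 60], "BR": [20, 30, 40],      # polymers
    "CB": [50, 55, 60], "SILICA": [0, 10, 20],   # fillers
})
categories = {"polymers": ["NR", "BR"], "fillers": ["CB", "SILICA"]}
print(add_fictitious_ingredients(recipes, categories))
```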


Management of Physical Conditions in Relation to the Dynamic Properties E′ and tan δ

Refer now to FIG. 4 and to the following Table 2. The quality of the predictions also depends upon a series of physical conditions that should be satisfied by the algorithm during the “training step”.


In particular, the predictions become more reliable when the model is forced to simultaneously satisfy the following significant physical constraints:










i. $\frac{\partial y}{\partial T} < 0 \;\&\; y(60°) > 0$ (monotonically decreasing and positive trend of y versus temperature);

ii. $\frac{\partial^2 y}{\partial T^2} > 0$ (convexity of the trend of y versus temperature);

iii. $y(30°) = y(0°)\,(1 - R_{0°/30°})$;

iv. $y(60°) = y(0°)\,(1 - R_{0°/60°})$;

wherein

$$R_{i°/j°} = \frac{y(i°) - y(j°)}{y(i°)}$$

and y stands for the target property under consideration, namely E′ or tan δ.


The factor R_{i°/j°} is introduced in order to normalize the target functions tan δ(T) and E′(T). The normalization R_{i°/j°} is a necessary condition to ensure that the algorithm applies an equal “weight” to all temperatures, even though the absolute values of tan δ(T) and E′(T) decrease with temperature. These constraints are applied by means of a weight/penalty logic to the cost function to be minimized during the training of the model itself. This means that the cost function is multiplied by:

    • a coefficient equal to 1 if all of the physical constraints are respected;
    • a coefficient greater than 1 if one or some of the physical constraints are not respected.


      The purpose of this procedure is to promote models that are capable of making predictions that respect the physical constraints.
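
A minimal sketch of this weight/penalty logic follows, for a single recipe with predictions y(0°), y(30°), y(60°). The mean squared error as base cost, the tolerance on the equality constraints iii. and iv., and the penalty coefficient are all assumptions made for illustration:

```python
def constrained_loss(y_pred, y_true, r_30, r_60, penalty=2.0, tol=1e-2):
    """Cost multiplied by a coefficient > 1 when the physical constraints
    i.-iv. on the predicted temperature trend are violated."""
    y0, y30, y60 = y_pred                       # predictions at 0°, 30°, 60°

    ok = (
        y0 > y30 > y60 > 0                      # i. decreasing, positive trend
        and (y60 - y30) > (y30 - y0)            # ii. convexity (equal T spacing)
        and abs(y30 - y0 * (1 - r_30)) < tol    # iii. y(30°) = y(0°)(1 - R_0°/30°)
        and abs(y60 - y0 * (1 - r_60)) < tol    # iv.  y(60°) = y(0°)(1 - R_0°/60°)
    )

    mse = sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)
    return mse if ok else penalty * mse         # coefficient 1, or > 1 if violated

# Illustrative call: the R factors are computed from the training targets,
# R_i°/j° = (y(i°) - y(j°)) / y(i°).
y_true = (0.304, 0.217, 0.175)
r_30 = (y_true[0] - y_true[1]) / y_true[0]
r_60 = (y_true[0] - y_true[2]) / y_true[0]
print(constrained_loss((0.30, 0.22, 0.18), y_true, r_30, r_60))
```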


Although both methodologies (with and without constraints) correctly predict the decreasing trend of tan δ versus temperature, only the constrained model correctly estimates the increase in tan δ with increasing oil content.















TABLE 2

                       CPD           A       B       C       D
Recipes [phr]          NR            80      80      80      80
                       BR            20      20      20      20
                       CB            50      50      50      50
                       6PPD          1.5     1.5     1.5     1.5
                       SULFUR        1       1       1       1
                       ACCELERANT    2       2       2       2
                       RAE OIL       0       5       10      15
Constrained model      tan δ 0°      0.304   0.324   0.344   0.356
                       tan δ 30°     0.217   0.235   0.253   0.264
                       tan δ 60°     0.175   0.189   0.203   0.212
Unconstrained model    tan δ 0°      0.283   0.274   0.268   0.265
                       tan δ 30°     0.211   0.206   0.203   0.203
                       tan δ 60°     0.178   0.175   0.173   0.174









Table 2 above shows the predicted effect of the oil content on tan δ versus temperature. Four materials were studied (A, B, C, D). The first rows show the recipes, in which the CPD column reports the formulation (from NR to RAE OIL) for the four materials. The middle rows report the tan δ values predicted by the constrained model, while the last rows report the tan δ values predicted by the unconstrained model.


The present invention has heretofore been described with reference to the preferred embodiments thereof. It is intended that each of the technical characteristics implemented in the preferred embodiments described herein, purely by way of example, can advantageously be combined, in ways other than that described heretofore, also with other characteristics in order to give form to other embodiments which also belong to the same inventive nucleus and that all fall within the scope of protection afforded by the claims recited hereinafter.

Claims
  • 1-9. (canceled)
  • 10. A computer-implemented method for predicting dynamic properties of a composite to be tested for the production of tire tread compounds, the method comprising: providing a raw data database comprising a dataset of recipes for already existing composites and of corresponding known dynamic properties; normalizing data contained in the raw data database according to an iterative procedure; pre-processing the normalized data by data mining to eliminate aberrant data and to add new fictitious ingredients relating to specific categories of actual ingredients; training an algorithm based upon automatic learning via the pre-processed data; and applying the trained algorithm to a set of experimental data representative of the recipe of the composite to be tested, for prediction of dynamic properties of the composite to be tested.
  • 11. The method of claim 10, wherein the dynamic properties are a loss modulus and a storage modulus of the composite to be tested.
  • 12. The method of claim 10, wherein the raw data database comprises data representative of a plurality of experimental measurement sessions.
  • 13. The method of claim 12, wherein the normalization step provides for an iterative normalization based, at each iteration, upon a recipe that is most repeated in the dataset, to perform connections between the experimental sessions and reduce a variability thereof.
  • 14. The method of claim 10, wherein the iterative normalization step is performed by dividing each of the dynamic properties to be predicted by a corresponding property of the iteratively selected composites used as a reference recipe, wherein the reference recipe constitutes the connection between various experimental sessions and enables comparison of the various experimental sessions.
  • 15. The method of claim 10, wherein a weight/penalty logic, applied to calculation of a cost function to be minimized during the training step, imposes physical constraints upon the dynamic properties to be predicted.
  • 16. The method of claim 15, wherein the constraints are represented by: ∂y/∂T < 0 and y(60°) > 0; ∂²y/∂T² > 0; y(30°) = y(0°)(1 − R_{0°/30°}); and y(60°) = y(0°)(1 − R_{0°/60°}), wherein R_{i°/j°} = (y(i°) − y(j°))/y(i°) and y stands for the dynamic property to be predicted.
  • 17. The method of claim 10, wherein the pre-processing step comprises application of data mining algorithms.
  • 18. The method of claim 17, wherein the data mining algorithms remove anomalous data and/or execute a principal component analysis (PCA) to add new fictitious ingredients relating to specific categories of actual ingredients.
Priority Claims (1)
Number: 102020000029054, Date: Nov 2020, Country: IT, Kind: national
PCT Information
Filing Document: PCT/EP2021/083492, Filing Date: 11/30/2021, Country: WO