
Modeling: Mechanistic or Statistical ? Part 1 : Modeling expectations

  • by Ypso-Facto
  • March 13, 2023

Digital tools are used for well-known applications in our everyday life: energy production, navigation, drug development, weather forecasting, disease detection. They are useful because they allow predicting: the weather, an illness, a position and heading, conditions that minimize energy consumption...

For decades, developing processes for producing molecules has required performing many experiments because of a lack of predictive tools. This experimental work aims at finding the conditions that yield the target product with minimum production costs, maximum sustainability... Let us call this zone the “sweet spot”.

More precisely, this experimental work aims at supporting:

  • process design and scale-up, by assessing process performance according to operating conditions
  • process operations, by identifying bottlenecks, troubleshooting...
  • safety, by identifying risks of hazardous operations
  • quality, by proposing Proven Acceptable Ranges (PAR) of conditions and identifying Critical Process Parameters (CPP) and Critical Material Attributes (CMA)

Decreasing development costs and time-to-market are important drivers, which is why reducing the experimental burden is of key interest. Attempts to select experiments in a rational way (thus extracting the maximum information to identify the sweet spot) based on statistical considerations are generally known as Design of Experiments (DoE) approaches. This is certainly not a new concept, as it was formalized by Ronald Aylmer Fisher a century ago.

These approaches may have some merits, but they cannot be game changers, as they still require performing a lot of experiments.

A disruptive approach would be to replace experimental work with digital simulations: ideally, a “clever computer code” would be able to predict the results of the experiments and thus replace them, saving time in finding the sweet spot.

Any computer code aimed at simulating experiments must be based on a model acting as the engine. It is critical for the user to understand, at least in principle, what is in this model.

 

Models … and Models

Models are sometimes used simply to represent results, keeping in mind that they should ideally be used to Predict the results of experiments that have not been carried out.

It must be noted that the ability of a model to represent experimental results says nothing about the Prediction ability of such a model.

Let us consider a thought-provoking example. Assume that one wants to represent the trajectory of a satellite around the Earth and that two positions recorded at two different times are available. Certainly, a straight line would be able to represent these two points. What would be the ability of the straight line to represent the complete trajectory of the satellite? Null, obviously.

Considerations related to prediction are a bit more subtle: one can consider prediction within the investigated domain (inside the Box), or prediction outside the investigated domain (outside the Box). The latter is certainly by far the more challenging, but also the more interesting.

If the two above-mentioned positions of the satellite are close enough, the straight line may be good enough to predict the position versus time between the two measured positions (Inside the Box), but it would be meaningless for predicting positions beyond the two measured positions (Outside the Box).
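To make this concrete, here is a minimal Python sketch (not part of the original example; the orbit radius, period and times are arbitrary, hypothetical values): a straight line fitted through two nearby points of a circular orbit interpolates the position reasonably well between the two records, but quickly becomes meaningless once used to extrapolate.

```python
# Illustrative sketch: a straight line through two nearby orbit points
# interpolates well Inside the Box but extrapolates poorly Outside the Box.
import numpy as np

R = 7.0e6                      # hypothetical orbit radius (m)
T = 5.9e3                      # hypothetical orbital period (s)
omega = 2.0 * np.pi / T

def orbit(t):
    """'True' circular trajectory, used here as the reference."""
    return np.array([R * np.cos(omega * t), R * np.sin(omega * t)])

# Two recorded positions, close in time
t1, t2 = 0.0, 60.0
p1, p2 = orbit(t1), orbit(t2)

def straight_line(t):
    """'Statistical' model: a straight line through the two records."""
    return p1 + (t - t1) / (t2 - t1) * (p2 - p1)

for t in (30.0, 600.0):        # inside the box, then far outside it
    err = np.linalg.norm(straight_line(t) - orbit(t))
    print(f"t = {t:6.0f} s  ->  error = {err / 1e3:.1f} km")
```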

Many approaches and software tools used in life-science process development are based on models derived from a statistical approach. The results Y of the experiments (typically KPIs like productivity, purity, ...) are directly related to the manipulated variables X of the experiments (temperature, residence time, initial composition, ...) via an equation Y = f(X). Very often, f is a linear or quadratic function of the variables X. Approaches based on neural networks are similar in essence: all are “black-box” models built from experimental data that do not depend on the underlying physico-chemical phenomena. For the sake of simplicity, we will call these models Statistical Models.

These are used for producing the well-known 3D color plots of KPIs versus variables found in many reports.

If the model (in fact, the function f) is not even able to represent the experimental data, you must disregard it! If it is, the model may be able to “predict” results inside the investigated box, but IT MUST NOT BE USED OUTSIDE THE BOX. For illustration, if you studied the influence of temperature between 25 °C and 50 °C, never use such a model to guess what would happen at 75 °C.
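As a purely illustrative sketch of this warning (the activation energy and temperatures below are hypothetical), one can generate data from an Arrhenius-type law between 25 °C and 50 °C, fit a quadratic Statistical Model to it, and then (wrongly) evaluate the fit at 75 °C: the in-box agreement is good, while the out-of-box prediction departs markedly from the reference.

```python
# Illustrative sketch: a quadratic 'statistical' model fitted between
# 25 and 50 degC, then wrongly extrapolated to 75 degC. The 'true'
# behaviour is generated from a hypothetical Arrhenius law.
import numpy as np

Ea, R = 80_000.0, 8.314        # hypothetical activation energy (J/mol), gas constant

def true_rate(T_celsius):
    """Hypothetical mechanistic reference: Arrhenius law, normalised at 25 degC."""
    T = T_celsius + 273.15
    return np.exp(-Ea / R * (1.0 / T - 1.0 / 298.15))

T_box = np.arange(25.0, 51.0, 5.0)                    # investigated domain: 25-50 degC
coeffs = np.polyfit(T_box, true_rate(T_box), deg=2)   # quadratic Y = f(T)

for T in (37.5, 75.0):                                # inside the box, then outside it
    print(f"T = {T:5.1f} degC   quadratic: {np.polyval(coeffs, T):7.1f}"
          f"   Arrhenius: {true_rate(T):7.1f}")
```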

Conversely, you can consider building a mathematical model based on the laws of physics to predict the system's behavior. Such models are called Mechanistic because they intend to account for the mechanisms underlying the system's behavior: mass conservation, thermodynamics, mass and heat transfer, fluid mechanics... THEY CAN POSSIBLY BE USED OUTSIDE THE BOX.

In the case of the satellite example, one must consider gravity, inertial forces, friction forces due to particles present in the upper atmosphere, and solve force-balance equations... A bit more complex than a straight line or a quadratic expression, but certainly safer for the astronauts!

In general, you should not expect an explicit Y = f(X) relation from mechanistic modeling. If one takes the case of a chemical reactor, one needs to consider: a stoichiometric scheme, mass balances, heat balances, reaction kinetics, heat and mass transfer, hydrodynamics. This typically results in a set of differential-algebraic equations that must be solved. When this is done, you get access to species concentrations, temperature profiles, ..., which must then be processed to obtain the required outputs Y (productivity, purity ...).
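The sketch below gives a minimal flavor of this workflow for a deliberately simple, hypothetical case (an isothermal batch reactor with a first-order reaction A → B; the rate constant and times are arbitrary): write the mass balances, integrate them, then post-process the concentration profiles into a KPI.

```python
# Minimal sketch of a mechanistic model: hypothetical reaction A -> B,
# first-order kinetics, isothermal batch reactor. Mass balances are
# integrated, then the profiles are processed into a KPI (conversion).
import numpy as np
from scipy.integrate import solve_ivp

k = 0.15          # hypothetical rate constant (1/min)
cA0 = 1.0         # initial concentration of A (mol/L)

def mass_balance(t, c):
    cA, cB = c
    r = k * cA                      # reaction rate from the kinetic sub-model
    return [-r, +r]                 # dCA/dt, dCB/dt

sol = solve_ivp(mass_balance, t_span=(0.0, 30.0), y0=[cA0, 0.0], dense_output=True)

t_end = 30.0
cA_end, cB_end = sol.sol(t_end)
conversion = 1.0 - cA_end / cA0     # KPI derived from the simulated profiles
print(f"Conversion after {t_end:.0f} min: {conversion:.2%}")
```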

The mechanistic approach allows defining sub-models, like bricks, that can then be assembled to simulate complex systems. As an illustration, once you have identified the stoichiometry and kinetics associated with the reaction system you want to model, you can predict the behavior of different reactors (stirred, plug-flow, with recycling ...) simply by solving the dedicated mass balances. In other words, you can extrapolate. This is not possible with the statistical approach, for which you need to identify one model/parameter set per type of system (stirred, plug-flow, with recycling ...); the knowledge of the behavior of a stirred reactor tells you nothing about the potential behavior of a plug-flow reactor. As a second illustration, in chromatography, once you have determined the thermodynamic and kinetic models, the mechanistic approach allows you to simulate single-column as well as multicolumn systems, isocratic as well as gradient operation ..., whereas the statistical approach requires identifying one model per situation (system).
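Continuing the same hypothetical example, the sketch below reuses a single kinetic “brick” in two different reactor balances (plug-flow and continuous stirred-tank): the kinetics are identified once, and each reactor type only changes the balance equations being solved. All numerical values are illustrative.

```python
# Sketch of the 'brick' idea: one hypothetical first-order kinetic sub-model,
# two different reactor mass balances, two different predictions.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

k, cA0, tau = 0.15, 1.0, 10.0        # hypothetical rate constant, feed conc., residence time

def rate(cA):
    """Kinetic 'brick': identified once, reused in every reactor model."""
    return k * cA

# Plug-flow reactor: dcA/dtau = -r(cA), integrated along the residence time
pfr = solve_ivp(lambda t, c: [-rate(c[0])], (0.0, tau), [cA0])
cA_pfr = pfr.y[0, -1]

# Continuous stirred-tank reactor: algebraic steady-state balance
# cA0 - cA - tau * r(cA) = 0, solved for the outlet concentration
cA_cstr = brentq(lambda cA: cA0 - cA - tau * rate(cA), 0.0, cA0)

print(f"PFR  conversion: {1 - cA_pfr / cA0:.2%}")
print(f"CSTR conversion: {1 - cA_cstr / cA0:.2%}")
```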

Both approaches (Statistical and Mechanistic) must be fed with experimental information. There is a big difference, however:

  • for statistical models, one needs to provide the experimental Y = f(X) information directly.
  • for mechanistic models, one needs to provide the information required to identify the important physico-chemical parameters (kinetic constants, activation energies, pKas, viscosities, boiling points, solubilities, ...). The experimental Y = f(X) data are then used to check the prediction ability of the model (see the sketch below).
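Here is a minimal sketch of the mechanistic route, with purely illustrative numbers: a rate constant is estimated from hypothetical concentration-time measurements, and the identified parameter then feeds predictions at conditions that were never tested.

```python
# Sketch of the mechanistic way of using data (all values hypothetical):
# estimate a physico-chemical parameter (here a rate constant) from raw
# measurements, then use the model to predict a KPI at new conditions.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical concentration-time measurements for a first-order decay A -> B
t_exp = np.array([0.0, 2.0, 5.0, 10.0, 20.0])          # min
cA_exp = np.array([1.00, 0.74, 0.47, 0.22, 0.05])      # mol/L, illustrative values

def model(t, k):
    """Mechanistic relation cA(t) = cA0 * exp(-k t) for a batch reactor."""
    return cA_exp[0] * np.exp(-k * t)

(k_est,), _ = curve_fit(model, t_exp, cA_exp, p0=[0.1])
print(f"Estimated rate constant: {k_est:.3f} 1/min")

# The identified parameter can now feed predictions at conditions never tested,
# e.g. the conversion after 60 min:
print(f"Predicted conversion at 60 min: {1 - np.exp(-k_est * 60):.2%}")
```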

This difference changes the nature of the experiments. People using the statistical modeling approach typically perform experiments not too far from the expected sweet spot. The problem is sometimes to locate this sweet spot. A discovery phase is then necessary: the objective is first to identify positive, neutral, or negative correlations between many process parameters and process results, and thus to select a short list of parameters (typically two to four) that will then be investigated.

People using mechanistic modeling should not be afraid of selecting experimental conditions very far from the sweet spot if they allow efficient parameter estimation. The model will then allow identifying the sweet spot.

When using statistical modeling, an experiment that does not deliver the target KPI (purity, for instance) is a failure. With mechanistic modeling, the same experiment can be a great means of understanding the system.

Anytime you consider using/assessing a simulation tool, you should ask yourself:

  1. Is the model able to adequately represent experimental results?
  2. Is it able to predict Inside the Box (interpolate)?
  3. Is it able to predict Outside the Box (extrapolate)? Scale-up: size, but also temperature, compositions...

But one question remains: how to choose between the mechanistic and the statistical approach?

Click here to access the second part of the article "Modeling : Mechanistic or Statistical ? Part 2 : Making a choice..."

Author: Roger-Marc Nicoud

Do you have any questions or comments on this? Contact us to discuss : contact@ypso-facto.com 

 

