I. Introduction: The importance of calibration and validation
The discrete element method (DEM) is a powerful tool which - if correctly applied - can simulate, with quantitative accuracy, a diverse range of particulate systems with applications spanning multiple academic disciplines and almost every industrial sector (3).
One of the major limitations of DEM, however, is that without rigorous calibration and validation - that is to say, the careful choice of simulation parameters to match those of the ‘real’ particles simulated, and the verification that the behaviours of the resultant model match those of the ‘real’ system - the simulations produced are liable to be inaccurate, or even unphysical (4). The calibration of DEM models is, however, not a trivial task. While there exists a diverse array of tools designed to characterise the properties of powders and particulates (5), most - if not all - of these tools quantify only the bulk or ‘macroscopic’ properties of powders (flowability, yield stress, angle of internal friction...) as opposed to the particle-level or ‘microscopic’ parameters (restitution coefficient, coefficient of sliding and/or rolling friction...) required by contemporary DEM engines.
In this article, we provide an overview of contemporary methods for the calibration and validation of DEM models, discussing the advantages and limitations of different approaches, and how they may be developed and improved in the future.
II. Calibration
i. Input
Methods for the calibration of DEM simulations can be divided into two broad categories: direct (or ‘bottom-up’) calibration and indirect (or ‘top-down’) calibration. In the former, the parameters of the particle contact model are estimated directly from particles’ microscopic properties measured using particle-scale experiments - for example, measuring the coefficient of friction of a single particle. In the latter, one or more bulk properties of the particulate are measured, and the microscopic properties of the individual particles back-computed from these measurements to determine the contact model parameters. This can be achieved either by using scaling laws mapping macroscopic to microscopic properties (6) or, as we will focus on in this article, by using ‘digital twins’ - examples of which may be seen in Fig. 1 - of the powder characterisation devices, used to determine the set of model parameters that reproduce the bulk behaviour exhibited by the ‘real’, physical system. In addition to direct and indirect calibration, we must also consider input - the process of determining the particle properties (e.g. morphology and density) we wish to emulate in simulation.
When building a DEM simulation, the sizes and shapes of the DEM particles have to be defined. There exists a wide array of methodologies through which to estimate the sizes, and particle size distributions, of particles. These methodologies include sieving (7), image analysis (8), laser diffraction, dynamic light scattering and other light-scattering methods (9, 10), the electrozone method (10, 11), and time-of-flight measurements (12), each of which carries its own distinct strengths and weaknesses which, for the sake of brevity, we will not discuss at length here.
A crucial limitation common to many of the aforementioned methods, however, is that the size of a particle is quantified using a single, scalar value, typically a sphere- or circle-equivalent diameter (13), thus omitting crucial details regarding the particle’s geometry. While this may be sufficient for highly-spherical particles, for aspherical particles, key properties such as elongation (aspect ratio) (14), angularity (15) or nonconvexity (16) - which may significantly affect particle dynamics and thus should be represented in any rigorous DEM model - are not meaningfully captured. Nonetheless, widely-available (two-dimensional) image analysis methods can capture averaged statistics relating to these properties, and the full, three-dimensional geometries of particles may be accurately captured using 3D scanning methodologies (17, 18).
The accurate measurement of particle geometry is, however, only half of the battle, as the implementation of complex shapes in DEM simulations usually requires a rework of the contact detection algorithm, as well as other parts of the DEM model relating to the mechanical forces and heat transfer between particles (19-26). While this is not necessarily the case for ‘clump’ (multi-sphere/glued-sphere) methods (27), such methods are generally considered not to be suitable for modelling highly angular particles. Implementing an accurate particle size distribution for spherical particles is, by contrast, mercifully straightforward - though simulating wide PSDs with high fidelity can prove computationally challenging (28).
In the following sections we discuss the direct and indirect calibration of contact model parameters in further detail, outlining their major strengths and weaknesses, the practicalities of their application, and their general suitability for use in industrial DEM.
ii. Direct calibration
Direct calibration, while conceptually simple, can, in practice, be highly challenging. For the measurement of certain properties there exist well-established, standardised procedures, which may be simply performed using commercially-available systems. For example, particle density can generally be measured to a high degree of accuracy using gas pycnometry (29).
However, other particle properties are distinctly more difficult to measure precisely than the sizes and shapes of particles. The coefficient of restitution (CoR), ε, provides an infamous example. Intuitively, one may expect this parameter to be fairly easy to measure - drop a particle from a known height, measure how high it bounces (off a solid surface or another particle) and use the ratio of bounce height to drop height to measure the energy lost, and thus the value of ε. In reality, however, there are numerous complicating factors to consider, of which we give here just a few of the most significant: the particle must be dropped in such a manner that it has no ‘spin’ either before or after collision, as in most set-ups energy lost to the particle’s rotational modes cannot be distinguished from energy lost due to restitution; the collision surfaces must be clean, dry and as close to perfectly smooth as possible so as to avoid imperfect contacts or energy loss to other sources; particles must be suitably large, dense and dry that cohesion effects - which cannot be easily distinguished from the effect of the CoR - do not influence results; a large enough number of repeat experiments must be performed to ensure adequate statistics. For aspherical particles, measurements are more complex still (30): though the shape of a particle does not influence the inherent CoR of a material, if the particle used in an experiment such as those described above is not a perfect sphere (or, alternatively, a perfect cube) it is extremely difficult to perform said experiments without energy loss to the rotational modes, which may be difficult to decouple from energy loss due to restitution.
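As a minimal sketch of the height-ratio calculation described above - assuming an idealised, spin-free, normal impact, and using purely hypothetical measurement values - the CoR may be estimated as follows:

```python
import math
import statistics

def restitution_from_drop(drop_height, bounce_height):
    """Estimate the coefficient of restitution from a single drop test.

    For a spin-free, normal impact, the kinetic energy at impact scales with
    the drop height and the rebound energy with the bounce height, giving
    e = sqrt(h_bounce / h_drop).
    """
    if not 0 <= bounce_height <= drop_height:
        raise ValueError("bounce height must lie between 0 and the drop height")
    return math.sqrt(bounce_height / drop_height)

# Hypothetical repeat measurements (metres): as noted in the text, a large
# number of repeats is needed to produce adequate statistics.
drop_h = 0.50
bounces = [0.18, 0.20, 0.19, 0.21, 0.18]
estimates = [restitution_from_drop(drop_h, hb) for hb in bounces]
e_mean = statistics.mean(estimates)
e_std = statistics.stdev(estimates)
```

Note that this sketch captures only the arithmetic; the experimental difficulties listed above (spin, surface condition, cohesion) dominate the real measurement.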
Despite these difficulties, several researchers have made excellent attempts at characterising and measuring the restitution coefficients of diverse materials (31-35), including the particularly challenging case in which particles or surfaces are wetted (36,37). However, the time-intensive nature of the experiments, the lack of commercially-available equipment with which to perform them, and the aforementioned limitations on particle size, make such measurements unsuitable for many industrial applications.
The coefficient of friction is somewhat easier, though still by no means trivial, to measure directly. Commonly-used approaches for such measurements include microtribometry (see Fig. 2) (38-40) and simple planar angle of friction tests (41, 42). The former involves the use of a robotic arm to apply a pre-defined normal force (Fn) to the material of interest (typically a single particle), which is glued or otherwise attached to said arm, whilst translating it horizontally across a surface and measuring the tangential force (Ft) produced, thus allowing the (sliding) frictional coefficient, µs, to be calculated based on Coulomb’s law (Ft ≤ µsFn). The latter test is conducted by affixing a monolayer of particles to each of a pair of flat plates, which are placed atop one another. They are then tilted until the plates slide apart, allowing a sliding friction coefficient to be estimated as µs = tan θ, where θ is the tilt angle at the onset of sliding. Both methods can be used to measure both particle-particle and - where a suitable sample of the relevant wall material is also available - particle-wall sliding friction coefficients.
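The two relations above reduce to one-line calculations; the following sketch (with hypothetical force and angle values) shows both, assuming measurements are taken at the onset of sliding:

```python
import math

def mu_from_tribometer(tangential_force, normal_force):
    """Sliding friction coefficient from microtribometry: at the onset of
    sliding, Coulomb's law gives Ft = mu_s * Fn, so mu_s = Ft / Fn."""
    return tangential_force / normal_force

def mu_from_tilt_test(tilt_angle_deg):
    """Sliding friction coefficient from the planar angle of friction test:
    the plates begin to slide when tan(theta) exceeds mu_s, so mu_s = tan(theta)."""
    return math.tan(math.radians(tilt_angle_deg))

# Hypothetical measurements: Ft = 0.35 N at Fn = 1.0 N; plates slid at 22 degrees.
mu_tribo = mu_from_tribometer(0.35, 1.0)
mu_tilt = mu_from_tilt_test(22.0)
```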
In the context of DEM calibration, there remain questions as to whether the frictional coefficients measured using the above techniques correspond precisely to the coefficients implemented in DEM, and thus whether they can be used safely for direct calibration. Certainly, prior work comparing a ‘real’ and simulated planar angle of friction tester has shown that there is not a one-to-one correspondence (41). The discrepancies observed between the experimental and numerical cases likely arise from differences between the perfect contact assumed in DEM and the real-life situation, where microscopic surface roughness will typically yield an imperfect contact between any two macroscopic objects. This discrepancy could potentially be reconciled by measuring the surface roughness of said objects to determine an effective contact area though - as is so often the case with DEM calibration - this is not a trivial task.
The coefficient of rolling friction, µr - a parameter relevant to spherical particles, akin to sliding friction but, as its name suggests, applied to the relative rotational motion of particles - may also be measured using a similarly simple test. The ramp rolling friction test (43, 44) involves rolling a particle down a smooth ramp of height h and onto a smooth, level substrate. The coefficient of rolling friction may then be determined simply as the ratio h/d, where d is the distance travelled across the substrate before the particle comes to rest. Similarly to the previously mentioned CoR tests, the results of the ramp rolling friction tests become difficult to interpret when adhesive effects are likely to be significant. As such, while such tests can be useful for measuring the properties of larger particles, they become less useful for smaller (and otherwise more adhesive) particles, limiting their value in many industrial applications. The tests are also once again reliant on the use of highly clean, smooth surfaces. The cohesive and adhesive properties of particles can also be measured in a relatively straightforward manner using, for example, atomic force microscopy (45-48); once again, however, such measurements require careful preparation and choice of equipment (45, 47), as well as expertise in the operation thereof.
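The ramp test arithmetic, sketched below with hypothetical measurements, simply equates the potential energy released on the ramp with the energy dissipated by rolling resistance over the rollout distance:

```python
import statistics

def rolling_friction_from_ramp(ramp_height, rollout_distance):
    """Rolling friction coefficient from the ramp test: the potential energy
    m*g*h released on the ramp is dissipated by the rolling resistance force
    mu_r * m * g acting over the rollout distance d, giving mu_r = h / d."""
    return ramp_height / rollout_distance

# Hypothetical repeats: ramp height 0.10 m, measured rollout distances (m).
distances = [1.9, 2.1, 2.0, 2.2]
mu_r = statistics.mean(rolling_friction_from_ramp(0.10, d) for d in distances)
```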
While the above certainly does not represent a comprehensive list of direct measurement techniques, it nonetheless demonstrates a clear running theme: that such measurements, though conceptually simple, require an extremely (in some cases almost unfeasibly) careful execution in order to produce meaningful results. It thus illustrates an important point, namely that in order to fully calibrate a material using direct measurement techniques, one requires both a significant amount of specialised, often not-commercially-available equipment, and a significant amount of time and resources to execute the measurements correctly and produce adequate statistics.
It is finally worth noting the common - though somewhat contentious - suggestion that due to the key simplifications associated with numerical models (DEM included), direct calibration cannot be expected to accurately reproduce the bulk behaviours of the systems modelled. To put it differently, the advantage of a precise evaluation of microscopic particle parameters can be lost in the simplicity of the contact model - even if we put in the effort to rigorously and accurately measure the frictional coefficient and CoR, the resultant simulation often has little correlation to the real system. Certainly this is not true in all cases - for example, in the 2003 work of Xu et al. (49), good, quantitative agreement was achieved between low-energy microgravity experiments, kinetic-theory-based models, and hard-sphere DEM simulations calibrated using exclusively direct methods. However, for less idealised systems (notably dense, high-stress systems where considerable deformation may be expected and thus the fundamental assumptions of DEM begin to break down) the efficacy of direct-calibration methods has not, to the authors’ knowledge, been rigorously proven. Further research into a) the effectiveness of direct calibration methods under diverse system conditions, and b) the development of more detailed models of the physical mechanisms underlying particle interactions, would both represent highly valuable pursuits.
iii. Indirect calibration
From the previous section, it is apparent that while the full calibration of a DEM simulation using exclusively direct methods is possible under suitable conditions, for most industrial applications - where particles are often small, cohesive and geometrically complex, time and manpower are finite, and a wide range of distinct materials is typically used - such an approach is unfeasible. Indirect calibration provides an alternative approach which overcomes some of these limitations, but nonetheless introduces complexities of its own.
A common approach to indirect calibration in industry can be broadly outlined as follows:
i. Perform one or more characterisation tests on the particulate of interest, thus determining one or more macroscopic responses.
ii. Perform multiple simulations of the characterisation process using a digital twin of the characterisation tool(s) used, implementing various combinations of values for the microscopic parameters to be calibrated.
iii. For each simulation performed, determine the macroscopic response(s) measured in i., and compare the simulated value obtained to the equivalent experimental measurement.
iv. For a suitable set of experiments and simulations, take the set of microscopic parameters producing the closest agreement between simulation and experiment as the ‘true’ values.
A simple example would be to use an angle of repose tester to measure the experimental angle of repose (AoR), φexp, of a given particulate, perform a set of simulations with a DEM model of the same angle of repose tester using various values of, for example, the sliding (µs) and rolling (µr) coefficients of friction, and then determine the values of µs and µr for which the difference between the experimental and simulated values of the angle of repose, σφ = |φexp − φsim|, is minimal. Assuming all other particle properties are already known, the combination of µs and µr yielding a minimum in σφ should, hypothetically, represent the true values of these parameters. In reality, however, this is not necessarily the case - see, as an example, Fig. 3, which shows the value of σφ for 256 permutations of µs and µr. It is immediately obvious from this figure that there exist numerous distinct permutations yielding extremely low σφ, suggesting that there are multiple possible combinations of µs and µr capable of recreating the experimentally-measured angle of repose. Such an outcome should perhaps not be surprising, as in the current case we are, in essence, attempting to solve a problem with two free parameters using only a single objective. To put it in less abstract terms, as the AoR is dependent on both the sliding and rolling coefficients of friction, it makes intuitive sense that a decrease in µs might be compensated for by an increase in µr in order to maintain a single value of φ, thus giving rise to the ‘L’-shaped region of possible solutions observed.
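The brute-force grid search just described, and its non-uniqueness, can be sketched as follows. Here `simulate_aor` is a stand-in for running the DEM digital twin and extracting the simulated angle; the toy surrogate model and all parameter values are hypothetical, chosen only to reproduce the qualitative behaviour of Fig. 3:

```python
import itertools

def calibrate_aor(simulate_aor, phi_exp, mu_s_values, mu_r_values, tol=0.5):
    """Brute-force indirect calibration over a (mu_s, mu_r) grid.

    Evaluates sigma_phi = |phi_exp - phi_sim| for every parameter pair and
    returns the best-fitting pair together with *all* near-optimal pairs,
    since the solution is generally non-unique.
    """
    results = {}
    for mu_s, mu_r in itertools.product(mu_s_values, mu_r_values):
        sigma_phi = abs(phi_exp - simulate_aor(mu_s, mu_r))
        results[(mu_s, mu_r)] = sigma_phi
    best = min(results, key=results.get)
    near_optimal = [p for p, s in results.items() if s <= results[best] + tol]
    return best, near_optimal

# Toy surrogate for the digital twin: the AoR rises with both friction
# coefficients, so a decrease in mu_s can be offset by an increase in mu_r.
toy_model = lambda mu_s, mu_r: 20.0 + 30.0 * mu_s + 15.0 * mu_r

grid = [i / 16 for i in range(16)]  # 16 levels per parameter, 256 simulations
best, candidates = calibrate_aor(toy_model, 35.0, grid, grid)
# `candidates` contains many distinct pairs - the 'L'-shaped solution region.
```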
The problem outlined above is not insurmountable, however. The yield loci produced by shear cells (50-52), or the torque vs. depth profiles of the Freeman FT4 powder rheometer (53), for example, produce distinct functional forms dependent on the frictional, cohesive and geometric properties of the particulates examined. As such, they each provide more than one objective/constraint, allowing them to potentially determine multiple free parameters simultaneously. Indeed, even for the case of a (dynamic) angle of repose tester, one may solve such problems by considering not only a single-valued angle of repose, but the full free-surface profile of the particulate flow (see e.g. Fig. 4). For highly multi-dimensional optimisation problems - that is to say, if we wish to simultaneously calibrate a significant number of distinct parameters using an exclusively indirect approach - it will likely become necessary to use multiple characterisation devices in order to provide the necessary number of objectives/constraints.
Increasing the number of parameters to indirectly calibrate also introduces additional complexities beyond simply the need to perform more characterisation tests. In the example depicted in Fig. 3, we wish to determine the values of two DEM parameters, meaning that we must, in effect, solve an optimisation problem in a two-dimensional (µs, µr) parameter space. Even in this comparatively simple, two-dimensional example, we have performed Ns = 16² = 256 simulations. If we wanted to calibrate also the restitution coefficient, for example, and to explore the new, three-dimensional parameter space to the same degree of detail as the original 2D case, then we would need to perform instead Ns = 16³ = 4,096 simulations. To calibrate also the cohesive energy density would thus require Ns = 16⁴ = 65,536. If we additionally wanted to separately calibrate particle-wall and particle-particle values for µs and µr then we would need to run a staggering Ns = 16⁶ = 16,777,216 simulations which, even for an extremely well-resourced company, would likely prove an insurmountable challenge. As such, it is clear that in order to maximise the value of indirect calibration approaches, we need to utilise more intelligent schemes than the simple, brute-force approach exhibited here. Considerable increases in efficiency can be achieved, for example, by using standard Design of Experiments approaches (54) to explore the parameter space more efficiently, and/or by reducing the size of the parameter space to explore using prior knowledge of the system (for example, if the material is known to be free-flowing, there is no need to calibrate the cohesive energy density, thus immediately reducing the dimensionality of the problem).
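The exponential scaling above, and the kind of space-filling design that mitigates it, can be sketched briefly. The Latin hypercube implementation below is a minimal illustration (a production study would more likely use a dedicated DoE library); all numbers are those of the worked example:

```python
import random

def brute_force_cost(levels, n_parameters):
    """Simulations needed for a full-factorial sweep with a fixed number of
    levels per parameter: N_s = levels ** n_parameters."""
    return levels ** n_parameters

def latin_hypercube(n_samples, n_parameters, seed=0):
    """Simple Latin hypercube sample in [0, 1)^d: each parameter's range is
    split into n_samples strata, and each stratum is used exactly once,
    giving good coverage at a cost that does not grow with dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_parameters):
        strata = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(strata)
        columns.append(strata)
    return list(zip(*columns))

# Full-factorial cost at 16 levels per parameter, as in the text:
costs = {d: brute_force_cost(16, d) for d in (2, 3, 4, 6)}
# A few hundred Latin hypercube points can probe the same 6D space that the
# full-factorial approach would explore with 16,777,216 simulations.
design = latin_hypercube(200, 6)
```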
The development of intelligent methods and algorithms for the calibration of DEM models is, in fact, an active area of research. Researchers have proposed a variety of methods utilising, for example, advanced design of experiments (DoE) combined with simple optimisation strategies (55-57), artificial neural networks (58-60), Bayesian inference (61-65) and evolutionary algorithms.
Even with an extremely well-designed DoE, however, there remains an additional problem to consider: to what extent do the tests truly characterise the material, and how relevant is a given characterisation test to the conditions of the real system of interest? For example, an angle of repose tester is likely to provide meaningful information regarding a particle’s frictional properties, but is largely insensitive to the Young’s modulus of a particle. Similarly, a shear cell may be expected to provide valuable insight into a particulate’s cohesive properties, but no meaningful information regarding its restitution coefficient. As such, it is important to ensure that the characterisation tools used mimic, as far as possible, the system we wish to ultimately model - if we wish to model a high-stress environment, we should expose particles to high stresses in our calibration experiments; if we wish to model a rapidly-flowing system, a dynamic characterisation test should be employed.
Moreover, with indirect calibration the set of parameters of the contact model are determined such that the macroscopic response of the system matches that obtained from external characterisation of the real material. This allows one to overcome, to some extent, the limitation of the direct calibration related to the simplified particle shapes and size distribution used. Indeed, the information on the complex shape of the particles of the real material will be included and distributed into the calibrated model parameters. However, this is at the price of a loss of physical meaning of these parameters.
A final issue worth mentioning in relation to indirect calibration is that of precisely determining and modelling cohesion. Cohesion, and specifically the modelling thereof, is a matter of significant importance in the majority of particle-handling industries, yet one about which our present knowledge is decidedly lacking; indeed, while we have a reasonable understanding for the case of spheres, beyond this we know very little with any certainty. The ‘macro’ cohesion experienced by a given system is usually the result of an interplay between numerous particle properties and external forces, including (but not limited to) van der Waals, liquid bridging and electrostatics, as well as potentially geometric effects (e.g. interlocking) (66,67). Determining the individual contributions of these various factors through indirect calibration methods would be, at best, highly challenging. To account for this issue, DEM practitioners often use simpler (and thus less physical) force models (68) to ‘lump together’ these contributions; while these simplified models are capable of successfully reproducing the dynamics of a given system, they nonetheless limit the physical insight one may gain. Nonetheless, in industry - where the primary aim is typically to accurately model a given system as opposed to obtaining a deep understanding of its underlying physics - such an approach is typically satisfactory.
III. Validation
An important, but unfortunately often overlooked, aspect of DEM calibration is that even if the set of parameters determined from an indirect calibration process can perfectly reproduce the dynamics of the characterisation tester in which they were simulated, it does not necessarily hold that they will be similarly capable of simulating the industrial system of interest. To take an extreme example, calibration parameters acquired from a dense, frictionally-dominated system such as a Schulze shear cell may not provide accurate simulation of, say, a dilute, collisionally-dominated vibrofluidised bed (69). Even in cases where the dynamics and mechanics of the characterisation system and industrial system are more closely matched, however, full agreement is not guaranteed. As such, in addition to rigorous calibration, the detailed validation of DEM simulations is also necessary if we are to trust the results produced thereby. Though the best practice for DEM validation is a complex subject, and as such cannot be given a comprehensive treatment in this short article, one can broadly state that there exist two key principles to which one should adhere in order to rigorously validate DEM simulations:
I. The system used for validation should be as close to the ‘real’ system of interest as possible - ideally, validation data should be acquired from the system itself or, failing that, a scale model.
II. The comparison of DEM results to experimental results should be as detailed and multifaceted as possible.
Following from the above discussion, the importance of principle I is fairly easy to see. Principle II is slightly more subtle and again, unfortunately, overlooked in many cases. For example, DEM simulations of mixers or attritor mills are often ‘validated’ simply by comparing experimental and simulated measurements of the torque exerted on the mixing blades or impeller (70). While disagreement between such measurements can certainly tell us that our simulation is invalid, agreement does not necessarily mean that the simulation is valid. Similar to our discussion regarding the possibility of multiple solutions during indirect calibration in the preceding section, it cannot be guaranteed that a single, scalar measurement can fully and uniquely capture the dynamics of a complex, three-dimensional system. For example, a bladed mixer in a highly dynamic state losing energy predominantly through energetic inter-particle collisions may, hypothetically, produce the same power draw as a system undergoing solid-body rotation, losing energy almost exclusively through frictional interactions with the system’s sidewalls.
More rigorous validation can be performed by experimentally imaging the systems of interest, allowing the direct comparison of a variety of quantities and fields (see e.g. Fig. 5). Such imaging thus facilitates a deeper, more detailed, ‘multipoint’ comparison, and ultimately a significantly more rigorous validation - while it is highly possible that two divergent systems may produce the same impeller torque, the probability that an inaccurately-calibrated simulation will, by random chance, quantitatively reproduce (for example) the full, three-dimensional velocity vector fields and density distributions of a given system is decidedly lower!
In scientific systems, DEM simulations are often validated against data acquired using optical imaging techniques such as PIV or PTV (71-74). However, such techniques are often unsuitable for the imaging of industry-relevant systems, which are typically dense, three-dimensional and optically opaque. For such systems, techniques such as x-ray tomography or radiography (75, 76), or positron emission particle tracking (77, 78), which use highly-penetrating x-rays or gamma-rays for imaging, are more typically used. However, even if the technique itself is suitable, there exist additional potential pitfalls which must be considered. Most notably, we must not assume that the quantities and fields extracted from experimental data (with its inherent limitations on temporal and spatial resolution and, in some cases, propensity to produce artefacts) provide ‘perfect’ ground-truth data. For example, particle tracking experiments may ‘miss’ particle collisions in between successive frames, leading to inaccurate measurements of velocity and fluctuant velocity (79, 80). More subtly, the finite duration of experiments inherently requires the partitioning of the experimental volume into finite regions for the calculation of such system properties, leading in turn to errors relating to both limited statistics and the potential for the under-sampling of gradients in said properties (79). In order to maximise the rigour of DEM validation, we must match as closely as possible the properties of our simulations to those of our experiments. Continuing with our example of particle tracking data, it is important, for example, to match the output timestep of simulations to the frame rate of the experimental imaging methodology, as well as the size of the sub-regions into which the system is divided, and to calculate velocities indirectly from DEM position data, as opposed to using the velocity values directly output by the simulations (83).
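The matching of simulation output to the experimental frame rate, and the finite-difference velocity calculation, can be sketched as follows. The helper names are hypothetical, and the nearest-snapshot resampling scheme is one simple choice among several:

```python
def downsample_positions(positions, sim_dt, frame_rate):
    """Resample DEM output positions (one tuple per snapshot, spaced sim_dt
    apart) to the experimental frame rate, keeping the snapshot nearest to
    each camera frame time."""
    frame_dt = 1.0 / frame_rate
    n_frames = int((len(positions) - 1) * sim_dt / frame_dt) + 1
    return [positions[round(k * frame_dt / sim_dt)] for k in range(n_frames)]

def frame_velocities(positions, frame_dt):
    """Finite-difference velocities between successive frames - mirroring how
    a particle-tracking experiment infers velocity - rather than using the
    instantaneous velocities output by the DEM engine."""
    return [
        tuple((b - a) / frame_dt for a, b in zip(p0, p1))
        for p0, p1 in zip(positions, positions[1:])
    ]
```

Processing both the experimental and simulated trajectories through the same pipeline ensures that any frame-rate-induced bias affects both datasets equally.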
IV. Conclusions and outlook
In this article, we have provided a concise overview of the manners in which discrete element method (DEM) simulations may be calibrated and validated, and the practicalities, strengths and weaknesses associated with the various approaches discussed. We have seen that direct calibration can in some (though not all) cases be time-consuming, involve a large amount of (often not commercially-available) testing equipment, and require extremely careful operation, rendering precise measurements in some cases unfeasible in the real world. Indirect measurement techniques, meanwhile, carry the potential to yield multiple solutions (that is to say, they may provide DEM parameter sets that correctly simulate the test system, but not necessarily the real system), and can become computationally expensive to the point of unfeasibility when attempting to simultaneously calibrate multiple parameters.
In reality, the optimal approach is perhaps to perform a mix of direct and indirect calibration - measure as much directly as can be (feasibly and efficiently) achieved so as to minimise the dimensionality of the parameter space remaining to explore, and use indirect methods to determine the missing parameters. The process can then be further expedited using appropriate design of experiment methodologies and optimisation tools.
In terms of future outlook, the calibration and validation methods used by industry (and indeed academia!) are at present generally somewhat ad hoc, with different institutes adopting strongly differing practices. Additional research is required to determine, for example, the most suitable tools and combinations of tools for DEM calibration - and whether the optimal choices vary dependent on the process to be modelled. Similarly, an established ‘gold standard’ for the validation of DEM simulations, and a workflow through which this may be reliably achieved, would be of significant value to the field. Recently-developed autonomous, machine-learning- and artificial-intelligence-based methods for the calibration of DEM models show significant promise, and certainly represent another valuable area of research.
Finally, while in this article we have focused exclusively on the use of experimental data for the calibration and validation of DEM simulations, the use of hyper-realistic physics models such as direct numerical simulation (DNS) represents another potentially valuable source of ground-truth data for such efforts.