
Dynamic Adaptive Concurrent Multi-Scale

Simulation of Wave Propagation in 3D Media

Nicola Bombace
Exeter College
University of Oxford

A thesis submitted for the degree of


Doctor of Philosophy

Hilary Term, 2018


Acknowledgements

My deepest gratitude goes to Prof. Nik Petrinic and Dr. Ettore Barbieri for their
patient guidance during these years. The knowledge they shared and their constant
encouragement have been invaluable in solving the technical challenges we encountered
throughout this dissertation.
My thanks to Prof. Alan Cocks and Prof. Antoine Jerusalem for their good advice
and stimulating conversations during our meetings. Thank you to all the people in
the Impact Engineering team who have always challenged me to approach my re-
search from both numerical and experimental aspects. Thank you, in particular, to
Dr. Sascha Knell for constantly sharing with me his knowledge and research activity
on dynamic simulations and Dr. Antonio Pellegrino for showing me different method-
ologies to model Split Hopkinson Bar experiments by means of Finite Elements.
Thank you also to Francesco De Cola, Simone Falco, Mattia Montanari and An-
tonino Parrinello for making this experience unique and being great friends during
these years.
Finally, I want to thank my family. My wife, Francisca, not only for her constant
and unconditional love that has been and will always be my strength, but also for
being a brilliant scientist who has helped me with invaluable suggestions. My father
and my mother who will always be next to me, every day, no matter how far I am
from home. My sister Sara and my brother Luca for showing me that our bond does
not know temporal and spatial limits.

Dynamic Adaptive Concurrent Multi-Scale
Simulation of Wave Propagation in 3D Media
Nicola Bombace, Exeter College, University of Oxford
For the degree of D.Phil. in Engineering Science, Hilary Term 2018

...

Over the last decades the use of numerical simulations to characterize the response
of real structures has proven to be a valid tool that can accelerate the design process.
However, the correct interpretation of the mechanical behaviour, including stress
localisations and geometrical features, requires the adoption of fine discretisations that
drastically increase the computational cost. Even when using a single numerical
description for the mechanical representation of the structure (e.g. continuum mechanics),
the use of concurrent adaptive multi-scale frameworks in which finer temporal and/or
spatial discretisation scales are generated during computation within the regions of
interest and coupled both spatially and temporally to the original coarse scale, can
lead to drastic reductions of the computation times. In this context, the challenge is
the formulation of a stable coupling between the discretisation scales, which avoids
the generation and propagation of numerical artefacts such as spurious wave reflec-
tion. Moreover, the estimation of the error in dynamic simulations based on the
popular super-convergent patch recovery technique, even if formally correct, requires
a substantial computational effort that can become a bottleneck for the whole sim-
ulation.
Motivated by these limitations, the conducted research aimed to develop a novel,
numerically efficient, concurrent dynamic finite element framework which automat-
ically detects the regions of interest and simulates them at finer time and length
scales. In particular, this thesis investigates the concurrent coupling between domains
with the same numerical description (i.e. Finite Elements for fine and coarse scales),
where a scale is defined as a computational domain which presents a finer/coarser

temporal and spatial discretisation. The first main contribution of this work is the
formulation of an error estimator based on an hermitian interpolation of the kine-
matic variables that is local to each element. The proposed procedure avoids the
need for neighbour searches and the resolution of complex systems of equations. Another
key feature of this methodology is that it can be used to directly transfer the variables
from the coarse to the fine scale, resulting in smooth strain and stress fields without
the use of a balance step. Another key contribution of this thesis is the
coupling methodology, formulated in terms of nodal forces over an evolving coupling
volume. This novel formulation enforces the kinematic link between the scales over
a volume defined as the set composed by the first neighbours of the elements high-
lighted for refinement. The resulting coupling can be evaluated explicitly at every
node of the coupled domain, resulting in a procedure numerically efficient and easy
to implement.
The properties of the framework are demonstrated both analytically and through
a set of numerical simulations, using the one dimensional wave propagation in elastic
rods for validation. Firstly, the proposed error estimator is compared against the
analytical error, showing a similar rate of convergence. The above hermitian strategy
is used to transfer the variables to the finer scales, where the equilibrium among in-
ternal, external and inertial forces is respected without the use of an intermediate
balance step. Subsequently, parametric studies have demonstrated the paramount
importance of suitable coupling lengths and weighting parameters to avoid the for-
mation of spurious wave reflections.
The novel mathematical findings, extended to the three-dimensional space, constitute
the fundamental building blocks of a novel dynamic adaptive recursive concur-
rent multi-scale framework. The proposed implementation idealises the relationship
among scales as parent-child, in which one coarse scale can generate several child
scales based on the implemented refinement criteria. The framework has been validated
in reversible and irreversible settings. Finally, after the rigorous demonstration of the
validity of the methodology, future research lines are suggested, based on the
versatility of the framework.

Contents

1 Introduction 1

1.1 Background and motivation . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Aim and objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.3 Research strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.4 Thesis outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2 Literature Review 6

2.1 Multi-scale modelling strategies for heterogeneous materials . . . . . 6

2.1.1 Multi-scale modelling for static simulations . . . . . . . . . . . 6

2.1.2 Multi-scale Modelling for dynamic simulation . . . . . . . . . 12

2.1.3 Adaptive concurrent multi-scale frameworks . . . . . . . . . . 20

2.1.4 Applications for heterogeneous materials . . . . . . . . . . . . 26

2.2 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3 A novel dynamic adaptive concurrent multi-scale framework for wave


propagation 29

3.1 Strong form of the dynamic wave propagation in continuum media . . 29

3.1.1 Weak form and fully discretised Finite Elements . . . . . . . . 30

3.2 A novel efficient adaptive framework for the coupling of differently


discretised domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.2.1 Determination of the critical time step for the novel framework 39

3.2.2 Resolution Algorithm . . . . . . . . . . . . . . . . . . . . . . . 42

3.2.3 Stability analysis . . . . . . . . . . . . . . . . . . . . . . . . . 45

3.2.4 Error estimation . . . . . . . . . . . . . . . . . . . . . . . . . 47

3.2.5 A local spatial error estimator based on hermitian interpolations 47

3.2.6 Mesh kinetic and kinematic data transfer based on hermitian


interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

3.2.7 Adaptive Framework . . . . . . . . . . . . . . . . . . . . . . . 53

3.3 Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

3.3.1 Analytical Approach . . . . . . . . . . . . . . . . . . . . . . . 56

3.3.2 Hermitian interpolation based error estimator for one-dimensional


mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

3.3.3 Coupling Properties . . . . . . . . . . . . . . . . . . . . . . . 62

3.3.4 Parametric study of the coupling properties . . . . . . . . . . 65

3.3.5 Adaptive dynamic concurrent multi-scale 1D framework . . . . 70

3.4 Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

4 A novel dynamic adaptive concurrent multi-scale framework for 3D


wave propagation in homogeneous media 81

4.1 Uniform strain hexahedron element for explicit simulations . . . . . . 81

4.2 A local spatial error estimator based on 3D hermitian interpolation . 84

4.3 An efficient refinement algorithm for 3D hexahedral meshes . . . . . . 88

4.4 A novel 3D framework for dynamic adaptive concurrent multi-scale


simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

4.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

4.5.1 Square bar without lateral inertia . . . . . . . . . . . . . . . . 93

4.5.2 Lateral Inertia Effect in thick square bar . . . . . . . . . . . . 100

4.5.3 Slender Circular bar . . . . . . . . . . . . . . . . . . . . . . . 107

4.5.4 Dynamic plastic localisation in dog-bone specimen . . . . . . . 116

4.6 Concluding Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

5 Conclusions and further work 129
5.1 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
5.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
5.3 Limitations and proposed further work . . . . . . . . . . . . . . . . . 134

A Hermitian interpolation in three dimensions 137


A.1 Code implementation and verification . . . . . . . . . . . . . . . . . . 137

List of Figures

2.1 Principle of separation of scales. The two scales are fully separated
when the condition Lv >> `v is respected. . . . . . . . . . . . . . . . 7

2.2 Computational homogenisation scheme (adapted from [1]). The macro-


scale deformation gradient FM is transferred to a micro-scale BVP
(Boundary Value Problem). The micro-scale communicates back the
stress and the tangent stiffness matrix. . . . . . . . . . . . . . . . . . 8

2.3 Non overlapping concurrent multi-scale Framework for Concrete. In


this case a micro-scale domain is defined a priori to simulate the crack
evolution. Reprinted with permission from [2] . . . . . . . . . . . . . 10

2.4 Non overlapping concurrent multi-scale Framework for Composites,


reprinted with permission from [3]. In composites the crack propa-
gation is a result of different competing mechanisms, that can only be
correctly represented at the fibre-matrix level. . . . . . . . . . . . . . . 10

2.5 Overlapping Arlequin Concurrent multi-scale Framework. The fine
scale Ωfs and Ωcs communicate through a transition layer of width `0 .
The blending of the energies among the scales is ensured by a weighting
parameter w = l/`0 . Reprinted with permission from [4]. . . . . . . . . 12

2.6 Principle of separation of scales for dynamic events. In dynamics a


third scale λ connected to the boundary condition is introduced. The
separation of scales is verified when λv >> Lv >> `v . . . . . . . . . . 13

2.7 Algorithm for time scale coupling, adapted from [5]. The one-dimensional
mesh is divided in three sets. A purely explicit (blue), a purely implicit
(orange) and a mixed implicit/explicit (green) domain. . . . . . . . . 13

2.8 Comparison between GC and PH framework (reprinted with permis-


sion from [6]). While the GC framework computes the Lagrangian
multiplier at each micro time-step, the PH framework computes it
only at macro time-steps, and interpolates this result on the micro-
scale. Such methodology of the PH framework avoids the dissipation
present in GC. This effect is achieved, in the PH framework, by cou-
pling the coarse to the fine scale only at coarse time-steps, while in the
classical GC the two domains are coupled at every fine time-step. . . 16

2.9 MST technique described in [7]. The domain is divided into a fine scale
(blue), a coarse scale (green), and a transition layer (shaded). While
at the interface the micro and macro scale communicate using the GC
framework, in the transition layer the material properties are weighted
similarly to the Arlequin method (reprinted with permission). . . . . . 17

2.10 Adaptive Concurrent Multi-scale Framework, reprinted with permis-


sion from [8]. In this case the crack propagation in composites is adap-
tively represented using different scales of approximation. . . . . . . . 21

2.11 Data transfer procedure for integration point variables reprinted with
permission from [9]. Firstly, the integration point variables are trans-
ferred on the nodes using the interpolation functions a). Subsequently
using the same interpolation functions the interpolated nodal quanti-
ties are re-interpolated on the nodes of the new mesh b). Finally, such
quantities are interpolated on the integration points of the new mesh
c). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

3.1 Domains Definition for strong form of dynamic problem . . . . . . . . 30

3.2 Domains definition in weak form of the Arlequin Model. The body Ω
is split in three partitions: a coarse scale ΩM , a fine scale Ωm and a
coupling domain Ωc . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

3.3 Schematic representation of the recovery technique presented in [10],


for one dimensional element with one integration point. The original
stress σ h is discontinuous at the nodal position. A continuous post-
processed stress can be obtained averaging the contributions coming
from different elements resulting in σ ∗ . At every integration point, the
error, ẽσ , is defined as the difference between the two stresses. . . . . 48

3.4 Central difference temporal discretisation on the right. At every mo-


ment of the simulation, the solution is propagated from tn−1 to tn . The
velocities are always half time-step ahead of the accelerations. On the
left, the multi-scale time stepping scheme proposed is depicted, for a
refinement factor of 2. At every macro time-step the stresses, displace-
ments and velocities at times tN−1 and tN−1/2 are interpolated on the
micro scale. Once the interpolation is complete, the micro-scale is up-
dated 2 times and the corrective forces are sent back to the macro-scale,
where a new vM at tN+1/2 is computed. . . . . . . . . . . . . . . . . . 51

3.5 Comparison between hermitian and linear interpolation for refinement,


in linear elastic elements. When using linear shape functions for the
interpolation of the macro-displacements (blue arrows), the resulting
elastic strain is constant over the micro-elements that share the same
macro-element domain. When using hermitian shape functions, such
situation is avoided, because of the use of nodal derivatives (orange
arrows) resulting in a better interpolation of the elastic strain on the
micro-mesh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.6 Transition of elements from macro to micro scale based on error detec-
tion. At every coarse time-step the error of every element is compared
against a user input threshold. If any element has a high error it will
be selected for refinement. The neighbouring elements of the original
flagged ones are used for coupling purposes. . . . . . . . . . . . . . . 54

3.7 Reference Problem Configuration. A slender bar of length L is sub-
jected to a velocity boundary condition V (t) while it is free on the oppo-
site face. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

3.8 Analytical solution to the elastic one-dimensional wave propagation


problem. At any point in time the solution is the combination of a
forward and a backward travelling wave . . . . . . . . . . . . . . . . . 57

3.9 Element configuration for computation of error. The central node 2


can recover its nodal strain from its neighbouring elements. . . . . . 58

3.10 Representation of the trapezoidal (left) and sinusoidal (right) applied


boundary conditions. . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

3.11 Rate of convergence for different error estimators and trapezoidal bound-
ary condition at t = 0.8 L/c. The hermitian error estimator presents the
same convergence rate as the ZZ estimator and is close to the analytical
error on the considered mesh. . . . . . . . . . . . . . . . . . . . . . . 61

3.12 Rate of convergence for different error estimators and sinusoidal bound-
ary condition at t = 0.8 L/c. The hermitian error estimator presents the
same convergence rate as the ZZ estimator and is close to the analytical
error on the considered mesh. . . . . . . . . . . . . . . . . . . . . . . 62

3.13 Bar Coupled Model. The slender bar is partitioned a priori in a coarse
ΩM , fine Ωm and coupling domain Ωc . . . . . . . . . . . . . . . . . . 63

3.14 Elastic wave propagation at t = 0.5 L/c for coupled model. The fine scale
domain (left) is able to represent the low and high frequency content
of the velocity boundary condition applied. . . . . . . . . . . . . . . 66

3.15 Elastic wave propagation at t = 0.8 L/c for coupled model. The coarse
scale domain (right) can only represent the low frequency content of
the velocity boundary condition applied. However the high frequency
is mostly dissipated through the coupling condition. . . . . . . . . . . 67

3.16 Evolution of the total energy for the coupled problem. The micro-scale
total energy is initially zero, then it reaches a maximum and decreases
when the energy is transferred to the coarse scale domain. The high
frequency content of the energy is dissipated, but its contribution to
the total energy is negligible. . . . . . . . . . . . . . . . . . . . . . . . 68

3.17 Ratio of the Energy Spectral Density (ESD) of the dissipated and total
power, for linear coupling and 16 coarse scale elements in the coupling
zone. The dissipation is more important at frequencies closer to the
cut-off frequency of the coarse domain. . . . . . . . . . . . . . . . . . 69

3.18 Elastic wave propagation at t = 0.8 L/c for linear weighting function at
different coupling lengths. When the coupling length is increased the
fine scale domain presents less spurious oscillations. At the same time the
trapezoidal pulse is better transferred to the coarse scale domain. . . 70

3.19 Total energy in coarse and fine scale domain when using a linear weight-
ing function and different coupling lengths. The amount of energy
trapped in the fine scale domain is negligible compared to the total
energy; however, when using small coupling lengths, there is a stronger
energy dissipation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

3.20 Energy dissipation index as defined in equation 3.94 for linear weighting
function at different coupling lengths. Increasing the length of the
coupling zone has a double beneficial effect. On the one hand, the
low frequency content is retained, because of the smaller dissipation at
higher coupling lengths, leading to a better transfer of energies between
macro and micro scale. On the other hand, the high frequency content
is more effectively dissipated resulting in the absence of spurious wave
generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

3.21 Elastic wave propagation at t = 0.8 L/c for power law weighting function
at different coupling lengths. When increasing the coupling length, the
fine scale domain presents less spurious oscillations. At the same time
the trapezoidal pulse is better transferred to the coarse scale domain. 73

3.22 Total energy in coarse and fine scale domain when using a power law
weighting function and different coupling lengths. The amount of en-
ergy trapped in the fine scale domain is negligible compared to the
total energy; however, when using small coupling lengths, there is a
stronger energy dissipation. . . . . . . . . . . . . . . . . . . . . . . . 73

3.23 Energy dissipation index as defined in equation 3.94 for power law
weighting function at different coupling lengths. Increasing the length
of the coupling zone has a double beneficial effect. On the one hand, the
low frequency content is retained, because of the smaller dissipation at
higher coupling lengths, leading to a better transfer of energies between
macro and micro scale. On the other hand, the high frequency content
is more effectively dissipated, resulting in the absence of spurious
wave generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

3.24 Elastic wave propagation at t = 0.8 L/c for power law and linear weighting
function using the same coupling length of 8*`eM . The use of the power
law weighting function decreases the amount of spurious wave reflection
and transfers more accurately the trapezoidal pulse at the coarse scale. 75
3.25 Total energy in coarse and fine scale domains when using power law
and linear weighting functions. The use of the power law weighting
function transfers more accurately the energy at the macro-scale do-
main, resulting in less energy dissipation. . . . . . . . . . . . . . . . . 75
3.26 Comparison of conventional and multi-scale analysis for trapezoidal
pulse case. The multi-scale simulation presents qualitatively a smaller
numerical error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.27 Comparison of conventional and multi-scale analysis for trapezoidal
pulse case after the reflection from the free boundary. The simultane-
ous effect of data transfer and coupling does not affect the shape of the
wave. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
3.28 Comparison of conventional and multi-scale analysis for sinusoidal pulse
case. The multi-scale simulation presents qualitatively a smaller nu-
merical error. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
3.29 Comparison of conventional and multi-scale analysis for sinusoidal pulse
case after the reflection from the free boundary. The simultaneous ef-
fect of data transfer and coupling does not affect the shape of the wave. 78
3.30 Comparison of the error in the case of conventional and multi-scale
simulation for trapezoidal pulse computed from analytical solution.
The error of the multi-scale simulation is sensibly reduced. . . . . . . 79
3.31 Comparison of the error in the case of conventional and multi-scale
simulation for sinusoidal pulse computed from analytical solution. The
error of the multi-scale simulation is sensibly reduced . . . . . . . . . 79

4.1 Configuration of hexahedron 8-nodes element in parent element domain. 82

4.2 Derivation of the approximation of the derivative terms in the vector Φ
for the node I. Employing a central difference scheme, an approxima-
tion for such terms can be computed only considering the neighbouring
nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

4.3 The refinement procedure for 3D elements generates shared nodes be-
tween the original elements. This condition does not arise in 1D mesh. 89

4.4 Refinement procedure proposed in [11] where different element tem-


plates are used to generate a conforming mesh. . . . . . . . . . . . . . 90

4.5 Schematic representation of the refinement algorithm proposed. The


flagged elements (a) are firstly refined in their parent element domain
(b), and then mapped on the deformed configuration (c) . . . . . . . 91

4.6 Representation for the proposed adaptive concurrent multi-scale frame-


work. The coarse scale ΩM generates two micro-scales Ωm1 and Ωm2
that communicate only with their parent domain (the communication
is represented by the dashed arrows). The child/fine scales can recur-
sively generate finer domains; in this case Ωm2 is the parent of Ωm3 ,
which employs the same communication scheme. . . . . . . . . . . . . 93

4.7 Flowchart for the proposed adaptive concurrent multi-scale framework.


The coarse scale ΩM generates two micro-scales Ωm1 and Ωm2 that com-
municate only with their parent domain (the communication is repre-
sented by the dashed arrows). The child/fine scales can recursively
generate finer domains; in this case Ωm2 is the parent of Ωm3 , which
employs the same communication scheme. . . . . . . . . . . . . . . . . 94

4.8 Geometry of the square cross sectional bar. . . . . . . . . . . . . . . . 95

4.9 Case of square bar without lateral inertia. Comparison between the
mono-scale and multi-scale spatial distribution of σz at t = 0.0789
ms (a) and t = 0.1407 ms (b). The yellow line represents the axis over
which the velocities will be plotted. On the lateral plane the analytical
solution for this problem, at the same time, is depicted. The micro-
scale simulation is activated for the entire portion of the domain that
is interested by the wave. Its extension is dictated by the macro-scale
elements that present a high error together with their neighbours. . . 97

4.10 Configuration of the coupling parameter α for the refined mesh at


t = 0.0789 ms. The coupling parameter varies linearly over the length
of one macro-scale element. . . . . . . . . . . . . . . . . . . . . . . . 97

4.11 Case of square bar without lateral inertia. Comparison of stresses for
conventional and multi-scale simulation at t = 0.0789 ms . . . . . . . 98

4.12 Case of square bar without lateral inertia. Comparison of velocities for
conventional and multi-scale simulation at t = 0.0789 ms . . . . . . . 98

4.13 Case of square bar without lateral inertia. Comparison of stresses for
conventional and multi-scale simulation at t = 0.1407 ms. . . . . . . . 98

4.14 Case of square bar without lateral inertia. Comparison of velocities for
conventional and multi-scale simulation at t = 0.1407 ms. . . . . . . . 98

4.15 Case of square bar without lateral inertia. Comparison between the
kinetic energy for both the mono-scale and multi-scale simulations.
The two curves show very similar trends meaning that the energy is
globally the same. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

4.16 Case of square bar without lateral inertia. Comparison between the
elastic energy for both the mono-scale and multi-scale simulations. The
two curves show very similar trends meaning that the energy is globally
the same. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

4.17 Comparison between the mono-scale fine simulation and the coarse sim-
ulation at t = 0.05 ms. The coarse scale cannot capture the magnitude
of the particle velocity along the bar . . . . . . . . . . . . . . . . . . 101

4.18 Comparison between the mono-scale fine simulation and the coarse sim-
ulation at t = 0.125 ms. After the wave rebound the coarse scale can-
not track the particle velocity along the bar. . . . . . . . . . . . . . . 101

4.19 Contour plot of lateral velocity for coarse (top) and fine (bottom) sim-
ulation at t = 0.05 ms. The macro-scale simulation cannot capture the
lateral motion of the bar. . . . . . . . . . . . . . . . . . . . . . . . . . 102

4.20 Contour plot of lateral velocity for fine mono-scale (top) and multi-
scale (bottom) simulation at t = 0.05 ms. The two simulations show
similar results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

4.21 Comparison between the multi-scale simulation and the mono-scale fine
simulation at t = 0.05 ms. The elements flagged for refinement are the
ones behind the wave front. . . . . . . . . . . . . . . . . . . . . . . . 104

4.22 Comparison between the multi-scale simulation and the mono-scale fine
simulation at t = 0.125 ms (after the reflection). At this time-step the
whole bar has been flagged for refinement. . . . . . . . . . . . . . . . 104

4.23 Plot of the proposed weights for the parametric study of the weighting
function αm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

4.24 Comparison between the multi-scale simulations for two different weight-
ing parameters, namely αm_C and αm_L , at t = 0.05 ms. The constant
coupling shows a significant amount of spurious wave reflections. . . . 106

4.25 Comparison between the multi-scale simulations for two different weight-
ing parameters, namely αm_L and αm_P , at t = 0.05 ms. The two couplings
show very similar performances without the generation of spurious waves. 106

4.26 Geometrical divisions for a section of a circular bar. The cross
section mesh is defined by the number of divisions in the square nsq
and the number of divisions on the diagonal ndiag . . . . . . . . . . . . 108

4.27 Comparison of the axial stress along the bar at t = 0.05ms. The coarser
mesh presents a higher amount of numerical error when compared to
the solution with a finer mesh. . . . . . . . . . . . . . . . . . . . . . . 109

4.28 Cross section of the refined geometry and coarse scale. The proposed
refinement algorithm can efficiently refine complex shapes; however, it
cannot represent the real geometry. . . . . . . . . . . . . . . . . . . . 109

4.29 Comparison of the longitudinal stress at the small refined scale (upper
mesh) and the coarse scale simulation (lower mesh) at t = 6.36e-2
ms. The refinement criterion highlights the elements affected by the
wave, and triggers a multi-scale simulation whose results are reported
in the top half of the image. Since the finer mesh can capture the
evolution of the longitudinal stress over the computational domain
with a better resolution, it is represented with a smoother variation
when compared with the coarse scale simulation in the bottom half. . 110

4.30 Case of slender circular bar. Comparison of longitudinal velocities for


conventional and multi-scale simulation at t = 2.53e-2 ms . . . . . . . 112

4.31 Case of slender circular bar. Comparison of longitudinal velocities for


conventional and multi-scale simulation at t = 6.37e-2 ms . . . . . . . 112

4.32 Case of slender circular bar. Comparison of longitudinal velocities for


conventional and multi-scale simulation at t = 8.91e-2 ms. . . . . . . 113

4.33 Case of slender circular bar. Comparison of longitudinal velocities for


conventional and multi-scale simulation at t = 1.40e-01 ms. . . . . . . 113

4.34 Elastic Energy for the whole domain as function of time, in the case of
a slender bar. The similar trends show that the multi-scale simulation
is stable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

4.35 Kinetic Energy for the whole domain as function of time, in the case of
a slender bar. The similar trends show that the multi-scale simulation
is stable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

4.36 Comparison between multi-scale simulation and mono-scale fine sim-


ulation in terms of longitudinal velocities at t = 6.37e−2 ms. The
two simulations show perfect agreement, confirming the validity of the
proposed framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

4.37 Case of slender circular bar. Comparison of longitudinal velocities for


conventional and multi-scale simulation at t = 1.4e-1 ms. . . . . . . . 115

4.38 Split Hopkinson Bar apparatus configuration. A tensile pulse is gener-


ated in the incident bar which loads the specimen. The transmission
bar serves as momentum trap for the apparatus. . . . . . . . . . . . . 116

4.39 Specimen design for the simulation of dynamic plastic localisation in


dog-bone specimen, with dimensions in millimetres (above). The spec-
imen is pulled from one face while axially supported on the opposite
face (below). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

4.40 Smoothed applied boundary condition for dog-bone dynamic plastic


localisation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

4.41 Coarse mesh for dog-bone specimen with `M = 0.7 mm. . . . . . . . . 118

4.42 Comparison between the normal forces at the two opposite faces of the
dog-bone specimen. When t > 0.05 ms the two forces are equal and
opposite, therefore the condition of dynamic equilibrium is achieved. . 119

4.43 Results of the coarse scale simulation of the dog-bone specimen at t =


0.2 ms. The maximum displacement in the z direction is 0.44 mm
(above). The maximum value of equivalent plastic strain is registered
in the middle of the specimen with a value of 0.64 (below). . . . . . 122

4.44 Results of the fine scale simulation of the dog-bone specimen at t =
0.2 ms. The maximum displacement in the z direction is 0.4926 mm
(above). The maximum equivalent plastic strain is registered in the
middle of the specimen with a value of 0.8436 (below). . . . . . . . . 123

4.45 Evolution of the plastic wave over time, alongside the refined mesh. On
the left side the elements of the coarse scale are flagged over time, and
the boundaries of the highlighted area are used to enforce the coupling
at the micro-scale through αm . . . . . . . . . . . . . . . . . . . . . . . 123

4.46 Results of the multi-scale simulation of the dog-bone specimen at t =


0.2 ms. The maximum displacement in the z direction is 0.506 mm
(above). The maximum equivalent plastic strain is registered in the
middle of the specimen with a value of 0.8421 (below). The differences
with the fine mono-scale simulation are less than 2%. . . . . . . . . . 124

4.47 Comparison of the multi-scale simulation of the dog-bone specimen at


t = 0.2 ms. The figure is obtained by computing, for every integration
point, the difference of the plastic equivalent strain between the fine
mono-scale simulation and the multi-scale simulation. . . . . . . . . . 124

4.48 Flagging process of the coarse domain with different choices of the
threshold parameter P∗ . Increasing the threshold value, the multi-scale
process starts later in time. Moreover, the highlighted portion of the
domain becomes smaller. . . . . . . . . . . . . . . . . . . . . . . . . . 125

4.49 Results of the multi-scale simulations with different values of P∗ at t =


0.2 ms. The highlighted area for refinement is smaller at higher values
of the threshold; however, the accuracy of the simulation decreases. . .

4.50 Parametric study of the Threshold parameter for the multi-scale sim-
ulation. Small values of P∗ result in high computational cost and low
error. On the other hand high values result in an error close to the
coarse mono-scale simulation. The non linear behaviour of the error
highlights an optimum threshold value of 0.2. . . . . . . . . . . . . . 127

A.1 Validation of the proposed hermitian interpolation scheme. The method-


ology is validated since the sum of the proposed functions has a value of
1 on the vertices of the hexahedron. . . . . . . . . . . . . . . . . . . 140

List of Symbols

a acceleration
α weighting parameter
B spatial derivatives of shape functions
BH spatial derivatives of hermitian shape functions
β Courant number
c sound propagation velocity in medium
ce sound propagation velocity in element
(·)c coupling quantity
D rate of deformation
∆ matrix of derivatives operators
dΩ external boundary of body Ω
dV infinitesimal volume in the body
dVel infinitesimal volume in element
det(·) matrix determinant operator
δ(·) variational operator
∆t computational time-step
∆tcrit critical computational time-step
E Young’s modulus
Ėint rate of internal energy
Ėkin rate of kinetic energy
Ėtot rate of total energy
ET tangent Young’s Modulus
eσ exact error in stress
ẽσ approximated error in stress
ẽ approximated error in strain
ẽv exact error in velocities
ε 1-d strain
f frequency
f ext external forces
f int internal forces
f corr corrective forces
Φ hermitian kinematic vector
G hermitian polynomial matrix
Γt region of space where traction is assigned
Γv region of space where velocity is assigned

H n (·) Sobolev space of order n
H hermitian interpolation matrix
H restriction of the hermitian interpolation matrix to
one component
I identity matrix
K tan Linearised stiffness matrix
L lagrange multiplier matrix
Lv characteristic length of the structure
`v characteristic length of the material
`e characteristic length of the element
λv characteristic length of excitation
λ lagrange multipliers
Λ̌ , Λ̊ square of eigenfrequencies
M mass matrix
(·)M macro scale quantity
(·)m micro scale quantity
(·)M →m macro to micro scale projection
(·)m→M micro to macro scale projection
n normal to a given direction
η second component of the spatial coordinates
η substitution variable in D’Alembert’s method
N¯ shape functions
N restriction of the shape function to one component
Nλ shape functions for lagrange multipliers
p first component of the displacements
pe first component of the displacement in one element
Ptot Fourier transform total power
Pdiss Fourier transform dissipated power
P sum of power contributions
Pint internal power
Pλ power associated to lagrange multipliers
Pkin kinetic power
Q sum of internal and inertial forces
q second component of the displacements
qe second component of the displacements in one ele-
ment
r third component of the displacements
re third component of the displacements in one ele-
ment
rf refinement factor
ρ density
S dissipation index energy spectral density
Stot energy spectral density of the total power
Sdiss energy spectral density of the dissipated power

S compliance matrix of the material
σ stress matrix
σ0 initial stress
σY 0 yield stress
t time
T maximum time of interest
t assigned traction
U unity vector
u displacement
v velocity
vb assigned velocity
V0 space of velocity test functions
X initial spatial configuration
X first component initial spatial configuration
X hermitian constant parameter matrix
x present spatial configuration
x first component present spatial configuration
ξ spatial coordinates in parent element domain
ξ first component of the spatial coordinates in parent
element domain
ξ̄ substitution variable in D'Alembert's method.
Y second component initial spatial configuration
y second component present spatial configuration
Z third component initial spatial configuration
z third component present spatial configuration
ω eigenfrequency
Ω generic body
ζ third component of the spatial coordinates in par-
ent element domain
(·)¯ trial parameter
(·)ˇ perturbed parameter
(·)˜ weighted parameter

In this thesis bold symbols, such as σ, represent a matrix, while italic symbols,
such as t, represent a scalar quantity.

Chapter 1

Introduction

This chapter introduces the proposed adaptive concurrent multi-scale framework,
developed during the research presented in this thesis. It summarises the relevant
background information and the motivation for the conducted research, together with
its objectives and research strategy, before providing the thesis outline. The invested
effort is further justified by the literature review in the subsequent chapter.

1.1 Background and motivation

The reversible and irreversible behaviour of structures is strongly affected by the


mechanical properties and arrangement of the single constituents of the material as
well as by the geometry of the component and the loading regime. In quasi-static
loading cases, where the main frequencies of excitation are well below the natural
vibration frequencies of the structure, the body has enough time to adjust to the applied load and
the external energy is balanced mainly by elastic/plastic mechanisms. At the same
time, the excited spectrum is far from the one which triggers micro-structural failure
modes, and as a consequence, the heterogeneous material responds as an homogenised
medium. On the other hand, the application of dynamic loads generates significant
localised inertia effects in which, at any time, the interaction among the stress waves
travelling through the bulk could trigger different failure micro-mechanisms. The

macroscopic result of such loadings could be unexpected, such as the formation of
damage away from the impact zone.

Even if seemingly different, these physical mechanisms are governed by the same
fundamental principles of conservation of energy and momentum. Hence the Finite
Element Method (FEM) provides a valid tool for the prediction and validation of
different designs of structural components [12]. When simulating structural behaviour,
two different formulations of FEM are available: implicit and explicit. Both methods
lead to similar solutions, but for events that have a short duration in time, such
as wave propagation, an explicit scheme is generally used, because it is considerably
more efficient (without sacrificing accuracy) than an implicit scheme. The biggest
disadvantage of an explicit method is its conditional stability: the time-step must remain
below a critical value dictated by the highest eigenfrequency of the assembled mass and
stiffness matrices. Therefore the computational time of an explicit simulation is not only
proportional to the total number of degrees of freedom but is also governed by this
critical time-step.
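For a lumped-mass mesh this stability limit reduces to the well-known Courant condition,
∆tcrit = min over all elements of `e /ce , usually scaled by a Courant number β ≤ 1. The
following minimal sketch (the material values and the mesh are illustrative assumptions,
not data used in this thesis) shows the computation for a uniform one-dimensional bar:

import numpy as np

E, rho = 210e9, 7800.0                 # assumed steel-like properties [Pa], [kg/m^3]
beta = 0.9                             # Courant number (safety factor), 0 < beta <= 1
node_x = np.linspace(0.0, 1.0, 101)    # 1 m bar discretised with 100 equal elements
elem_len = np.diff(node_x)             # characteristic length of every element
c = np.sqrt(E / rho)                   # sound propagation velocity in the medium
dt_crit = np.min(elem_len / c)         # the smallest element governs stability
dt = beta * dt_crit
print(f"c = {c:.0f} m/s, critical time-step = {dt_crit:.3e} s, adopted time-step = {dt:.3e} s")

Halving the element size halves the stable time-step, so a uniform refinement in three
dimensions multiplies the cost roughly by sixteen (eight times the elements, twice the
steps); this is the overhead that an adaptive framework seeks to avoid.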

However, fine and uniform meshes are required in a variety of applications. As an


example, stress wave propagation simulations require the use of fine meshes to capture
high stress gradients produced during wave propagation or impact, like the one which
is generated during impact between two collinear rods. In this case, the analytical
solution predicts an infinite gradient in the stress wave and real experiments show
very sharp rising times of the pulse. Another example of limitations in the maximum
characteristic length size can be found in the field of continuum damage mechanics. In
particular the softening behaviour, after the onset of damage, is regularised to avoid
mesh dependent results. However, the regularisation process leads to a maximum
characteristic length above which the model will not dissipate the correct amount of
energy. This non-exhaustive set of limiting conditions for the coarsest mesh translates
into very detailed computational meshes that hinder the simulation of real structures,
due to their high computational cost.
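For continuum damage models with linear softening, for instance, the crack band argument
from the damage mechanics literature (quoted here only as an illustration; the precise limit
depends on the regularisation adopted) bounds the element characteristic length by

\ell_e \;\lesssim\; \frac{2\,E\,G_f}{f_t^{2}},

with G_f the fracture energy and f_t the tensile strength; above this size the softening
branch can no longer dissipate the correct amount of energy.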

Mesh adaptivity represents a well known and valid technique to lower the cost
of large simulations. The main idea is that the mesh adapts in space, increasing the
number of nodes in zones where the solution exhibits a high concentration of error. This
technique has been developed and validated mostly for static problems; however, it is
still an open research field for dynamic applications. The main problems are connected
to non-physical responses associated with the juxtaposition of meshes with different
characteristic lengths. In particular, at the interface between the two meshes the
total energy is only partially transmitted from the coarse to the fine scale (and vice-
versa) creating a numerical impedance [13]. At the same time, the high frequency
content of the fine mesh that cannot be represented at the coarse mesh generates
numerical oscillations known as spurious wave reflection [14]. Moreover, the coupling
of different temporal scales among the various domains can lead to the generation
and propagation of non-physical forces, whose accumulation over time can introduce
artificial energy in the simulation, making it unstable [15].
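To give a sense of scale, for lumped-mass linear elements the standard dispersion analysis
(a textbook result, recalled here for context) limits the frequencies that an element of
characteristic length `e can represent to

\omega_{\max} \approx \frac{2c}{\ell_e},

so the harmonic content of the fine mesh lying above the cut-off of the coarse mesh cannot
be transmitted across the interface and is reflected back into the fine domain as spurious
oscillations.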
Another important limitation in the coupling of dynamic simulations, in which
different time and length scales are represented concurrently, is connected to the
quantification of a correct measure of the error, used to automatically highlight the el-
ements that need refinement. Finally, an open research field is represented by the
definition of transfer operators for kinetic and kinematic data among the different
meshes, that avoid numerical pollution and the presence of unbalanced forces both at
the fine and at the coarse scale.

1.2 Aim and objectives

Given the limitations of the standard formulation of the explicit finite element
method, the aim of this doctoral thesis has been to develop an adaptive and explicit
finite element framework, capable of simulating dynamic events at a considerably im-
proved computational cost, by automatically identifying the sub-domains of interest
and by solving the equations at the corresponding time and length scale. At the

same time the proposed framework should be able to concurrently couple different
constitutive models and/or discretisation techniques, both in time and space, allow-
ing simulations with better accuracy at the minimum computational cost. Based on
the aims of this work, the specific objectives have been:

• The definition of consistent down-scale and up-scale operators to transfer the


solutions in terms of kinetic and kinematic quantities among different scales
conserving the energy as well as the dynamic equilibrium.

• The formulation of a coupling constraint that does not generate numerical errors
(i.e. spurious wave propagation and numerical impedance) at the interfaces
between differently discretised domains.

• The formulation of an efficient a posteriori error measure based on local inter-


polation, to highlight/flag the elements that present high error.

1.3 Research strategy

The aims and the objectives of the thesis were guided by an extensive literature
review on multi-scale simulation in dynamic environments and adaptive coupling of
different temporal and spatial discretisations. The proposed novel framework is first tested
in a one-dimensional elastic environment, where the different challenges and salient
aspects are highlighted and solved.
First of all, a novel refinement criterion based on a norm of the estimated error is
proposed and verified for one dimensional constant strain under-integrated elements.
Subsequently, the use of a downscaling interpolation scheme based on Hermitian
functions enforces the continuity of strain as opposed to the use of standard FEM
interpolation functions. The coupling between the different scales, at each fine scale
time-step, is achieved through the computation of nodal coupling forces, which ensure
that a weighted equality constraint of the velocities in a predefined coupling area is
respected. The communication between the two scales is carried out by introducing a

coupling volume, where at each fine scale time-step the two simulations exchange
information.
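Schematically (this is an illustrative weak statement in the spirit of the description above,
not the exact expression derived in Chapter 3), the coupling enforces a weighted equality of
the coarse and fine scale velocities over the coupling volume Ωc ,

\int_{\Omega_c} \alpha \,\big(\mathbf{v}^{M}-\mathbf{v}^{m}\big)\cdot\delta\mathbf{v}\;\mathrm{d}V = 0 \qquad \forall\,\delta\mathbf{v},

where α is the weighting parameter listed in the List of Symbols; the corrective nodal forces
exchanged between the scales can be associated with the discrete form of this constraint.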
Subsequently, the formulation is extended from one dimension to three dimensions, where
the performance of the resulting framework is first assessed in an elasto-dynamic
setting. Finally, to prove the generality of the proposed methodology, the framework
is applied to irreversible processes observed and quantified in actual experiments.

1.4 Thesis outline

This thesis is organised as follows. Chapter 2 will introduce the relevant literature
review in the field of adaptivity and coupling of different computational models, jus-
tifying the aim and the objectives of this work. Chapter 3 introduces a novel adaptive
multi-scale framework applied to one-dimensional elastic wave propagation, with em-
phasis on the stability and accuracy of the overall scheme in which a new definition
of the numerical error is implemented to identify the regions to be refined. The new
framework is compared with conventional coupling methods, showing the improve-
ment in terms of spurious wave reflection. The extension to three dimensions of the
developed framework is shown in Chapter 4, studying the wave propagation with
different mesh topologies and alignments, where all the elements at the refined scales
are generated on the fly. In the last part of Chapter 4 the framework is demonstrated
to be valid in elasto-plastic dynamic problems, where strain localisation is observed
in the form of necking. Finally, Chapter 5 draws the major conclusions of this thesis and
outlines the further directions of this work.

Chapter 2

Literature Review

This chapter reviews previously proposed theories relevant to the development of


a novel multi-scale dynamic framework. In particular three main areas are discussed:
error estimation, transfer of data variables from coarser to finer mesh and coupling
at the interface of differently discretised meshes. At the end of the chapter the key
opportunities for improvements are highlighted, with particular attention to the case
of transient phenomena.

2.1 Multi-scale modelling strategies for heteroge-


neous materials
2.1.1 Multi-scale modelling for static simulations

All materials, regardless of their nature, present heterogeneities. The possibility of


representing such materials at a continuum level is based on the principle of separation
of scales, as introduced in [16]. Figure 2.1 gives an intuitive perspective of this
principle. In a generic static problem there are two scales involved, the characteristic
length of the structure Lv and the smallest characteristic length of the material `v .

When Lv >> `v the scales are fully separated, hence the behaviour at the smallest
scale can be homogenised at the coarsest scale. Such techniques are classified as
hierarchical, and in the early stages of multi-scale simulations they relied on closed
forms of analytical solutions, such as the pioneering work of Voigt [17] and Reuss [18].

Figure 2.1: Principle of separation of scales. The two scales are fully separated when the condition
Lv >> `v is respected.

These approaches have been extended to include different effects and several closed-
form homogenisation techniques have been proposed in this framework, such as the
Eshelby mechanics [19], the Hashin-Shtrikman variational principle [20] and the self-
consistent method [21]. A comprehensive overview of hierarchical homogenisation
methods can be consulted in the work carried out by Nemat-Nasser and Hori [22].
Hierarchical multi-scale approaches are very efficient, however they are restricted to
relatively simple microscopic geometries and small strain analysis.
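As a concrete example (a textbook result recalled for illustration, not a derivation belonging
to this thesis), for a two-phase material with volume fractions f_1 and f_2 = 1 − f_1 the Voigt
(iso-strain) and Reuss (iso-stress) assumptions yield the classical bounds on the effective
Young's modulus,

E^{\mathrm{Voigt}} = f_1 E_1 + f_2 E_2, \qquad \frac{1}{E^{\mathrm{Reuss}}} = \frac{f_1}{E_1} + \frac{f_2}{E_2},

between which the true effective modulus of the composite lies.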

Semi-concurrent multi-scale approaches are used when the principle of separation


of scales still holds but a closed form of the micro-mechanical solution is not avail-
able because of the geometry and/or loading effects. This technique is essentially
based on the solution of nested boundary value problems, one for each scale. The
main advantage of such methods is represented by the fact that the behaviour of the
material at each scale is not assumed a priori. The basic idea of semi-concurrent
multi-scale methods is illustrated in figure 2.2. The approach relies on the use of a
Representative Volume Element (RVE) defined as the minimum volume that can be
used to determine the effective properties of a given material which can be adopted in
a homogenised macroscopic problem [23]. The stress at each integration point of the
macro-scale is computed by solving a boundary value problem on the RVE at the micro-
scale. The macro and micro scale boundary value problems have to be equivalent
in the sense that the macro and micro mechanical energies have to respect the Hill-

Figure 2.2: Computational homogenisation scheme (adapted from [1]). The macro-scale deformation
gradient FM is transferred to a micro-scale BVP (Boundary Value Problem). The micro-
scale communicates back the stress and the tangent stiffness matrix.

Mandel principle [24]. Differently from hierarchical approaches, in semi-concurrent


multi-scale methods there is a two-way communication between the different scales.
Examples of semi-concurrent multi-scale approaches can be found in [25–31].
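For reference, in the small-strain setting the Hill-Mandel macro-homogeneity condition
mentioned above can be written as the equality between the macroscopic stress power and
the volume average of the microscopic one over the RVE,

\boldsymbol{\sigma}^{M} : \delta\boldsymbol{\varepsilon}^{M}
= \frac{1}{V_{\mathrm{RVE}}} \int_{\Omega_{\mathrm{RVE}}} \boldsymbol{\sigma}^{m} : \delta\boldsymbol{\varepsilon}^{m}\,\mathrm{d}V ,

which is the requirement that makes the stress and tangent stiffness returned by the
micro-scale BVP of figure 2.2 energetically consistent with the macro-scale problem.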

In cases in which damage and cracks evolve at the micro-scale, Lcrack ≈ `v , the
principle of separation of scales is violated, and standard homogenisation is no longer
applicable as the formation and propagation of cracks break the periodicity of the
micro-structure. The homogenisation is only valid locally because it is dependent
on the local orientation of the crack at the micro-scale. The local solution near the
crack tip is not representative of the general state of the material and therefore the
RVE is substituted by a Microstructural Volume Element (MVE). The main difference
between RVE and MVE is the communication that is established among the scales.
Bosco et al. in [32, 33], proposed a communication scheme where the macro-scale
solution is based on the use of X-FEM enrichment [34, 35]. The presence of an
enrichment, capable of representing discontinuities in displacement, allows the micro-
scale solution to be up-scaled in the presence of strain localisation regions. At the same
time, new boundary conditions are applied to the micro-scale problem, to account for
strain localisation development at the micro-scale in a realistic direction. Finally, the

micro-to-macro transition is based on a generalisation of the Hill-Mandel principle
in the case of strain localisation. Similarly, Loehnert and Belytschko [36] proposed the
use of XFEM to track, at the macro-scale, the trajectory of the evolving micro-scale
damage. Their approach uses two different MVEs: one that represents the localisation
band, and another from which the continuum behaviour is extracted. Semi-concurrent
methods in the presence of cracks offer little advantage with respect to concurrent
multi-scale methods, mainly because the dimensions of the MVE are comparable
with those of the coarse scale elements. A review of semi-concurrent approaches can be found
in [37].

Another, more effective, approach that resolves local effects in the presence of strong
coupling between the scales is the concurrent multi-scale framework. The main
idea of this methodology is to circumvent the lack of scale separation by modelling
simultaneously the different scales in the regions of interest of the computational
domain, which can adaptively grow or shrink depending on the damage evolution.
This idea is depicted in figures 2.3 and 2.4.

The computational domain is decomposed into different non-overlapping portions,
based on the level of accuracy needed, and information is exchanged through the
common boundaries. In [2], the damage caused by a crack growing in a concrete
structure is simulated. The damaged zones explicitly represent the micro-structure
while, away from the crack tip, a homogenised constitutive model based on the rule
of mixtures is used. The enforcement of a displacement continuity condition at the
common interface provides the communication among the scales. The approach used
for the coupling of different meshes was first proposed by Farhat et al. in [38, 39] as
Finite Element Tearing and Interconnecting (FETI), originally developed for the
parallel implementation of the Finite Element Method. In this framework, different
meshes and/or computational models share a common boundary on which continuity
of displacements is enforced using lagrangian multipliers. Despite the addition of new
degrees of freedom, if the coupled area is small, the added computational time and
storage requirement are minimal.
Figure 2.3: Non overlapping concurrent multi-scale Framework for Concrete. In this case a
micro-scale domain is defined a priori to simulate the crack evolution. Reprinted
with permission from [2].

Figure 2.4: Non overlapping concurrent multi-scale Framework for Composites, reprinted with
permission from [3]. In composites the crack propagation is a result of different
competing mechanisms, that can only be correctly represented at the fibre-matrix
level.

The addition of new degrees of freedom can be
avoided if the meshes are coincident at the interface, as proposed by Canal [3]. A
homogenised model cannot represent the microscopic interaction between fibres and
matrix in composite structures; therefore, the zones where damage is expected are
explicitly modelled. The concurrent multi-scale modelling of heterogeneous materials
in static conditions has been successfully applied in several works, for example [40–43].
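The displacement-continuity coupling described above leads to a saddle-point problem. The
following minimal sketch (assumed unit properties, loads and meshes; it is not code from this
thesis or from the FETI references) ties two independently meshed one-dimensional elastic
bars at their shared interface node through a single Lagrange multiplier:

import numpy as np

def bar_stiffness(n_elem, length, EA):
    """Stiffness matrix of a 1D bar discretised with n_elem linear elements."""
    K = np.zeros((n_elem + 1, n_elem + 1))
    k = EA * n_elem / length              # axial stiffness of one element
    for e in range(n_elem):
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

EA = 1.0
K1 = bar_stiffness(4, 1.0, EA)[1:, 1:]    # coarse sub-domain, clamped at its first node
K2 = bar_stiffness(8, 1.0, EA)            # fine sub-domain, free-free
n1, n2 = K1.shape[0], K2.shape[0]

f1 = np.zeros(n1)
f2 = np.zeros(n2)
f2[-1] = 1.0                              # unit axial load at the far end of the fine bar

# Signed Boolean operators enforcing B1 u1 - B2 u2 = 0:
# the last node of bar 1 is tied to the first node of bar 2.
B1 = np.zeros((1, n1)); B1[0, -1] = 1.0
B2 = np.zeros((1, n2)); B2[0, 0] = 1.0

# Saddle-point (KKT) system of the tied two-domain problem.
A = np.block([
    [K1,                  np.zeros((n1, n2)),  B1.T],
    [np.zeros((n2, n1)),  K2,                 -B2.T],
    [B1,                 -B2,                  np.zeros((1, 1))],
])
rhs = np.concatenate([f1, f2, [0.0]])
u1, u2, lam = np.split(np.linalg.solve(A, rhs), [n1, n1 + n2])
print("interface displacements:", u1[-1], u2[0], " interface force (multiplier):", lam[0])

The computed multiplier is the interface force exchanged between the two sub-domains,
which is the quantity on which FETI-type solvers iterate in their parallel implementations.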

Another class of concurrent multi-scale strategy decomposes the computational


domain into several overlapping parts. The best known overlapping method is the Ar-
lequin approach proposed by Ben-Dhia [44]. In addition to the micro and macro-scale
domains, the Arlequin method proposes a transition layer as depicted in figure 2.5.
In the overlapping region between the two domains the elastic energies of the macro
and micro-scale are continuously blended through the use of a weighting parameter w.
The coupling among the scales is established by adding to the variational statement a
condition which ensures continuity of displacements (L2 coupling), or the continuity of
both displacements and strains (H1 coupling), over the overlapping domain. The dif-
ferent couplings are enforced using lagrangian multipliers. Guidault and Belytschko
in [45] conducted a study comparing different parameters of the Arlequin method for
static problems. They found that for the L2 coupling (in which continuity of
displacements is enforced) the weighting parameter function has to be continuous,
and surprisingly the Lagrangian multiplier mesh has to be the coarser one. If these
conditions are not met the solution will not converge to the correct one. On the other
hand, H1 coupling (in which the continuity of displacements and strains is enforced) is
more flexible. In this context the Finite Element Tearing and Interconnecting (FETI)
method can be seen as a particular case of the Arlequin method, when ℓ0 = 0. It
is clear that the Arlequin method is computationally more demanding than FETI,
given the same mesh, since it introduces more unknowns. However, in its static formu-
lation, it provides a flexible tool for the coupling of particle and continuum models
as for example in [46–48].
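For clarity, a schematic form of the two gluing operators can be sketched as follows (an illustration following the common Arlequin notation, in which λ is the Lagrange multiplier field defined on the overlap Ωc; the coupling length ℓ, introduced only to make the two terms dimensionally consistent, and the strain measure ε(·) are assumptions of this sketch, not the exact expressions of the cited works):

$$C_{L^2}(\lambda, u) = \int_{\Omega_c} \lambda \cdot u \, dV, \qquad
C_{H^1}(\lambda, u) = \int_{\Omega_c} \left[ \lambda \cdot u + \ell^2\, \varepsilon(\lambda) : \varepsilon(u) \right] dV,$$

with $u = u_M - u_m$ the displacement mismatch between the two scales and $\varepsilon(\cdot)$ the symmetric gradient.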

Figure 2.5: Overlapping Arlequin concurrent multi-scale framework. The fine scale Ωfs and the coarse scale Ωcs communicate through a transition layer of width ℓ0. The blending of the energies among the scales is ensured by a weighting parameter w. Reprinted with permission from [4].

2.1.2 Multi-scale Modelling for dynamic simulation

The simulation of dynamic events in heterogeneous materials involves an additional
scale as depicted in figure 2.6. The new scale, λ, takes into account
the spatial variation of the external applied forces over the structure. In particular the
static condition can be obtained by imposing λ >> L. Therefore, in cases in which
λv >> Lv >> ℓv either hierarchical or semi-concurrent methodologies can be used
without any modification. When λv ≈ ℓv, local resonance effects, also referred to as micro-
inertia, take place in the micro-structure and classical homogenisation schemes
lead to wrong results [49]. Different variations of the Hill-Mandel principle have been
recently proposed to represent local resonance effects. In particular Pham et al. [50]
and De Souza Neto [51] proposed a homogenisation scheme in which they demonstrate
that the macro-scale stress is not only a function of the deformation gradient,
as in the classical scheme, but also of the macro-scale momentum. Recently Sridhar
et al. in [52] used the dynamic version of the Hill-Mandel principle from the previous
works and solved the problem at the micro-scale, reducing the computational time at
the micro-scale using the Craig-Bampton Mode Synthesis [53].

Figure 2.6: Principle of separation of scales for dynamic events. In dynamics a third scale λ, connected to the boundary conditions, is introduced. The separation of scales is verified when λv >> Lv >> ℓv.

Figure 2.7: Algorithm for time scale coupling, adapted from [5]. The one-dimensional mesh is di-
vided in three sets. A purely explicit (blue), a purely implicit (orange) and a mixed
implicit/explicit (green) domain.

As for the static cases where the separation of scales is violated, concurrent multi-
scale schemes offer a better solution for the explicit simulation of the micro-scale
at a minimum computational cost. Dynamic problems are discretised both in space
and time, and different frameworks looked at one or both aspects simultaneously.
Belytschko and Mullen [5] were the first to propose an implicit-explicit coupling in
different domains, for the coupling of different time-scales. The basic idea is depicted
in figure 2.7 for one dimensional constant strain elements. The whole mesh is di-
vided in explicit, interface and implicit zone represented as blue, green and orange
respectively. In this example the time-step of the implicit domain is twice as big

as the time-step of the explicit domain. Firstly the solution of the explicit and the
interface portions of the domain is propagated from time t to time t + ∆t. The flow
of information is such that the solution of node J − 1, cannot be propagated with an
explicit integration at the next time-step t + 2∆t. However, its kinematic quantities
can be used to update explicitly the solution of node J − 2, from time-step t + ∆t to
t + 2∆t. Once the solution of the whole explicit portion is updated to time t + 2∆t,
the displacement of node J − 2 is used as boundary condition to update the implicit
part of the domain. Mullen and Belytschko demonstrated in [54] the stability of the
methodology. An interesting result of the paper is that the coupled solutions, even if
stable, show numerical oscillations known as spurious wave reflection [13].

Belytschko et al. subsequently in [55] extended the technique for the temporal
coupling of explicitly integrated domains, where the ratio between the time-steps is
an integer number. In particular the domain with the bigger time-step is integrated
first and the interface displacements are interpolated linearly when requested by the
smaller time-step domain. In a subsequent work, Neal and Belytschko [56] proposed a
framework in which the different explicit partitions of the domain have a non-integer
time-step ratio. The basic idea underlying this subcycling procedure is the presence
of a nodal time-step and associated nodal clock together with a master time-step and
associated master clock. The master time-step is computed as the least common mul-
tiple of its nodal counterparts, and the master clock is advanced only when all nodal
clocks are at the same time. The proposed framework, even if it still presents spurious
wave reflection at the interface between different domains, is shown to be stable.
However, the analyses of such techniques by Klinisky in [57] and Daniel in [15] show
that the proposed subcycling algorithms are not stable in a classical sense. The sub-
cycling techniques proposed by Belytschko et al. are in fact probabilistically stable,
meaning that for problems with a large number of elements instability is highly un-
likely. This effect is mostly caused by the generation and accumulation of unbalanced
forces due to the interpolation and extrapolation of nodal values, and it is significant

for extreme time-step differences. Modifications of the subcycling techniques using a
modified trapezoidal rule for the update of the displacements and the velocities have
been proposed in [58, 59]. Casadei et al. in [60], proposed an integer subcycling tech-
nique in which every element can use its own time-step. The elements are separated
in different levels, and the presence of integer time-step ratios avoids the need for
interpolation.
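To make the subcycling bookkeeping described above concrete, the following minimal Python sketch (an illustration only, not the implementation of [56] or [60]; integer time units are assumed so that the least common multiple is well defined) advances a set of nodal clocks with different time-steps and moves the master clock only when all nodal clocks coincide:

```python
from math import lcm

def subcycle(nodal_dts, n_master_steps):
    """Advance nodal clocks with individual time-steps (integer time units).

    The master time-step is the least common multiple of the nodal
    time-steps; the master clock is advanced only when every nodal
    clock has reached the same time.
    """
    master_dt = lcm(*nodal_dts)            # master time-step
    clocks = [0] * len(nodal_dts)          # one clock per node
    master_clock = 0

    for _ in range(n_master_steps):
        target = master_clock + master_dt
        while any(c < target for c in clocks):
            # advance the most lagging node by its own time-step
            i = min(range(len(clocks)), key=lambda k: clocks[k])
            clocks[i] += nodal_dts[i]      # the nodal update would happen here
        master_clock = target              # all clocks aligned: advance master
    return master_clock, clocks

# Example: three nodes with (hypothetical) time-steps of 3, 4 and 6 units
print(subcycle([3, 4, 6], n_master_steps=2))   # master_dt = 12
```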

Another approach for the coupling of time and space length scales simultaneously
comes from the direct extension of FETI (which in statics only couples length scales)
to the dynamic case. The dynamic version of the FETI framework is known in
literature as Heterogeneous Asynchronous Time Integrator (HATI). Farhat et al. [61]
proposed a first extension of the FETI algorithm to transient dynamics. In this
framework the interfaces between the different meshes share the same node positions
(as in subcycling techniques). Differently from subcycling, the continuity at the
interface is imposed on the nodal velocity rather than the nodal displacement. The
coupling is achieved, as in the static FETI, applying lagrangian multipliers, in iterative
form. Gravouil and Combescure in [62], proposed a different resolution technique, GC
framework, for the general formulation of HATI. In this work, it is demonstrated that,
to achieve stability in multi time-step methods, the velocity needs to be continuous
at the interface between the differently integrated domains. Their approach is based
on a predictor-corrector strategy. First the coarse scale is advanced in time of one
time-step, without considering the interface constraint. Subsequently the fine scale
is updated, imposing the constraint condition between the fine scale velocity and the
linear interpolation in space and time of the coarse scale velocities. As pointed out,
the algorithm is stable but dissipative due to the linear interpolation of the velocities
at the interface.
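A minimal sketch of this staggered logic is given below (an illustration of the predictor–corrector idea only, with the mechanics of both subdomains reduced to a placeholder callback and an assumed integer time-step ratio m between the coarse and fine scales):

```python
import numpy as np

def gc_coupling_step(v_A_old, v_A_new, fine_step, m, interface):
    """One coarse step of a GC-like coupling (illustrative sketch).

    v_A_old, v_A_new : coarse (subdomain A) interface velocities at the
                       beginning and end of the coarse time-step, computed
                       first without considering the interface constraint.
    fine_step        : callback advancing subdomain B by one fine time-step,
                       given the prescribed interface velocity.
    m                : integer ratio between coarse and fine time-steps.
    interface        : indices of the fine-scale interface nodes.
    """
    for j in range(1, m + 1):
        # linear interpolation in time of the coarse interface velocity
        v_interface = v_A_old + (j / m) * (v_A_new - v_A_old)
        # advance the fine subdomain, enforcing velocity continuity at the
        # interface (the corrector step of the GC framework)
        fine_step(prescribed_velocity=v_interface, nodes=interface)

def fine_step(prescribed_velocity, nodes):
    # placeholder: a real solver would integrate subdomain B here and
    # overwrite the velocity of the interface nodes with the prescription
    pass

gc_coupling_step(v_A_old=np.array([0.0]), v_A_new=np.array([1.0]),
                 fine_step=fine_step, m=4, interface=[0])
```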

Prakash and Hjelmstad in [6] modified the GC framework, proposing a differ-
ent resolution procedure for the interface Lagrangian multiplier condition. The new
framework, named PH after the authors, has the main advantage of not being dis-

Figure 2.8: Comparison between GC and PH framework (reprinted with permission from [6]). While
the GC framework computes the Lagrangian multiplier at each micro time-step, the PH
framework computes it only at macro time-steps, and interpolates this result on the
micro-scale. Such methodology of the PH framework avoids the dissipation present in
GC. This effect is achieved, in the PH framework, by coupling the coarse at the fine
scale only at coarse time-steps, while in the classical GC the two domains are coupled
at every fine time-step.

sipative. The comparison between GC and PH is depicted in figure 2.8. Suppose that
two subdomains A and B communicate through a common interface. The subdomain
(scale) A has a coarser time-step than the subdomain (scale) B. In the GC framework
the interface condition is computed on the finer scale at each fine time-step of the
simulation, while in the PH framework the interface continuity condition is computed
only once when the times among the scales are the same. The resulting framework
does not only preserve the energy of the computational domains but is also less com-
putationally demanding. Mahjoubi, Gravouil, Combescure et al. [63] extended the PH
framework, to allow time coupling independently of the time-integrator. Differently
from the PH, the new framework (MGC) solves the constraint in time in a weak sense.
The Lagrangian multipliers are supposed to be constant over the coarse time scale,
while they are discontinuous on the fine scale domain, and the integral over time of
the lagrangian multipliers on the coarse and fine scale are imposed to be the same.
The resolution of the Lagrangian multipliers at the interface requires the resolution of a
non-linear system of equations, therefore an iterative solver is required, and this as-
pect of the computation is not desirable when two subdomains with explicit solvers
are coupled in time and space. Nevertheless a faster convergence of the interface

Figure 2.9: MST technique described in [7]. The domain is divided in a fine scale (blue) a coarse
scale (green), and a transition layer (shaded). While at the interface the micro and
macro scale communicate using the GC framework, in the transition layer the material
properties are weighted similarly to the Arlequin method.(reprinted with permission)

problem can be achieved using Hermitian time interpolation of the velocity at the
coarse scale at a finer time step as proposed by Bettinotti in [64]. In [65] Gravouil
and Combescure point out that the coefficient matrix used to solve the non-linear system
for the Lagrangian multipliers is constant, and therefore it can be pre-computed,
but this condition holds only if the coupled domains are linear and elastic at the
interface. On the other hand, the juxtaposition of different computational models
causes the propagation of numerical error, and in particular the formation and prop-
agation of non-physical, spurious, waves due to the abrupt change in time and space
resolution [13, 66, 67].

The most effective strategy to efficiently mitigate the spurious wave effect is to
add a "transition" layer around the coupled interface. Such a layer gradually adapts the
wave as it travels from one domain to the next, effectively reducing, if not eliminating
altogether, the formation of non-physical waves. Gigliotti and Pinho [7] proposed the
use of a transition layer between two conforming meshes. The solid body is divided

in three parts as shown in figure 2.9: a micro-scale (blue), a macro-scale (green) and
a region where the two of them coexist, called Mesh Superposition Technique (MST)
Zone. In the MST zone the stiffness and mass properties of the elements are weighted
with a parameter, that ensures partition of unity and energy blending in the overlap
area. The definition of such parameter is exactly the same as in the Arlequin Method.
The models are coupled at the interface using an explicit/implicit coupling in the GC
framework. The simulations show a good level of accuracy compared with a micro
mono-scale model, at a fraction of the total computational cost. The framework is
however limited to conforming meshes, meaning that in the MST zone the micro
and macro-scale meshes have to share the same nodes. Moreover, given the dynamic
nature of the simulations it is not always possible to pre-determine the extension of
the damaged area, and therefore adequately model the micro-scale.

Marchais et al. [68] proposed a different one-dimensional approach for the con-
struction of the transition layer. In the vicinity of the interface a filtering layer is
constructed such that the part of the signal that cannot be represented by the coarser
scale is damped out of the simulation. In particular at each time step of the fine scale,
the solution at every node in terms of acceleration, displacement and velocity is split
in two contributions: a macro contribution (representable by both the micro and the
macro-scale) and a micro-scale contribution (representable only on the fine scale).
The split of the solution is achieved using projection operators, that project the solu-
tion from the fine to the coarse scale and vice-versa. Such operators are constructed
ad-hoc on the transition layer minimizing the difference between the micro and the
projected macro scale kinetic energy. Several projection operators are proposed to
ensure an easy computation of the resulting projection matrix. The difference be-
tween the projected micro-scale kinetic field on the coarse scale mesh and the actual
field on the micro-scale is effectively the part of the micro-scale solution that cannot
be represented on the macro-scale. The non representable field is damped out using a
Perfectly Matched Layer (PML) [69, 70], a technique originally proposed to simulate

wave propagation in infinite medium. One dimensional elasto-dynamic simulations
have shown that the technique is effective in annihilating the spurious wave reflec-
tion, providing at the same time a minimal energy loss thanks to the properly defined
projection operator. The framework has not been extended to multiple dimensions,
as research is still needed for the definition of projection operator and PML in three
dimensions and on complex meshes.

Another approach to couple two or more computational domains with a transition
layer is represented by the dynamic form of the Arlequin Method. Ghanem et al. [71]
extended the static version of the Arlequin method to dynamics, with the possibility
of coupling different time scales, while retaining the same spatial discretisation. The
“gluing” condition over the overlapping zone is formulated in terms of displacement,
and the lagrangian multiplier problem is solved every coarse scale time-step. While
the stability of the framework is not demonstrated analytically, results are presented
with ratios of fine to coarse scale time-step size of up to 1000. Recently, Fernier et
al. [72] proposed a stable Arlequin framework in which, differently from the previous
works, attention is given to the stability of the gluing condition, while retaining the
same time scale. In their work the explicit formulation of finite elements is used, and
particular attention is given to the effect of the weighting parameter on the stability
(CFL) condition, when an acceleration continuity over the gluing patch is enforced in
the form of Lagrangian multipliers. The major finding is that the use of the weighting
parameter in the gluing zone affects the stability condition, decreasing the maximum
stable time-step.

A dynamic form of the Arlequin method, known as bridging domain, capable of
coupling different time and length scales has been proposed by Xiao and Belytschko
[14] for the simulation of coupled particle and continuum models. The different
communicating domains are both discretised using the explicit formulation of the
finite elements with different spatial and temporal resolution. In the overlapping
area the energies of the two domains are blended and a velocity continuity condition

is enforced using Lagrangian multipliers, verified on the fine length scale
mesh at each fine time-step. The resolution of the Lagrangian multipliers is executed
using the original coefficient matrix or its lumped form. The advantages of using the
lumped form of the lagrangian multipliers are evident, since their values can be found
without solving a coupled system of equations. The most interesting feature of such
lumping scheme is that it is more efficient in annihilating spurious wave reflections
with respect to the consistent scheme. The influence of the coupling size and the
weighting parameter function is also studied. The main drawback of the scheme is
that it still needs to compute and lump a matrix whose size is equal to the number
of fine nodes present in the coupling zone. Additionally, Talebi et al. [73] extended
the bridging domain method to three dimensions.

In summary all the frameworks proposed for dynamic concurrent multi-scale simu-
lation assume the presence of a fixed area of the whole computational domain where
a more detailed model is needed. However it is not always possible to determine a
priori the region of interest, therefore a number of simulations are needed to assess the
influence of the size of the micro-scale with respect to the macro-scale. It is clear that
especially in dynamics, the introduction of adaptivity, where the model estimates, on
the fly, the size and the location of the micro-scale domain, is needed. The added
complexities of shrinking/expanding micro-scale domains, together with the issue of
a moving interface boundary, highlighted in the next section, hinder the formulation
of such adaptive concurrent dynamic multi-scale frameworks.

2.1.3 Adaptive concurrent multi-scale frameworks

Adaptive frameworks in concurrent multi-scale simulations do not assume the
position and the size of a micro-scale domain a priori. The shape of the microscopic
domain, instead, is computed on the fly based on the characteristics of the solution.
Most of the work in this area concerns static analysis and the main challenges are
highlighted in this section. Ghosh et al. [8] proposed an adaptive concurrent multi-

Figure 2.10: Adaptive Concurrent Multi-scale Framework, reprinted with permission from [8]. In
this case the crack propagation in composites is adaptively represented using different
scales of approximation.

scale framework in which the macro model adaptively transforms into a micro model
through 4 levels in which different criteria are checked. In this framework three
regions are identified. The first region (level 0) is the purely macroscopic region in
which a hierarchical constitutive model is employed. The second region (level 1) is
closer to the damaged area and is modelled through a semi-concurrent framework,
while the localisation is explicitly represented at the third microscopic scale (level 2),
surrounded by transition elements that gradually vary in size from level 2 to level 1.

The coupling in space among the scales is performed with a FETI engine. Par-
ticular attention in the framework is given to the transition criteria that are used
to jump from one scale to the next. Such criteria can be based on the capability
of the mesh employed to properly represent the steep gradients of displacements,
and are employed in h or p-refinement schemes. In particular elements of the level
0 are refined where the traction jumps (at each element boundary) are bigger than
the average traction jump computed on the whole mesh, as proposed in [74]. Other
sets of criteria, instead of dealing with the quality of the mesh, assess whether the
solution is exhibiting localisation and therefore other constitutive models need to be

used. Such criteria are used in the transition from level 0 to 1 and from level 1 to 2.
Once the elements of level 2 are created, the state of stress and displacements history
is simulated again to ensure that the energy is not dissipated during the switch of
an element from level 1 to 2. Vernerey and Kabiri [75], proposed a similar scheme,
however the proposed framework uses only macro and micro-scale. In this framework
the coarse scale homogenised mesh is firstly refined employing an h-refinement scheme
until the discretisation error is below a threshold. Such error is assessed by evaluating
the first gradient of the displacement on the mesh and comparing it with the traction
jumps across the element edges. When the size of the refined elements is comparable
to the size of the RVE, the refined mesh will represent the explicit micro-structure.
A similar technique has been proposed by Larsson et al. [76] in which the elements
are refined in four different steps, using different constitutive models for composite
material, starting from homogenised down to the actual micro-structure.

Greco et al. [77] proposed an adaptive FETI framework. Differently from the
previous approaches, the original mesh size is the same as one RVE. The criterion by
which elements switch from a homogeneous constitutive model to the explicit micro-
structure is based on the relative distance between the elements and the crack. The
micro-structure is essential for tracking the correct crack path evolution, not well
represented by using solely a homogeneous constitutive model.

Akbari Rahimabadi et al. [78] proposed a framework that accounts for both dis-
cretisation error (only dependent on the mesh coarseness) and homogenisation error
(only dependent on the validity of the separation of scales principle). In particular the
discretisation error of the mesh is evaluated adopting the Zinkiewicz-Zhu (ZZ) [10, 79]
error estimator. In the ZZ approach, an approximation for the exact solution is deter-
mined by defining a nodal continuous approximation of stress. This post-processed
solution is subsequently compared with the original one, and their difference is used
as an estimation of the discretisation error. Such error decreases when the nodal
mesh density is increased. On the other hand, the homogenisation error is computed

based on the approach proposed by Temizer and Wriggers [80] who demonstrated that
the second derivative of the displacement measures the deviation of the homogenised
solution and the actual micro-scale. Differently from the discretisation error, the
homogenisation error increases with the nodal mesh density. These two competing
mechanisms determine whether an element needs to be simply refined or it should be
switched to the next scale.
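As an illustration of the ZZ idea, the minimal one-dimensional sketch below builds a discretisation error indicator by averaging element-wise constant stresses to the nodes and measuring the mismatch with the original discontinuous field (a sketch only, assuming linear elements and simple nodal averaging; it is not the specific recovery used in [10, 78, 79]):

```python
import numpy as np

def zz_error_indicator(element_stress, connectivity, n_nodes):
    """Element-wise ZZ-type error indicator for a 1D mesh of 2-node elements."""
    recovered = np.zeros(n_nodes)           # nodal (smoothed) stress
    count = np.zeros(n_nodes)
    for e, (i, j) in enumerate(connectivity):
        recovered[[i, j]] += element_stress[e]
        count[[i, j]] += 1.0
    recovered /= count                       # simple nodal averaging

    error = np.empty(len(element_stress))
    for e, (i, j) in enumerate(connectivity):
        # difference between the recovered (continuous) stress and the
        # original element-wise constant stress, element by element
        smoothed = 0.5 * (recovered[i] + recovered[j])
        error[e] = abs(smoothed - element_stress[e])
    return error

# Example: 4 two-node elements on 5 nodes
conn = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
sigma = np.array([1.0, 1.2, 2.0, 2.1])
print(zz_error_indicator(sigma, conn, n_nodes=5))
```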

In dynamic analysis, together with the discretisation and homogenisation errors, an-
other source of error has to be taken into consideration, namely the temporal error.
This third source of error is generated from the discretisation of the temporal scale.
Zinkiewicz [81] proposed a temporal error estimation technique for the Newmark time
integrators. In particular, Newmark time-stepping schemes are obtained by expanding
the displacements in a Taylor series and ignoring third-order terms. Therefore,
Zinkiewicz proposed a technique to evaluate the third order derivative of the dis-
placement in time to compute the temporal error. Another approach was proposed
by Wiberg and Li [82], and it is considered as the temporal application of the ZZ
estimator. The Newmark family time integrators present discontinuous acceleration
between two time steps. The authors proposed a post-processing technique to obtain
a continuous acceleration between two time steps and therefore estimate the temporal
error from the comparison of the post-processed and original solution. Romero and
Lacoma [83], proposed a more general approach that can be extended to any type of
time integrators for all members of the Newmark integration family. The main idea
is to construct an error estimation based on the explicit construction of a quadrature
rule which can estimate the high order terms excluded in the time stepping proce-
dure. The estimation of the error requires minimum computational cost and can be
extended to different families of time integrators. Wiberg and Li [9] proposed an h-
adaptive refinement technique for dynamic problems that simultaneously adapts the
time-step and the mesh size (using the ZZ error estimator) to reduce the error in the
solution. The resulting model, though, does not have multiple time scales, rather

the smallest time-step on the smallest mesh is used for the whole model. Moreover,
since the meshes are simply refined and juxtaposed, the adaptive solution presents
spurious wave oscillations. Yue and Robbins have proposed a similar approach for
the estimation of temporal and spatial error and have applied their technique to elas-
tic [84] and plastic dynamic [85] problems. Similarly to the previous framework, a
single time scale is adopted for the whole problem. The mesh size is adjusted using an
s-refinement, where different meshes are superimposed and a continuity of displace-
ment is enforced at the common boundary. The main advantage of this refinement
scheme is that it does not need geometrical transition zones to adapt the fine mesh
size to the coarse space. The main disadvantage of the framework is that it presents
numerical spurious oscillations at the interface among different meshes.

The mitigation of numerical reflections due to the sudden change in mesh size
can be achieved by using an adaptive formulation of the Arlequin/Bridging domain
method. Gracie [86] proposed an adaptive formulation of the bridging domain method
applied to the simulation of the propagation of dislocations. At every time step all
the coarse scale elements that are at a certain distance from the dislocations are
converted from coarse to fine scale. Moreover, at every time step a buffer zone is
detected where a continuity of the velocity is enforced using the lumped formulation
of the Lagrangian multipliers. The adaptive formulation of the bridging domain
method has been extended by Moseley in [87], where in addition to the refinement
criterion, a coarsening criterion is implemented to ensure computational efficiency.
The main drawback of the proposed adaptive bridging domain framework is that a
single time scale is used, meaning that the smallest time-step over the whole domain
is used to advance all the nodal quantities in time. However, if the fine scale nodes
are distributed over a small area, an improved computational time efficiency can be
achieved using different time-steps for fine and coarse scales. This is because the
effect of the time-step on the small fine scale portion of the domain, will not influence
the time-step of the coarse scale domain. Another major drawback of the formulation

Figure 2.11: Data transfer procedure for integration point variables reprinted with permission from
[9]. Firstly, the integration point variables are transferred on the nodes using the
interpolation functions a). Subsequently using the same interpolation functions the
interpolated nodal quantities are re-interpolated on the nodes of the new mesh b).
Finally, such quantities are interpolated on the integration points of the new mesh c).

is the computation of a new mass matrix every time the velocity continuity condition
is enforced over a new area.

A common issue of all the cited frameworks, both in static and dynamics, is what
in literature is referred to as “data-transfer”. Once the elements at the coarse scale
have been flagged for refinement and have been refined, the coarse scale kinetic and
kinematic variables have to be mapped from the old mesh to the new one. Peric
et al. [88] pointed out that this process has to guarantee consistency in the energy
transferred between the meshes, respect equilibrium and minimise the diffusion of
the interpolated state on the new mesh. One way to map the kinematic and kinetic
variables is to apply the boundary conditions coming from the coarse scale on the
fine scale and solve a boundary value problem for the newly generated mesh. This
approach is more popular in static applications [74–78], and has the major drawback
of being computationally expensive. Therefore a faster data transfer procedure that
does not require the resolution of a boundary value problem, but instead makes use
of interpolation functions to transfer nodal as well as integration point material state
variables is required, as proposed in the work of Saksono and Peric [89] and Ortiz and
Quigley [90]. The main idea of the procedure is to interpolate the nodal quantities on
the new mesh using the standard definition of the interpolation functions. For the in-
terpolation of the variables defined at the quadrature points a three stage procedure is
adopted. Firstly the variables at the quadrature points are interpolated on the nodes

of the old mesh (utilising the ZZ procedure) as in figure 2.11a). Subsequently the
obtained values are interpolated on the nodes of the new mesh 2.11b). Finally, using
the interpolation functions of the new mesh the nodal quantities are transferred on
the new integration points 2.11c). Even if the procedure is computationally effective
it is highly diffusive due to the different interpolation stages involved. In the last
type of data transfer approach, the value at one point (node or quadrature point)
is computed from a higher order reconstruction of the field in the neighbourhood of
this point. This type of remapping can be applied on unstructured meshes since it
relies on clouds of points rather than on a mesh. Such algorithms have been applied
in the works of Gracie [86] and Moseley [87]. Summarising, it is clear that there is
no unique procedure for the transfer operation since every approach has its own
advantages and drawbacks. For a review of data transfer operators consult the work
of Bussetta et al. [91].
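The following minimal one-dimensional sketch illustrates the three-stage interpolation idea for quadrature-point variables (an illustration under simple assumptions, i.e. linear elements with mid-element quadrature points and nodal averaging at stage 1; it is not the exact procedure of [89, 90]):

```python
import numpy as np

def transfer_qp_field(old_nodes, old_qp_values, new_nodes):
    """Three-stage transfer of a quadrature-point field between 1D meshes."""
    # Stage 1: quadrature points -> old nodes (average of adjacent mid-element values)
    old_nodal = np.empty(len(old_nodes))
    old_nodal[1:-1] = 0.5 * (old_qp_values[:-1] + old_qp_values[1:])
    old_nodal[0], old_nodal[-1] = old_qp_values[0], old_qp_values[-1]

    # Stage 2: old nodes -> new nodes (linear shape functions)
    new_nodal = np.interp(new_nodes, old_nodes, old_nodal)

    # Stage 3: new nodes -> new quadrature points (element midpoints)
    new_qp = 0.5 * (new_nodal[:-1] + new_nodal[1:])
    return new_qp

old_nodes = np.linspace(0.0, 1.0, 5)             # 4 old elements
old_qp = np.array([1.0, 2.0, 3.0, 4.0])          # one value per old element
new_nodes = np.linspace(0.0, 1.0, 9)             # 8 new (refined) elements
print(transfer_qp_field(old_nodes, old_qp, new_nodes))
```

The repeated interpolation stages are exactly the source of the diffusion mentioned above: each averaging step smears the originally discontinuous field.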

2.1.4 Applications for heterogeneous materials

Several frameworks mentioned in this section have been utilised in different appli-
cations involving evolution of damage for heterogeneous materials.

Gigliotti and Pinho in [7] applied the MST technique for the simulation of impact
on a composite plate. The impact is simulated blending two different models: a de-
tailed one with cohesive elements in the areas where delaminations are expected and
a simplified discretisation without cohesive elements away from the damaged areas to
simulate the bending of the plate. The models are coupled at the interface using an
explicit/implicit coupling in the GC framework implemented in Abaqus. The simu-
lations show a good level of accuracy compared with a micro mono-scale model, at a
fraction of the total computational cost, provided that the blending zone is far from the
delamination boundaries. This limitation can be mitigated using adaptive frameworks where the
coarse scale is converted to micro-scale during the computation.
Other interesting applications regarding the study of damage in composite

plates are proposed in the work of Ghosh in [8], as depicted in figure 2.10. In par-
ticular a composite plate with a central hole is subjected to a tensile load until it is
damaged, and a crack appears in the domain. Similar applications are presented in
Vernerey and Kabiri [75], Larsson [76] and Greco et al. [77], where different RVEs and
geometries are proposed for the validations of the frameworks. On the other hand,
Akbari Rahimabadi in [78] studied the propagation of damage in plates of polycrys-
talline materials.
This range of applications justifies the use of adaptive frameworks for a more effi-
cient simulation of damage in heterogeneous material, explicitly modelling the crack
path only when and where needed in the computational domain. However, further
research is needed to address several gaps, highlighted in the next section.

2.2 Concluding Remarks

From the previous section it is clear that there is a lack of a concurrent adaptive
dynamic multi-scale framework that could couple different length scales and different
time scales, contrary to [9, 84–87] where a single time-step is used for both fine and
coarse scale domains. Moreover, the original formulation of the Arlequin [44] and
bridging domain method [14], even if effective in the elimination of the spurious
wave reflection at the interface of fine and coarse scale domains, is computationally
inefficient in adaptive formulations since it requires the computation, at each coarse
scale time-step, of the mass matrix and the Lagrangian multiplier constraint matrix.
Lastly, there is a gap in the literature on the formulation of an efficient transfer
operator that can fulfil all the properties outlined in the work of Peric [88].
In the next chapter a novel framework will be proposed, able to capture the
dynamic response of homogeneous materials through the coupling of subsequently
refined scales. The main features of such framework will be the formulation of a novel
and more efficient error estimator for the identification of zones that need refinement.
Moreover, a novel coupling formulation for overlapping domains will be proposed and

analysed to ensure a stable framework. Lastly, particular attention will be given to
the formulation of a consistent data transfer scheme. The framework will be validated
in Chapter 3 with one-dimensional elastic wave propagation in rods. Subsequently
three dimensional applications will be presented in Chapter 4, where the elastic wave
propagation in complex meshes will be examined in the proposed multi-scale setting.
Finally, the approach will be extended to irreversible wave propagation problems to
explore the coupling using different constitutive models.

Chapter 3

A novel dynamic adaptive concurrent multi-scale framework for wave propagation

This chapter introduces a novel adaptive concurrent framework for dynamic prob-
lems. The first part of the chapter presents the mathematical formulation of the
framework with particular attention to its novelties and its stability properties. Sub-
sequently the novel methodology is validated by solving simple problems such as the
propagation of elastic waves in slender bars, selected as the reference problem. The
performance of the novel framework is compared against non-adaptive explicit finite
element formulations, showing improved accuracy.

3.1 Strong form of the dynamic wave propagation in continuum media

Let Ω be a body in the Euclidean space IR3 and let ∂Ω be its boundary. At any
instant in time, the deformation of the body is characterised by a smooth invertible
function Φ(X, t) which maps every point X from the initial configuration Ω0 to the
point x in the actual configuration Ω, as depicted in figure 3.1. Without loss of
generality, neglecting the effect of body forces, denoting with a(x, t), v(x, t) and
u(x, t) the acceleration, velocity and displacement fields respectively, as well as the
final time of interest T , the dynamic problem, as defined in [12] and [92] consists in

Figure 3.1: Domains Definition for strong form of dynamic problem

determining at every time t ∈ [0, T ], the velocity field and the Cauchy stress tensor
σ(x, t), for every point x of the body Ω such that:

$$\begin{cases}
\rho\, a = \nabla \cdot \sigma & \text{in } \Omega \times [0, T], \quad (3.1a)\\
v = v_b & \text{on } \Gamma_v \times [0, T], \quad (3.1b)\\
\sigma \cdot n = t & \text{on } \Gamma_t \times [0, T], \quad (3.1c)\\
\sigma(X, 0) = \sigma_0(X) & X \in \Omega, \quad (3.1d)\\
v(X, 0) = v_0(X) & X \in \Omega, \quad (3.1e)
\end{cases}$$

in which ρ is the material density and Γv and Γt are the portions of ∂Ω on which
the velocity and traction boundary conditions are assigned the values vb and t, re-
spectively. Moreover, the initial stress and velocity conditions σ0(X) and v0(X) are assigned over the
body.

3.1.1 Weak form and fully discretised Finite Elements

The weak (variational) formulation of the dynamic problem is obtained as in [12],
by multiplying equation 3.1a with a variation δ(·) of a velocity test function field v
and integrating it over the current configuration Ω, whose infinitesimal element is dV .

$$\int_\Omega (\rho a - \nabla \cdot \sigma) \cdot \delta v \, dV = 0. \tag{3.2}$$

The test function v represents a field of velocities that are continuous over the domain
and are zero on Γv , existing in the space V0 .

$$\delta v \in \mathcal{V}_0, \qquad \mathcal{V}_0 = \left\{ \delta v \,\middle|\, \delta v \in H^1(\Omega),\ \delta v = 0 \text{ on } \Gamma_v \right\}, \tag{3.3}$$

in which H 1 (Ω) is the Sobolev space of order 1. The integration by parts of the
second contribution of the integral in 3.2 together with the vanishing property of the
test function over the applied velocity boundary conditions leads to:

$$\int_\Omega \left( \rho a \cdot \delta v + \sigma : \delta D \right) dV - \int_{\Gamma_t} t \cdot \delta v \, dA = 0, \tag{3.4}$$

where D is the symmetric part of the velocity gradient, defined as $\mathrm{sym}\left(\frac{\partial v(x,t)}{\partial x}\right)$.
Equation 3.4 represents the weak form of 3.1a, in which every term represents a
variation of power with respect to v(t). In particular the different terms of 3.4 can
be collected as:
$$\begin{cases}
\delta P_{kin} = \int_\Omega \rho a \cdot \delta v \, dV,\\
\delta P_{int} = \int_\Omega \sigma : \delta D \, dV,\\
\delta P_{ext} = \int_{\Gamma_t} t \cdot \delta v \, dA,
\end{cases} \tag{3.5}$$
in which δPkin , δPint and δPext represent the variation of kinetic, internal and external
power, respectively and dA is the infinitesimal element of the surface Γt . Substituting
3.5 in 3.4 the final form of the weak form, known as the principle of virtual power is
obtained as:
δPkin + δPint − δPext = 0. (3.6)

The variational statement expressed in 3.6 can be solved by means of Finite Ele-
ments. To achieve this, the physical space Ω is substituted with a computational
domain, generally referred to as the “mesh”, such that $\Omega \approx \sum_{e=1}^{N_{elem}} \Omega_e$, in which $N_{elem}$
represents the number of elements in the computational domain. On the mesh, both
the displacements and the test function velocities are expressed as:

δv(X, t) = N (X)δv e (t), (3.7)


v(X, t) = N (X)v e (t), (3.8)

in which δv e and v e represent the arrays containing the nodal values of the test
function and the velocities grouped for each element, respectively. On the other

hand, N is the matrix of interpolation functions that are used to approximate the
value of the kinematic variables between nodes. The semi-discretised weak form is
obtained by substituting the expression for the approximated displacement and trial
velocities from 3.7 in 3.4, and invoking the arbitrariness on the test function, leading
to:

M a = f ext − f int , (3.9)

where
$$M = \sum_{e=1}^{N_{elem}} \int_{\Omega_e} \rho N^T N \, dV_{el}, \tag{3.10}$$
$$f^{int} = \sum_{e=1}^{N_{elem}} \int_{\Omega_e} B^T \sigma \, dV_{el}, \tag{3.11}$$
$$f^{ext} = \sum_{e=1}^{N_{elem}} \int_{\Gamma_t \cap \partial\Omega_e} N^T t \, dA_{el}, \tag{3.12}$$
$$f = f^{ext} - f^{int}. \tag{3.13}$$


In equation 3.11, B represents the matrix containing the derivatives of the shape
functions, with respect to the spatial variable x. It is important to note that both the
mass matrix M and the vector of forces f are evaluated over the current configuration.
Also, when using the explicit form of Finite Elements, the mass matrix is diagonalised
using lumping techniques such as the row sum technique [12].
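For instance, a row-sum lumped mass matrix can be obtained as in the following minimal sketch (illustrative only; the element values correspond to a 1D two-node linear element with ρAL = 6 chosen for convenience):

```python
import numpy as np

def row_sum_lump(M):
    """Diagonalise a consistent mass matrix by summing each row."""
    return np.diag(M.sum(axis=1))

# Consistent mass matrix of a 1D two-node linear element, rho*A*L/6 * [[2,1],[1,2]]
M_consistent = np.array([[2.0, 1.0],
                         [1.0, 2.0]])
print(row_sum_lump(M_consistent))   # diag(3, 3): half the element mass per node
```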
The nodal acceleration a in equation 3.9 can be discretised in time using the
central difference formula
$$a^n = \frac{v^{n+\frac{1}{2}} - v^{n-\frac{1}{2}}}{\Delta t}. \tag{3.14}$$
Substituting equation 3.14 into 3.9, it is possible to obtain the fully discretised ex-
plicit form of the Finite Elements, referred to as the leap-frog marching scheme:

$$v^{n+\frac{1}{2}} = M^{-1} \Delta t\, f^n + v^{n-\frac{1}{2}}. \tag{3.15}$$

Given the lumped nature of the mass matrix and the central difference scheme used
in time, the update of the nodal velocities from equation 3.15 can be accomplished
without solving any system of equations, which is a great advantage in terms of
computational time. The main drawback of such explicit scheme is that the solution

will grow unbounded if the time-step is too big. Such condition, known as conditional
stability, sets the value for the maximum stable time step. As an example a stable
time-step for linear elements is given by:

$$\Delta t = \beta\, \Delta t_{crit}, \qquad \Delta t_{crit} = \frac{2}{\omega_{max}} \leq \min_e \frac{\ell_e}{c_e}. \tag{3.16}$$

Equation 3.16 requires the time-step ∆t of the whole mesh to be smaller than a critical
value ∆tcrit. The parameter β, known as the Courant number, is generally taken smaller than
0.8 in practical FE simulations, to avoid the potential growth of instabilities due to complex
phenomena such as contact and damage. Even though the critical time-step should
be evaluated through the maximum eigenfrequency of the problem ωmax,
a good approximation is given by the time that an elastic wave takes
to travel the length of the smallest element in the mesh, computed as the minimum over
the whole mesh of the ratio between the element length ℓe and the sound propagation
velocity in that element ce. Finally, the FE resolution of the dynamic problem is
summarised in flowchart 1, as reported in [12, 92]. This will be
referred to in this thesis as the standard explicit finite element formulation.

Flowchart 1. Flowchart for standard explicit integration of Finite Elements

1. Initialise mesh: set $v^0$, $\sigma^0$, $n = 0$, $t = 0$, compute $M$

2. Evaluate nodal forces $f^n$ from equation 3.13

3. Evaluate accelerations $a^n = M^{-1} f^n$

4. Update nodal velocities $v^{n+\frac{1}{2}} = v^{n+\frac{1}{2}-\eta} + \eta\, \Delta t\, a^n$, with $\eta = \frac{1}{2}$ if $n = 0$ and $\eta = 1$ if $n > 0$

5. Overwrite nodal velocity for enforced boundary conditions

6. Update nodal displacements: $u^{n+1} = u^n + \Delta t\, v^{n+\frac{1}{2}}$

7. Update counter and time

8. If simulation is not complete go to 2.
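As an illustration, a minimal one-dimensional realisation of flowchart 1 is sketched below (a sketch for a linear elastic bar with lumped masses and an assumed velocity boundary condition on the first node; the material and loading parameters are chosen only for the example and it is not the general implementation used in the remainder of this thesis):

```python
import numpy as np

def explicit_bar(n_elem=50, L=1.0, E=200e9, rho=7800.0, A=1e-4,
                 v_impact=1.0, beta=0.8, t_end=1e-4):
    """Leap-frog explicit integration of a 1D elastic bar (flowchart 1)."""
    n_nodes = n_elem + 1
    le = L / n_elem
    c = np.sqrt(E / rho)                      # elastic wave speed
    dt = beta * le / c                        # time-step estimate from eq. 3.16

    m = np.full(n_nodes, rho * A * le)        # row-sum lumped nodal masses
    m[[0, -1]] *= 0.5
    u = np.zeros(n_nodes)
    v = np.zeros(n_nodes)

    t, n = 0.0, 0
    while t < t_end:
        strain = np.diff(u) / le              # element strains
        sigma = E * strain                    # element stresses
        f_int = np.zeros(n_nodes)             # assemble internal forces (eq. 3.11)
        f_int[:-1] -= sigma * A
        f_int[1:] += sigma * A
        a = -f_int / m                        # accelerations (no external forces)
        eta = 0.5 if n == 0 else 1.0
        v += eta * dt * a                     # leap-frog velocity update (eq. 3.15)
        v[0] = v_impact                       # enforced velocity boundary condition
        u += dt * v                           # displacement update
        t += dt
        n += 1
    return u, v

u, v = explicit_bar()
```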

Figure 3.2: Domains definition in weak form of the Arlequin Model. The body Ω is split in three
partitions: a coarse scale ΩM , a fine scale Ωm and a coupling domain Ωc

3.2 A novel efficient adaptive framework for the coupling of differently discretised domains

As highlighted in the literature review section, the coupling of two discretisations
using different temporal and length scales poses a challenge mainly linked with the
generation of spurious waves at the coupled interface. The dynamic version of the Ar-
lequin approach, that defines an overlapping area/volume between the two domains,
is arguably the most effective methodology that can be found in open literature to
couple different computational domains without generating spurious reflections. How-
ever, the high computational cost associated with the use of Lagrange multipliers over
the coupling domain limits the use of this approach for relatively large simulations.
In this section, a novel methodology, based on the expression of the coupling in terms
of nodal corrective forces, is presented as an improvement of the dynamic Arlequin
framework. In particular, the main advantage gained by the novel framework devel-
oped consists in avoiding assembling and inverting matrices connected to the coupling,
substantially increasing the computational efficiency of the multi-scale approach, thus
making it suitable for large simulations.

Let ΩM , Ωm and Ωc = ΩM ∩ Ωm , in which the subscript M indicates the coarse (or
macro) scale, m the fine (or micro) scale and c the coupling volume, be the partitions
of Ω as represented in figure 3.2. The weak form of the dynamic coupled problem
can be written as follows [71]:

$$\begin{cases}
\displaystyle \int_{\Omega_M} \delta v_M^T \cdot \left( \alpha_M \rho \frac{\partial^2 u_M}{\partial t^2} \right) dV_M + \int_{\Omega_M} \frac{\partial \delta v_M}{\partial x}^{\!T} : (\alpha_M \sigma_M)\, dV_M + \int_{\Omega_c} \delta v_M^T \cdot \lambda \, dV_c = 0,\\[2mm]
\displaystyle \int_{\Omega_m} \delta v_m^T \cdot \left( \alpha_m \rho \frac{\partial^2 u_m}{\partial t^2} \right) dV_m + \int_{\Omega_m} \frac{\partial \delta v_m}{\partial x}^{\!T} : (\alpha_m \sigma_m)\, dV_m - \int_{\Omega_c} \delta v_m^T \cdot \lambda \, dV_c = 0,\\[2mm]
\displaystyle \int_{\Omega_c} \delta\lambda\, (v_M - v_m)\, dV_c = 0.
\end{cases} \tag{3.17}$$

The equations 3.17 have to be valid for every $\delta v_M$, $\delta v_m$ and $\delta\lambda$ that satisfy:

$$\delta v_M \in \mathcal{V}_0^M = \left\{ \delta v_M \in H^1(\Omega_M) \,\middle|\, \delta v_M = 0 \text{ on } \Gamma_M \right\}, \tag{3.18}$$
$$\delta v_m \in \mathcal{V}_0^m = \left\{ \delta v_m \in H^1(\Omega_m) \,\middle|\, \delta v_m = 0 \text{ on } \Gamma_m \right\}, \tag{3.19}$$
$$v_M \in \mathcal{V}^M = \left\{ v_M \in H^1(\Omega_M) \,\middle|\, v_M = \widehat{v}_M \text{ on } \Gamma_M \right\}, \tag{3.20}$$
$$v_m \in \mathcal{V}^m = \left\{ v_m \in H^1(\Omega_m) \,\middle|\, v_m = \widehat{v}_m \text{ on } \Gamma_m \right\}, \tag{3.21}$$
$$\delta\lambda,\ \lambda \in H^1(\Omega_c), \tag{3.22}$$

in which $\widehat{v}_M$ and $\widehat{v}_m$ are the applied velocity boundary conditions on the macro

and micro domain, respectively. In 3.17 the weighting functions ensure a partition of
the unity for the energy in the gluing zone such that αM + αm = 1. The coupling
is enforced imposing the continuity of velocities in the overlap domain, Ωc using
a Lagrangian multipliers field λ. Such coupling is not respected locally at nodal
locations, rather it is imposed in a weak form. Macro, micro and coupling domains
are discretised in space using the classic definition of shape functions, defining in
particular three different sets NM , Nm , Nλ to approximate respectively the macro
and micro kinematic variables as well as the Lagrange Multiplier. Based on these
approximations the semi-discretised form is as follows

$$\begin{cases}
\widetilde{M}^M a^M + \widetilde{f}^M_{int} + L^M \lambda = 0,\\
\widetilde{M}^m a^m + \widetilde{f}^m_{int} + L^m \lambda = 0,\\
L^M v^M + L^m v^m = 0,
\end{cases} \tag{3.23}$$

in which $\widetilde{M}^M$ and $\widetilde{M}^m$ represent the lumped weighted mass matrices; similarly, $\widetilde{f}^M_{int}$ and $\widetilde{f}^m_{int}$ are the weighted internal force vectors of the macro and micro scale,
respectively, and $L^M$ and $L^m$ represent the Lagrange multiplier matrices.
It is clear from equation 3.23 that the original form of the Arlequin method re-
quires the assembly of two matrices connected to the Lagrange multipliers, for the
macro and micro domain. Moreover, the presence of the weighting functions in the
integrals connected to the mass and force matrices, limits the use of under-integrated
$C^0$ elements, common in explicit dynamics, since their integration rules should be
modified to account for the weights. It has been demonstrated that, while non-linear
weighting functions on Ωc have a beneficial effect in the reduction of spurious wave reflection,
they introduce a significant reduction of the time-step in explicit simulations [72].
The first improvement with respect to the original framework addresses these two
aspects related to the use of the weighting functions. In particular it can be noticed
that partition of unity for the energies in Ωc , can be achieved associating the weight at
nodal positions after the spatial semi-discretisation. Therefore the first modification
of the original framework applies the parameters αM and αm once the integrals in
space are evaluated. This process formally leads to:

$$\begin{cases}
\alpha^M M^M a^M + \alpha^M f^M_{int} + L^M \lambda = 0,\\
\alpha^m M^m a^m + \alpha^m f^m_{int} + L^m \lambda = 0,\\
L^M v^M + L^m v^m = 0,
\end{cases} \tag{3.24}$$

in which $\alpha^M$ and $\alpha^m$ are the nodal diagonal matrices of the weights and $M^m$, $f^M_{int}$
and $f^m_{int}$ are the original mass and internal force matrices as if no coupling was
applied. The advantage of this new formulation is twofold. Firstly, the elements do
not need to use modified integration rules for the computation of mass matrices and
internal forces, making suitable the use of under-integrated elements. Moreover, the

assignment of the weights directly to the nodes, does not change the eigenvalues of the
system, not altering the wave propagation properties, and more importantly the time-
step of the simulation as demonstrated in the next section. The second improvement
of this new formulation looks at the way the coupling is imposed. In particular, in
3.17, the coupling is applied in a weak form, which will lead to the assembling and
inverting of non-diagonal matrices. In the spirit of the explicit formulation of finite
elements, such coupling can be applied in a strong sense. The continuity expressed
as:

N v M − v m = 0, (3.25)

requires the velocities at the micro nodal positions to conform to the projection of the
macro velocities at the same nodal positions at every time. Such a constraint can be
applied using the Lagrangian multipliers, in which the power introduced on the dis-
cretised system by the coupling, Pλ , is expressed only using the coupled nodes as:

$$P_\lambda = \lambda^T \cdot (N v^M - v^m) = (f^m_{corr})^T \cdot (N v^M - v^m), \tag{3.26}$$

where a dimensional analysis reveals that the Lagrangian multipliers can be treated
as nodal forces $f^m_{corr}$ acting on the nodes of the fine mesh. Equation 3.26 can be used
to compute the variation of the power with respect to $\delta v^M$, $\delta v^m$ and $\delta f^m_{corr}$ as:

$$\delta P_\lambda = \frac{\partial P_\lambda}{\partial v^M}\, \delta v^M + \frac{\partial P_\lambda}{\partial v^m}\, \delta v^m + \frac{\partial P_\lambda}{\partial f^m_{corr}}\, \delta f^m_{corr}. \tag{3.27}$$

The partial derivatives in 3.27 can be computed using 3.26 as:

$$\begin{cases}
\dfrac{\partial P_\lambda}{\partial v^M} = N^T f^m_{corr},\\[2mm]
\dfrac{\partial P_\lambda}{\partial v^m} = -I f^m_{corr} = -f^m_{corr},\\[2mm]
\dfrac{\partial P_\lambda}{\partial f^m_{corr}} = N v^M - v^m,
\end{cases} \tag{3.28}$$

in which I represents the identity matrix. Using the principle of virtual power ex-
pressed in equation 3.6 it is possible to express the semi-discretised equations of

motion for the coupled system in terms of the variation of the macro, micro and
coupling powers δP M , δP m and δPλ . In particular:

δP = δP M + δP m + δPλ = 0. (3.29)

The contributions to the macro and micro powers, $P^M$ and $P^m$, considering the weights
and the equations 3.5 discretised over the macro and micro domains, can be defined
as:

$$\begin{cases}
\delta P^M = (\alpha^M M^M a^M + \alpha^M f^M_{int}) \cdot \delta v^M,\\
\delta P^m = (\alpha^m M^m a^m + \alpha^m f^m_{int}) \cdot \delta v^m.
\end{cases} \tag{3.30}$$

Invoking the arbitrariness of the functions $\delta v^M$, $\delta v^m$ and $\delta f^m_{corr}$, the semi-discretised
equations of motion for this framework can be written as:

$$\begin{cases}
\alpha^M M^M a^M + \alpha^M f^M_{int} + N^T f^m_{corr} = 0,\\
\alpha^m M^m a^m + \alpha^m f^m_{int} - f^m_{corr} = 0,\\
N v^M - v^m = 0.
\end{cases} \tag{3.31}$$

The matrix $N^T$ in the system of equations 3.31 represents the interpolation matrix
from the fine scale to the coarse scale. It is interesting to note that at the fine scale the
Lagrange multipliers have the physical interpretation of nodal forces, that are then
interpolated on the coarse scale. Using the assumption of the linking condition stating
that the nodal quantities on the fine scale have to be equal to the linear interpolation
of the coarse scale kinematic quantities, it is possible to derive an explicit expression
for the coupling forces:

$$\begin{cases}
\alpha^{M\to m} M^{M\to m} a^{M\to m} + \alpha^{M\to m} f^{M\to m}_{int} + f^m_{corr} = 0,\\
\alpha^m M^m a^m + \alpha^m f^m_{int} - f^m_{corr} = 0,\\
v^{M\to m} - v^m = 0,
\end{cases} \tag{3.32}$$

in which the operator $(\cdot)^{M\to m}$ is a shorthand for the projection of a generic variable
$(\cdot)$ from the coarse to the fine scale domain. By definition the weights on the coarse
scale can be expressed using the partition of unity condition, leading to:

$$\alpha^{M\to m} = 1 - \alpha^m. \tag{3.33}$$
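As a simple example (a linear weighting chosen purely for illustration; the width of the coupling zone ℓc and its start position are assumptions of this sketch), the nodal weights over the coupling zone can be built so that the partition of unity in 3.33 holds at every coupled node:

```python
import numpy as np

def coupling_weights(x_nodes, x_start, l_c):
    """Linear nodal weights over a coupling zone of width l_c.

    alpha_m grows from 0 to 1 across the overlap (towards the fine scale),
    alpha_M = 1 - alpha_m, so that the partition of unity holds nodally.
    """
    alpha_m = np.clip((x_nodes - x_start) / l_c, 0.0, 1.0)
    alpha_M = 1.0 - alpha_m
    return alpha_M, alpha_m

x = np.linspace(0.0, 1.0, 11)
aM, am = coupling_weights(x, x_start=0.3, l_c=0.4)
assert np.allclose(aM + am, 1.0)        # partition of unity at every node
```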

Before discretising in time the system of equations in 3.32, it is important to assess
the effect of both the coupling forces and the weights on the critical time-step that
will not lead to instabilities.

3.2.1 Determination of the critical time step for the novel framework

As stated in section 3.1.1 the major drawback of explicit time integration schemes
is their conditional stability that limits the maximum time-step to a critical value.
Such ∆tcrit can be evaluated using the equation 3.16, in which ωmax represents the
maximum eigenfrequency of the domain. The evaluation of such eigenvalue can be
computationally expensive for large problems, considering that it requires assembling
the global stiffness matrix, which is not contemplated in explicit numerical methodolo-
gies. It is clear that the determination of an estimation is crucial for the applicability
of the proposed framework. Therefore, it is of paramount importance to establish
the effect of the presence of weighting parameters and kinematic coupling on ∆tcrit .
To this purpose as suggested in [12], the linear stability of the system 3.31 will be
investigated in this section.
The equilibrium configuration is perturbed by a small quantity, in this context
represented with the symbol $(\check{\cdot})$. Since the perturbations are assumed to be small,
the dynamic equations are considered to be linear, that is, a linearised model is used
for the perturbed internal forces:
$$\begin{cases}
\check{f}^{M\to m}_{int} = K^{M\to m,tan}\, \check{u}^{M\to m},\\
\check{f}^m_{int} = K^{m,tan}\, \check{u}^m,
\end{cases} \tag{3.34}$$

where in 3.34 $K^{M\to m,tan}$ and $K^{m,tan}$ represent the linearised (tangent) global stiffness
matrices for the two different scales and $\check{u}^{M\to m}$ and $\check{u}^m$ the perturbed displacements.
Moreover, it is clear that if the constraint in 3.31 expressed in terms of velocities
is always verified, its validity will not change when expressed in terms of perturbed
acceleration as:

ǎM →m − ǎm = 0. (3.35)

Perturbing the equation 3.35 and using the equations 3.34 and 3.31, simplifying the
equilibrium state, it is possible to write the perturbed linearised form of the coupled
system of equations as:

$$\begin{cases}
\alpha^{M\to m} M^{M\to m} \check{a}^{M\to m} + \alpha^{M\to m} K^{M\to m,tan}\, \check{u}^{M\to m} + \check{f}^m_{corr} = 0,\\
\alpha^m M^m \check{a}^m + \alpha^m K^{m,tan}\, \check{u}^m - \check{f}^m_{corr} = 0,\\
\check{a}^{M\to m} - \check{a}^m = 0,
\end{cases} \tag{3.36}$$

in which $\check{f}^m_{corr}$ represents the force necessary to enforce the coupling in the perturbed
state. The linear system of ordinary differential equations in 3.36 admits a general
solution in the exponential form:

$$\begin{cases}
\check{u}^{M\to m} = A^{M\to m}\, e^{\sqrt{\check{\Lambda}^{M\to m}}\, t},\\
\check{u}^m = A^m\, e^{\sqrt{\check{\Lambda}^m}\, t},
\end{cases} \tag{3.37}$$

in which $A^{M\to m}$ and $A^m$ are constant vectors containing the amplitudes, while
$\check{\Lambda}^{M\to m}$ and $\check{\Lambda}^m$ represent the squares of the eigenfrequencies ω of the two do-
mains, respectively. From the equations in 3.37 it can be derived that the accelerations
can be written as:

$$\begin{cases}
\check{a}^{M\to m} = A^{M\to m}\, \check{\Lambda}^{M\to m}\, e^{\sqrt{\check{\Lambda}^{M\to m}}\, t},\\
\check{a}^m = A^m\, \check{\Lambda}^m\, e^{\sqrt{\check{\Lambda}^m}\, t}.
\end{cases} \tag{3.38}$$

The substitution of equations 3.37 and 3.38 into 3.36 yields:

$$\begin{bmatrix}
\check{Q}^{M\to m} & 0 & I\\
0 & \check{Q}^m & -I\\
I & -I & 0
\end{bmatrix}
\begin{Bmatrix}
A^{M\to m}\, e^{\sqrt{\check{\Lambda}^{M\to m}}\, t}\\
A^m\, e^{\sqrt{\check{\Lambda}^m}\, t}\\
\check{f}^m_{corr}
\end{Bmatrix}
=
\begin{Bmatrix}
0\\ 0\\ 0
\end{Bmatrix}, \tag{3.39}$$

in which I represents the identity matrix and 0 a zero-entries matrix. It is important
to note that the system is conveniently partitioned such that the upper left partition
represents the uncoupled system. The matrices $\check{Q}^m$ and $\check{Q}^{M\to m}$ are defined as:

$$\check{Q}^{M\to m} = \alpha^{M\to m} K^{M\to m,tan} + \alpha^{M\to m} \check{\Lambda}^{M\to m} M^{M\to m}, \tag{3.40}$$
$$\check{Q}^m = \alpha^m K^{m,tan} + \alpha^m \check{\Lambda}^m M^m. \tag{3.41}$$

From 3.39 it is possible to notice that the unconstrained problem is bordered with
constant matrices. For such system of equations, it is possible to invoke Rayleigh’s

theorem for multiple constraints, whose proof is in [12], which states that the maximum
eigenvalue of the constrained matrix is smaller than or equal to the biggest eigenvalue
of the unconstrained matrix. This is expressed as:

Λ̌max ≤ Λ̊max , (3.42)

in which $\mathring{\Lambda}_{max}$ represents the maximum eigenvalue of the system:

$$\begin{bmatrix}
\mathring{Q}^{M\to m} & 0\\
0 & \mathring{Q}^m
\end{bmatrix}
\begin{Bmatrix}
A^{M\to m}\, e^{\sqrt{\mathring{\Lambda}^{M\to m}}\, t}\\
A^m\, e^{\sqrt{\mathring{\Lambda}^m}\, t}
\end{Bmatrix}
=
\begin{Bmatrix}
0\\ 0
\end{Bmatrix}, \tag{3.43}$$

in which, using the same convention as in equation 3.40,

$$\mathring{Q}^{M\to m} = \left(\alpha^{M\to m} K^{M\to m,tan} + \alpha^{M\to m} \mathring{\Lambda}^{M\to m} M^{M\to m}\right), \tag{3.44}$$
$$\mathring{Q}^m = \left(\alpha^m K^{m,tan} + \alpha^m \mathring{\Lambda}^m M^m\right). \tag{3.45}$$

By virtue of 3.42, the critical time step can be evaluated using the matrix in 3.43. For
the system to be stable, the equations in 3.43 have to be valid for every value of the
amplitude matrices. This condition can be expressed as:

$$\det\left(\begin{bmatrix}
\mathring{Q}^{M\to m} & 0\\
0 & \mathring{Q}^m
\end{bmatrix}\right) = 0, \tag{3.46}$$
in which the operator det([·]) indicates the computation of the determinant of the
matrix [·]. The determinant in equation 3.46 can be evaluated as:

$$\det\left(\mathring{Q}^{M\to m}\, \mathring{Q}^m\right) = \det\left(\mathring{Q}^{M\to m}\right)\, \det\left(\mathring{Q}^m\right) = 0. \tag{3.47}$$

By definition, the macro and micro portions of the domain, possess the same
material properties and therefore M M →m = M m . Moreover, due to the performed
projection, the macro stiffness matrix can be simplified leading to K M →m = K m .
Using this assumption, the condition in equation 3.47 is expressed as:

$$\det\left(\alpha^{M\to m} K^{m,tan} + \alpha^{M\to m} \mathring{\Lambda}^{M\to m} M^m\right) = 0, \qquad
\det\left(\alpha^m K^{m,tan} + \alpha^m \mathring{\Lambda}^m M^m\right) = 0. \tag{3.48}$$

Using the multiplicative property of the determinant in equation 3.48, the weight
matrices can be factored out, leading to:

$$\det\left(\alpha^{M\to m}\right)\, \det\left(K^{m,tan} + \mathring{\Lambda}^{M\to m} M^m\right) = 0, \qquad
\det\left(\alpha^m\right)\, \det\left(K^{m,tan} + \mathring{\Lambda}^m M^m\right) = 0. \tag{3.49}$$

 
Assuming that the weight matrices for both equation are not null, the terms det αM →m
and det (αm ) can be simplified leading to the final form of the eigenvalues system:

  
det K m,tan + Λ̊M →m M m = 0,
  (3.50)
det K m,tan + Λ̊m M m = 0.

Clearly from 3.50 the matrices involved in the eigenvalue evaluations are the same
for the two scales and therefore $\mathring{\Lambda}^{M\to m} = \mathring{\Lambda}^m$. Moreover, the stiffness and mass
matrices are the same as if no weights were applied. For this reason, even in the presence
of weights, equation 3.16 still yields a valid critical time-step estimation. In the
classical form of the Arlequin method, this condition does not necessarily hold true.
In fact, as demonstrated in [72], the weights applied have an adverse effect on the
critical time-step of the simulation.
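This property can also be checked numerically; in the minimal sketch below (with small illustrative matrices, not taken from any specific mesh), the generalised eigenvalues of the weighted pair (αK, αM) coincide with those of the unweighted pair (K, M) for any non-singular diagonal weight matrix α:

```python
import numpy as np

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])          # toy stiffness matrix
M = np.diag([1.0, 2.0, 1.0])                # toy lumped mass matrix
alpha = np.diag([0.2, 0.5, 0.9])            # nodal weights (non-singular)

# generalised eigenvalues of (K, M), i.e. solutions of det(K - lambda M) = 0
lam_unweighted = np.linalg.eigvals(np.linalg.inv(M) @ K)
lam_weighted = np.linalg.eigvals(np.linalg.inv(alpha @ M) @ (alpha @ K))

print(np.sort(lam_unweighted.real))
print(np.sort(lam_weighted.real))           # identical up to round-off
```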

3.2.2 Resolution Algorithm

The equations in 3.32 can be solved by defining a set of trial kinematic quantities,
denoted with a bar $(\bar{\cdot})$, that do not take into consideration the coupling condition, but are linked
to the internal forces:

$$\begin{cases}
\alpha^M M^m N \bar{a}^M + \alpha^M f^{M\to m}_{int} = 0,\\
\alpha^m M^m \bar{a}^m + \alpha^m f^m_{int} = 0.
\end{cases} \tag{3.51}$$

The weights in both equations can be simplified since they are non zero quantities.
Substituting equations 3.51 in equation 3.32, and using central difference in time from
equation 3.14, a new expression for the discretised coupled system is obtained.

$$\begin{cases}
\alpha^M M^m N \left( \dfrac{v^M_{n+\frac{1}{2}} - v^M_{n-\frac{1}{2}}}{\Delta t^m} \right) - \alpha^M M^m N \left( \dfrac{\bar{v}^M_{n+\frac{1}{2}} - v^M_{n-\frac{1}{2}}}{\Delta t^m} \right) + f^m_{corr} = 0,\\[3mm]
\alpha^m M^m \left( \dfrac{v^m_{n+\frac{1}{2}} - v^m_{n-\frac{1}{2}}}{\Delta t^m} \right) - \alpha^m M^m \left( \dfrac{\bar{v}^m_{n+\frac{1}{2}} - v^m_{n-\frac{1}{2}}}{\Delta t^m} \right) - f^m_{corr} = 0,\\[3mm]
N v^M_{n+\frac{1}{2}} - v^m_{n+\frac{1}{2}} = 0,\\[2mm]
N v^M_{n-\frac{1}{2}} - v^m_{n-\frac{1}{2}} = 0.
\end{cases} \tag{3.52}$$

In equations 3.52 the velocities at the previous half time-step are not the trial
ones, because they take into account the corrective forces of the previous time-step. In
this system the only unknown is $f^m_{corr}$, which can be solved for, resulting in:

$$f^m_{corr} = \frac{M^m \alpha^M \alpha^m \left( N \bar{v}^M_{n+\frac{1}{2}} - \bar{v}^m_{n+\frac{1}{2}} \right)}{\Delta t^m}. \tag{3.53}$$
Equation 3.53 expresses the corrective forces needed to enforce the coupling
on the two domains. The weight functions scale the contribution of this force
according to $\alpha_{m}$ and $\alpha_{M}$. In the next section, the influence of this scaling will be studied.
It is important to notice, as well, that equation 3.53 requires the macro-scale velocity
at micro-scale time-steps, therefore a linear interpolation in time can be used as:

\[
\bar v^{M}_{n+\frac12} = \bar v^{M}_{n-\frac12} + \bar a^{M}_{N}\,\Delta t^{m} , \qquad (3.54)
\]

in which n is the micro step counter and N is the macro step counter. The complete algorithm for
the coupling of two simulations is outlined in flowchart 2.

Flowchart 2. Flowchart for novel concurrent explicit framework

1. Initialise macro and micro mesh: set $v^{M}_{0}$, $\sigma^{M}_{0}$, $v^{m}_{0}$, $\sigma^{m}_{0}$;
   $t^{M} = 0$, $t^{m} = 0$; compute $M_{M}$ and $M_{m}$

Macro Scale Update: $t^{N-1} \to t^{N}$, $\Delta t^{M}$

2. Evaluate macro nodal forces $f^{M}_{N}$ from equation 3.13

3. Evaluate macro trial accelerations
   $\bar a^{M}_{N} = (M_{M})^{-1}\left(f^{M,\mathrm{ext}}_{N} - f^{M,\mathrm{int}}_{N}\right)$

4. Update macro trial nodal velocities
   $\bar v^{M}_{N+\frac12} = v^{M}_{N+\frac12-\eta} + \eta\,\Delta t^{M}\,\bar a^{M}_{N}$, with $\eta = \frac12$ if $N = 0$ and $\eta = 1$ if $N > 0$

5. Overwrite trial macro nodal velocity for enforced boundary conditions

6. Update macro trial nodal displacements
   $\bar u^{M}_{N+1} = u^{M}_{N} + \Delta t^{M}\,\bar v^{M}_{N+\frac12}$

7. Update macro nodal velocities at the micro time-steps
   $\bar v^{M}_{n+\frac12} = \bar v^{M}_{n-\frac12} + \Delta t^{m}\,\bar a^{M}_{N}$

8. Update macro counter and time

Micro Scale Update: $t^{N-1} \to t^{N}$, $\Delta t^{m}$

9. Evaluate micro nodal forces $f^{m}_{n}$ from equation 3.13

10. Evaluate micro trial accelerations
    $\bar a^{m}_{n} = (M_{m})^{-1}\left(f^{m,\mathrm{ext}}_{n} - f^{m,\mathrm{int}}_{n}\right)$

11. Update micro trial nodal velocities
    $\bar v^{m}_{n+\frac12} = v^{m}_{n+\frac12-\eta} + \eta\,\Delta t^{m}\,\bar a^{m}_{n}$, with $\eta = \frac12$ if $n = 0$ and $\eta = 1$ if $n > 0$

12. Overwrite trial nodal velocity for enforced boundary conditions

13. Update micro trial nodal displacements
    $\bar u^{m}_{n+1} = u^{m}_{n} + \Delta t^{m}\,\bar v^{m}_{n+\frac12}$

Micro Scale Coupling

14. Compute corrective forces from equation 3.53

15. Evaluate micro accelerations
    $a^{m}_{n} = (M_{m})^{-1}\left(f^{m,\mathrm{ext}}_{n} - f^{m,\mathrm{int}}_{n} + f^{m,\mathrm{corr}}_{n}\right)$

16. Update micro nodal velocities
    $v^{m}_{n+\frac12} = v^{m}_{n-\frac12} + \Delta t^{m}\,a^{m}_{n}$

17. Update micro nodal displacements
    $u^{m}_{n+1} = u^{m}_{n} + \Delta t^{m}\,v^{m}_{n+\frac12}$

18. Update micro scale counter and time

19. If micro scale simulation not complete go to 9.

Macro Scale Coupling

20. Evaluate macro accelerations
    $a^{M}_{N} = (M_{M})^{-1}\left(f^{M,\mathrm{ext}}_{N} - f^{M,\mathrm{int}}_{N} - N^{T} f^{m,\mathrm{corr}}_{N}\right)$

21. Update macro nodal velocities
    $v^{M}_{N+\frac12} = v^{M}_{N-\frac12} + \Delta t^{M}\,a^{M}_{N}$

22. Update macro nodal displacements
    $u^{M}_{N+1} = u^{M}_{N} + \Delta t^{M}\,v^{M}_{N+\frac12}$

23. Update macro scale counter and time

24. If macro scale simulation not complete go to 2.
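As an illustration only, the structure of flowchart 2 can be sketched in a few lines of Python for a one-dimensional bar. The sketch assumes lumped (diagonal) mass matrices stored as vectors, a linear elastic internal force routine, scalar weights alpha_M and alpha_m, and a macro-to-micro interpolation matrix N; all names are illustrative and the listing is a minimal skeleton of the update sequence, not the implementation used in this work.

```python
import numpy as np

def internal_force(u, k):
    """Internal force vector f_int = K u for a 1D bar of linear elements with stiffness k."""
    f = np.zeros_like(u)
    for e in range(len(u) - 1):
        fe = k * (u[e + 1] - u[e])       # axial force carried by element e
        f[e] -= fe
        f[e + 1] += fe
    return f

def macro_micro_step(uM, vM, um, vm, MM, Mm, kM, km, N, alpha_M, alpha_m, dtM, rf):
    """Advance one macro step with rf coupled micro sub-steps (structure of flowchart 2)."""
    dtm = dtM / rf
    # macro trial accelerations (steps 2-3); lumped masses are stored as vectors
    aM_bar = -internal_force(uM, kM) / MM
    f_corr_back = np.zeros_like(uM)      # corrective forces projected back to the macro nodes
    vM_half = vM.copy()                  # macro velocity interpolated at micro half-steps (eq. 3.54)
    for _ in range(rf):                  # micro sub-cycling (steps 9-19)
        vM_half = vM_half + dtm * aM_bar
        am_bar = -internal_force(um, km) / Mm
        vm_bar = vm + dtm * am_bar
        # corrective force from the mismatch of the trial velocities (eq. 3.53)
        f_corr = Mm * alpha_M * alpha_m * (N @ vM_half - vm_bar) / dtm
        am = (-internal_force(um, km) + f_corr) / Mm     # step 15
        vm = vm + dtm * am                               # step 16
        um = um + dtm * vm                               # step 17
        f_corr_back += N.T @ f_corr
    # macro coupling (steps 20-22); averaging the sub-step corrections is one possible choice
    aM = (-internal_force(uM, kM) - f_corr_back / rf) / MM
    vM = vM + dtM * aM
    uM = uM + dtM * vM
    return uM, vM, um, vm
```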

3.2.3 Stability analysis

The macro and micro scale are coupled through the corrective forces, that can be
directly evaluated using the equation 3.53. Employing the energy method introduced
by Hughes in [93] and used in [6,63–65] it is possible to establish whether the coupling
leads to a stable system or to the formation and propagation of instabilities. This
approach studies the sign of the increment of mechanical energy in the discretised
system over one time-step. When the increment of energy is positive the framework
under examination is unstable, since the energy will grow boundlessly in time. On the
other hand, if the increment is negative, the system will dissipate energy over time,
but in a stable manner. Finally, if there is no increment of energy in one time-step
the system is stable and conservative. The balance of the rate of energy $\dot E^{tot}$ can be
split into the different contributions from the macro and micro scale domains, as in [94]:

\[
\dot E^{tot} = \dot E^{M}_{\mathrm{int}} + \dot E^{M}_{\mathrm{kin}} + \dot E^{m}_{\mathrm{int}} + \dot E^{m}_{\mathrm{kin}} , \qquad (3.55)
\]

in which the contributions coming from the internal forces, $\dot E^{M}_{\mathrm{int}}$, $\dot E^{m}_{\mathrm{int}}$, and the kinetic
contributions, $\dot E^{M}_{\mathrm{kin}}$, $\dot E^{m}_{\mathrm{kin}}$, are defined as:
\[
\dot E^{M}_{\mathrm{int}} = \sum_{i=1}^{N^{M}_{\mathrm{elements}}} \left(v^{M}\right)^{T} \alpha_{M} \int_{\Omega_i} B^{T} \sigma^{M}\, dV , \qquad (3.56)
\]
\[
\dot E^{m}_{\mathrm{int}} = \sum_{i=1}^{N^{m}_{\mathrm{elements}}} \left(v^{m}\right)^{T} \alpha_{m} \int_{\Omega_i} B^{T} \sigma^{m}\, dV , \qquad (3.57)
\]
\[
\dot E^{M}_{\mathrm{kin}} = \left(v^{M}\right)^{T} \alpha_{M} M_{M}\, a^{M} , \qquad (3.58)
\]
\[
\dot E^{m}_{\mathrm{kin}} = \left(v^{m}\right)^{T} \alpha_{m} M_{m}\, a^{m} . \qquad (3.59)
\]

Substituting equation 3.31 and equations 3.56 to 3.59 in 3.55, the increment of total
energy can be expressed as:
\[
\dot E^{tot} = \sum_{i=1}^{N^{M}_{\mathrm{elements}}} \left(v^{M}\right)^{T}\!\left( -\alpha_{M} f^{M}_{\mathrm{int}} - N^{T} f^{m}_{\mathrm{corr}} + \alpha_{M}\int_{\Omega_i} B^{T}\sigma^{M}\,dV \right)
+ \sum_{i=1}^{N^{m}_{\mathrm{elements}}} \left(v^{m}\right)^{T}\!\left( -\alpha_{m} f^{m}_{\mathrm{int}} + f^{m}_{\mathrm{corr}} + \alpha_{m}\int_{\Omega_i} B^{T}\sigma^{m}\,dV \right) , \qquad (3.60)
\]

in which the contributions connected to the internal forces cancel out by virtue of
equation 3.11. The condition for which the framework ensures conservation or
dissipation of energy can then be expressed as:

Ėtot ≤ 0. (3.61)

Such condition can be derived from 3.60 as:

\[
\dot E^{tot} = -\left(v^{M}\right)^{T} N^{T} f^{m}_{\mathrm{corr}} + \left(v^{m}\right)^{T} f^{m}_{\mathrm{corr}} \le 0 . \qquad (3.62)
\]

Equation 3.62 defines the power dissipated by the framework due to the coupling.
The inequality in 3.62 holds true because the projection operator N always subtracts
energy from the fine scale, thus resulting in a small dissipation. In particular, the framework
is dissipative at frequencies close to the cut-off frequency of the coarse scale
domain. Such dissipation is beneficial in suppressing spurious reflections. The magnitude of
this dissipation at both macro and micro scales is studied in the next sections.
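As a check that can be evaluated during a run, the dissipated power of equation 3.62 reduces to a single expression; the sketch below assumes nodal velocity vectors and the same projection matrix N used in the constraint (names are illustrative).

```python
import numpy as np

def coupling_dissipation_power(vM, vm, f_corr, N):
    """Power exchanged by the coupling (equation 3.62); it should be <= 0 for a stable run."""
    return float(-vM @ (N.T @ f_corr) + vm @ f_corr)
```

Accumulating this quantity over the micro time-steps gives the total energy removed by the coupling, which can be compared with the energy plots discussed in section 3.3.3.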

3.2.4 Error estimation

The adequacy of a mesh to resolve all the frequencies of the waves travelling through the
domain can be analysed by estimating the error committed at every time-step. The
different techniques available in the literature can be divided into two groups, namely a
priori and a posteriori error estimation. A priori estimators are designed to provide
information about the asymptotic behaviour of the discretisation error. On the other
hand, a posteriori techniques make use of the finite element solution to estimate
the actual error of a particular mesh. Grätsch and Bathe have reviewed different
formulations of error estimators in [95]. In this thesis, a novel a posteriori error
estimation technique is introduced, based on a local higher order
interpolation of the displacement in each element.

In dynamic problems the error is split into a temporal and a spatial contribution, which
can be computed separately [9, 84, 85]. However, in explicit dynamics time and
length scales are intimately connected through the conditional stability expressed in
equation 3.16. This condition justifies the use of only the spatial error to adaptively
refine a mesh, since the time-step (and therefore the temporal error) will be reduced
accordingly. As highlighted in the literature review chapter in section 2.1.3, the most
popular a posteriori technique for error estimation is the super-convergent patch
recovery (SPR) proposed by Zienkiewicz and Zhu [10]. This estimator requires the
resolution of a system of equations associated with each patch of elements, which could become
computationally expensive for models that include several thousands of elements.

3.2.5 A local spatial error estimator based on hermitian interpolations

Following the approach presented in [10], extended to dynamic problems, at any
time t the local spatial error in the stresses of a finite element solution can be
expressed as the difference between the solution of the problem $\sigma(x,t)$ and the
interpolated value of this solution using the finite element shape functions $\sigma_h(x,t)$

Figure 3.3: Schematic representation of the recovery technique presented in [10], for a one dimensional
element with one integration point. The original stress $\sigma_h$ is discontinuous at the nodal
position. A continuous post-processed stress $\sigma^*$ can be obtained by averaging the contributions
coming from different elements. At every integration point, the error $\tilde e_\sigma$
is defined as the difference between the two stresses.

as:

eσ (x, t) = σ(x, t) − σh (x, t). (3.63)

For convenience, the mesh locations over which the local errors are computed corre-
spond to the positions of the integration points of each element, at which the stress
tensor is readily available. It is worth noticing that the local error $e_\sigma(x,t)$ has the
same dimensions as the stress vector. In practical finite element simulations the solution
in terms of stress is not known a priori, and equation 3.63 is approximated
as:

\[
\tilde e_{\sigma}(x,t) = \sigma^{*}(x,t) - \sigma_{h}(x,t) , \qquad (3.64)
\]
in which $\tilde e_{\sigma}(x,t)$ represents the approximated error and $\sigma^{*}(x,t)$ is a recovered solution
for the finite element problem.
In one dimension, using linear elements, such a post-processed solution is computed
in two steps. In the first step the stress $\sigma_{h}(x,t)$, originally discontinuous at the nodal
positions as shown in figure 3.3, is smoothed by averaging the contributions coming from
the different elements. The recovered stress represents $\sigma^{*}(x,t)$, which can be easily
interpolated at the integration points to compute the local stress error. Since the
error defined in 3.64 represents a stress, its energy norm $||(\cdot)||$ can be computed as:
\[
||\tilde e_{\sigma}(x,t)|| = \sum_{i=1}^{N_{\mathrm{elements}}} \int_{\Omega_i} \tilde e_{\sigma}(x,t)\, S(x,t)\, \tilde e_{\sigma}(x,t)\, dV , \qquad (3.65)
\]

in which S(x, t) represents the compliance matrix of the material, potentially depen-
dent on time and position in the computational domain.
The same approach can be applied for the local error in strains rather than stresses,
therefore the local error in stresses can be replaced by the local error in terms of
strains:
\[
\tilde e_{\varepsilon}(x,t) = \varepsilon^{*}(x,t) - \varepsilon_{h}(x,t) . \qquad (3.66)
\]

In equation 3.66, a post-processed strain $\varepsilon^{*}(x,t)$ is compared against the original
strain in the element $\varepsilon_{h}(x,t)$, rather than a stress. The error in strains, differently
from the one in stresses, can be computed from a higher order interpolation of the
displacements. In particular, at every time t, one component of the original displacements
can be approximated as:

uh (x, t) = N (ξ e (x))ueh (t), (3.67)

in which $u^{e}_{h}(t)$ represents the collection of nodal displacements for the element e.
Moreover, the interpolation functions N are computed in the parent element domain,
whose coordinates are identified by $\xi$, via the time-invariant mapping $\xi^{e}$. In the
context of linear under-integrated one dimensional elements, such a mapping evaluates
to $-1$ and $1$ at the nodal positions and to $0$ at the location of the integration point. The
evaluation of the original strains is achieved using:

h (x, t) = B(ξ e (x))ueh (t), (3.68)

in which B contains the derivatives of the shape functions with respect to the
spatial coordinates in the physical domain. In this context, a higher order interpolation
is represented by the Hermitian functions as:

u∗ (x, t) = H(ξ e (x))Φ(t), (3.69)


\[
\Phi(t) = \begin{Bmatrix} u^{e}_{h}(t) \\[4pt] \dfrac{\partial u^{e}_{h}(t)}{\partial \xi} \end{Bmatrix} . \qquad (3.70)
\]
In equation 3.69, $H(\xi^{e}(x))$ represents a third order polynomial function obtained
by imposing at every node not only the displacements $u_{h}(t)$ but also their derivatives
$\frac{\partial u_{h}(t)}{\partial \xi}$ in the parent domain to a fixed value. The main advantage of such an interpolation
is that it is local, and does not depend on neighbouring elements once the derivatives in
the parent element domain are determined. Such interpolations have been used with
success in 2D finite element formulations, for the development of the Adini-Clough
bending shell element [96, 97]. However, they have never been used for error estimation,
nor been extended to three dimensions. The hermitian interpolation can be used
to compute an augmented strain for every element as:
to compute an augmented strain for every element as:

\[
\varepsilon^{*}(x,t) = \Delta H \Phi = B_{H} \Phi , \qquad (3.71)
\]

in which $\Delta$ contains the derivative operators, such as $\frac{d(\cdot)}{dx}$ for one-dimensional elements,
and $B_{H}$ contains the spatial derivatives of the shape functions H. Since the local
error in 3.66 has the units of strain, defining $C(x,t)$ as the stiffness matrix of the
material, its energy norm can be used to estimate the global error as:
\[
||\tilde e_{\varepsilon}(x,t)|| = \sum_{i=1}^{N_{\mathrm{elements}}} \int_{\Omega_i} \tilde e_{\varepsilon}(x,t)\, C(x,t)\, \tilde e_{\varepsilon}(x,t)\, dV . \qquad (3.72)
\]

The main advantage of this formulation of the error with respect to the ZZ formulation
is that it can be used to compute not only a higher order strain, but also a higher
order displacement.

3.2.6 Mesh kinetic and kinematic data transfer based on hermitian interpolation

Once the elements of the macro-scale are refined at lower scales, the elements
at the micro-scale need initialisation of the kinetic and kinematic quantities. In the
literature review section, it has been shown that such a process is not trivial, since the
newly generated mesh needs a consistent transfer of the macro-scale state. In particular,
a consistent transfer has to ensure that:

• The energies transferred do not alter the equilibrium state

• The diffusion of the interpolated quantities is negligible

Figure 3.4: Central difference temporal discretisation on the right. At every moment of the simulation,
the solution is propagated from $t^{n-1}$ to $t^{n}$. The velocities are always half a time-step
ahead of the accelerations. On the left, the proposed multi-scale time stepping scheme
is depicted, for a refinement factor of 2. At every macro time-step the stresses, displacements
and velocities at times $t^{N-1}$ and $t^{N-\frac12}$ are interpolated on the micro scale.
Once the interpolation is complete, the micro-scale is updated 2 times and the corrective
forces are sent back to the macro-scale, where a new $v^{M}_{N+\frac12}$ is computed.

For implicit schemes in which the equilibrium is sought iteratively, a balance step
ensures that the data transferred is consistent, as in [9, 82, 88, 98]. In this section a
different data transfer scheme based on hermitian interpolations is developed. Particular
attention is given to the consistency of the interpolated states between the
meshes, which is ensured without enforcing a balancing step.

The propagation of the solution over one macro time-step is depicted in figure 3.4.
The main assumption of the central difference algorithm is that the acceleration is
constant over half time-steps. Once the solution is computed at time $t^{N}$, the error is
checked using equation 3.72. If the error is higher than a user input tolerance the
time-step is rejected, and a refined mesh is created that will compute a new state of the
mesh at time $t^{N}$. The kinematic constraint based on velocities of the last two equations
in 3.52 prescribes the velocity $v^{N-\frac12}$ on the fine scale, while the main assumption
of constant acceleration over one macro time-step prescribes the acceleration $a^{N-1}$ on
the micro-scale. However, since the velocity is half a time-step ahead of
the acceleration, the starting time of the micro-scale simulation is $t^{N-1+\frac{\Delta t^{M}}{2 r_f}}$, where
$r_f$ indicates the refinement factor. However, the accelerations $a^{N-1}$, in the absence of

Figure 3.5: Comparison between hermitian and linear interpolation for refinement, in linear elas-
tic elements. When using linear shape functions for the interpolation of the macro-
displacements (blue arrows), the resulting elastic strain is constant over the micro-
elements that share the same macro-element domain. When using hermitian shape
functions, such situation is avoided, because of the use of nodal derivatives (orange
arrows) resulting in a better interpolation of the elastic strain on the micro-mesh.

external forces, are computed from the stresses $\sigma^{N-1}$; it is therefore legitimate to
approximate the state at $t^{N-1+\frac{\Delta t^{M}}{2 r_f}}$ with the one at $t^{N-1}$, as depicted in figure 3.4.
approximate the state at tN −1+ 2r , with the one at tN −1 , as depicted in figure 3.4.
To obtain consistent kinetic and kinematic quantities, one of the two sets has to be
directly interpolated (e.g. kinematic), and the other one has to be derived (e.g. ki-
netic). This concept is clear with the example of figure 3.5. In this example two
1D elements with constant elastic strain are refined. If the displacement are linearly
interpolated over the new mesh the elastic strain, and therefore the elastic stress will
be constant over the refined elements that are contained in the same macro element
as shown in picture 3.5a). This contradicts the hypothesis of linear interpolation of
the stress. Using the hermitian interpolation proposed in the previous section, the
lower scale elements can compute a C 3 displacements, and a C 2 elastic strain, which
ensures a consistent interpolation of both kinetic and kinematic quantities, as shown
in figure 3.5b) without the use of a balancing step.
At every macro time-step the dynamic nature of the problem is such that the
macro-scale zones that present a high error evolve in the macro computational domain,
constantly shrinking or expanding. Such a situation means that at the micro-scale there
is a continuous addition and deletion of elements at each macro time-step. Therefore,
the initialisation of the new data is needed only for the newly added elements, since
the old micro-scale elements can inherit their previous solution. The computation of
the coupling forces, at the end of every micro-scale update, both at the micro and
macro scale over a coupling domain $\Omega_c$, ensures the two-way communication between
the domains. In particular, the corrective forces $N^{T} f^{m}_{\mathrm{corr}}$ are used to compute
a corrected acceleration $a^{M}_{N}$ and generate a corrected velocity $v^{M}_{N+\frac12}$.

3.2.7 Adaptive Framework

The proposed error estimation, data transfer and coupling scheme can be simul-
taneously used in a novel adaptive framework for concurrent multi-scale modelling of
dynamic phenomena. The main advantages of the resulting methodology are multiple.
To begin with, an efficient scheme is introduced to detect the size of the micro-scale
domain based on the macro-scale solution, using only nodal quantities for each ele-
ment. A consistent data transfer among the scales ensures the correct energy transfer,
avoiding the use of a balance step solution. Finally, the proposed coupling scheme
avoids the formation of numerical errors at the interfaces between the micro-scale
mesh and the macro-scale one. The proposed complete framework is outlined in flowchart 3.

Flowchart 3. Flowchart for adaptive concurrent explicit framework

1. Initialise macro mesh: set $v^{M}_{0}$, $\sigma^{M}_{0}$; $t^{M} = 0$; compute $M_{M}$

Macro Scale Update: $t^{N-1} \to t^{N}$, $\Delta t^{M}$

2. Evaluate macro nodal forces $f^{M}_{N}$ from equation 3.13

3. Evaluate macro accelerations
   $a^{M}_{N} = (M_{M})^{-1}\left(f^{M,\mathrm{ext}}_{N} - f^{M,\mathrm{int}}_{N}\right)$

4. Update macro nodal velocities
   $v^{M}_{N+\frac12} = v^{M}_{N+\frac12-\eta} + \eta\,\Delta t^{M}\,a^{M}_{N}$, with $\eta = \frac12$ if $N = 0$ and $\eta = 1$ if $N > 0$

5. Overwrite macro nodal velocity for enforced boundary conditions

6. Update macro nodal displacements
   $u^{M}_{N+1} = u^{M}_{N} + \Delta t^{M}\,v^{M}_{N+\frac12}$

7. Check the error condition $\dfrac{E_{el}}{U_{max}}$ for the elements of the macro scale against the user-defined threshold

8. Mark all elements that verify the error condition and their neighbours

9. If any element has been marked go to 10, otherwise go to 13

10. Generate micro-scale mesh

11. Inherit previous results or use the hermitian interpolation for the initialisation
    of the kinetic and kinematic quantities on the micro-scale:
    $u^{m}_{n-1} = H u^{M}_{N-1}$
    $v^{m}_{n-\frac12} = H v^{M}_{N-\frac12}$
    $\sigma^{m}_{n-1} = \sum \int_{\Omega^{m}_{e}} E\, B\, u^{m}_{n-1}\, dV$

Micro Scale Update: $t^{N-1} \to t^{N}$, $\Delta t^{m}$, refer to flowchart 2

Micro Scale Coupling over $\Omega_C$, refer to flowchart 2

Macro Scale Coupling over $\Omega_C$, refer to flowchart 2

12. Update macro scale counter and time

13. If macro scale simulation not complete go to 2.

Figure 3.6: Transition of elements from macro to micro scale based on error detection. At every
coarse time-step the error of every element is compared against a user input threshold.
If any element has a high error it will be selected for refinement. The neighbouring
elements of the original flagged ones are used for coupling purposes.

In summary, at each macro scale time-step the error is computed over all the
elements of the macro-scale mesh. All those for which the error is higher than a pre-determined
threshold are refined at a smaller scale $\Omega_m$, together with their neighbours,
which will be used to enforce the velocity continuity condition, becoming the domain
$\Omega_c$. Such a transition of coarse scale elements from $\Omega_M$ to $\Omega_m$ or $\Omega_c$ is depicted in figure
3.6.
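A possible sketch of the marking stage (steps 7–9 of flowchart 3) is given below; the error indicator, the connectivity layout and the handling of the threshold are illustrative assumptions rather than the actual data structures of the implemented framework.

```python
import numpy as np

def mark_elements(element_error, connectivity, threshold):
    """Flag elements whose error exceeds the threshold, plus their node-sharing neighbours.

    element_error: (n_el,) array of error indicators
    connectivity:  (n_el, nodes_per_el) array of node ids
    Returns a boolean mask: True -> element belongs to the refined/coupling region.
    """
    flagged = element_error > threshold
    marked = flagged.copy()
    flagged_nodes = set(connectivity[flagged].ravel().tolist())
    for e, nodes in enumerate(connectivity):
        if not marked[e] and flagged_nodes.intersection(nodes.tolist()):
            marked[e] = True          # neighbour of a flagged element -> coupling domain
    return marked
```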

3.3 Verification

The reference problem that will be used to validate the numerical simulations in
this chapter is depicted in figure 3.7.
It consists of a rod in which the cross sectional diameter D is smaller than its length
L, loaded along its longitudinal axis. Under the hypothesis that each cross section of
the rod remains plane during the motion and that the stress $\sigma$ is uniform over it, the
displacement of the particles along the rod can be expressed by a scalar function:

u = u(x, t), (3.73)

in which u represents the displacement, x the position along the longitudinal axis and
t is the time. The equilibrium of the body, without considering body forces, is given
by Newton's second law:
\[
\frac{\partial \sigma}{\partial x} = \rho\,\frac{\partial^{2} u}{\partial t^{2}} , \qquad (3.74)
\]
in which $\rho$ represents the material density. Under the assumption of an elastic material,
and relating the strain to the first spatial derivative of the displacement, the
equilibrium equation can be expressed only as a function of u:
\[
c^{2}\,\frac{\partial^{2} u}{\partial x^{2}} = \frac{\partial^{2} u}{\partial t^{2}} , \qquad (3.75)
\]
where
\[
c = \sqrt{\frac{E}{\rho}} , \qquad (3.76)
\]
in which E represents the Young's Modulus of the material and c is the longitudinal
wave speed in the material. The expression in 3.75 is a second-order linear hyperbolic

Figure 3.7: Reference Problem Configuration. A slender bar of length L is subjected to a velocity
boundary condition V(t) while it is free on the opposite face.

partial differential equation for the description of wave propagation in longitudinal


rods.

3.3.1 Analytical Approach

Equation 3.75 can be solved exactly by D'Alembert's method. First, we
introduce two new variables $\underline{\xi} = x - ct$ and $\underline{\eta} = x + ct$. The substitution of these
variables in the original dynamic equilibrium equation leads to:
\[
\frac{\partial^{2} u}{\partial \underline{\xi}\, \partial \underline{\eta}} = 0 . \qquad (3.77)
\]
The solution of equation 3.77 can be expressed as:
\[
u(\underline{\xi}, \underline{\eta}) = F(\underline{\xi}) + G(\underline{\eta}) = F(x - ct) + G(x + ct) . \qquad (3.78)
\]
Equation 3.78 represents the general solution of the problem. The functions F(x)
and G(x) can be obtained from the boundary conditions. Before the reflection
from the boundary it can be shown that the solution is:

\[
v(x,t) = V(x - ct) , \quad 0 \le t \le \frac{L}{c} ; \qquad (3.79)
\]
where V(t) represents the applied velocity boundary condition. The solution is
therefore represented by a forward travelling wave with speed c. In particular, at any
point in time and space the solution is represented by a shift of the initial conditions,
as depicted in figure 3.8. For this reason the wave propagation in elastic rods is said
to be non dispersive.
For forward travelling waves the stress can be derived as:

\[
\sigma(x,t) = E\,\varepsilon = E\,\frac{\partial u}{\partial x} = E\,F'(x - ct) , \qquad (3.80)
\]
Figure 3.8: Analytical solution to the elastic one-dimensional wave propagation problem. At any
point in time the solution is the combination of a forward and a backward travelling
wave
.

while the velocity can be expressed as:
\[
v(x,t) = \frac{\partial u}{\partial t} = -c\,F'(x - ct) . \qquad (3.81)
\]
By eliminating $F'(x - ct)$ we obtain:
\[
\sigma(x,t) = -\rho c\, v(x,t) \quad \mathrm{for} \quad \frac{dx}{dt} = c . \qquad (3.82)
\]

Applying the same procedure for a backward travelling wave we obtain:
\[
\sigma(x,t) = \rho c\, v(x,t) \quad \mathrm{for} \quad \frac{dx}{dt} = -c . \qquad (3.83)
\]

Equations 3.82 and 3.83 describe the relationship between particle velocity and stress
for forward and backward travelling waves. At the boundaries it is possible to show
that:

• At a free end an incoming compressive (or tensile) stress wave is reflected as


tensile (or compressive) stress wave of the same magnitude.

• At a fixed end an incoming compressive (or tensile) stress wave is reflected as


compressive (or tensile) stress wave of the same magnitude.

Figure 3.9: Element configuration for computation of error. The central node 2 can recover its nodal
strain from its neighbouring elements.

Finally, the analytical solution can be summarized as follows: at any space and
time the solution is represented by a forward compressive travelling wave, which is
obtained shifting the initial condition with the variable x − ct. As a result of the
boundary conditions, the wave will be reflected as tensile.
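For reference, the analytical solution used in the comparisons of this chapter can be sketched as follows, assuming the velocity pulse V(t) is applied at x = 0 and the end x = L is free. The travelling wave is written in the equivalent form V(t − x/c), only the first reflection from the free end is superposed, and the trapezoidal pulse parameters are illustrative.

```python
import numpy as np

def trapezoid(t, t_star, t_r, V0=1.0):
    """Trapezoidal velocity pulse of total duration t_star with rise/fall time t_r."""
    t = np.asarray(t, dtype=float)
    ramp_up = np.clip(t / t_r, 0.0, 1.0)
    ramp_down = np.clip((t_star - t) / t_r, 0.0, 1.0)
    return V0 * np.minimum(ramp_up, ramp_down) * ((t >= 0.0) & (t <= t_star))

def analytical_velocity(x, t, L, c, pulse):
    """Particle velocity before and after the first reflection from the free end at x = L.

    At a free end the stress reflects with opposite sign while the particle velocity keeps
    its sign, so the total velocity is the superposition of the incident and reflected pulses.
    """
    v = pulse(t - x / c)                    # forward travelling contribution (eq. 3.79, shifted form)
    v = v + pulse(t - (2.0 * L - x) / c)    # first wave reflected from the free end
    return v

# Example usage (illustrative parameters):
# pulse = lambda tau: trapezoid(tau, t_star=L / (8.0 * c), t_r=L / (80.0 * c))
```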

3.3.2 Hermitian interpolation based error estimator for one-dimensional meshes

In section 3.2.4 a novel hermitian interpolation based error estimator has been
introduced. This section will validate the implementation and performance of such an
error estimator when applied to one dimensional elements with constant strain and one
integration point. At any moment in time, given the nature of the linear shape
functions, the strains are constant over one element and discontinuous across element
boundaries. A nodal continuous strain can be approximated considering two adjacent
elements of lengths $dx_1$ and $dx_2$ comprising the nodes 1, 2, 3 in elements 1 and 2,
as shown in figure 3.9.
Employing a classical central difference formula, the nodal strain at location 2 can
be approximated as:
\[
\varepsilon_2 = \frac{u_3 - u_1}{dx_1 + dx_2} = \frac{u_3 - u_1 + u_2 - u_2}{dx_1 + dx_2} = \frac{u_3 - u_2}{dx_1 + dx_2} + \frac{u_2 - u_1}{dx_1 + dx_2} = \varepsilon^{2}_{2} + \varepsilon^{1}_{2} , \qquad (3.84)
\]
in which the last equality in equation 3.84 is divided into the contributions coming
solely from elements 1 and 2, denoted $\varepsilon^{1}_{2}$ and $\varepsilon^{2}_{2}$ respectively. In such a form the
nodal strain is easier to compute and assemble. For every element i the nodal strain
$\varepsilon^{i}_{j}$ at node j can be transformed from the physical domain $x \in [x_j, x_{j+1}]$ to the
parent element domain strain $\bar\varepsilon^{i}_{j}(\xi)$, $\xi \in [-1, 1]$, as:

\[
\bar\varepsilon^{1}_{2} = \varepsilon^{1}_{2}\,\mathrm{det}J = \left(\frac{u_2 - u_1}{2}\right)\frac{dx_1}{dx_2 + dx_1} , \qquad (3.85)
\]
in which $\mathrm{det}J = \frac{\ell_e}{2}$ represents the determinant of the Jacobian of the transformation
from the parent to the physical domain, equal to half the element length $\ell_e$. The nodal
strain of equation 3.85 can be used to compute a higher order approximation of
the displacement using a hermitian interpolation over the parent element domain,
specialising equation 3.69 for one dimensional elements as:
\[
u^{*}(\xi, t) = \left\{H_1\ H_2\ H_3\ H_4\right\} \begin{Bmatrix} u_1 \\ u_2 \\ \bar\varepsilon_1 \\ \bar\varepsilon_2 \end{Bmatrix} , \qquad (3.86)
\]
in which the hermitian interpolation matrix $H = \{H_1\ H_2\ H_3\ H_4\}$ can be defined as:

\[
H_1 = \tfrac14 \xi^{3} - \tfrac34 \xi + \tfrac12 ,
\]
\[
H_2 = -\tfrac14 \xi^{3} + \tfrac34 \xi + \tfrac12 ,
\]
\[
H_3 = \tfrac14 \xi^{3} - \tfrac14 \xi^{2} - \tfrac14 \xi + \tfrac14 ,
\]
\[
H_4 = \tfrac14 \xi^{3} + \tfrac14 \xi^{2} - \tfrac14 \xi - \tfrac14 . \qquad (3.87)
\]

Equation 3.86 represents a post-processed higher order displacement inside an element,
from which a post-processed strain can be obtained by differentiation with respect to
the variable $\xi$, computing the matrix $B_H$ defined in equation 3.71 as:






\[
B^{H}_{1} = \frac{dH_1}{d\xi}\frac{d\xi}{dx} = \left(\tfrac34 \xi^{2} - \tfrac34\right)\frac{d\xi}{dx} ,
\]
\[
B^{H}_{2} = \frac{dH_2}{d\xi}\frac{d\xi}{dx} = \left(-\tfrac34 \xi^{2} + \tfrac34\right)\frac{d\xi}{dx} ,
\]
\[
B^{H}_{3} = \frac{dH_3}{d\xi}\frac{d\xi}{dx} = \left(\tfrac34 \xi^{2} - \tfrac12 \xi - \tfrac14\right)\frac{d\xi}{dx} ,
\]
\[
B^{H}_{4} = \frac{dH_4}{d\xi}\frac{d\xi}{dx} = \left(\tfrac34 \xi^{2} + \tfrac12 \xi - \tfrac14\right)\frac{d\xi}{dx} . \qquad (3.88)
\]

For a one dimensional element with a single integration point, equation 3.71 can
be specialised by computing 3.88 at the position of the integration point (i.e. $\xi = 0$),
leading to:
\[
\varepsilon^{*} = \bar\varepsilon^{*}(0,t) = \left\{B^{H}_{1}(0)\ \ B^{H}_{2}(0)\ \ B^{H}_{3}(0)\ \ B^{H}_{4}(0)\right\}\begin{Bmatrix} u_1 \\ u_2 \\ \bar\varepsilon_1 \\ \bar\varepsilon_2 \end{Bmatrix}
= \left(-\frac34 u_1 + \frac34 u_2 - \frac14 \bar\varepsilon_1 - \frac14 \bar\varepsilon_2\right)\frac{2}{\ell_e} , \qquad (3.89)
\]
with $\frac{d\xi}{dx} = \frac{2}{\ell_e}$.

Figure 3.10: Representation of the trapezoidal (left) and sinusoidal (right) applied boundary conditions.

E: 207 GPa
ρ: 7.83e-6 kg/mm³
L: 1 mm
Table 3.1: Material and Geometrical properties used in the study of error estimation.
The post-processed strain from equation 3.89 can be substituted in equation 3.66 to
obtain the elemental error at time t. The energy norm of the error over the whole
mesh is defined starting from the element contribution $\tilde e_{\varepsilon}(t)$ as in equation 3.72.
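A compact sketch of the resulting one-dimensional error indicator is given below. It assumes linear two-node elements with one integration point and a unit cross-section, and it simplifies the assembly of the per-element nodal contributions of equations 3.84–3.85 by averaging the recovered nodal strain directly; names and the treatment of the boundary nodes are illustrative.

```python
import numpy as np

def hermitian_error_1d(u, x, E=1.0):
    """Element errors eps* - eps_h and their energy-like norm (eqs. 3.84-3.89 and 3.72)."""
    n_el = len(x) - 1
    le = np.diff(x)                              # element lengths
    eps_h = np.diff(u) / le                      # constant FE strain per element
    # nodal strains recovered by central differences over neighbouring elements (eq. 3.84)
    eps_node = np.zeros(len(x))
    eps_node[1:-1] = (u[2:] - u[:-2]) / (x[2:] - x[:-2])
    eps_node[0], eps_node[-1] = eps_h[0], eps_h[-1]      # one-sided values at the ends
    err = np.zeros(n_el)
    norm = 0.0
    for e in range(n_el):
        # parent-domain nodal strains (eq. 3.85): scale by det J = le/2
        e1 = eps_node[e] * le[e] / 2.0
        e2 = eps_node[e + 1] * le[e] / 2.0
        # post-processed strain at the integration point xi = 0 (eq. 3.89)
        eps_star = (-0.75 * u[e] + 0.75 * u[e + 1] - 0.25 * e1 - 0.25 * e2) * 2.0 / le[e]
        err[e] = eps_star - eps_h[e]
        norm += err[e] * E * err[e] * le[e]      # element contribution to the energy norm
    return err, norm
```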
The proposed error estimation procedure is validated against the analytical solu-
tion of the reference problem using two different excitation functions V (t), namely
trapezoidal and sinusoidal pulse on the same system. The material and geometrical
properties used for the simulations are summarised in table 3.1.
In the first case a trapezoidal velocity pulse is applied, whose total duration is equal to
$t^{*} = \frac{L}{8c}$ and whose rise and fall times are equal to $t_r = t_f = \frac{L}{80c}$, as depicted in figure 3.10.
The wave is propagated through different meshes with an increasing number of nodes and

Figure 3.11: Rate of convergence for different error estimators and trapezoidal boundary condition
at $t = 0.8\,\frac{L}{c}$. The hermitian error estimator presents the same convergence rate as
the ZZ estimator and is close to the analytical error on the considered meshes.

E: 3.4 GPa
ρ: 1.2 g/cm³
L: 1 mm
Table 3.2: Material and Geometrical properties used in the study of the coupling properties.

the error over the whole mesh is compared with different techniques. In particular,
the analytical (true) error in figure 3.11 is relative to the difference between the finite
element and the analytical solution of the problem. The ZZ estimator computes the
error between the finite element and the post-processed solution as in [79]. Finally,
the novel hermitian estimator computes the error as the difference between the finite
element solution and the hermitian post-processed interpolation of the strains in
equation 3.89. The results are reported in figure 3.11 at time $t = 0.8\,\frac{L}{c}$. In the second
case a sinusoidal pulse, whose total duration is equal to $t^{*} = \frac{L}{16c}$, as depicted
in figure 3.10, is analysed with the same technique and the results are summarised in
figure 3.12. In both cases an increased number of nodes reduces the error against the

Figure 3.12: Rate of convergence for different error estimators and sinusoidal boundary condition at
$t = 0.8\,\frac{L}{c}$. The hermitian error estimator presents the same convergence rate as the
ZZ estimator and is close to the analytical error on the considered meshes.

analytical solution. From a quantitative point of view, the rate of convergence was
found to be of order $O(N^{1.5})$, with N the number of nodes in the mesh. Such a convergence rate is
equal to that of the ZZ estimator and of the analytical error.

3.3.3 Coupling Properties

The high frequency dissipation of the framework, demonstrated analytically in
section 3.2.3, can be evaluated numerically against the reference problem introduced
in section 3.3.1. In this case the computational domain is divided into three different
areas, as depicted in figure 3.13. The material and geometrical properties for these
simulations are reported in table 3.2. Half of the bar length is discretised with
elements of length $\ell^{M}_{e} = \frac{1}{100} L$, while the second half is discretised with finer elements
of length $\ell^{m}_{e} = \frac{1}{1000} L$. The continuity condition in terms of velocities is enforced over
a coupling length of size $16\,\ell^{M}_{e}$.

Between the two scales a coupling zone is defined a priori where the velocity

Figure 3.13: Bar Coupled Model. The slender bar is partitioned a priori in a coarse ΩM , fine Ωm
and coupling domain Ωc

continuity is enforced. In these examples a velocity boundary condition is applied on
the last node of the fine scale, as:
\[
V(t) = \bar V(t) + V'(t) , \qquad (3.90)
\]
in which $\bar V(t)$ represents the low frequency content, a trapezoidal pulse whose total
duration is equal to $t^{*} = \frac{L}{24c}$ and whose rise and fall times are equal to $t_r = t_f = \frac{L}{8c}$.
$V'(t)$ represents the high frequency content, modelled as a sinusoidal pulse of the same
duration as the trapezoidal pulse and frequency $f = \frac{c}{10\,\ell^{m}_{e}}$. Since $\ell^{M}_{e} = 10\,\ell^{m}_{e}$,
because of how the mesh is constructed we have that $f = \frac{c}{\ell^{M}_{e}} = f^{M}_{c}$, where $f^{M}_{c}$ is
the cut-off frequency of the coarse scale domain. Therefore the fine scale is able to
represent both the low and the high frequency content, as shown in figure 3.14,
while the wave is moving in the fine part of the domain. The coupling zone effectively
behaves like a filter on which the wave adapts to the coarse scale frequencies, as shown
in figure 3.15, when the wave has already passed through the coupling zone and is
totally contained in the coarse scale domain.
The conservation properties of the coupling zone can be appreciated in figure 3.16,
where the kinetic energy in the computational domain is divided into the coarse and
fine scale contributions. The kinetic energy is computed as the sum of the contributions
from the two domains, therefore its values are valid only when the wave is completely
enclosed in the coarse or the fine scale. A correct interpretation of the values of the
energies in the coupling zone should also consider the weighting parameters. In
particular, the kinetic energy in the fine scale domain portion is initially zero, then it grows
until it reaches a maximum and decreases when the energy is transferred to the coarse
scale domain. On the coarse scale domain the kinetic energy shows a similar trend,
however the maximum value achieved is smaller than the original one from the fine
scale domain. Such dissipation in the frequencies higher than the cut-off frequency
of the coarse scale domain (high frequencies) is beneficial to avoid the generation
of spurious wave reflections, however dissipation in the low frequency band regime,
might alter significantly the energy transfer between the different domains. To assess
the frequency band in which the dissipation is more predominant the power dissipated
in the coupling zone will be compared to the power in the fine scale, in the frequency
domain. The analysis of the velocity response of the nodes inside $\Omega_c$ can help assess
the frequency bands in which the dissipation is predominant. The power dissipated
in the coupling zone is defined in equation 3.62, while the sum of the internal and
kinetic powers in the fine zone only can be derived from equations 3.59 and 3.57.
The energy spectral density of the dissipated power, as in [100], is defined
as the square of the magnitude of the frequency spectrum $P_{diss}(f_k)$ at the discrete
frequency $f_k$:
\[
S_{diss}(f_k) = ||P_{diss}(f_k)||^{2}_{c} , \qquad (3.91)
\]

in which $||(\cdot)||_c$ represents the norm of the complex number $(\cdot)$. The computation
of $P_{diss}(f_k)$ in 3.91 is performed from the original discrete signal in time $P_{diss}(t)$. In
particular, in the fine scale domain the dissipated power is sampled with time-step $\Delta t^{m}$. The
discrete Fourier transform of the power in the coupling zone can be expressed as:

\[
P_{diss}(f_k) = \sum_{j=0}^{N-1} P_{diss}\big((j+1)\,\Delta t^{m}\big)\, e^{-i 2\pi f_k j \Delta t^{m}} , \qquad (3.92)
\]
in which i represents the imaginary unit, N the number of samples of the power
$P_{diss}(t)$ and $f_k$ represents a discrete frequency defined as
\[
f_k = \frac{k}{N \Delta t^{M}} \qquad k \in 0 \ldots N-1 . \qquad (3.93)
\]

The quantity $\frac{1}{N \Delta t^{M}}$ is defined as the resolution of the discrete Fourier transform, and
for the analysis in this section it will be fixed and connected to the fine scale time-step.
The computation of the discrete Fourier transform is performed
using the Fast Fourier Transform (FFT) algorithm implemented in MATLAB. The
same procedure can be applied to compute the Fourier transform of the total power
$P_{tot}(t)$, as the sum of the internal and kinetic powers, in $\Omega_f$, to obtain $P_{tot}(f_k)$ and
subsequently the energy spectral density $S_{tot}(f_k)$. To assess the frequency band in which
the dissipation is predominant, for every discrete frequency an index of dissipated
power can be computed as:
\[
S(f_k) = \frac{S_{diss}(f_k)}{S_{tot}(f_k)} , \qquad (3.94)
\]
in which $0 \le S \le 1$. A value of 0 means that there is no dissipation for that frequency,
while a value of 1 means that all the energy at that frequency has been dissipated
by the coupling. The percentage of the dissipated power at each frequency is shown
in figure 3.17. The graph is clearly separated into two areas: a low frequency regime
(generated by the trapezoidal pulse) and a high frequency regime close to the cut-off
frequency of the coarse scale domain (generated by the truncated sinusoidal pulse).
The FFT analysis is valid because the frequencies of interest, where the energy is
concentrated, are in the coarse scale regime, while the sampling frequency is connected
to the fine scale stable time-step. In the high frequency regime the strong dissipation
confirms the absence of spurious wave reflections, as shown in figure 3.15. At the same time,
in the low frequency regime the dissipation is weaker but it still results in a smaller
total energy transferred to the coarse scale domain, as shown in figure 3.16.
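The thesis performs this analysis with MATLAB's FFT; an equivalent sketch using NumPy, assuming the dissipated and total powers have been sampled at the same time-step, could read as follows (the one-sided spectrum and the zero-power guard are illustrative choices).

```python
import numpy as np

def dissipation_index(P_diss, P_tot, dt):
    """Energy-spectral-density ratio S(f_k) of eqs. 3.91-3.94 from two sampled power signals."""
    P_diss = np.asarray(P_diss, dtype=float)
    P_tot = np.asarray(P_tot, dtype=float)
    S_diss = np.abs(np.fft.rfft(P_diss)) ** 2     # |DFT|^2 (one-sided), eqs. 3.91-3.92
    S_tot = np.abs(np.fft.rfft(P_tot)) ** 2
    freqs = np.fft.rfftfreq(len(P_diss), d=dt)    # f_k = k / (N dt), eq. 3.93
    S = np.divide(S_diss, S_tot, out=np.zeros_like(S_diss), where=S_tot > 0.0)
    return freqs, S
```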

3.3.4 Parametric study of the coupling properties

The parametric study conducted in this section shows the convergence properties
of the proposed coupling framework. The relevant parameters for the coupling are
the extension of the coupled section $\Omega_C$ and the weighting function $\alpha_m$. The general
shape of $\alpha_m$ is modelled as:
\[
\alpha_m = \left(\frac{\xi_c}{\ell_c}\right)^{\gamma} , \qquad (3.95)
\]

Figure 3.14: Elastic wave propagation at $t = 0.5\,\frac{L}{c}$ for the coupled model. The fine scale domain (left) is
able to represent the low and high frequency content of the applied velocity boundary
condition.

in which $\xi_c$ is the coordinate that has its origin on the fine-to-coarse interface and points
towards the coarse-to-fine interface, and $\ell_c$ is the length associated with $\Omega_c$. The
particular form of equation 3.95 makes it suitable to describe a range of different
couplings: constant when $\gamma = 0$, linear when $\gamma = 1$ and polynomial for $\gamma > 1$. In
this study two different values of $\gamma$ are examined: $\gamma = 1$, which corresponds to a
linear weighting function $\alpha^{L}_{m}$, and $\gamma = 2$, which corresponds to a power-law weighting
function $\alpha^{P}_{m}$. Furthermore, for every $\alpha_m$ four different coupling lengths $\ell_c$ are selected,
corresponding to 2, 4, 8 and 16 times $\ell^{M}_{e}$.
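For illustration, the family of weighting functions in equation 3.95 can be evaluated as in the short sketch below; the complementary coarse-scale weight α_M = 1 − α_m is an assumption consistent with a partition of unity over the coupling zone, and the sampling of ξ_c is arbitrary.

```python
import numpy as np

def alpha_m(xi_c, l_c, gamma):
    """Micro-scale weighting function of eq. 3.95 over the coupling zone [0, l_c]."""
    return np.clip(np.asarray(xi_c, dtype=float) / l_c, 0.0, 1.0) ** gamma

# Linear (gamma = 1) and power-law (gamma = 2) weights sampled across a unit coupling zone
xi = np.linspace(0.0, 1.0, 5)
alpha_L = alpha_m(xi, 1.0, 1)
alpha_P = alpha_m(xi, 1.0, 2)
alpha_M_L = 1.0 - alpha_L        # complementary coarse-scale weight (assumed partition of unity)
```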

The simulations are compared in terms of particle velocity along the bar at
$t = 0.8\,\frac{L}{c}$ (figures 3.18 and 3.21), total energy of the system during the analysis (figures
3.19 and 3.22) and dissipation properties, for the linear and power-law coupling weighting
functions respectively. Moreover, following the approach presented in section 3.3.3,
the simulations are compared in terms of the energy spectral density of the fine nodes

Figure 3.15: Elastic wave propagation at $t = 0.8\,\frac{L}{c}$ for the coupled model. The coarse scale domain (right)
can only represent the low frequency content of the applied velocity boundary condition.
However, the high frequency content is mostly dissipated through the coupling condition.

in the coupling and in the fine zone.

Figure 3.18 compares the predictions obtained for the linear weighting function
over different coupling lengths and shows that there is a minimum value of coupling
length required to properly filter the high frequency wave at the interface between
the coarse and the fine scale and, therefore, to avoid the generation of a spurious
wave reflection trapped in the fine scale domain. At the same time, while the rising
and falling time of the trapezoidal pulse are correctly transferred, it is interesting
to notice that the amplitude of the trapezoidal pulse is not correctly propagated to
the coarse scale domain. This behaviour is reflected in the total energy, as shown in
figure 3.19, where the amount of energy transferred to the coarse scale decreases as
the coupling length is shortened, and such energy is trapped in the fine scale domain
in the form of spurious wave reflections. Moreover, a bigger coupling length results,
simultaneously, in a better transfer of the low frequency content to the coarse scale

Figure 3.16: Evolution of the total energy for the coupled problem. The micro-scale total energy is
initially zero, then it reaches a maximum and decreases when the energy is transferred
to the coarse scale domain. The high frequency content of the energy is dissipated, but
its contribution to the total energy is negligible.

and in a stronger dissipation in the coarse scale. Such an effect, however, is beneficial
in avoiding the generation of spurious wave reflections. Therefore, increasing the coupling length
results in a clearly convergent framework where the energy dissipation only affects the
high frequencies. The quantitative analysis of such dissipation can be carried out, for
every coupling length, as outlined in section 3.3.3. The results are summarised in figure
3.20, where different trends can be observed. In particular, increasing the coupling
length the dissipation in the low frequency regime is weakened, enhancing the passage
of low frequency content from the fine to the coarse scale. This behaviour can be observed in
the spatial domain in figure 3.18, where the trapezoidal pulse at the coarse scale is
better represented with increasing coupling length. At the same time, the dissipation
at frequencies close to the cut-off frequency of the coarse domain is more effective
with bigger coupling lengths. This behaviour results in less spurious wave reflection
in the fine scale domain, as shown in figure 3.18.

Figure 3.17: Ratio of the Energy Spectral Density (ESD) of the dissipated and total power, for linear
coupling and 16 coarse scale elements in the coupling zone. The dissipation is more
important at frequencies closer to the cut-off frequency of the coarse domain.

When using a power-law weighting, similar trends are observed: a bigger
coupling length has a beneficial effect on the filtering of the high frequencies, as
shown in figure 3.21. Figure 3.22 shows that the transfer of total energy from micro
to macro-scale is less affected by the coupling length when using a power law. Such an
effect is confirmed by the analysis in the frequency domain summarised in figure 3.23.
The conclusions of such analysis are coherent with the previous observations: higher
coupling lengths lead to smaller dissipation in the low frequency regime and to higher
dissipation in the high frequency regime. However, when comparing the linear and
the power-law dissipation in the frequency domain, both of these behaviours are more
pronounced for the power law, ultimately resulting in a better energy transfer and less spurious wave
generation for the power-law case.
Figure 3.24 compares power-law and linear coupling for a coupling length of $8\,\ell^{M}_{e}$,
in which the use of the power law results in a better dissipation of the high frequencies
as well as a better representation of the pulse at the coarse scale. When comparing
the total energies in figure 3.25, this effect is clear at the macro scale, where the artificial
energy dissipation is lower when using the power function. The conclusion of the
parametric study is that the use of the coupling layer always results in a con-
Figure 3.18: Elastic wave propagation at $t = 0.8\,\frac{L}{c}$ for the linear weighting function at different coupling
lengths. When the coupling length is increased the fine scale domain presents less
spurious oscillations. At the same time, the trapezoidal pulse is better transferred to the
coarse scale domain.

vergent behaviour, dissipating only some relatively high frequencies that cannot be
represented at the coarse scale. However, the dissipative frequencies of the coupled
domain can be tuned using an ad-hoc weighting function, which can result in a better
transfer of the energies between the two domains.

3.3.5 Adaptive dynamic concurrent multi-scale 1D framework

The numerical implementation and performance of the building blocks of the
dynamic concurrent multi-scale framework have been numerically investigated in sections
3.3.3 and 3.3.2 for one dimensional elements with one integration point. The
defined error estimator and the coupling among different scales can be used in the
adaptive concurrent multi-scale framework defined in flowchart 3. The geometrical
and material properties of the modelled bar are summarised in table 3.2, and the initial
mesh has a constant element size of $\ell_e = \frac{1}{100} L$. The applied velocity boundary

Figure 3.19: Total energy in coarse and fine scale domain when using a linear weighting function
and different coupling lengths. The amount of energy trapped in the fine scale domain
is negligible compared to the total energy, however when using small coupling lengths,
there is a stronger energy dissipation.

conditions are defined in section 3.3.2 and correspond to two waveforms, namely a
trapezoidal and a quarter of sinusoidal pulse.

At every coarse time-step of the simulation the error is assessed on the coarse
scale based on the actual coarse scale kinetic properties. In figure 3.26, for the case
of the trapezoidal pulse, the single scale solution in red presents a significant amount
of error in the form of numerical oscillations. Such error is effectively captured by the
devised error estimator, and corrected at a lower scale with a refinement factor of
10. The black dots in figures 3.26, 3.27, 3.28 and 3.29, show the size of the fine
scale domain at that coarse scale time-step. The proposed data transfer schemes and
coupling among the scales do not generate spurious wave oscillations, and retain the
correct shape of the wave even after the reflection as shown in figure 3.27. Similar
conclusions can be drawn in the case of a sinusoidal pulse in which the size of the fine
scale domain generated at every coarse time-step is still comparable to the width of
the pulse. In figure 3.28, even if the amount of numerical error is smaller, the fine

Figure 3.20: Energy dissipation index as defined in equation 3.94 for linear weighting function at
different coupling lengths. Increasing the length of the coupling zone has a double
beneficial effect. On the one hand, the low frequency content is retained, because of
the smaller dissipation at higher coupling lengths, leading to a better transfer of energies
between macro and micro scale. On the other hand, the high frequency content is more
effectively dissipated resulting in the absence of spurious wave generation.

Figure 3.21: Elastic wave propagation at $t = 0.8\,\frac{L}{c}$ for the power-law weighting function at different
coupling lengths. When increasing the coupling length, the fine scale domain presents
less spurious oscillations. At the same time the trapezoidal pulse is better transferred
to the coarse scale domain.

Figure 3.22: Total energy in coarse and fine scale domain when using a power law weighting function
and different coupling lengths. The amount of energy trapped in the fine scale domain
is negligible compared to the total energy, however when using small coupling lengths,
there is a stronger energy dissipation.

Figure 3.23: Energy dissipation index as defined in equation 3.94 for the power-law weighting function
at different coupling lengths. Increasing the length of the coupling zone has a double
beneficial effect. On the one hand, the low frequency content is retained, because of the
smaller dissipation at higher coupling lengths, leading to a better transfer of energies
between macro and micro scale. On the other hand, the high frequency content is more
effectively dissipated, resulting in the absence of spurious wave generation.

Figure 3.24: Elastic wave propagation at $t = 0.8\,\frac{L}{c}$ for the power-law and linear weighting functions
using the same coupling length of $8\,\ell^{M}_{e}$. The use of the power-law weighting function
decreases the amount of spurious wave reflection and transfers the trapezoidal pulse
more accurately to the coarse scale.

Figure 3.25: Total energy in coarse and fine scale domains when using power law and linear weighting
functions. The use of the power law weighting function transfers more accurately the
energy at the macro-scale domain, resulting in less energy dissipation.

scale is still activated and the error corrected, even after the reflection as shown in
figure 3.29. Following the approach outlined in section 3.2.5, it is possible to define
an error using the nodal velocities of the FEM solution and the analytical solution of
section 3.3.1. The error on the nodal velocities can be defined as:
\[
e_v(x,t) = v(x,t) - v_h(x,t) , \qquad (3.96)
\]
in which $e_v$ represents the error at every nodal position between the finite element
solution $v_h(x,t)$ and the analytical solution $v(x,t)$. In this section no post-processing
will be applied, since the analytical solution is readily available. The energy norm of
this error, $||(\cdot)||$, as defined in section 3.2.5, using matrix notation is equal to:

||ev (t)|| = ev (x, t)T M ev (x, t), (3.97)

in which the norm has the dimensions of energy, and is zero when the FEM solution
is equal to the analytical solution, and non-zero otherwise. It is important to notice
that this error has only a time dependency since it encapsulates in the sum the spatial
dependency. Equation 3.97 is computed for the mono and multi-scale simulations,
using mono and multi-scale computed velocities, respectively. In figures 3.30 and 3.31
the improvement of the multi-scale simulation is noticeable. In particular, while the
mono-scale simulation exhibits a growing error over time, the multi-scale simulation,
in both cases, shows a smaller error almost constant over time.
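Equation 3.97 can be evaluated directly once the analytical velocities are sampled at the nodes; a minimal sketch, assuming a lumped (diagonal) mass matrix stored as a vector, is:

```python
import numpy as np

def velocity_error_norm(v_fem, v_exact, M_lumped):
    """Energy norm of the nodal velocity error (eqs. 3.96-3.97) with a lumped mass vector."""
    e_v = v_exact - v_fem                       # eq. 3.96 evaluated at the nodes
    return float(e_v @ (M_lumped * e_v))        # e_v^T M e_v, eq. 3.97
```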

3.4 Concluding remarks

This chapter has described the mathematical framework of a novel adaptive dy-
namic concurrent multi-scale approach. Wave propagation in solids can be treated
mathematically as a hyperbolic partial differential equation in its strong form. However,
when discretised in time and space, while the hyperbolic nature of the problem is
preserved, numerical errors are introduced. While different methodologies have been
proposed in literature to couple several computational domains using overlapping
techniques, the formulation of an efficient coupling is still an open research field.

Figure 3.26: Comparison of conventional and multi-scale analysis for trapezoidal pulse case. The
multi-scale simulation presents qualitatively a smaller numerical error.

Figure 3.27: Comparison of conventional and multi-scale analysis for trapezoidal pulse case after
the reflection from the free boundary. The simultaneous effect of data transfer and
coupling does not affect the shape of the wave.

Figure 3.28: Comparison of conventional and multi-scale analysis for sinusoidal pulse case. The
multi-scale simulation presents qualitatively a smaller numerical error.

Figure 3.29: Comparison of conventional and multi-scale analysis for sinusoidal pulse case after the
reflection from the free boundary. The simultaneous effect of data transfer and coupling
does not affect the shape of the wave.

Figure 3.30: Comparison of the error in the case of conventional and multi-scale simulation for trape-
zoidal pulse computed from analytical solution. The error of the multi-scale simulation
is sensibly reduced.

Figure 3.31: Comparison of the error in the case of conventional and multi-scale simulation for sinu-
soidal pulse computed from analytical solution. The error of the multi-scale simulation
is sensibly reduced

In this chapter two main improvements have been presented for the formulation
of a coupling condition between two differently discretised domains. The first one
regards the application of the weighting functions after the discretisation in time and
space, allowing the use of arbitrarily continuous elements while retaining the original
eigenvalues (as well as the conditional stability properties) of the computational mesh,
as if no coupling was applied. Subsequently an explicit formulation of the coupling is
proposed that does not require the resolution of a system of equations. It is proved
that such coupling is dissipative, with a stronger influence in the range of frequencies
not representable by the coarsest scale in the model. The developed coupling scheme
has been subsequently used in an adaptive framework in which a novel Hermitian
interpolation based error estimator is used, allowing for a fast error detection on the
computational mesh. The results on different simulations show that the proposed
error estimation can be used to efficiently detect the areas of the coarse scale model
that need refinement. Moreover, the coupling among the scales avoids the spurious wave
oscillations intrinsic to adaptive dynamic finite element simulations. Lastly, the data
transfer scheme constructed allows a consistent transfer of information between the
micro and macro-scale, reducing significantly the error of a conventional mono-scale
simulation.
The implementation of this novel framework in a three-dimensional setting is
not trivial. In the first instance, a complete third order polynomial in three dimensions
is formed by 64 coefficients, while the combined conditions on the displacements and
the derivatives of the displacements for a hexahedron sum up to only 32 coefficients.
Moreover, the refinement of hexahedra involves the generation of complex meshes
that can result in heavily distorted elements. Lastly, further research is needed to
assess the validity of the coupling in three dimensional simulations where different
types of waves, such as longitudinal, shear and Rayleigh waves, interact simultaneously.
In the next chapter these challenges will be tackled to formulate a three-dimensional
dynamic concurrent adaptive framework.

Chapter 4

A novel dynamic adaptive concurrent multi-scale framework for 3D wave propagation in homogeneous media

In this chapter the previously described adaptive concurrent multi-scale framework


is applied to 3D simulations. Particular attention is given to the methodology used
for the generation of computational meshes during the computation and the defini-
tion of three-dimensional hermitian shape functions for the interpolation of kinematic
quantities from different meshes. These contributions are implemented in a novel
framework that, from a computational standpoint, privileges the possibility to include
different refinement criteria as well as the efficient communication among different
scales. The methodology is verified simulating elastic wave propagation in bars and
validated against experimental results obtained on uniaxial tensile specimens involving
strain localisation due to their inelastic deformation.

4.1 Uniform strain hexahedron element for explicit simulations

In the previous chapter a novel adaptive concurrent multi-scale framework has


been introduced for the analysis of wave propagation in elastic media. In particular
the general procedure is outlined in flowchart 3. In this chapter the full 3D form of

Figure 4.1: Configuration of hexahedron 8-nodes element in parent element domain.

Node Label ξ η ζ
1 -1 -1 -1
2 1 -1 -1
3 1 1 -1
4 -1 1 -1
5 -1 -1 1
6 1 -1 1
7 1 1 1
8 -1 1 1

Table 4.1: Uniform strain Hexahedron node numbering and position

the proposed framework will be analysed and tested on hexahedral meshes.

The 8-node brick elements are widely used in explicit computational mechanics.
When fully integrated they tend to present volumetric locking, in which the displacements
are underpredicted by large factors. A common solution for such behaviour
is the use of selective integration, in which the volumetric stress is integrated
using a single quadrature point (reduced integration), while the deviatoric stress
relies on full integration. However, the use of reduced integration for both deviatoric
and volumetric stresses results in the appearance of zero energy spurious deformation
modes, known as hourglassing. Flanagan and Belytschko [101] proposed an under-
integrated hexahedron element in which the hourglassing is controlled by using an

artificial stiffness that opposes the zero energy modes. This element formulation,
efficient and stable at the same time, will be used as well in this doctoral thesis.
The element configuration in the parent domain coordinate system is depicted in
figure 4.1 and table 4.1. The element shape functions are represented by the product
of linear shape functions in the directions ξ, η and ζ as reported in equation 4.1 in
which ξi , ηi and ζi represent the coordinate of the node i in the parent element
domain.
\[
N_i = \frac18 (1 + \xi \xi_i)(1 + \eta \eta_i)(1 + \zeta \zeta_i) . \qquad (4.1)
\]
In particular the equation 4.1 can be used to define a three dimensional interpolation,
for the displacement field u(ξ, t) as:

\[
u_{3\times1} = N_{3\times24}\, u^{e}_{24\times1} , \qquad (4.2)
\]

in which the dimensions of each matrix are reported as subscripts (number of rows
$\times$ number of columns). The 24 rows on the right hand side of equation 4.2 are
given by the presence of 3 dimensions for each of the 8 nodes of the hexahedron.
Expanding the arrays u and $u^{e}$ along the three orthogonal directions as:
\[
u_{3\times1} = \begin{Bmatrix} p \\ q \\ r \end{Bmatrix} , \qquad
u^{e}_{24\times1} = \begin{Bmatrix} p^{e}_{8\times1} \\ q^{e}_{8\times1} \\ r^{e}_{8\times1} \end{Bmatrix} , \qquad (4.3)
\]
in which $p^{e}$, $q^{e}$ and $r^{e}$ are column vectors which store the nodal components of the
displacements along $\xi$, $\eta$ and $\zeta$, respectively. On the other hand p, q and r represent the
interpolated values of the displacement in $\xi$, $\eta$ and $\zeta$. Equation 4.2 can be written in
its expanded form as:
\[
\begin{Bmatrix} p \\ q \\ r \end{Bmatrix} =
\begin{bmatrix} N_{1\times8} & 0_{1\times8} & 0_{1\times8} \\ 0_{1\times8} & N_{1\times8} & 0_{1\times8} \\ 0_{1\times8} & 0_{1\times8} & N_{1\times8} \end{bmatrix}
\begin{Bmatrix} p^{e}_{8\times1} \\ q^{e}_{8\times1} \\ r^{e}_{8\times1} \end{Bmatrix} . \qquad (4.4)
\]

In equation 4.4, 0 stands for a matrix with entries equal to 0, and N is defined as:

\[
N = \left\{ N_1\ \ldots\ N_i\ \ldots\ N_8 \right\} , \qquad (4.5)
\]

in which the components Ni are defined in 4.1. This element presents several analogies
with its 1D counterpart, however the refinement and error estimation procedures
applied in the previous chapter are not directly applicable in a 3D context.
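A minimal sketch of equation 4.1, using the nodal coordinates of table 4.1, is given below; the assertions simply verify the Kronecker-delta and partition-of-unity properties and are included for illustration only.

```python
import numpy as np

# Parent-domain nodal coordinates of the 8-node hexahedron (table 4.1)
XI_N = np.array([[-1, -1, -1], [ 1, -1, -1], [ 1,  1, -1], [-1,  1, -1],
                 [-1, -1,  1], [ 1, -1,  1], [ 1,  1,  1], [-1,  1,  1]], dtype=float)

def shape_functions(xi, eta, zeta):
    """Trilinear shape functions N_i of eq. 4.1 evaluated at a parent-domain point."""
    return 0.125 * (1 + xi * XI_N[:, 0]) * (1 + eta * XI_N[:, 1]) * (1 + zeta * XI_N[:, 2])

# Sanity checks: Kronecker-delta property at a node and partition of unity at the centroid
assert np.allclose(shape_functions(*XI_N[2]), np.eye(8)[2])
assert np.isclose(shape_functions(0.0, 0.0, 0.0).sum(), 1.0)
```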

4.2 A local spatial error estimator based on 3D hermitian interpolation

Following the approach outlined in section 3.2.5 for linear 1D elements, a higher
order hermitian interpolation is sought for the 3D hexahedron element. In particular,
the equation 3.69 will be specialized for this element, both in terms of the matrix
H(ξ) and Φ(t). Following the approach of equation 4.4 an hermitian interpolation
on an hexahedron can be defined as:
\[
\begin{Bmatrix} p^{*} \\ q^{*} \\ r^{*} \end{Bmatrix} =
\begin{bmatrix} H_{1\times32} & 0_{1\times32} & 0_{1\times32} \\ 0_{1\times32} & H_{1\times32} & 0_{1\times32} \\ 0_{1\times32} & 0_{1\times32} & H_{1\times32} \end{bmatrix}
\begin{Bmatrix} \Phi^{p}_{32\times1} \\ \Phi^{q}_{32\times1} \\ \Phi^{r}_{32\times1} \end{Bmatrix} , \qquad (4.6)
\]

in which H represents the hermitian interpolation matrix restricted to a single component Φ^p, Φ^q or Φ^r of the total vector of degrees of freedom Φ. Simultaneously, p^*, q^* and r^* represent the components of the interpolated hermitian displacement u^*. Since the matrix H is the same for every component, this section will focus on its definition. The analysis will focus on a single component p of the nodal displacements; however, the hermitian interpolation matrix H(ξ) can be used for the other two components of the displacements in a similar manner. Following the labelling defined in table 4.1 and the notation introduced in equation 4.1, for the hexahedron element depicted in figure 4.1 the vector Φ^p(t) contains the nodal displacements and the derivatives of such displacements with respect to the parent spatial variables ξ, η and ζ, as a vector containing 32 entries:
$$\Phi^p(t) = \begin{Bmatrix} p_1 & \dots & p_8 & \dfrac{dp_1}{d\xi} & \dots & \dfrac{dp_8}{d\xi} & \dfrac{dp_1}{d\eta} & \dots & \dfrac{dp_8}{d\eta} & \dfrac{dp_1}{d\zeta} & \dots & \dfrac{dp_8}{d\zeta} \end{Bmatrix}^T_{32\times 1}. \tag{4.7}$$

Using equation 4.7, equation 4.6 can be restricted to a single dimension as:

$$p^* = H_{1\times 32}\, \Phi^p_{32\times 1}, \tag{4.8}$$

in which the spatial and time dependencies can be derived from equation 3.69.
The matrix H can be expressed as the product of two matrices: one, G, containing only polynomial terms of the form ξ^i η^j ζ^k, and a matrix of constant numerical coefficients X, leading to:

$$p^* = G_{1\times 32}\, X_{32\times 32}\, \Phi^p_{32\times 1}. \tag{4.9}$$

Therefore the hermitian interpolation is fully defined once the matrices G and X are specified. Since the hermitian interpolation represents a third order polynomial in the spatial coordinates ξ, η and ζ, the natural definition of the elements of G would be all the terms that satisfy the condition:

$$G_{ijk} = \xi^i \eta^j \zeta^k : \quad 0 \le i, j, k \le 3. \tag{4.10}$$

However, the definition in 4.10 leads to a matrix G of dimensions 1×64, which is therefore not adequate for use in equation 4.9. An approach to define a suitable interpolation is to selectively reduce the order of the expansion in equation 4.10, eliminating higher order terms. This approach was proposed by Melkes and is known in the finite element literature as the Adini-Clough element. Such an approach has been proved efficient in 2D simulations, but this expansion has not been used in three dimensions. Melkes proved that for every dimension d a reduced form of the polynomial 4.10 is characterized by terms having a degree less than or equal to 3 in each dimension and a degree larger than one in no more than one dimension. Such conditions lead to:

$$G_{ijk} = \xi^i \eta^j \zeta^k : \quad 0 \le i, j, k \le 3, \quad i + j + k \le 5. \tag{4.11}$$

With equation 4.11 at hand, it is possible to define the array G as:

$$G = \begin{Bmatrix} G_{000} & \dots & G_{113} \end{Bmatrix}_{1\times 32}. \tag{4.12}$$
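A minimal sketch of how the reduced monomial basis could be enumerated is given below. It is based on the verbal condition stated above (degree at most 3 in each direction and degree larger than one in at most one direction), which yields exactly 32 terms; the ordering of the exponents is an arbitrary choice of this sketch, not the thesis convention:

```python
from itertools import product

def melkes_exponents():
    """Exponent triples (i, j, k) of the reduced cubic basis described above:
    each exponent at most 3, and at most one exponent larger than 1."""
    exps = [(i, j, k)
            for i, j, k in product(range(4), repeat=3)
            if sum(e > 1 for e in (i, j, k)) <= 1]
    assert len(exps) == 32
    return exps

def G_row(xi, eta, zeta):
    """Row vector of the 32 monomials xi^i eta^j zeta^k (equation 4.12)."""
    return [xi**i * eta**j * zeta**k for (i, j, k) in melkes_exponents()]

print(len(G_row(0.5, -0.2, 0.1)))   # 32
```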

To completely define the matrix H, the numerical coefficients of X must be determined. Such coefficients can be determined by imposing the Kronecker delta condition at the nodal positions, meaning that the hermitian shape functions should evaluate to one at their own nodal position and zero at the others [12]. Such conditions lead to:

$$p^* = p_i = G(\xi_i, \eta_i, \zeta_i)\, X\, \Phi^p \qquad i = 1 \dots 8, \tag{4.13}$$

in which the index i runs over the eight nodes of the hexahedron. Imposing this condition to be valid for a generic Φ^p, equation 4.13 can be written as:

$$G(\xi_i, \eta_i, \zeta_i)\, X = U_k \qquad i = k = 1 \dots 8, \tag{4.14}$$

in which U_k represents an array of 32 entries, which is zero everywhere but at position k, where it assumes a value of 1. For example, for node 1, taking into consideration table 4.1, equation 4.14 can be expressed as:

$$\begin{Bmatrix} -1 & \dots & -1 \end{Bmatrix}_{1\times 32} \begin{bmatrix} X_{1\,1} & \dots & X_{1\,32} \\ \vdots & \ddots & \vdots \\ X_{32\,1} & \dots & X_{32\,32} \end{bmatrix}_{32\times 32} = \begin{Bmatrix} 1 & 0 & \dots & 0 \end{Bmatrix}_{1\times 32}. \tag{4.15}$$

Repeating the same procedure for the other seven nodes, it is possible to obtain a system of 8×32 = 256 equations in 32×32 = 1024 unknowns.

The remaining conditions can be found by imposing the Kronecker delta condition on the derivatives of the displacement with respect to the three spatial directions, so that they match the nodal values. These conditions can be expressed as:

$$\begin{cases} \dfrac{dp^*}{d\xi} = \dfrac{dp_i}{d\xi} = \dfrac{dG}{d\xi}\, X = U_k & i = 1 \dots 8, \quad k = 9 \dots 16, \\[2ex] \dfrac{dp^*}{d\eta} = \dfrac{dp_i}{d\eta} = \dfrac{dG}{d\eta}\, X = U_k & i = 1 \dots 8, \quad k = 17 \dots 24, \\[2ex] \dfrac{dp^*}{d\zeta} = \dfrac{dp_i}{d\zeta} = \dfrac{dG}{d\zeta}\, X = U_k & i = 1 \dots 8, \quad k = 25 \dots 32, \end{cases} \tag{4.16}$$

in which the array G is differentiated with respect to the variables ξ, η and ζ using the expressions from equation 4.10. Similarly to equation 4.14, each one of the equations in 4.16 leads to 8×32 conditions. Therefore the union of the conditions expressed in 4.14 and 4.16 leads to a system of 1024 equations in 1024 unknowns.
The actual resolution of this system of linear equations is reported in Appendix A, using the software MATLAB. Once X has been determined, H can be computed as the product GX. Finally, the matrix H, which defines the hermitian interpolation, is used to assemble the block interpolation of equation 4.6.
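The determination of X can also be condensed into a few lines of numerical linear algebra. The following Python sketch is a stand-in for (not a copy of) the MATLAB resolution of Appendix A; it reuses the exponent set and node ordering of the previous sketch and assumes the resulting system is non-singular, as found in the thesis. Stacking the value and derivative conditions of equations 4.14 and 4.16 into a 32×32 matrix A gives A X = I, hence X = A⁻¹:

```python
import numpy as np
from itertools import product

# Reduced cubic exponent set (see previous sketch) and parent-domain nodes.
EXPS = [(i, j, k) for i, j, k in product(range(4), repeat=3)
        if sum(e > 1 for e in (i, j, k)) <= 1]
NODES = [(-1, -1, -1), (1, -1, -1), (1, 1, -1), (-1, 1, -1),
         (-1, -1, 1), (1, -1, 1), (1, 1, 1), (-1, 1, 1)]

def monomials(xi, eta, zeta):
    return np.array([xi**i * eta**j * zeta**k for i, j, k in EXPS])

def d_monomials(xi, eta, zeta, axis):
    """Derivative of each monomial with respect to xi (axis=0), eta (1) or zeta (2)."""
    rows = []
    for i, j, k in EXPS:
        e = [i, j, k]
        if e[axis] == 0:
            rows.append(0.0)
            continue
        c = e[axis]
        e[axis] -= 1
        rows.append(c * xi**e[0] * eta**e[1] * zeta**e[2])
    return np.array(rows)

# Conditions 4.14 and 4.16: values and the three derivatives at the 8 nodes.
A = np.vstack([monomials(*n) for n in NODES] +
              [d_monomials(*n, axis=a) for a in (0, 1, 2) for n in NODES])

# A X = I, hence X = A^-1 (the ordering of the columns follows EXPS above).
X = np.linalg.inv(A)
H = lambda xi, eta, zeta: monomials(xi, eta, zeta) @ X   # 1x32 hermitian row, as in eq. 4.9
```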

Once the matrix H has been determined, the degrees of freedom in Φ need to be evaluated. Since the operations are equivalent for each direction, only the array Φ^p will be defined. The first 8 components of such array represent the standard degrees of freedom of the standard under-integrated hexahedron, p^e. To determine the derivative terms, the approach used in section 3.2.5 will be extended to the three-dimensional space. As an example, consider the 8 elements of figure 4.2; the aim is to determine the approximations of dp/dξ, dp/dη and dp/dζ for the central node of the assembly, defined as the query node and labelled as I. The node I can be isolated from the assembly together with its neighbour nodes, labelled from II to VII, and using a central difference approximation the derivatives of the displacement p_I with respect to the parent

Figure 4.2: Derivation of the approximation of the derivative terms in the vector Φ for the node
I. Employing a central difference scheme, an approximation for such terms can be
computed only considering the neighbouring nodes.

domain coordinates can be approximated as:

$$\begin{cases} \dfrac{\partial p_I}{\partial \xi} = \dfrac{p_{III} - p_{II}}{4}, \\[2ex] \dfrac{\partial p_I}{\partial \eta} = \dfrac{p_{V} - p_{IV}}{4}, \\[2ex] \dfrac{\partial p_I}{\partial \zeta} = \dfrac{p_{VII} - p_{VI}}{4}, \end{cases} \tag{4.17}$$
in which it is recalled that the length of an edge in the parent element domain is equal to 2. In the case of nodes on an external facet of the computational domain, the same procedure can be adopted employing a backward or forward difference formula. Once the derivatives at the nodal points are computed using equation 4.17, the array Φ can be assembled and the hermitian interpolation of the displacement field computed.
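A possible implementation of this finite-difference evaluation of the derivative degrees of freedom is sketched below in Python; the dictionary-based data structures and the handling of boundary nodes via one-sided differences are assumptions of this sketch, not the thesis code:

```python
def nodal_parent_derivatives(p, neighbours):
    """Approximate dp/dxi, dp/deta, dp/dzeta at each node (equation 4.17).

    p          : dict {node: displacement component along one direction}
    neighbours : dict {node: ((II, III), (IV, V), (VI, VII))}, the pairs of
                 neighbouring nodes along the -/+ xi, eta and zeta directions;
                 an entry of None on a boundary triggers a one-sided difference.
    """
    derivs = {}
    for node, pairs in neighbours.items():
        d = []
        for minus, plus in pairs:
            if minus is not None and plus is not None:   # central difference
                d.append((p[plus] - p[minus]) / 4.0)      # neighbours are 2 apart in parent coords
            elif plus is not None:                        # forward difference
                d.append((p[plus] - p[node]) / 2.0)
            elif minus is not None:                       # backward difference
                d.append((p[node] - p[minus]) / 2.0)
            else:
                d.append(0.0)
        derivs[node] = tuple(d)
    return derivs

# Usage on a toy layout: node 0 with neighbours only along xi.
p = {0: 0.0, 1: -1.0, 2: 1.0}
nbrs = {0: ((1, 2), (None, None), (None, None))}
print(nodal_parent_derivatives(p, nbrs))   # {0: (0.5, 0.0, 0.0)}
```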

4.3 An efficient refinement algorithm for 3D hexahedral meshes

The rate of error for elastic problems using hermitian interpolation has been discussed in section 3.3.2. Such rate is used for the evaluation of the refinement factor for the new finer mesh. In one dimension the refinement procedure is trivial, since two adjacent elements only share the original node. However, this is not the case for 3D hexahedral meshes, where two adjacent elements share newly generated nodes along edges and faces, as depicted in figure 4.3. Moreover, in traditional refinement schemes the refined mesh is required to respect the conformity condition, meaning that two

Figure 4.3: The refinement procedure for 3D elements generates shared nodes between the original elements. This condition does not arise in 1D meshes.

or more elements at the interface between the coarse and fine mesh share a face, an edge or a point, as shown in figure 4.3.

Two different methods have been proposed in the literature to create meshes that respect such a condition. In [11, 102], an octree-based refinement scheme is proposed, in which, depending on the position of the element within the refinement level, different transition elements are created between the coarse and fine scale meshes, as shown in figure 4.4. The placing of the transition elements plays a crucial role in the generation of conformal meshes. On the other hand, Belytschko [103] and recently Bui et al. [104] proposed a method that does not use transition elements: the non-conformal node degrees of freedom are forced to follow the displacement of the original master nodes. Neither of these methods addresses the spurious wave generation and propagation at the interface between the newly created domains. In this work, the new mesh is generated during the computation according to the refinement factor assigned from the coarse scale computation, without the use of transition elements. The coupling between the two non-conformal meshes is obtained using the explicit formulation of the coupling forces described in section 3.2.

The refinement technique proposed in this work is based on an isotropic refinement in the parent element domain in the three orthogonal directions ξ, η and ζ. Once
Figure 4.4: Refinement procedure proposed in [11] where different element templates are used to
generate a conforming mesh.

the element has been refined in its parent domain, the position of the new nodes in the physical domain is mapped using the standard shape functions. Such a technique avoids the creation of distorted elements at the finer scale, as shown in figure 4.5. The computational efficiency of the algorithm resides in avoiding the merging of, and searches for, repeated nodes at shared faces or edges between two or more elements. Such a process is avoided by introducing two new entities for the faces and edges of the flagged elements: every element will inherit or create new nodes over a face or an edge depending on whether or not such entities have been previously refined by a neighbouring element. The complete refinement algorithm is reported in flowchart 4, and a sketch of its core steps is given after the flowchart.

Flowchart 4. Flowchart for refinement of flagged elements

1. Loop over elements to refine

2. Create nodes in parent element domain according to refinement factor

3. Map new nodes on deformed configuration

4. If any face or edge of the selected element has been refined by another
element, inherit labels

5. Create connectivity matrices for the fine scale elements

6. Go to 1
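A minimal Python sketch of steps 2-3 of the flowchart is given below (only a single element is refined and the node coordinates are illustrative). Step 4 then amounts to storing, per shared face or edge entity, the labels of the nodes already created, so that a neighbouring flagged element reuses them instead of searching for duplicates:

```python
import numpy as np

XI_NODES = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                     [-1, -1,  1], [1, -1,  1], [1, 1,  1], [-1, 1,  1]], float)

def trilinear_map(x_nodes, xi):
    """Map a parent-domain point xi = (xi, eta, zeta) onto the deformed element
    defined by its 8 nodal coordinates x_nodes (8x3), using equation 4.1."""
    N = 0.125 * (1 + xi[0]*XI_NODES[:, 0]) * (1 + xi[1]*XI_NODES[:, 1]) * (1 + xi[2]*XI_NODES[:, 2])
    return N @ x_nodes

def refine_in_parent(x_nodes, factor):
    """Steps 2-3 of Flowchart 4: create a regular (factor+1)^3 grid of nodes in the
    parent domain and map it onto the current (possibly deformed) configuration."""
    ticks = np.linspace(-1.0, 1.0, factor + 1)
    grid = np.array([[i, j, k] for k in ticks for j in ticks for i in ticks])
    return np.array([trilinear_map(x_nodes, xi) for xi in grid])

# Usage: refine a slightly distorted hexahedron by a factor of 2.
x_nodes = XI_NODES.copy()
x_nodes[6] += [0.2, 0.1, 0.3]          # perturb one corner to mimic a deformed element
fine_nodes = refine_in_parent(x_nodes, factor=2)
print(fine_nodes.shape)                 # (27, 3)
```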

Figure 4.5: Schematic representation of the refinement algorithm proposed. The flagged elements (a)
are firstly refined in their parent element domain (b), and then mapped on the deformed
configuration (c)

4.4 A novel 3D framework for dynamic adaptive concurrent multi-scale simulations

In the previous sections the error estimation, mesh refinement, data transfer and communication among the different scales have been completely described in the three-dimensional space. However, the efficiency of the dynamic adaptive concurrent multi-scale methodology depends heavily on its implementation, as described in this section. On the one hand the novel framework has to guarantee excellent performance in terms of computational efficiency, while on the other hand it has to be general enough to fulfil both the aim and the objectives of the present research.

The fundamental block upon which such a framework is built is the mono-scale explicit formulation of finite elements described in flowchart 1, in which the solution of a single length and time scale Ω^M is propagated at each time-step until a certain user-defined end time is reached. The time integration over a single time-step is condensed in one block in figure 4.7 as t^M → t^{M+1}. Once the simulation has advanced over a single time-step, the quality of the solution can be assessed through different algorithms such as error estimation or strain/stress localisation. Even if different, the criteria to highlight an element for refinement make use of the propagated solution and the current configuration; therefore, to guarantee the versatility of the framework, this procedure is executed after one time increment.
If the different refinement criteria implemented highlight different portions of the coarse domain Ω^M, different micro-scales Ω^m_1 ... Ω^m_n will be generated. As an example, in figure 4.6, Ω^M generates two micro-scales Ω^m_1 and Ω^m_2. The refined meshes, together with the assigned time-step, their data and the nodal weights, constitute a scale. In terms of hierarchy, Ω^M is thought of as a parent for its child micro-scales. The communication between the scales allows one child domain to exchange information, over the coupling volume, only with its parent. Therefore the weighting parameter will blend the energies between one child/finer domain and its parent, as shown in flowchart 4.7. Every child micro-scale, after one increment with its own time-step, will enforce the velocity compatibility condition of equation 3.52, using as macro velocities the ones of its parent domain Ω^M. Once the finer scales reach the same time as their parent, the coupling term of equation 3.31 is applied on the parent domain over the different coupling volumes, allowing a two-way communication. In the simplest configuration a coarse domain generates one or more fine scales; however, the general situation is that every domain at each scale can generate different child entities, always using the same communication scheme. As an example, in flowchart 4.7 and figure 4.6 the domain Ω^m_2 generates Ω^m_3, over which the same communication implemented above is established.
The proposed framework and hierarchy can couple different scales and discretisation schemes, resulting in a versatile scheme that is easy to maintain. In this thesis, the coupling between one macro and one micro scale, both discretised with the explicit formulation of the finite elements, will be studied. This coupling will allow a better understanding of the stability properties and performance of the overall scheme.
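The recursive parent/child communication described above can be summarised by the following conceptual Python sketch. Class and method names are hypothetical, and the velocity-compatibility and coupling-force routines of equations 3.52 and 3.31 are represented only by placeholders:

```python
class Scale:
    """Minimal sketch of the parent/child hierarchy of figure 4.6:
    each scale owns its own time-step, current time and list of children."""
    def __init__(self, name, dt):
        self.name, self.dt, self.time, self.children = name, dt, 0.0, []

    def spawn(self, name, refinement):
        child = Scale(name, self.dt / refinement)
        self.children.append(child)
        return child

    def integrate_one_step(self):
        self.time += self.dt   # placeholder for the explicit update of flowchart 1

    def enforce_velocity_compatibility(self, parent):
        pass                   # placeholder for equation 3.52 over the coupling volume

    def apply_coupling_forces_on(self, parent):
        pass                   # placeholder for the coupling term of equation 3.31

    def advance(self):
        """Advance this scale by one of its own time-steps, sub-cycling its children."""
        self.integrate_one_step()
        for child in self.children:
            while child.time < self.time - 1e-12:
                child.advance()                          # grandchildren sub-cycle recursively
                child.enforce_velocity_compatibility(self)
            child.apply_coupling_forces_on(self)         # two-way communication once synchronised

# Usage mirroring figure 4.6: Omega_M -> (Omega_m1, Omega_m2), Omega_m2 -> Omega_m3.
macro = Scale("Omega_M", dt=1.0e-3)
m1, m2 = macro.spawn("Omega_m1", 5), macro.spawn("Omega_m2", 5)
m3 = m2.spawn("Omega_m3", 2)
for _ in range(3):
    macro.advance()
print(macro.time, m1.time, m2.time, m3.time)   # all scales reach the same time
```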

Figure 4.6: Representation of the proposed adaptive concurrent multi-scale framework. The coarse scale Ω^M generates two micro-scales Ω^m_1 and Ω^m_2 that communicate only with their parent domain (the communication is represented by the dashed arrows). The child/fine scales can recursively generate finer domains; in this case Ω^m_2 is the parent of Ω^m_3, which employs the same communication scheme.

4.5 Applications

In this section different applications are examined using the proposed adaptive multi-level scheme in space and time. Particular attention is given to the results in terms of stability and generation of numerical artefacts at the interface between the differently discretised domains.

4.5.1 Square bar without lateral inertia

The first application is the direct 3D transformation of the problem examined in section 3.3.1. In this case the square bar is analysed with a sequence of hexahedrons. The geometrical features of the bar and mesh are reported in table 4.2 and figure 4.8. The coarse scale is represented by elements of height h = L/50 and width w = D, thus one single element over the cross section is included. The bar is impacted from one side, and the contact is modelled as a half sinusoidal velocity boundary condition. The absence of a Poisson's ratio eliminates every lateral inertia effect, making it

1 2
Figure 4.7: Flowchart for the proposed adaptive concurrent multi-scale framework. The coarse scale ΩM generates two micro-scales Ωm and Ωm
that communicate only with their parent domain (the communication is represented by the dashed arrows). The child/fine scales can
2 3
recursively generate finer domains, in this case Ωm is the parent of Ωm , which employ the same communication scheme.
E    210 GPa
ρ    7.85 × 10⁻⁶ kg/mm³
L    500 mm
D    80 mm

Table 4.2: Material and geometry used for the simulation of the square bar.

possible to assess the correct implementation of the framework in a 3D setting. The fine scale is generated by prescribing an error threshold which translates to a refinement factor of 5.

Figure 4.8: Geometry of the square cross sectional bar.

Figure 4.9 shows a qualitative comparison between the multi-scale and macro-scale simulations at different time-steps. At every macro time-step, the error criterion flags almost all the elements affected by the wave which present an error higher than the user-input threshold, together with their neighbours for coupling purposes. In the left side of the figure the wave is represented over the computational domain at a certain macro time-step. These elements and their neighbours are refined in the micro-scale simulation. The coupling between the micro and the macro-scale computational meshes is achieved using the already defined blending parameter α over the coupling elements, as shown in figure 4.10. Such weights are linearly distributed over the coupling length, selected as one element of the coarse scale mesh. The distribution of the
weights over the micro-scale domain is shown in the upper part of figure 4.10. The results in terms of longitudinal stresses and longitudinal velocities for the coarse solution and the fine multi-scale solution are quantitatively compared in figures 4.11, 4.12, 4.13 and 4.14. The results of the multi-scale simulation show less numerical dispersion, present in the mono-scale simulation in the form of oscillations behind the main pulse. The presence of the coupling parameter blends the frequencies close to the coupling zone, annihilating the spurious oscillations due to the abrupt change in mesh resolution. At a global level, the solution of the multi-scale simulation is stable, as shown in figures 4.15 and 4.16, where the kinetic and elastic energies are compared for the multi- and mono-scale simulations. It is also important to highlight that the total energy of the system (after the pulse has been totally transmitted into the bar at t = 0.04 ms) is constant along the simulation, even during the rebound of the compression wave at the free end of the bar (t = 0.15 ms), and therefore the non-dissipative implementation of the coupling is verified again through this validation case.

Figure 4.9: Case of square bar without lateral inertia. Comparison between the mono-scale and multi-scale spatial distributions of σz at t = 0.0789 ms (a) and t = 0.1407 ms (b). The yellow line represents the axis over which the velocities will be plotted. On the lateral plane the analytical solution for this problem, at the same time, is depicted. The micro-scale simulation is activated over the entire portion of the domain that is affected by the wave. Its extension is dictated by the macro-scale elements that present a high error, together with their neighbours.

Figure 4.10: Configuration of the coupling parameter α for the refined mesh at t = 0.0789 ms. The
coupling parameter varies linearly over the length of one macro-scale element.

Figure 4.11: Case of square bar without lateral inertia. Comparison of stresses for conventional and multi-scale simulation at t = 0.0789 ms.

Figure 4.12: Case of square bar without lateral inertia. Comparison of velocities for conventional and multi-scale simulation at t = 0.0789 ms.

Figure 4.13: Case of square bar without lateral inertia. Comparison of stresses for conventional and multi-scale simulation at t = 0.1407 ms.

Figure 4.14: Case of square bar without lateral inertia. Comparison of velocities for conventional and multi-scale simulation at t = 0.1407 ms.

Figure 4.15: Case of square bar without lateral inertia. Comparison between the kinetic energy for
both the mono-scale and multi-scale simulations. The two curves show very similar
trends meaning that the energy is globally the same.

Figure 4.16: Case of square bar without lateral inertia. Comparison between the elastic energy for
both the mono-scale and multi-scale simulations. The two curves show very similar
trends meaning that the energy is globally the same.

4.5.2 Lateral Inertia Effect in thick square bar

The absence of a Poisson's ratio in the previous loading case did not allow the formation and propagation of lateral inertia effects. Stress propagation in real bars always presents a component of lateral inertia that creates a physical wave dispersion. Such dispersive effects invalidate the planar, one-dimensional propagation hypothesis, and they become more significant as the ratio between the radius of the bar and the length of the loading pulse Λc grows beyond 0.2 [105]. In this section it will be proved that the proposed refinement scheme, together with the coupling, filters only the non-physical spurious wave reflections while retaining the physical dispersion due to the lateral inertia effects.
The geometrical features of the bar are reported in table 4.2. Differently from the previous case, a Poisson's ratio of ν = 0.33 is included in the material properties. The coarse scale is represented by elements of height h = L/50 and width w = D/4, thus four elements over the cross section are included. The bar is impacted from one side, and the contact is modelled as a quarter sinusoidal velocity boundary condition of amplitude A = 12 m/s and period T = 0.08 ms, while the opposite face is longitudinally constrained. The geometrical and loading conditions lead to a ratio between the radius of the bar and the length of the loading pulse of r/Λc ≈ 0.4.
Since an analytical solution is not available for this simulation, the mono-scale coarse simulation will be compared against a finer mesh where the elements present a height h = L/350 and a width w = D/28, corresponding to a refinement factor of 7 with respect to the coarse scale. Figures 4.17 and 4.18 compare the two simulations in terms of longitudinal velocities along the central axis of the bar at two different times, namely before and after the rebound. Because of the symmetry properties of the bar there are no significant lateral velocities along the central axis, but they are visualised in the contour plot of figure 4.19 at t = 0.05 ms. The coarse scale is not able to represent well the lateral motion of the bar, because of its poor discretisation of the cross section.

Figure 4.17: Comparison among the mono-scale fine simulation and the coarse simulation at t = 0.05 ms. The coarse scale cannot capture the magnitude of the particle velocity along the bar.

Figure 4.18: Comparison among the mono-scale fine simulation and the coarse simulation at t = 0.125 ms. After the wave rebound the coarse scale cannot track the particle velocity along the bar.

Figure 4.19: Contour plot of lateral velocity for coarse (top) and fine (bottom) simulation at t =
0.05 ms. The macro-scale simulation cannot capture the lateral motion of the bar.

Figure 4.20: Contour plot of lateral velocity for fine mono-scale (top) and multi-scale (bottom)
simulation at t = 0.05 ms. The two simulations show similar results.

The coarse scale mesh is used as the basis for a multi-scale simulation, where the pre-defined error tolerance requires a refinement factor of 7 such that the lateral motion of the bar will be well represented. The multi-scale and the fine mono-scale simulations are compared in terms of longitudinal velocities along the central axis of the bar at t = 0.05 ms in figure 4.21 and t = 0.125 ms in figure 4.22, showing a perfect correlation between them and validating the results predicted by the multi-scale simulation. For further verification, figure 4.20 compares the lateral velocities obtained again from the fine mono-scale and the multi-scale simulations, presenting perfect agreement. Therefore, the proposed coupling among the different scales is able to represent the mechanical behaviour of the structure without creating numerical artefacts. Differently from the previous testing case in section 4.5.1, the error in the coarse scale is

Coarse mono-scale simulation    1
Multi-scale simulation          52.75
Fine mono-scale simulation      90.25

Table 4.3: Comparison of the computational times for the simulation of the square bar, including lateral inertia effects. Even if the refined domain grows at every time-step until it encloses the whole computational domain, the computational time is still lower than that of the mono-scale fine simulation.

not distributed over a limited portion of the computational domain; instead, all the elements that are behind the wave front are highlighted for refinement. In particular, figure 4.20 is a snapshot of the coarse and fine scale domains at t = 0.05 ms. Taking into consideration the longitudinal velocity of sound in the material, expressed as

$$c = \sqrt{\frac{E(1-\nu)}{\rho(1+\nu)(1-2\nu)}} = 6295\ \mathrm{m/s},$$

the wave front will be at z = ct = 314 mm in the Z direction, which explains why more than half of the total coarse scale domain of 500 mm is refined at a finer scale. The growing nature of the multi-scale simulation affects its computational time; nevertheless, it is still 60% smaller when compared to the mono-scale fine simulation. The three computational times are compared in table 4.3.
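The arithmetic behind the quoted wave speed and wave-front position can be checked with a short sketch using the material data of table 4.2 and ν = 0.33:

```python
E, nu, rho = 210.0e9, 0.33, 7850.0            # Pa, -, kg/m^3
c = (E * (1 - nu) / (rho * (1 + nu) * (1 - 2 * nu))) ** 0.5
z = c * 0.05e-3                               # wave-front position at t = 0.05 ms
print(f"{c:.0f} m/s, {z * 1e3:.0f} mm")       # about 6296 m/s and 315 mm, in line with
                                              # the ~6295 m/s and ~314 mm quoted above
```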
The previous simulations presented an absence of numerical spurious oscillations due to the weighting parameter function α applied over a coupling volume, always selected as one macro-scale element. However, the behaviour of the weighting function, even if prescribed a priori, can be modified. For the case under examination, three different weighting functions are compared. Using the same notation as the 1D validation case depicted in figure 3.6, defining the characteristic length of Ω^C as ℓ_c and ξ_c as the direction that points from Ω^m to Ω^M, the three proposed couplings are

$$\alpha_m^L = \frac{\xi_c}{\ell_c}, \qquad \alpha_m^C = 1, \qquad \alpha_m^P = \left(\frac{\xi_c}{\ell_c}\right)^4, \tag{4.18}$$

in which α_m^L represents a linear weighting function (used for all the simulations) over ξ_c, α_m^C represents a constant function and α_m^P a power-law function with exponent 4. The three proposed functions are represented in figure 4.23.

Figure 4.21: Comparison among the multi-scale simulation and the mono-scale fine simulation at t
= 0.05 ms. The elements flagged for refinement are the ones behind the wave front.

Figure 4.22: Comparison among the multi-scale simulation and the mono-scale fine simulation at t
= 0.125 ms (after the reflection). At this time-step the whole bar has been flagged for
refinement.

Figure 4.23: Plot of the proposed weights for the parametric study of the weighting function αm .

Figures 4.24 and 4.25 compare the three different couplings in terms of longitudinal velocities along the bar at t = 0.05 ms. In figure 4.24 the solutions for the linear coupling and the constant coupling are compared. When using α_m^C, even if the simulation still remains stable, a spurious wave propagates in the computational domain, generating artificial high-frequency oscillations of the solution. Figure 4.25 compares the use of α_m^L and α_m^P, from which it is evident that there is no substantial difference between the two simulations.
This parametric study shows that the effect of the blending function α_m is fundamental for the correct coupling of the two domains and proves that the selection of the weighting parameters is not trivial, as it can result in spurious wave reflection. Considering the parametric studies carried out in this research, a suggestion for a general form of a correct coupling can be expressed as


$$\alpha_m = \left(\frac{\xi_c}{\ell_c}\right)^{\gamma}, \qquad 0 < \gamma \le 4. \tag{4.19}$$
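For reference, the three couplings of equation 4.18 and the general power-law form of equation 4.19 can be condensed in a short sketch (parameter names are illustrative):

```python
def alpha(xi_c, ell_c, gamma=1.0):
    """General power-law blending of equation 4.19: alpha_m = (xi_c / ell_c)**gamma,
    with 0 < gamma <= 4. gamma = 1 recovers the linear weight alpha_m^L used in the
    simulations; a constant weight (alpha_m^C = 1) is what generated the spurious
    oscillations discussed above."""
    return (xi_c / ell_c) ** gamma

# The three couplings of equation 4.18, sampled across a coupling volume of length ell_c.
ell_c = 1.0
samples = [i / 10 for i in range(11)]
alpha_L = [alpha(x, ell_c, 1.0) for x in samples]
alpha_P = [alpha(x, ell_c, 4.0) for x in samples]
alpha_C = [1.0 for _ in samples]
print(alpha_L[-1], alpha_P[5], alpha_C[0])   # 1.0, 0.0625, 1.0
```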

Figure 4.24: Comparison among the multi-scale simulations for two different weighting parameters, namely α_m^C and α_m^L, at t = 0.05 ms. The constant coupling shows a significant amount of spurious wave reflections.

Figure 4.25: Comparison among the multi-scale simulations for two different weighting parameters, namely α_m^L and α_m^P, at t = 0.05 ms. The two couplings show very similar performances without the generation of spurious waves.

4.5.3 Slender Circular bar

As mentioned in the previous section, stress propagation in real bars always presents a component of lateral inertia that creates a physical wave dispersion. Such dispersive effects invalidate the planar, one-dimensional propagation hypothesis; however, in some cases such dispersion is negligible and 1D theories are still valid. In particular, lateral inertia components are negligible when the ratio between the diameter of the bar D and the total length of the loading pulse Λ, represented in figure 2.1, is smaller than 0.2 [105]. Therefore, the simulation of planar waves using a standard explicit finite element formulation can result in an incorrect estimation of the stresses and velocities in the bar when coarse computational meshes are used.

E    210 GPa
ρ    7.85 × 10⁻⁶ kg/mm³
ν    0.33
L    500 mm
D    10 mm

Table 4.4: Material and geometry used for the simulation of the slender circular bar.

As an example, consider a circular steel bar whose geometrical and material properties are reported in table 4.4. The bar is hit on one side, and the contact is modelled with a quarter sinusoidal wave of period T = 0.06 ms and amplitude A = 12 m/s, while the bar is supported on the other side. It is possible to verify that this configuration respects the hypothesis of planar wave propagation. The mesh along the cross section of the bar is depicted in figure 4.26. Two parameters are used to define the mesh, namely the number of divisions along one edge of the internal square n_sq and the number of divisions along the diagonal n_diag. Subsequently, the 2D mesh is extruded along the direction orthogonal to the plane according to a parameter n_thick, obtaining a hexahedral mesh for the whole computational domain.
Figure 4.27 compares the solutions of the coarsest and the finest computational meshes in terms of longitudinal stress along the central axis of the bar, with increasing number

Figure 4.26: Geometrical divisions for a section of the circular bar. The cross section mesh is defined by the number of divisions in the square n_sq and the number of divisions on the diagonal n_diag.

of elements, at t = 0.05 ms. Increasing the number of elements in both the cross section and along the bar axis improves the quality of the solution. The recovered error for these simulations, reported in table 4.5, shows a decreasing trend whose rate is similar to that of the 1D constant strain element. However, as expected, the computational time increases when decreasing the mesh size.

n_sq × n_diag × n_thick    Computational time    Recovered error* [kN·mm]
5 × 2 × 25                 1                     0.02867
5 × 2 × 50                 2                     0.01703
15 × 6 × 100               10.5                  0.003186
15 × 6 × 150               55                    0.002386
20 × 24 × 200              200                   0.001529

Table 4.5: Study on the computational time and error trends varying the mesh size for wave propagation in a slender circular bar. *The recovered error is computed at time t = 0.05 ms.

The coarsest mesh (5 × 2 × 25) is used as the basis for an adaptive concurrent multi-scale simulation aimed at an error less than or equal to 0.2‰, corresponding to a refinement factor of 5.

Figure 4.27: Comparison of the axial stress along the bar at t = 0.05 ms. The coarser mesh presents a higher amount of numerical error when compared to the solution with a finer mesh.

Figure 4.28: Cross section of the refined geometry and coarse scale. The proposed refinement algorithm can efficiently refine complex shapes; however, it cannot represent the real geometry.

Figure 4.29: Comparison of the longitudinal stress at the small refined scale (upper mesh) and the coarse scale simulation (lower mesh) at t = 6.36e-2 ms. The refinement criterion highlights the elements affected by the wave, and triggers a multi-scale simulation whose results are reported in the top half of the image. Since the finer mesh can capture with a better resolution the evolution of the longitudinal stress over the computational domain, it is represented with a smoother variation when compared with the coarse scale simulation in the bottom half.

Figure 4.28 shows a comparison between the refined and the original coarse scale meshes. The proposed refinement algorithm does not create distorted or badly shaped elements, so the simulation is carried out without additional error due to highly distorted elements. However, the procedure is not able to recover the initial round geometry, reproducing instead the linear approximation of a circle around the edge of the circular section. The error estimator flags at the coarse scale only the elements excited by the wave at each time step. The subsequent fine scale simulation, represented in the top half of figure 4.29, presents a smoother variation of the stress gradient, given by the better resolution achievable with the larger number of elements. This is compared against the coarse scale simulation, reported in the bottom half of figure 4.29, where the lower resolution in the longitudinal stress is qualitatively noticeable in the less smooth transitions among different elements in the Z direction. From a quantitative point of view, the solutions of the multi-scale and coarse scale simulations are compared in figures 4.30, 4.31, 4.32 and 4.33 in terms of axial velocity before
and after the reflection at the supported side of the bar. The multi-scale simulation represents a solution closer to the numerical solution predicted by the fine scale simulation presented in figure 4.27. As expected, after the reflection the wave changes from positive velocities to negative velocities of the same magnitude, to ensure that on the supported end of the bar the velocities are always zero. Both the mono-scale and the multi-scale simulations are able to capture the magnitude of the wave; however, the mono-scale simulation results in an artificial increment of velocity after the passage of the wave due to numerical dispersion, as shown in figure 4.33. At the reflection, the coarse scale simulation presents a high error localised at z = 500 mm, which is alleviated by the multi-scale simulation as shown in figure 4.37.
The stability of the multi-scale simulation is proved by comparing the elastic and kinetic energies of the fine mono-scale and multi-scale simulations in figures 4.34 and 4.35, whose trends show a very good agreement. Finally, the multi-scale simulation, when compared to the mono-scale fine simulation, presents an improvement in computational cost of 133%.
The multi-scale response of the bar is compared with a fine mono-scale simulation in figure 4.36. The longitudinal velocities at t = 6.37e-2 ms are compared, and the two solutions show perfect agreement. The delay of the multi-scale simulation with respect to the mono-scale simulation is only caused by a small difference in the numerical time step. It is interesting to note that for this particular configuration, in which the lateral inertia effects do not make a significant contribution, the presence of cusps along the outer edge of the multi-scale simulation does not alter the correctness of the solution.

Figure 4.30: Case of slender circular bar. Comparison of longitudinal velocities for conventional and
multi-scale simulation at t = 2.53e-2 ms

Figure 4.31: Case of slender circular bar. Comparison of longitudinal velocities for conventional and
multi-scale simulation at t = 6.37e-2 ms

Figure 4.32: Case of slender circular bar. Comparison of longitudinal velocities for conventional and
multi-scale simulation at t = 8.91e-2 ms.

Figure 4.33: Case of slender circular bar. Comparison of longitudinal velocities for conventional and
multi-scale simulation at t = 1.40e-01 ms.

Figure 4.34: Elastic Energy for the whole domain as function of time, in the case of a slender bar.
The similar trends show that the multi-scale simulation is stable.

Figure 4.35: Kinetic Energy for the whole domain as function of time, in the case of a slender bar.
The similar trends show that the multi-scale simulation is stable.

Figure 4.36: Comparison between multi-scale simulation and mono-scale fine simulation in terms of
longitudinal velocities at t = 6.37e−2 ms. The two simulations show perfect agreement,
confirming the validity of the proposed framework.

Figure 4.37: Case of slender circular bar. Comparison of longitudinal velocities for conventional and
multi-scale simulation at t = 1.4e-1 ms.

Figure 4.38: Split Hopkinson Bar apparatus configuration. A tensile pulse is generated in the inci-
dent bar which loads the specimen. The transmission bar serves as momentum trap for
the apparatus.

4.5.4 Dynamic plastic localisation in dog-bone specimen

The characterisation of the dynamic tensile properties of materials is generally accomplished with a device called the tensile split Hopkinson pressure bar. The experimental set-up is composed of two elastic bars, the so-called incident and transmission bars, as schematically shown in figure 4.38. The specimen is mechanically gripped between both bars and the incident bar is pulled to produce an elastic tensile pulse which propagates towards the specimen. Once the pulse reaches the interface between the incident bar and the specimen, part of it is reflected into the bar and part of it is transmitted to the specimen and afterwards into the transmission bar [106].

Test results are validated and interpreted following some general assumptions. First and foremost, the incident and the transmission bars have to remain elastic through the whole test. Moreover, a purely one-dimensional, planar wave propagation is enforced in the incident and transmission bars by assuring a ratio D/Λ ≪ 0.2. In order to assess the validity of the tests, dynamic force equilibrium within the specimen is required. This ensures that deformation is uniform across the specimen and that initial inertia effects are overcome. Such a condition is generally verified after the test, checking that the axial forces at the two bar ends have the same value within a tolerance. As a rule of thumb, four or five reflections of the wave are required to ensure dynamic equilibrium. The original Kolsky bar apparatus has been modified heavily to apply not only compressive but also tensile and torsional loading conditions [106].

The application under consideration is the dynamic tensile test of a steel specimen
in a tensile split Hopkinson bar. The focus of the application will be the correct

Figure 4.39: Specimen design for the simulation of dynamic plastic localisation in dog-bone spec-
imen, with dimensions in millimetres (above). The specimen is pulled from one face
while axially supported on the opposite face (below).

simulation of the evolution of the elastic and plastic response of the specimen. The specimen is the only modelled part of the whole apparatus, while the response of the bars is taken into account through the application of suitable boundary conditions at the opposite sides of the specimen. Such an approach for the simulation of the split Hopkinson bar has been validated in [107]. The presence of a fillet between the shoulder and the gauge section avoids the formation of interface waves between the different sections, as shown in figure 4.39. The interaction of the incident bar with the specimen is modelled through an applied velocity boundary condition over one face of the specimen, while the interaction with the transmission bar is approximated as an axial support.
Given the relative dimensions of the bar and the specimen, the applied velocity boundary condition can be idealised as a constant function in time with amplitude A = 10 m/s. In dynamic simulations, sharp rise times in the boundary conditions are generally smoothed as they could lead to excessive noise in the simulation. The application of a smoothing function results in the boundary condition

$$\begin{cases} V = A\,\tau^3\,(10 - 15\tau + 6\tau^2) & \text{if } \tau \le 1, \\ V = A & \text{if } \tau > 1, \end{cases} \tag{4.20}$$

in which τ = t/t_s is defined as the ratio between the actual time and a smoothing time t_s = 0.013 ms. This particular smoothing function is widely used and documented in [108]. The resulting boundary condition is depicted in figure 4.40.
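A small sketch of the smoothed boundary condition of equation 4.20, with the amplitude and smoothing time quoted above, could read:

```python
def smoothed_velocity(t, A=10.0, t_s=0.013e-3):
    """Quintic smooth-step ramp of equation 4.20: V = A tau^3 (10 - 15 tau + 6 tau^2)
    for tau = t / t_s <= 1, and V = A afterwards (times in seconds, A in m/s)."""
    tau = t / t_s
    if tau >= 1.0:
        return A
    return A * tau**3 * (10.0 - 15.0 * tau + 6.0 * tau**2)

# The ramp starts and ends with zero slope, limiting the high-frequency content
# injected into the explicit simulation.
print(smoothed_velocity(0.0), smoothed_velocity(0.0065e-3), smoothed_velocity(0.02e-3))
# 0.0, 5.0 (half amplitude at tau = 0.5), 10.0
```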

Figure 4.40: Smoothed applied boundary condition for dog-bone dynamic plastic localisation.

E      197 GPa
ρ      7.85 × 10⁻⁶ kg/mm³
ν      0.33
σ_Y0   270 MPa
E_T    250 MPa

Table 4.6: Elastic-plastic isotropic steel material properties used in the simulation of the dog-bone specimen.

Figure 4.41: Coarse mesh for dog-bone specimen with ℓ_M = 0.7 mm.

Figure 4.42: Comparison between the normal forces at the two opposite faces of the dog-bone spec-
imen. When t > 0.05 ms the two forces are equal and opposite, therefore the condition
of dynamic equilibrium is achieved.

                                      Coarse     Fine       Multi-scale
Computational time                    20         1263       1091
Final diameter [mm]                   2.156      2.014      1.994
Maximum equivalent plastic strain     6.422e-1   8.436e-1   8.421e-1

Table 4.7: Comparison between the coarse, fine and multi-scale simulations. While the mono-scale fine simulation has a significantly bigger computational time than the coarse one, the multi-scale simulation yields a solution in perfect agreement with the fine scale one, with a saving in terms of computational time of 14%.

The material used for this simulation is silver steel, whose plastic behaviour is approximated with an isotropic, strain-rate-independent linear hardening law. The material parameters are reported in table 4.6. The presented geometry is meshed using linear hexahedrons with two different mesh densities: the coarse mono-scale simulation presents an approximate mesh size of 0.7 mm, as shown in figure 4.41, meanwhile the fine mono-scale simulation presents a characteristic length of 0.23 mm. The dynamic equilibrium is verified by summing the nodal forces on the nodes of the pulled face and comparing them against those of the supported face. From figure 4.42 it is possible to verify that dynamic equilibrium is achieved at t ≈ 0.05 ms. Significant and localised
lateral displacements cause necking of the specimen at t = 0.2 ms. Different predictions were obtained for the coarse and the fine mono-scale simulations, as compared in table 4.7. The coarse mono-scale simulation predicts a final diameter of 2.15 mm and a maximum plastic strain of 0.6422, as shown in figure 4.43, meanwhile the fine scale simulation predicted a final diameter of 2.014 mm and a maximum equivalent plastic strain of 0.8436, as shown in figure 4.44. The coarse mesh results in a stiffer behaviour of the specimen, but it presents a considerably lower cost in terms of computational time.
The coarse scale mesh is used as a basis for a multi-scale simulation in which the focus is to correctly represent the elastic and plastic behaviour of the specimen. To this aim, a different criterion for flagging the elements, based on the plastic strain level, is implemented in the developed framework and set at a value of 0.02 equivalent plastic strain. At every time-step of the coarse scale simulation the current plastic strain of each macro-scale element is compared against the user-defined value. If the plastic strain is bigger than the given threshold, the element is flagged for refinement, together with its first neighbours, starting the multi-scale process. Once the elements of the coarse scale are flagged and refined, the elastic part of the solution is interpolated on the new mesh using the scheme proposed in section 4.2, while the only state variable of the constitutive model (the equivalent plastic strain) is linearly interpolated from the integration points of the old mesh to the integration points of the new mesh.
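A minimal sketch of this plasticity-based flagging criterion is given below; the dictionary-based data structures are hypothetical and only illustrate the threshold-plus-first-neighbours logic described above:

```python
def flag_for_refinement(eq_plastic_strain, neighbours, threshold=0.02):
    """Flag every coarse element whose equivalent plastic strain exceeds the
    user-defined threshold P*, together with its first neighbours (which also
    host the coupling volume of the new fine scale).

    eq_plastic_strain : dict {element: current equivalent plastic strain}
    neighbours        : dict {element: iterable of adjacent elements}
    """
    flagged = {e for e, p in eq_plastic_strain.items() if p > threshold}
    for e in list(flagged):
        flagged.update(neighbours.get(e, ()))
    return flagged

# Usage with a hypothetical 1D chain of coarse elements.
strains = {0: 0.0, 1: 0.01, 2: 0.05, 3: 0.03, 4: 0.0}
adjacency = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 4] for i in range(5)}
print(sorted(flag_for_refinement(strains, adjacency)))   # [1, 2, 3, 4]
```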
In the multi-scale simulation the first elements to be flagged are the ones at the interface between the fillet and the gauge length, close to the pulled side of the specimen. Here the plastic wave is generated and, in the subsequent time-steps, it expands in the gauge section until it reaches the fillet on the opposite side. Figure 4.45 depicts the evolution of the plastic wave on both the coarse and the fine scale. In particular, it is possible to notice that the coupling, once the plastic wave has fully developed, is enforced over the fillet regions.

The multi-scale simulation results at t = 0.2 ms, in terms of maximum plastic strain and final diameter of the bar, are shown in figure 4.46. When compared with the fine mono-scale simulation, the differences in these two parameters are negligible. The differences in the equivalent plastic strain between the fine mono-scale simulation and the multi-scale simulation are shown in figure 4.47. Moreover, the multi-scale simulation is 14% faster than the mono-scale fine simulation.
The relatively small gain in terms of computational time is due to the fact that almost the whole computational domain is flagged for refinement at each coarse scale time-step, based on the user-input threshold P*. In figure 4.48 three different multi-scale simulations with different threshold parameters are compared over time. When increasing the threshold parameter, the time-step at which the multi-scale simulation starts shifts forward in time. Moreover, the portion of the coarse scale domain highlighted for refinement becomes smaller and, in the case in which P* > 0.3, the multi-scale simulation is activated only once the plastic wave has fully developed. The simultaneous action of these effects contributes to lowering the overall computational cost of the multi-scale simulation; however, the accuracy degrades as shown in figure 4.49.
Figure 4.50 quantifies the effects of the choice of P* in terms of accuracy and computational time. In this graph the error is measured as the difference between the maximum equivalent plastic strain at t = 0.2 ms and the same value for the fine mono-scale simulation. A small value of P* leads to small savings in terms of computational time with respect to the mono-scale fine simulation. When 0.01 ≤ P* ≤ 0.2 the error is smaller than 5% and the maximum saving of computational time is bigger than 60%. This can be explained by looking at both the propagation and the localisation of the plastic wave. In these cases the threshold is sufficiently small to capture both the propagation and the localisation of the plastic wave over the computational domain, and sufficiently high to highlight only the area where the plastic behaviour is more important. If P* > 0.2, the propagation of the plastic wave is not captured and the multi-scale process only

Figure 4.43: Results of the coarse scale simulation of the dog-bone specimen at t = 0.2 ms. The
maximum displacement in the z direction is of 0.44 mm (above). The maximum value
of equivalent plastic strain is registered in the middle of the specimen with a value of
0.64 (below).

captures the localisation. When this happens, the error of the multi-scale simulation grows at a higher rate. It is important to note that the coarse scale and the fine scale represent, respectively, an upper and a lower bound for the multi-scale simulations. Based on this discussion, for this particular configuration the optimum value of P* is 0.2, which results in a simulation 66% faster than the fine mono-scale one with an error smaller than 5%.

4.6 Concluding Remarks

This chapter has described the extension of the novel adaptive concurrent multi-scale framework proposed in chapter 3 to a three-dimensional environment using the most popular element type, namely the uniform strain hexahedron. The first key difference between the 1D and 3D applications is in the topology of the computational mesh, which requires handling of the shared faces and edges among the different elements when refining them on a lower scale. At the same time 3D meshes, especially
Figure 4.44: Results of the fine scale simulation of the dog-bone specimen at t = 0.2 ms. The
maximum displacement in the z direction is of 0.4926 mm (above). The maximum
equivalent plastic strain is registered in the middle of the specimen with a value of
0.8436 (below).

Figure 4.45: Evolution of the plastic wave over time, alongside the refined mesh. On the left side the
elements of the coarse scale are flagged over time, and the boundaries of the highlighted
area are used to enforce the coupling at the micro-scale through αm .

Figure 4.46: Results of the multi-scale simulation of the dog-bone specimen at t = 0.2 ms. The
maximum displacement in the z direction is of 0.506 mm (above). The maximum
equivalent plastic strain is registered in the middle of the specimen with a value of
0.8421 (below). The differences with the fine mono-scale simulation are less than 2%.

Figure 4.47: Comparison of the multi-scale simulation of the dog-bone specimen at t = 0.2 ms. The
figure is obtained computing for every integration point the difference of the plastic
equivalent strain between the fine mono-scale simulation and the multi-scale simulation.

Figure 4.48: Flagging process of the coarse domain with different choices of the threshold parameter P*. Increasing the threshold value, the multi-scale process starts later in time. Moreover, the highlighted portion of the domain becomes smaller.

in updated Lagrangian frameworks, can contain highly distorted elements whose shape can hinder their refinement. These two different aspects have been tackled simultaneously by proposing an efficient refinement strategy that first refines the coarse scale elements in their parent element domain, and subsequently maps them onto the deformed configuration. The proposed algorithm has been demonstrated to generate well-shaped computational meshes in a range of applications.
Particular attention in this chapter is also given to the data transfer from the original coarse mesh to the refined one. In Chapter 3 the hermitian interpolant has been demonstrated to minimise diffusion while ensuring a correct equilibrium between the internal, external and inertia forces at the fine scale; however, its extension is not trivial, given the large number of coefficients needed to interpolate a full polynomial in 3 dimensions. In this chapter an extension of such an interpolant is proposed, developing a reduced scheme in which the number of unknown coefficients equals exactly the number of conditions that can be imposed over an eight-node hexahedron. Such a polynomial takes into account not only the values of the kinematic variables to interpolate at the lower scale but their derivatives as well, creating fields that do not present discontinuities in their derivatives, which are common when linear shape functions are
Figure 4.49: Results of the multi-scale simulations with different values of P* at t = 0.2 ms. The highlighted area for refinement is smaller at higher values of the threshold; however, the accuracy of the simulation decreases.
Figure 4.50: Parametric study of the threshold parameter for the multi-scale simulation. Small values of P* result in high computational cost and low error. On the other hand, high values result in an error close to the coarse mono-scale simulation. The non-linear behaviour of the error highlights an optimum threshold value of 0.2.

employed.
The mathematical developments are implemented in a novel framework in which particular attention is given to computational efficiency and versatility. This is accomplished by idealising the different areas of an adaptive concurrent multi-scale simulation as separate modules comprising the framework, which, once validated, can be further extended to reach the objectives of this research.
The developed 3D framework has been tested with various dynamic applications
ranging from elastic to elastic-plastic wave propagation in a range of different geome-
tries. In the first application, the framework has been validated using a simulation
with a null Poisson’s ratio. The presence of lateral inertia was studied in Section
4.5.2. In particular, through a parametric study of the coupling function, it has been
demonstrated that the proposed coupling only affects the spurious waves and does
not modify the mechanical response of the structure. Subsequently, the multi-scale
framework has been used to simulate a slender circular bar, in a configuration where

the wave propagation could be considered planar. In this case, even if the refinement
procedure is not able to reproduce the rounded geometry, the multi-scale simulation
can still correctly simulate the wave propagation with a computational time saving
of 113%.
In the last section, the framework has been applied to a case of dynamic plastic localisation in a dog-bone specimen. In this case the multi-scale computation is not based on the norm of the error, but rather on the accumulation of equivalent plastic strain in the elements. This scalar, defined a priori, determines the time at which the multi-scale procedure is started as well as the extent of the plastic area, directly affecting the accuracy and the computational time. Even if the multi-scale simulation always has a lower computational time when compared to the fine mono-scale simulation, an optimum can be achieved by changing the value of the minimum equivalent plastic strain that activates the multi-scale procedure.
In these applications it has been demonstrated that the proposed coupling does
not introduce spurious wave reflections at the interface between coarse and fine mesh,
and that it does not affect the mechanical behaviour of the structures under con-
sideration. Finally, it has been demonstrated that the proposed data transfer of the
results between the coarse and fine scale meshes does not alter the equilibrium among
the internal, external and inertial forces. This chapter provided a guideline for the
construction of a stable coupling, however the approach followed is valid only for hex-
ahedrons and cannot be readily applied to other element types (e.g., tetrahedrons,
bars) unless hermitian polynomials are constructed for these other classes of elements.

Chapter 5

Conclusions and further work

This chapter draws the main conclusions of this doctoral thesis and summarizes the
key contributions made to the field. Firstly, a summary highlights the main findings
and novelties of the work. A list of contributions of this research to the field of
concurrent multi-scale analysis is included. Finally, a list of research developments
and future activities is suggested.

5.1 Summary

The aim of this doctoral thesis was to develop a dynamic concurrent multi-scale framework capable of adaptively refining the computational mesh, allowing the simulation of large structures with high detail and low computational cost. To accomplish this aim, an extensive literature review on multi-scale simulation techniques was provided. A clear distinction was made between hierarchical, semi-concurrent and concurrent multi-scale techniques in both static and dynamic simulations.

In the context of dynamic concurrent multi-scale schemes, the key problem is the correct representation of the stress waves travelling through the fine and coarse computational domains without generating numerical artefacts, known as spurious wave reflections. Different solutions to this problem have been reviewed; however, they all require a substantial computational effort that becomes prohibitive when large domains are coupled. Such complications hinder the application of these schemes to
adaptive explicit frameworks and, as a result, none of the existing frameworks present
a simultaneous treatment of different time and length scales, avoiding the generation
of spurious reflection at the interfaces between the different domains.

This gap identified in the literature review served as the basis for the formulation of the novel adaptive concurrent multi-scale framework presented in section 3.2, in which a high computational efficiency is achieved through the formulation of a coupling that does not require the assembly of matrices. The link among the different meshes is based on the evaluation of nodal forces that ensures continuity of the velocity and the absence of spurious wave reflections over the coupled domains. Subsequently, in section 3.3.3 different error estimators are compared for the assessment of the quality of the solution of a mesh, and a novel computationally efficient error estimator based on hermitian interpolation is proposed. The main advantage of the proposed estimator is that it can be used simultaneously to check the error and, if necessary, transfer the elastic, kinetic and kinematic variables onto the newly generated mesh. Different simulations and parametric studies in sections 3.3.4 and 3.3.5 were used to validate the developed framework in 1D.

In Chapter 4 the framework is extended to three dimensions using linear hexahedrons with one integration point. Section 4.2 highlights the main challenge in the extension of hermitian polynomials to the 3D space, given by the presence of 64 unknown coefficients and only 32 conditions for every hexahedron. A reduced third order interpolation scheme is proposed in which the number of unknowns equals the number of conditions. Subsequently, in section 4.3, a novel refinement algorithm is developed for the generation of computational meshes with complex topologies at the macro-scale. This new algorithm avoids the generation of distorted meshes by refining every element in the parent element domain and then mapping the new nodes onto the current configuration. These two features, together with the previously developed coupling, are the basis of the concurrent adaptive multi-scale 3D framework. In the last part of Chapter 4, various applications are examined, ranging from elastic to
130
elastic-plastic dynamic wave propagation. It was found that the newly developed er-
ror estimator is suitable for the flagging of elements that present a relative high error
and that the refinement algorithm proposed can generate valid meshes for real struc-
tures including specimen geometries. Different parametric studies concluded that the
proposed nodal coupling is able to represent the fine scale behaviour filtering out only
the non-physical spurious waves while retaining the real response of the fine scale.
Finally in terms of computational cost, the adaptive concurrent multi-scale simula-
tions always presented significant time savings when compared to a fine mono-scale
simulation.
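The parent-domain refinement and mapping step summarised above can be conveyed with a minimal MATLAB sketch, in which one hexahedron is subdivided in the parent domain [-1, 1]^3 and the new nodes are mapped to the current configuration through the standard trilinear shape functions. The nodal coordinates, the refinement factor and the variable names are illustrative assumptions, and the sketch omits the shared face and edge lookup of the actual algorithm.

% Current nodal coordinates of one (possibly distorted) hexahedron, 8x3
Xe = [0 0 0; 2 0 0; 2 1 0; 0 1 0; 0 0 1; 2 0.2 1; 2 1 1; 0 1 1];
n  = 2;                                   % refinement factor per direction
[g1,g2,g3] = ndgrid(linspace(-1,1,n+1));  % new nodes in the parent domain
Xi = [g1(:) g2(:) g3(:)];

% Trilinear shape functions of the 8-node hexahedron
csi = [-1 -1 -1; 1 -1 -1; 1 1 -1; -1 1 -1; -1 -1 1; 1 -1 1; 1 1 1; -1 1 1];
N   = @(xi) 0.125*prod(1 + xi.*csi, 2);   % 8x1 shape function values at one parent point

xNew = zeros(size(Xi,1),3);
for k = 1:size(Xi,1)
    xNew(k,:) = N(Xi(k,:)).' * Xe;        % map the parent point to the current configuration
end
disp(xNew)                                % coordinates of the refined nodes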

5.2 Contributions

The main contributions of this doctoral thesis are listed below.

• The coupling among the different discretisations is achieved through the def-
inition of a coupling volume, in which a weighting parameter αm blends the
contributions of the coupled scales to the total energy of the system.
In this work the weight has been associated directly with nodal positions rather
than with elements. The advantages of this new formulation are several. Firstly,
the elements do not need modified integration rules to take into account
the spatial distribution of the weighting parameter. Secondly, the
evaluation of the internal forces, coming from the constitutive model, does
not have to explicitly include the weighting parameter and therefore, contrary
to other frameworks where overlapping volumes are used, the original constitu-
tive material model can be used without modifications. Moreover, section 3.2
demonstrates that the stable time-step of the simulation is not affected by the
use of nodal weights. Finally, the coupling is represented through the evalua-
tion of nodal forces that do not make use of any matrix, making the coupling
versatile and suitable for explicit simulations (a minimal sketch of such a nodal
coupling force is given after this list). The stability of the proposed
coupling has been verified both analytically in section 3.2.3 and through simu-
lations in section 3.3.4, showing that it ensures a correct energy transfer between
the scales.

• The adaptive process for elastic simulations is driven by a novel error estimator
based on hermitian polynomials. The error estimation uses only data that
are local to each element of the mesh; therefore, it avoids neighbour searches
that can become computationally expensive in complex meshes. The main
advantage of such a methodology resides in the computation of a higher order
interpolation of both kinetic and kinematic quantities, as opposed to only kinetic
quantities in the classical estimators. Moreover, the proposed methodology
does not require the assembly and inversion of matrices, resulting in high
computational efficiency. The convergence rate of the proposed error estimator
is compared against the classical ZZ procedure and the analytical error in the
case of 1D elastic wave propagation. It is found that the estimated error shows
the same trends as the analytical error, justifying its application in dynamic
problems. In section 4.2 the proposed estimator has been extended to three
dimensions, using the quintic-order expansion of equation ??. The use
of such an expansion reduced the number of unknown interpolation coefficients from
64 to 32, making the use of error estimators in 3D settings feasible. The
application in section 4.5.3 found that the rate of convergence of the error in the
3D case is comparable to the 1D case.

• A consistent data transfer from the coarse scale to the fine scale is defined
using the hermitian interpolation previously developed. Classical
schemes use the shape functions to transfer both the kinetic and kinematic data;
however, this results in numerical diffusion and in unbalanced internal and external
forces, so that equilibrium at the fine scale has to be recovered
through a balancing step. The use of the hermitian interpolation functions for
the transfer of kinetic and kinematic data presents two main advantages.
Firstly, since by definition hermitian polynomials preserve the value of
the interpolated function together with its derivative, the numerical diffusion
at the lower scale is significantly reduced. Secondly, the balancing step is avoided
since the kinetic quantities (under the hypothesis of elastic behaviour) are directly
derived from the C³ field of the displacements.

• Once the error has been assessed at the coarse scale and the refinement factor
has been computed based on the previously verified rate of convergence, the
flagged elements are refined using a novel computationally efficient algo-
rithm. The proposed procedure refines every element individually in its parent
domain and subsequently maps the new nodes onto the current configuration.
An efficient lookup is implemented to take into account faces and edges shared
among different elements. The biggest advantage of this procedure is that it is
general and can handle different topologies, since the parent element domain is
the same for every element. Moreover, since the refinement procedure does not
rely on geometric dimensions in the current configuration, the generation of
highly distorted elements is avoided altogether. The effectiveness
and the computational efficiency of the procedure are verified in section 4.5,
where different coarse computational meshes are refined using different refine-
ment factors.
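As anticipated in the first item of the list above, the matrix-free nodal coupling can be conveyed with a minimal 1D MATLAB sketch. The variable names, the uniform nodal masses and the specific expression of the coupling force lambda are illustrative assumptions and do not reproduce the exact formulation of section 3.2; the sketch only shows how a purely nodal, weighted force can restore velocity continuity between two overlapping discretisations within one explicit step.

% Nodes of a hypothetical 1D coupling volume shared by a coarse (C) and a fine (F) mesh
nC    = 5;
alpha = linspace(0.8, 0.2, nC)';           % nodal weight of the coarse scale
mC = ones(nC,1);     mF = ones(nC,1);      % lumped nodal masses
vC = zeros(nC,1);    vF = 0.1*ones(nC,1);  % nodal velocities before coupling
fC = zeros(nC,1);    fF = zeros(nC,1);     % internal nodal forces
dt = 1e-3;

% Weighted masses and a nodal coupling force enforcing velocity continuity
mCw = alpha.*mC;  mFw = (1-alpha).*mF;
lambda = (mCw.*mFw ./ (mCw + mFw)) .* (vF - vC) / dt;

aC = (alpha.*fC + lambda) ./ mCw;          % coarse-scale accelerations
aF = ((1-alpha).*fF - lambda) ./ mFw;      % fine-scale accelerations
vC = vC + dt*aC;   vF = vF + dt*aF;        % explicit velocity update
disp(max(abs(vC - vF)))                    % velocities now coincide to machine precision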

The different contributions in terms of mesh coupling, error estimation, data trans-
fer and mesh refinement have been implemented in a single adaptive concurrent multi-
scale modelling framework for dynamic phenomena. The advantages listed
above, together with a careful data structure implementation, result in a versatile, easy-
to-use and efficient framework, as demonstrated in Chapter 4.

5.3 Limitations and proposed further work

The aim of this doctoral thesis was to create and validate a novel and efficient
dynamic adaptive concurrent multi-scale framework. For this reason, the main ap-
plications of this thesis have focused on the simulation of dynamic events in
homogeneous media.
The methodology, however, needs further improvement to be applied to problems
involving heterogeneous materials, coupling between different element represen-
tations (e.g. hexahedra and beam elements in the same computational model) and
multi-scale modelling involving different constitutive models. In particular, such lim-
itations can be summarised as follows:

• As shown throughout this thesis, different functions need to be created for ev-
ery element to interpolate the solution between meshes. Such functions are
provided in this dissertation for bar elements in one dimension and hexahedra in
three dimensions. However, for every new element type a new interpolation has to be
derived, as described in Chapter 4, to ensure a consistent data transfer between
coarse and fine scales.

• In cases in which the finer scale needs to be described with a different constitu-
tive model from the coarse one, the proposed hermitian interpolation ceases to
be valid, because the macro-scale state does not hold information about micro-
scale features, such as potential micro-fluctuations generated by micro-inertia
and stiffness mismatches. While a direct interpolation strategy from the macro-
to the micro-scale is appealing, especially because of its reduced computational
cost compared to a balancing-step methodology, the data transfer for this
class of problems needs to be improved, if not changed altogether.

• It has been demonstrated in this doctoral thesis that the proposed coupling
dissipates a certain amount of energy in the low frequency regime. Even if this
effect diminishes as the coupling length increases, its consequences need to be
examined when coupling different constitutive models at different scales, since
the loss of energy may be significant and may affect the results.

• The proposed refinement scheme, based on subdivisions of the coarse-scale mesh,
does not take into consideration the real geometry of the structure. For the ap-
plications considered in this thesis this does not constitute a source of error;
however, for dynamic problems on real structures with complex geometries (e.g.
engine blades) or with contact, the ability to refine both the computational mesh
and the geometric representation of the body is paramount for a correct inter-
pretation of the results.

If these limitations are carefully addressed, the proposed framework could serve as
the basis for more complex dynamic problems, such as:

• Continuum damage mechanics simulations, in which the regularisation process
necessary to avoid mesh-dependent results leads to prohibitively small element charac-
teristic lengths. In this context, the mesh could initially be coarse enough to
capture the elastic loading up to failure initiation and, once damage is detected,
refined within the proposed framework to ensure the correct
energy dissipation.

• The adaptive coupling of heterogeneous materials, in which the coarse scale is
represented by a homogeneous material while the heterogeneities are explicitly
represented at the micro-scale. In this scenario most of the research should be
devoted to the data transfer between the two scales, because the current approach
assumes that the displacements at the micro-scale can be directly interpolated
from the macro-scale. When heterogeneities are included, this assumption
no longer holds, and one solution to account for micro-inertia and stiffness
mismatches at the micro-scale could be the use of an intermediate balancing
step.

• Enhancement of the proposed refinement algorithm to account for the real ge-
ometry of the structure under consideration. In this context, a future improvement
could be the mapping of the new nodes onto the real geometry in the current
configuration. As an example, this could be achieved by retaining the NURBS
representation of the original geometry once the coarse-scale mesh is built, al-
lowing the nodes generated at the fine scale to be projected directly onto the
NURBS surface.

Appendix A

Hermitian interpolation in three dimensions

In this section the explicit form of the interpolation defined in equation 4.6 is derived.
The equations in section 4.2 are solved using the MATLAB Symbolic Math Toolbox,
since they involve matrix operations that are tedious by hand but inexpensive for modern
computers. The obtained interpolation functions are validated by verifying
the Kronecker delta property.

A.1 Code implementation and verification

The data transfer scheme in three dimensions makes use of the reduced hermitian inter-
polation polynomial introduced in section 4.2. The procedure requires the determi-
nation of the entries of the matrix H(ξ) in equation 4.9, which cannot be accomplished
by hand since it requires the solution of the system of equations represented by equa-
tions 4.14 and 4.16, whose 32×32 unknown coefficients form the matrix X. To this aim,
the whole procedure described in section 4.2 has been coded in MATLAB, using the
Symbolic Math Toolbox. In this environment, variables declared as symbolic through
the command syms are treated as mathematical entities, allowing analytical
differentiation, integration, simplification, transforms and equation solving.
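As a minimal illustration of this symbolic workflow (independent of the thesis scripts), the following lines declare a symbolic variable and differentiate a polynomial analytically:

syms x                  % declare a symbolic variable
p  = 1 + x + x^3;       % a symbolic polynomial in x
dp = diff(p, x)         % analytical derivative, returns 3*x^2 + 1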

In particular, in script A.1 the coordinates of the parent element domain space ξ are
declared as symbolic on line 4. The exponents of the polynomial expressed in equation 4.11 are
stored in the variable expon, and the coordinates of the hexahedron parent element
domain reported in table 4.1 are stored in the variable Csi. The array G
of equation 4.11, which contains all the polynomial terms of the reduced expansion, is
constructed on line 54 using a for loop in which the coordinates are raised
to their respective exponents. The coefficient matrix represented by equations
4.14 and 4.16 is built in four different for loops. The 32×32 matrix representing the
union of the 32 row vectors U^k (each of size 1×32) is assembled on line 80. Finally,
the system of equations for X is solved on line 86, and the matrix H containing the
entries of the reduced hermitian interpolation is determined through a simple matrix
multiplication on the last line of the script.

1
2 clearvars
3 X = sym('X%d_%d', [32 32]);
4 syms x y z
5
6 % Coordinates of parent element hexahedron
7 Csi= [
8 -1 -1 -1;
9 1 -1 -1;
10 1 1 -1;
11 -1 1 -1;
12 -1 -1 1;
13 1 -1 1;
14 1 1 1;
15 -1 1 1;];
16
17
18 expon = [
19 0,0,0;
20 0,0,1;
21 0,1,0;
22 0,1,1;
23 1,0,0;
24 1,0,1;
25 1,1,0;
26 1,1,1;
27 2,0,0;
28 2,0,1;
29 2,1,0;
30 2,1,1;
31 3,0,0;
32 3,0,1;

33 3,1,0;
34 3,1,1;
35 0,2,0;
36 0,2,1;
37 0,3,0;
38 0,3,1;
39 1,2,0;
40 1,2,1;
41 1,3,0;
42 1,3,1;
43 0,0,2;
44 0,0,3;
45 0,1,2;
46 0,1,3;
47 1,0,2;
48 1,0,3;
49 1,1,2;
50 1,1,3];
51
52

53 for i = 1:length(expon)
54 G(1,i) = x^expon(i,1) * y^expon(i,2) * z^expon(i,3);
55 end
56
57 Gfun = matlabFunction(G); % numeric handle used to evaluate G at the nodes
58 dGx = matlabFunction(diff(G,x));
59 dGy = matlabFunction(diff(G,y));
60 dGz = matlabFunction(diff(G,z));
61
62 for i =1:length(Csi)
63 C(i,:) = Gfun(Csi(i,1),Csi(i,2),Csi(i,3))*X;
64 end
65
66
67 for i =1:length(Csi)
68 C(end+1,:) = [dGx(Csi(i,1),Csi(i,2),Csi(i,3))]*X;
69 end
70

71
72 for i =1:length(Csi)
73 C(end+1,:) = [dGy(Csi(i,1),Csi(i,2),Csi(i,3))]*X;
74 end
75
76 for i =1:length(Csi)
77 C(end+1,:) = [dGz(Csi(i,1),Csi(i,2),Csi(i,3))]*X;
78 end
79
80 U = eye(32,32);
81
82 equations = C == U;
83
84 [a,b] = equationsToMatrix(equations, X);
85
86 X_sol = reshape(a\b,[32,32]);
87 H = G*X_sol;
Code A.1: MATLAB code for the determination of the reduced hermitian interpolation
presented in Chapter 4

The proposed interpolation can be considered valid if it reproduces the values of
the original field at the nodes of the hexahedron. This is ensured when the shape
functions that interpolate the field have a value of 1 on one node and 0 on the others.
In figure A.1, the values of the interpolation functions are summed at every nodal
position. The proposed reduced hermitian scheme in three dimensions is therefore
validated, since the sum of the 32 interpolation functions presents a value of 1 on the
eight vertices of the hexahedron.

Figure A.1: Validation of the proposed hermitian interpolation scheme. The methodology is validated
since the sum of the proposed functions has a value of 1 on the vertices of the
hexahedron.
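This check can also be scripted directly after running script A.1; the handle name Hfun and the reuse of the workspace variables H, Csi, x, y and z are assumptions of this sketch rather than part of the original code:

Hfun = matlabFunction(sum(H), 'Vars', [x y z]);   % sum of the 32 interpolation functions
for i = 1:size(Csi,1)
    fprintf('vertex %d : sum = %g\n', i, Hfun(Csi(i,1), Csi(i,2), Csi(i,3)));
end
% each printed value is expected to equal 1, in agreement with figure A.1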

