
Engineering Computational Methods (ENME4CM)

Course Notes

1. Introduction

1.1 About this Course

The use of computational methods to analyse the behaviour of physical phenomena finds
wide-ranging application in many fields, including engineering, chemistry, geology,
meteorology and astrophysics, amongst others. The aim of this course is to introduce you to
the application of this approach in the analysis of engineering problems. In particular, the
course will consider the computational analysis of simple problems relating to:

1. Solid Mechanics
2. Heat Transfer
3. Fluid Mechanics

The theory relating to these topics that is presented during lectures, will provide the
backbone to the practical solution of tutorial and assignment problems that you will
undertake using two computational analysis software packages. The first, Siemens NX
Advanced Simulation, will be used for solid mechanics and conductive heat transfer analysis,
whilst the second, CD-Adapco STAR-CCM+, will be used for fluid mechanics and convective
heat transfer analysis.

The topic of engineering computational methods is truly vast, and unfortunately, we’ll only
be able to cover the very basics of the approach. Because of this, it is vital that you realise
that although computational methods are incredibly powerful tools that can make light
work out of even highly complex problems, they can also be useless/dangerous to use if you
don’t understand how they work and treat them as a magic “black box”. So be sure to give
the subject the respect that it is due.

Before moving onto anything heavy, let’s start by taking a look at what techniques can be
used to analyse engineering problems and how computational methods fit into the picture.

1.2 Three Approaches to Solving Engineering Problems

In designing mechanical engineering systems, three approaches are generally available to
analyse the characteristics of such systems:

1. Analytical Methods – the closed-form or approximate mathematical solution of the
governing equations describing a system. E.g. solving the 2D heat diffusion equation
to find the steady temperature distribution in a flat plate.
2. Experimental Methods – the observation of characteristics from experiments that
feature actual or representative forms of a system. E.g. finding the lift coefficient of
an aircraft design using a model positioned on a balance in a wind tunnel.

3. Computational Methods – the use of computer hardware and software to
numerically solve an approximated form of the governing equations describing a
system. E.g. the finite element analysis of a gas turbine blade to predict the thermal
stress distribution within it.

Although analytical and experimental approaches have been used to solve problems and
design systems for centuries, computational methods only really became applicable with
the advent of the digital computer a few decades ago. Phenomenal advancements in
computer hardware since then, particularly with respect to processing (CPU) power and
memory capacity, now make it possible to analyse systems once thought impossible to
consider.

1.3 Advantages and Disadvantages of the Computational Approach

Several advantages that the computational approach holds over analytical and experimental
methods have led to its success in the engineering environment:

1. The solution of problems that have no available analytical solutions (a considerable
limitation of the analytical approach).
2. The analysis of problems that are difficult/impossible to replicate under
experimental conditions.
3. Significantly greater flexibility than experimental methods in terms of analysis
configuration and results observation.
4. Generally provide acceptably accurate solutions to problems at a lower cost and in
less time than those obtained by experimental means.

The advantages of using computational methods versus analytical/experimental methods
must be weighed against associated disadvantages:

1. Rubbish in, rubbish out – the “black box” nature of commercial computational
analysis makes it easy to define the problem incorrectly and can create a false sense
of confidence in results.
2. A satisfactory solution isn’t necessarily achieved first time - complex computational
analyses often require optimization through a series of iterative parameter revisions.
3. Computational analysis will always only be able to approximate a real phenomenon.
4. The accuracy of this approximation can be limited by available computational
resources (both hardware and software).

Even though computational methods possess notable advantages, they should be viewed as
being complementary rather than superior to analytical and experimental techniques. This
is because: i) the computational approach is at its core based on analytical methods, and ii)
computational codes are almost always validated against experimental and/or analytical
results. Analytical and experimental methods are therefore still crucial elements of modern
day engineering.
1.4 The Basic Components of Computational Analysis

Irrespective of the physical nature of a problem, the process of computational analysis
comprises three fundamental components:

1. Mathematical Models – the assemblage of governing equations that define the
physical phenomenon being investigated and form the core of the solution.
2. Numerical Methods – the numerical procedures that allow for a solvable
representation of the mathematical models to be formed and govern the solution
thereof.
3. Software and Hardware – which provide the framework to support the operation of
the above components and form the interface with the user for entering problem
parameters and extracting results.

The arrangement of these fundamental components within the computational analysis
“black box” is shown in the diagram below:

In your case, NX Advanced Simulation and STAR-CCM+ will configure the mathematical models
on the basis of the problem parameters you specify (by building your simulation model in
the program’s GUI), and will then instruct the LAN PC in front of you to solve these models
algebraically, applying the relevant numerical methods and using the PC’s central processing
unit (CPU) and random access memory (RAM). You can then recall the results of the analysis
from the PC’s hard drive by opening the results file in the GUI.

2. Important Concepts in Computational Analysis

2.1 Discretization

‘In essence, discretization is the process by which a closed-form mathematical expression,
such as a function or a differential or integral equation involving functions, all of which are
viewed as having an infinite continuum of values throughout some domain, is approximated
by analogous (but different) expressions which prescribe values at only a finite number of
discrete points or volumes in the domain.’ – Anderson

Some offshoots of this:

1. In the analytical solution of partial differential equations (PDEs), dependent variables
are evaluated continuously throughout the domain.
2. In the numerical solution of PDEs using a discretization technique however,
dependent variables are approximated only at discrete points in the domain.
3. Discretization allows PDEs that are difficult or impossible to solve analytically, to be
represented as algebraic functions that can be solved relatively easily using digital
computing techniques.

The concept of discretization is demonstrated graphically in the figure below. Here, f(x) is a
continuous function over a given domain (x1 : xn), f ’(x) is a discrete function over the same
domain analogous to f(x) but prescribing solutions only at a finite number of points (shown
as dots), yi is the solution of f(x) at xi (actual solution) and yi’ is the solution of f ’(x) at xi
(approximate/discretized solution).

You’ll notice that discretizing a continuous function only gives us solutions at certain points
in the domain. But what about at positions in between the points, you might ask? To find
values at these points, we generally need to make use of separate interpolation functions
which approximate the variation of the solution between the discretization points.
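To make this concrete, the short sketch below (illustrative only; the function, domain and
sample points are arbitrary choices, not course material) discretizes a continuous function at
a handful of points and then uses linear interpolation to estimate a value between them:

import numpy as np

# A continuous function standing in for the "exact" solution over the domain.
def f(x):
    return np.sin(x) + 0.5 * x

# Spatial discretization: evaluate the function only at a finite set of points.
x_points = np.linspace(0.0, 10.0, 11)   # 11 discretization points over the domain
y_points = f(x_points)                  # solutions prescribed at those points only

# Interpolation function: approximate the solution between discretization points.
x_query = 3.7
y_interp = np.interp(x_query, x_points, y_points)

print(f"Interpolated value at x = {x_query}: {y_interp:.4f}")
print(f"Actual value at x = {x_query}:       {f(x_query):.4f}")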

If the solution to a set of differential equations governing a specific problem doesn’t depend
on time (static/steady), then discretization is carried out in a spatial sense, where only
spatial derivatives in the governing equations are approximated. If however, the solution
varies with time (transient/unsteady), then discretization must be carried out in a spatial
sense as well as in a temporal sense, where derivatives in time are approximated in
addition. Problems of a transient nature will not be considered in this introductory course.

As far as spatial discretization is concerned, three distinct techniques are generally
employed in commercial computational analysis codes, each with their own strengths and
weaknesses. An in-depth discussion of these schemes is beyond the scope of this course, but
there are ample sources of literature available for further reading on the mathematical
formulation of each method. Brief summaries of each method are presented below:

1. Finite Difference Method – the ‘grandfather’ of discretization schemes; this method
approximates the derivatives in the governing equations in differential form with
difference quotients over a domain that is subdivided into discrete grid points.
Although efficient and accurate to use with simple geometries (in structural, thermal
and fluid analysis), this method is difficult to apply when geometries are complex.

2. Finite Element Method – the finite element method derives the solutions of the
governing differential equations by replacing the continuous differential functions
with piecewise approximations defined over discrete subdomains (elements). This method is particularly
popular in structural and thermal analysis and it handles complex geometries well.

3. Finite Volume Method – in this method, the solution domain is subdivided into a
finite number of small control volumes, and discrete versions of the governing
equations in integral form are applied to every volume to generate a flow solution in
each volume. This method is the most popular technique used in modern
commercial fluid mechanics software.

In a physical sense, spatial discretization is the breaking down of the spatial domain
(geometry) being analysed into a collection of smaller ‘regions’ populated by a finite
number of solution points. These regions can be one-, two- or three-dimensional in nature
and can come in a variety of shapes, depending on the characteristics of the problem at
hand. This concept is demonstrated for an arbitrary two-dimensional spatial domain in the
figure below:

Remember though, that solutions are computed only at the solution points of the
discretized spatial domain. Solutions at all other positions in each region are derived from
the associated solution points using interpolation functions. In the finite element method,
we call the solution points ‘nodes’, whilst the regions are called ‘elements’. The overall
network of elements is called a ‘mesh’. In the finite volume method, the solution points are
called ‘centroids’ and the regions are called ‘cells’. The network of cells is also called a mesh.

Spatial discretization allows the (complex) overall analysis problem to be transformed into
an assemblage of ‘sub-problems’ (representing each solution point) which can be solved
much more easily on an individual basis. In a numerical sense, the collection of sub-
problems is represented by a large assemblage of equations in matrix, vector or tensor form
(this system of equations is actually an algebraic form of the governing equations).
Fortunately, computers are very well suited to simultaneously solving these systems of
equations. This is why computational analysis methods are so effective at solving complex
engineering problems.

2.2 The Computational Analysis Process

Zooming out from the finer details discussed above, let’s now turn our attention to the
overall computational analysis process. The journey of going from an initial engineering
problem statement to obtaining final computational analysis results may seem daunting to
you at this stage, but in reality, the modelling and analysis process is fairly straightforward.
Irrespective of the type of analysis being undertaken, the process can be divided into a
general series of steps, as outlined below (note that slight variations in the order may exist):

Stage 1: Assessment of the Problem Statement

Stage 2: Geometry Preparation

• Construction of the part or flow geometry model using CAD software

Stage 3: Pre-Processing

• Importation of geometry model into the computational analysis software
• Mesh generation
• Specification of discretization parameters
• Selection of physical model (i.e. governing equations)
• Specification of material properties
• Specification of initial and boundary conditions
• Selection of solver and related solution parameters

Stage 4: Solution Run

Stage 5: Post-Processing

• Retrieval and qualitative review of results
• Quantitative measurements, data extraction, graph and image generation

Stage 6: Results Verification/Validation

2.3 Sources of Error in Computational Analysis

Error will always be generated in the computational analysis process because it is an
approximate solution method. Some errors are avoidable but others aren’t. The key to
being a good computational analyst is to eliminate the avoidable errors whilst minimizing
those that cannot be avoided. The following are sources of error that may generally be
encountered during the computational analysis process, and should be considered carefully:

1. Physical modelling error – difference between the phenomenon occurring in reality
and its representation by the model’s governing equations (model insufficiency or
over-simplification).
2. Discretization error – difference between the exact solution of the PDE and
algebraic forms of the governing equations (made worse by excessively coarse
spatial discretization and/or solution time step size).
3. Iteration error – difference between the iterative and exact solution of the algebraic
form of the governing equations (excessively large solution tolerance).
4. Round-off error – induced by rounding-off operations during the solution of the
algebraic form of the governing equations (a small demonstration follows this list).
5. Programming error – bugs, mistakes in the code, etc.
6. User input error – incorrect specification of model parameters.
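Round-off error in particular is easy to demonstrate. The snippet below (illustrative only)
sums 0.1 ten times in double-precision floating point and does not land exactly on 1.0:

# Round-off error: each operation is rounded to finite precision,
# so small errors accumulate over repeated arithmetic.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)          # 0.9999999999999999, not exactly 1.0
print(total == 1.0)   # False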

2.4 Accuracy vs. Computational Cost, and Mesh Independence

One of the most important things to consider when using computational analysis methods is
the relationship between the accuracy of a solution and the computational cost of obtaining
that solution. Basically, computational cost or expense is proportional to the time taken to
run a simulation and obtain a solution, using a given set of computational resources (i.e.
processing and memory capacity).

Solution accuracy is dependent to a large extent on the fineness of the mesh used to
discretize the spatial domain of a model. A coarse mesh containing fewer, larger
discretization regions will, in general, result in a less accurate solution than that obtained by
employing a finer mesh. The problem however, is that the finer the mesh, the larger the
computational size of the model. Importantly, the relationship between mesh fineness, or
density, and computational size is exponential. As such, the time taken to obtain a solution
is strongly sensitive to this parameter. A trade-off must therefore be made between the
accuracy and computational expense of a solution, if computational resources are limited.
The process of optimizing this trade-off can make computational analysis look more like an
art than a science, as learning how to optimize model parameters properly takes time and
much practice.

Furthermore, the relationship between mesh density and solution accuracy is not linear; it is
asymptotic. That is to say that the real solution to the governing equations can never be
obtained via the solution of the discretized governing equations, as an error will always exist
in this solution no matter how fine the mesh used.

Increasing the density of a very coarse mesh will at first lead to great reductions in the
discretization error, but as the density is increased further, there will be lower and lower
subsequent reductions in error. At some point, further increases in density will produce no
noticeable reduction in discretization error at all. This phenomenon is known as mesh
independence and is shown graphically below.

As an analyst, it is important to be aware of the point at which mesh independence is
reached, as by increasing mesh density any further, massive increases in computational
expense will be incurred without any increase in accuracy, resulting in a grossly inefficient
model. You should also remember that the optimal mesh density is not necessarily the most
accurate mesh density. For instance, if only an approximate idea of a design’s operational
characteristics were required, one might use a coarse mesh to save computation time. If
however, your design was in the final analysis stage, a much finer, more accurate mesh
might be needed requiring significant computation time.
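The asymptotic behaviour described above can be mimicked with a deliberately simple
stand-in for a full finite element refinement study. The sketch below refines a one-dimensional
trapezoidal-rule ‘mesh’ and prints the discretization error at each density; the integrand and
mesh sizes are arbitrary choices used only to show the diminishing returns of refinement:

import numpy as np

exact = 2.0   # integral of sin(x) over [0, pi]

for n_elements in [4, 8, 16, 32, 64, 128, 256]:
    h = np.pi / n_elements                        # element (sub-interval) size
    x = np.linspace(0.0, np.pi, n_elements + 1)   # discretization points
    y = np.sin(x)
    approx = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])   # trapezoidal rule
    print(f"{n_elements:4d} elements: discretization error = {abs(approx - exact):.2e}")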

3. Computational Solid Mechanics1

3.1 Analysis Considerations

The analysis of solid mechanics problems can either be independent of time (i.e. static) or
dependent on time (i.e. quasistatic or dynamic):

• Static and quasistatic analyses do not consider time as an independent variable in the
governing equations, whilst a dynamic analysis does.
• A static analysis is undertaken when the applied load is time-constant and inertial and
damping effects in the response can be neglected. E.g. steady stresses in a reinforced
concrete beam in a building.
• A quasistatic analysis is undertaken when the applied load is time-varying but inertial
and damping effects in the response can still be neglected. E.g. time-varying stresses in
the members of the supporting truss structure of a water tower.
• A dynamic analysis is undertaken when the applied load is time-varying and inertial and
damping effects in the response cannot be neglected. E.g. time-varying stresses in the
arm of a crane during a rapid lifting operation.

Solid mechanics problems can also be classed as linear or nonlinear:

• Geometry – linear for small geometric deformations (i.e. geometric domain remains
unchanged during the analysis), nonlinear for large deformations (i.e. geometric domain
must be updated during the analysis)
• Material – linear if a solid is being stressed within its elastic limit, nonlinear if it
undergoes plastic deformation. Other nonlinear characteristics may apply, including
energy dissipation.
• Loading Conditions and Constraints – can be nonlinear if dependent on displacements,
for example.

Careful consideration must be given to the attributes of the physical problem being
modeled to determine what analysis types are required to accurately capture its behaviour.
Importantly, the types selected have a direct effect on the size of the model and therefore
the cost of the solution. A nonlinear/dynamic analysis of a problem will always be more
computationally expensive than a linear/static analysis.

The analyses you will conduct during this course will be undertaken to determine either the
stresses, strains or displacements within a part. Additional, more specific types of analyses
are also employed in computational solid mechanics, and although this course will not deal
with them, you should at least be aware of what functions they serve. Examples of these
include:

1 Some of the material in this section was derived from the supporting notes of the MSC SimXpert SMX120 course.

• Modal Analysis – to determine the natural frequencies and mode shapes of vibrating
structures.
• Buckling Analysis – to determine the buckling characteristics of slender structures under
compressive loading.

3.2 The Governing Equations

In solid mechanics problems, we are generally interested in determining the behaviour of a
solid in response to an imposed force (or displacement). As mentioned above, one or more
of the following response indices are generally sought by the engineer:

1. Displacement
2. Strain
3. Stress

Three fundamental governing equations are used to describe the response of a solid: the
equilibrium, compatibility and constitutive equations. These equations will now be
expressed for an infinitesimal volume of an elastic, homogeneous and isotropic material:

The Equilibrium Equation: div{σ} + {F} = ρ{ü}

Here, div is the divergence operator, {σ} is the stress tensor (normal and shear stresses),
{F} is the body force vector, ρ is the material’s density, and {ü} is the acceleration vector
(the second time derivative of the displacement vector).

As can be seen, the equilibrium equation is a balance of forces experienced by the volume:
the sum of the internal and body forces equals the inertial force (density x acceleration). In
its dynamic form (as above), it is often referred to as the equation of motion. The dynamic
form can also include a damping force term if energy dissipation via material damping is
present. In static analysis however, the inertial force term is set to zero as derivatives with
respect to time are not considered.
The Compatibility Equation: {ε} = ½ (grad{u} + (grad{u})ᵀ)

Here, {ε} is the strain tensor (normal and shear strains), grad is the gradient operator, and
{u} is the displacement vector (displacements in the translational and rotational directions).
The function of the compatibility equation is to determine strains in the volume on the basis
of associated displacements.

The Constitutive Equation: {σ} = [C]{ε}

The matrix [C] above is the stiffness matrix, containing an arrangement of the
elastic and shear moduli. The purpose of the constitutive equation is to determine the
stresses in the volume on the basis of the strain tensor and the stiffness matrix. The above
equation is in Hooke’s law form and is applicable to linear (elastic) materials only!
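As an illustration of the constitutive equation, the sketch below assembles the 6x6 isotropic
stiffness matrix [C] in Voigt notation (engineering shear strains assumed) from assumed
steel-like properties and applies {σ} = [C]{ε} to a simple strain state. The property values are
illustrative, not course data:

import numpy as np

E  = 200e9   # elastic modulus, Pa (assumed, steel-like)
nu = 0.3     # Poisson's ratio (assumed)

lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
mu  = E / (2 * (1 + nu))                   # shear modulus G

# Isotropic, linear-elastic stiffness matrix [C] (Voigt notation)
C = np.array([
    [lam + 2*mu, lam,        lam,        0,  0,  0],
    [lam,        lam + 2*mu, lam,        0,  0,  0],
    [lam,        lam,        lam + 2*mu, 0,  0,  0],
    [0,          0,          0,          mu, 0,  0],
    [0,          0,          0,          0,  mu, 0],
    [0,          0,          0,          0,  0,  mu],
])

# Constitutive equation: {sigma} = [C]{eps}
eps = np.array([1e-4, 0.0, 0.0, 0.0, 0.0, 0.0])   # uniaxial strain in x
sigma = C @ eps
print(sigma)   # resulting stress components, Pa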

3.3 Introduction to the Finite Element Method

The governing equations above provide a continuous solution over a simple spatial domain
(a cube). But how do we obtain solutions for complex geometries? We have to discretize
complex geometries into smaller, simpler, regions for which solutions can be more easily
found. In the finite element method (FEM), which is used by NX Advanced Simulation, these
regions are called elements. The actual solutions are computed at a discrete number of
points called nodes. And then what? How exactly is the finite element method put into
practice? Let’s take a look at what’s involved in solving a linear, static problem using the
FEM:

1. The original problem domain is divided into a collection of simply shaped elements,
connected by nodes (black dots):

Each node is capable of moving in six independent directions: three translations and three
rotations. These movements are called the degrees of freedom (DOF) of a node:

The relationship between an element and its surrounding nodes can be described by the
following expression known as the elemental equation:

[k]e{u}e = {f}e

Note that this is in fact an equilibrium equation: it is describing a spring (the material) in
equilibrium with an external force. The elemental stiffness matrix, [k]e, is derived from the
element geometry, material properties and element properties. The elemental load vector,
{f}e, describes the forces acting on the element. The displacement vector, {u}e, describes
the movement of the nodes in response to the applied forces and is the unknown of the
equation.

2. The elemental stiffness matrices and load vectors are then assembled into a global
stiffness matrix and load vector, respectively. This allows the global equilibrium equation
to be established:

[K]{u} = {F}

3. Boundary conditions (or constraints) must now be applied to the overall model in
both a translational and rotational sense to prevent rigid body motion (RBM) of the
model. Mathematically, this is done by removing rows and columns corresponding to
the constrained degrees of freedom from the global equation.

4. The global equation can then be solved to determine the unknown nodal displacements.
Element strains and stresses can then be computed from the nodal displacements using
compatibility and constitutive equations, respectively.

There are two general approaches to the finite element method: the displacement method
and the force method. In the displacement method, nodal displacements are the basic
unknowns, whereas in the force method, member forces are the basic unknowns. Both
methods can be used to solve structural problems, but the displacement method is used by
most finite element codes, including NX Advanced Simulation. Some further comments on
the displacement method:

• In the displacement method, the equilibrium, compatibility and constitutive relations
are used to generate a system of equations in which the displacements are unknown.
• The global stiffness matrix is used to relate the forces acting on the structure and the
displacements resulting from these forces according to the global equation.
• A key step in the displacement method is therefore the formulation of the stiffness
matrix of each element, [k]e.

To illustrate the development of an elemental stiffness matrix, the formulation of the
element stiffness matrix for a rod element will now be demonstrated. Consider an elastic
rod (elastic modulus = E) of uniform cross section, A, and length, L, under axial load:

Note that the axial translations u1 and u2 are the only displacements at nodes 1 and 2.
The element therefore has just two degrees of freedom. The elemental stiffness matrix
formulation for this element can be broken down into the following steps:

1. Step 1 – Satisfy static equilibrium (1):

ΣFx = F1 + F2 = 0, therefore F2 = -F1

2. Step 2 – Relate strains to displacements (2):

εx = ΔL/L = (u2 - u1)/L

3. Step 3 – Relate stress to strain (3):

σx = E εx

4. Step 4 – Relate force to stress (4):

σx = -F1/A  and  σx = F2/A

5. Step 5 – Relate force to displacement (5):

-F1 = σx A = E εx A = (EA/L)(u2 - u1)

In other words (6)…

-F1 = (EA/L)(u2 - u1)  and  F2 = (EA/L)(u2 - u1)

Finally, Equations (6) represent two linear equations with two unknowns and can be
rewritten in matrix form as follows:

{F1; F2} = (EA/L) [1  -1; -1  1] {u1; u2}   or   {f}e = [k]e{u}e

The stiffness matrix and the applied force vector are known, therefore the resulting
displacement vector can be solved for. Some additional comments:

• The method we’ve just applied to determine the element stiffness matrix is known as
the ‘direct method’.
• This method obviously works well for simple elements such as rods and beams (1D
elements).
• However, to compute element stiffness matrices for more complex 2D and 3D elements,
the ‘variational method’ is used. In-depth descriptions of this method can be found in the
literature.
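To tie the steps above together, the sketch below applies the rod element result numerically:
two rod elements are assembled into a global stiffness matrix, the fixed-end constraint is
imposed by removing the corresponding row and column, and the reduced system is solved for
the nodal displacements. All property and load values are assumed purely for illustration:

import numpy as np

E, A, L = 200e9, 1e-4, 0.5      # assumed: Pa, m^2, element length in m
F_applied = 1000.0              # assumed axial load at the free end, N

# Elemental stiffness matrix [k]e = (EA/L) [1 -1; -1 1]
k_e = (E * A / L) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])

# Assemble the global stiffness matrix [K] for nodes 1-2-3 (one axial DOF each)
K = np.zeros((3, 3))
for nodes in [(0, 1), (1, 2)]:              # element connectivity
    for i_loc, i_glob in enumerate(nodes):
        for j_loc, j_glob in enumerate(nodes):
            K[i_glob, j_glob] += k_e[i_loc, j_loc]

F = np.array([0.0, 0.0, F_applied])         # global load vector {F}

# Constrain node 1 (u1 = 0) by removing its row and column, then solve [K]{u} = {F}
free = [1, 2]
u_free = np.linalg.solve(K[np.ix_(free, free)], F[free])
print(u_free)                               # displacements of nodes 2 and 3, m

# Recover strain and stress in element 1 via compatibility and constitutive relations
eps_1 = (u_free[0] - 0.0) / L
print(eps_1 * E)                            # axial stress in element 1, Pa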

3.4 Application of the Finite Element Method using NX

3.4.1 File Types

As described earlier, the engineering computational analysis process is divided into six
stages. In relation to deriving the actual solution, three stages are the most important: pre-
processing, solving and post-processing. When conducting a structural analysis in NX, pre-
and post-processing are undertaken using the NX Advanced Simulation application, whilst the
separate NX Nastran code is used to execute the solution process.

Following completion of pre-processing in NX Advanced Simulation, an NX Nastran input file
is generated, containing all the model information that NX Nastran requires to solve the
finite element problem. Such information includes details relating to the geometry,
elements, materials, and loads and constraints applied. The input file is usually generated
with the extension “.dat”, although other extensions are also applicable. The input file is
prepared and passed to NX Nastran when the “Solve” command is issued.

During the solution process, NX Nastran automatically creates a set of output files that
present the solution in addition to other useful information. These can be summarised as
follows:

File Type   Description
.DBALL      A database containing the input file, assembled matrices, and solutions.
            Also used for restarting the run for additional analysis or output.
.f04        Provides a history of assigned files, disk space usage, and modules used
            during the analysis.
.f06        The main output file with printed output, such as displacements and
            stresses.
.LOG        Gives a summary of the command line options used and the execution
            links.
.MASTER     Contains the master directory of the files used by the run and the
            physical location of those files on your system. Needed for a restart.

Following completion of the solution run, solution data must be imported into the post-
processing application (NX Advanced Simulation) in a suitable form, to enable the user to
graphically inspect the results. For this purpose, NX Nastran generates a “.xdb” output file,
which is imported into the post-processor when the “Load” command is given.

In addition to the above files, NX Nastran also generates

3.4.2 Units

As with any engineering calculation activity, the appropriate use of consistent units is critical
if the computed results are to be at all meaningful. When talking about consistency of units,
we’re referring to the unit system (i.e. Metric or Imperial), the unit scale (e.g. degrees
Celsius or Kelvin) and the order of magnitude of the units (e.g. Pa or MPa). Obviously, if
inconsistency exists with respect to any of these categories, the solution you derive will be
erroneous. Certain analysis codes (particularly research-based or older software) do not
accommodate unit system specification, and the user is required to manually ensure
consistency. In fact, the NX Nastran code operates on a unitless basis. Modern pre- and
post-processor packages allow you to specify the characteristics of the units you use,
enabling convenient, unconverted unit inputs. In the case of NX Advanced Simulation, the
Units Manager can be used to set unit system details. Whatever the case, as a beginner, it’s
helpful to keep things simple, and it is generally advised that you stick to the SI system of
units.

3.4.3 Coordinate Systems

Coordinate systems allow a simulation model to be given a spatial frame of reference with
respect to the physical size and orientation of the model. Moreover, coordinate systems are
used to define the spatial location of nodes and to orient each node’s displacement vector.
Two types of coordinate systems employed in the NX Advanced Simulation application that
are of fundamental importance are:

1. Absolute Coordinate System – the “global” coordinate system implicitly defined in
Cartesian coordinates as the reference coordinate system. The overall solution of the
finite element model is carried out in terms of this coordinate system.
2. Local Coordinate System – a user-defined coordinate system that can be used to specify
model parameters that are difficult to apply in reference to the absolute coordinate system.
A local coordinate system must be related directly or indirectly to the absolute coordinate
system.

Three types of local coordinate system can be created in NX Advanced Simulation:
Cartesian, cylindrical and spherical.
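As a simple illustration of relating a local system to the absolute system, the sketch below
converts a node position given in a local cylindrical system (r, θ, z) to absolute Cartesian
coordinates, assuming the two systems share the same origin and z-axis. The coordinate
values are arbitrary:

import numpy as np

r, theta_deg, z = 0.05, 30.0, 0.10          # local cylindrical coordinates (m, deg, m)

theta = np.radians(theta_deg)
x = r * np.cos(theta)                       # absolute Cartesian x
y = r * np.sin(theta)                       # absolute Cartesian y

print(f"Absolute coordinates: ({x:.4f}, {y:.4f}, {z:.4f}) m")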

3.4.4 Geometry Definition

Simple and complex part geometries can either be generated in-house in the NX Modeling
application or by using third-party CAD packages such as Inventor, SolidWorks or Solid Edge.
If using NX Modeling, switching to the NX Advanced Simulation application once the part
model is complete will automatically initiate the part importation process. If using a third-
party software, you will have to import the part into NX Advanced Simulation in a
recognisable file format. Allowable formats are outlined in the NX documentation.

One further comment regarding geometry definition: minimising a model’s geometric detail
(such as the removal of non-functional or aesthetic geometric features, including
embossments and fillets) can considerably lower the computational size of the problem.
This is because such features can substantially raise the node count of the associated finite
element model and lead to poor quality meshes. In order to assist you in this process, NX
Advanced Simulation offers the user the option to generate an “idealised part”, in which
superficial features can be suppressed without altering the original part file. The finite
element analysis can then be based on this simpler geometry. It is important to note that
such simplifications should only be made if the features in question are structurally
unimportant.

3.4.5 Material Specification

A material model needs to be specified during pre-processing to mathematically
characterise the material being analysed. The material model describes the directionality
and the stress vs. strain behaviour of the material on the basis of specific user-defined
mechanical properties. In terms of directionality, three conditions can exist:

1. Isotropy – mechanical properties uniform in all directions (e.g. unworked metals,
plastics).
2. Orthotropy – different mechanical properties in two/three orthogonal directions (e.g.
composite laminate, wood).
3. Anisotropy – most general case of directional dependence of mechanical properties.

The stress vs. strain behaviour of a material can be linear (e.g. steel in the elastic range) or
nonlinear (e.g. steel in the plastic range, rubber, soil, damaged material). A variety of
material models are available in NX Advanced Simulation that can accommodate materials
that are directional and/or linear/nonlinear. Remember: the directionality and stress vs.
strain behaviour of a material need to be captured accurately by the material model to
compute a realistic response!

3.4.6 Elements

A wide variety of elements (over 50) are available in NX Advanced Simulation to provide
spatial discretization. The key is to choose the simplest element for the job that will
provide sufficient representation of the structure. More complex elements will solve the
problem, but will give you extra (unwanted) results and increase the solution time. There
are three basic sets of structural elements: 1D, 2D and 3D elements (as shown below).

1D elements are used to discretize structures that exhibit one-dimensional behaviour (e.g.
beams and truss members). The most common 1D elements used in NX Advanced
Simulation are CROD, CBAR and CBEAM, and should be applied as follows:

• CROD – If only an axial or torsional load is to be transmitted.
• CBAR – If an axial, torsional, shear and/or bending load is to be transmitted.
• CBEAM – Should be used instead of CBAR if:
1. The cross-section tapers.
2. The neutral axis and the shear centre are not coincident (e.g. open cross-
sections).
3. Shear deformation needs to be taken into account.
4. There is significant cross-sectional warpage.
5. The analysis is nonlinear.

2D elements are used to discretize thin-walled structures and plates. This two-dimensional
approximation of three-dimensional structures should only be made if the thickness of the
structure is at least ten times smaller than the smaller of the other two structural
dimensions. If this rule-of-thumb is violated, 3D elements should be applied. This is because
shell elements consider in-plane and bending behaviour, but do not account for the
through-the-thickness variation of strain. Four general-purpose 2D elements are available in
the NX Advanced Simulation Structures module: CTRIA3 and CTRIA6 (triangular elements), and
CQUAD4 and CQUAD8 (quadrilateral elements). These elements should be applied as
follows:

• CTRIA3 – First-order element (3 corner nodes). Commonly used for mesh transitions but
not suitable for modelling bending behaviour. Should not be used as the primary
element.
• CTRIA6 – Second-order element (3 corner and 3 edge nodes). Better accuracy than CTRIA3
and useful in regions of curvature.
• CQUAD4 – First-order element (4 corner nodes). Good overall accuracy and useful in
regions of double curvature.
• CQUAD8 – Second-order element (4 corner and 4 edge nodes). Greatest accuracy and
useful in regions of single curvature (excessively stiff when used in regions of double
curvature).

When we speak of the order of an element (i.e. first- or second-order), we are referring to
the nature of the interpolation function used to calculate response indices between the
corner nodes of an element. A first-order (linear) element interpolates such values linearly,
whilst a second-order (quadratic) element uses a quadratic interpolation function and hence
features an extra edge-node midway between the corner nodes. Second-order elements are
generally more accurate than first-order elements but are more computationally expensive
(owing to the extra nodes).
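The difference between the two orders can be seen in a one-dimensional sketch of the
interpolation along an element edge, parameterised by ξ in [-1, 1]; the nodal values are
arbitrary and chosen only for illustration:

import numpy as np

xi = np.linspace(-1.0, 1.0, 5)    # positions along the element edge

# First-order (linear) element: two corner nodes at xi = -1 and xi = +1
u1, u2 = 1.0, 3.0
u_linear = 0.5 * (1 - xi) * u1 + 0.5 * (1 + xi) * u2

# Second-order (quadratic) element: the same corner nodes plus a midside node at xi = 0
u_mid = 2.5
u_quadratic = (0.5 * xi * (xi - 1) * u1
               + 0.5 * xi * (xi + 1) * u2
               + (1 - xi**2) * u_mid)

print(u_linear)      # varies linearly between the corner values
print(u_quadratic)   # captures curvature through the midside node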

Three types of 3D elements (also known as solid elements) are available in NX Advanced Simulation
Structures: CHEXA (8-20 nodes), CPENTA (6-15 nodes) and CTETRA (4-10 nodes). These are
discussed in general terms below:

• CHEXA – Acceptable for general use. Accuracy degrades when the element is skewed and
used in regions where bending behaviour is dominant. Has superior performance to
other 3D element types in some modelling situations.
• CPENTA – Commonly used in transitions from solids to plates or shells.
• CTETRA – Frequently used in automatic mesh generation. The four-noded CTETRA
element is not recommended for use. The ten-noded CTETRA element provides similar
accuracy to the eight-noded CHEXA element and is easier to mesh, although it requires a
longer solution time.

Element properties need to be defined (especially for 1D and 2D elements) to specify what
type of element is required and to define elemental geometric characteristics such as
cross-sectional area, second moment of area, torsional constant and thickness. The element
property specification is used in conjunction with the material model specification to
determine the elemental stiffness matrix.

An important factor to keep in mind is the “quality” of the meshed elements in your model.
Several geometric parameters including the aspect ratio, skewness and warpage of an
element have a direct effect on the accuracy of the solution it can generate. For best results,
these parameters should lie within the recommended range specified for the particular
element. Note that NX Advanced Simulation has functions available to assess and improve
the quality of meshed elements.

3.4.7 Meshing

Meshing is the process of physically assigning the elements selected for spatial
discretization to the geometric domain, on the basis of an element property. The
characteristics of the process depend on whether a surface is being meshed in a “surface
meshing” operation (for 1D and 2D geometries and the faces of 3D geometries) or a solid is
being meshed in a “solid meshing” operation (for 3D volumes).

In either case, the sizing of the elements can be specified by defining an element size target
(manually or automatically) or by specifying the number of elements required along the
lines (1D elements) or edges (2/3D elements) of the geometry. The latter technique is
known as mesh seeding, and allows for tight control of element size distribution. Mesh
seeding also allows for a directional bias in element size to be assigned to a geometry to
capture stress gradients in a component efficiently. High stress gradient regions need a
higher discretization density to preserve solution accuracy. But a higher element density
means greater solution cost. By using a biased mesh however, more elements can be
applied in regions of high stress gradient (for accuracy), with element density being reduced
as the stress gradient decreases (for efficiency). In NX Advanced Simulation, mesh seeding
can be achieved using the “Mesh Control” command.

Meshing of two-dimensional surfaces in NX Advanced Simulation is achieved using the “2D
Mesh” or “2D Mapped Mesh” commands. By default, applying the “2D Mesh” command
results in the automatic generation of a free (i.e. unstructured) mesh with little user input
required. If a more regular mesh is desired, the “Attempt Free Mapped Mesh” option can be
selected, which instructs the mesher to create a structured mesh where possible within the
free mesh. Alternatively, the “2D Mapped Mesh” command can be used to create a
structured mesh throughout the entire model.

Solid meshes can be generated in a number of ways. If the component to be meshed varies
consistently along a third dimension, then it can be meshed with solid elements generated
from the extrusion or revolution of a 2D mesh. In this case, the resulting mesh will be highly
structured. To make use of this approach, the “2D Dependent Mesh” command should be
selected. If the component is asymmetrical or has complicated features, automatic mesh
generation using the “3D Tetrahedral Mesh” command should be employed.

3.4.8 Loading Conditions and Constraints

Loading conditions are applied to a simulation model to represent the physical loading of a
solid/structure in terms of magnitude and orientation. Loading can be either static
(unchanging with time) or transient (changing with time) depending on the nature of the
actual problem. The types of loads that are commonly considered in computational solid
mechanics analysis include:

1. Concentrated Loads (point loads)
2. Distributed Loads (per unit length loads, constant or varying in space)
3. Pressure Loads (applied in the direction of the surface normal, constant or varying in
space)
4. Gravitational/Inertial Loads (weight of the structure or loading due to acceleration)
5. Contact Loads (imposed during contact with another body)
6. Moments

In addition to their direct application, loads can be indirectly applied through the
specification of an imposed displacement in the structure.

Constraints (boundary conditions) must be applied to a model to restrict its motion as a rigid
body through space (i.e. when the model moves in at least one way which does not
generate strain). A fully unconstrained model has the potential to slide and spin in and
about the x, y and z directions. Any one of these six free motions, known as rigid body
modes (RBMs), will result in a singular global stiffness matrix and an unsolvable problem.
Even if the problem being considered is one- or two-dimensional, the finite element solution
is undertaken with respect to three-dimensional space, so all six RBMs need to be
eliminated through the imposition of translational and rotational constraint. In other words,
always think in three dimensions when applying constraints.

3.4.9 Symmetry

In certain modelling scenarios, the geometry of a model and the loading conditions and
constraints applied to the model are symmetrical about one or more planes (i.e. the xy, yz
and/or xz planes). In these special cases, the physical volume of the model can be reduced
by a factor of 2ⁿ, where n is the number of planes of symmetry. The computational cost of
the solution can therefore be reduced dramatically by taking advantage of symmetry. This is
why you should always be on the lookout for opportunities to employ model symmetry in
computational analysis.

Mathematically, symmetry conditions are applied by defining specific translational and
rotational constraints to the nodes at the points, curves or surfaces where the symmetry
plane cuts through the model. A symmetry constraint is defined as follows: translational
displacements normal to the plane of symmetry and rotational displacements about the
axes lying in the plane of symmetry are fixed, whilst all other displacements are set as free.

In specifying a symmetric constraint, the structural characteristics of the other half, quarters
or eighths are mirrored about the plane/s of symmetry. But what is the physical significance
of applying a symmetry constraint? In reality, no displacement of material occurs across the
symmetry plane/s owing to symmetrical loading conditions and constraints. Try and
visualise this!

A special case of symmetry, known as axisymmetry, can be applied when analysing
rotationally symmetric structures that are subjected to symmetric loads and constraints. In
such cases, the resulting computational size of the axisymmetric model is only a fraction of
that of a fully three-dimensional model.

4. Computational Heat Transfer2

4.1 Analysis Considerations

The analysis of heat transfer problems can be categorised according to three general
problem characteristics:

a) Heat Transfer Mechanism
b) Time Dependence
c) Linearity

a) Heat Transfer Mechanism

The transfer of heat can occur in three ways:

1. Conduction – transfer of heat through a solid, liquid or gaseous medium via
molecular vibration.
2. Convection – transfer of heat between a solid and an adjacent fluid flow across a
boundary layer. Free and forced forms.
3. Radiation – transfer of heat between emitting and absorbing bodies via
electromagnetic radiation.

Obviously, the type/s of heat transfer characterising a problem will determine the solution
approach required. The propagation of heat via conduction can be modelled very effectively
using the finite element method. As such, NX Advanced Simulation will be used for
conduction modelling in this course.

The explicit modelling of convective heat transfer requires consideration of the fluid domain
to model the boundary layer, and the solid domain to model the heat sink or source.
Convective modelling is therefore generally undertaken using analysis codes with multi-
disciplinary capabilities based on either finite element or finite volume discretization.
Consequently, STAR-CCM+ will be used for convection modelling in this course.

Explicit radiation modelling is carried out using more specific numerical approaches and is
not considered in this course. Having said this however, the effects of convective and
radiative thermal loading can still be accommodated in a finite element conduction
modelling code by applying convection and radiation boundary conditions.

b) Time Dependence

As with structural analysis, problems of heat transfer can be independent of time (steady) or
dependent on time (transient). Steady analysis does not consider time as an independent
variable in the governing equations, whilst transient analysis does. A steady analysis is
undertaken when sufficient time has elapsed since the application of an unchanging thermal
load that the temperature distribution within the domain no longer changes with time.

2 Some of the material in this section was derived from the MSC SimXpert SMX124 Course Notes.

A steady analysis would be conducted, for example, to predict the temperature profile
within the insulation of a brick-firing kiln (why is this?). A transient analysis is undertaken in
instances where the domain temperature distribution is still developing shortly after
thermal loading is applied or changed. A transient analysis would be conducted, for
example, to estimate the unsteady temperature profile in an automotive brake disk during a
simulated braking event (why is this?).

c) Linearity

Heat transfer problems can be linear or nonlinear, requiring either linear or nonlinear
analysis approaches. Linearity/nonlinearity exists with respect to a variety of problem
characteristics, including:

1. Material – linear if thermal properties can be treated as temperature independent,
nonlinear if thermal properties depend on temperature. A nonlinear analysis is also
necessary if a material phase change occurs (when might this be the case?).
2. Time Dependence – linear if the analysis is steady, nonlinear if the analysis is
transient.
3. Loading and Boundary Conditions – nonlinear if conditions are dependent on
boundary temperature (i.e. convective & radiative loading).

4.2 The Governing Equation of Heat Conduction

In heat conduction problems, we are generally interested in establishing how heat flows
through a given domain. In particular, we want to quantify the variation of two indices with
space and (often) time:

1. Temperature
2. Heat Flux

One fundamental governing equation is used to describe the temperature field in the
domain: the heat conduction equation. This will now be expressed for a control volume of a
thermally homogeneous and isotropic medium (1):

ρc ∂T/∂t = div(k grad T) + Q

Here, ρ, c and k are the medium’s density, specific heat capacity and thermal conductivity,
respectively, T and t represent temperature and time, respectively, and Q is the heat
source/sink term. The above equation defines the conduction, storage and generation of
heat in the volume as a function of space and time. The conductivity and specific heat
capacity terms are considered to be temperature dependent. If however, these two terms
and the heat generation rate are independent of temperature, (1) reduces to (2):

(1/α) ∂T/∂t = ∇²T + Q/k

Here, α is the thermal diffusivity constant (α = k/ρc, obtained from ρ, c and k). In the absence of heat
generation within the medium, (2) becomes (3):

(1/α) ∂T/∂t = ∇²T

Finally, if temperature does not vary with time, i.e. a steady analysis is being undertaken, (3)
becomes the Laplace equation (4):

∇²T = 0

4.3 The Finite Element Representation of Heat Conduction

The finite element discretization procedure employed for modeling heat conduction is
analogous to that applied in solid mechanics. For the steady-state case (without internal
heat generation), the governing equation of conduction can be represented over the entire
geometric domain by the global finite element equation:

[K]{T} = {P}

In this equation, [K] is the heat conduction matrix (analogous to the stiffness matrix), {T} is
the temperature vector (analogous to the displacement vector), and {P} is the global applied
heat flow vector (analogous to the load vector). The assemblage of the global conduction
matrix is carried out on the basis of summing the individual elemental conduction matrices
(analogous to summing the elemental stiffness matrices).

The global conduction matrix [K] is dependent on material properties, element properties
and geometry. The global applied heat flow vector {P} is computed through the summation
of the applied heat flow vectors for each element. All heat flux loads that are imposed upon
the model using thermal loading conditions are included in {P}.

Whilst [K] and {P} are known quantities, we are ultimately interested in determining the
temperature at each point in the domain by solving the global equation for the unknown temperature vector
{T}. Any temperature boundary conditions (“thermal constraints”) prescribed to the model
will be entered into {T} as it is derived as the sum of the individual elemental temperature
vectors.
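A minimal numerical sketch of [K]{T} = {P} is given below: two one-dimensional conduction
elements are assembled into a global conduction matrix, a temperature is prescribed at one
end and a heat flow applied at the other, and the system is partitioned (one common way of
imposing a prescribed temperature) and solved for the unknown nodal temperatures. All values
are assumed purely for illustration:

import numpy as np

k, A, L = 50.0, 0.01, 0.5        # assumed: W/(m.K), m^2, element length in m
T1 = 300.0                       # prescribed temperature at node 1, K
P3 = 100.0                       # applied heat flow at node 3, W

# Elemental conduction matrix (analogous to the elemental stiffness matrix)
k_e = (k * A / L) * np.array([[1.0, -1.0],
                              [-1.0, 1.0]])

# Assemble the global conduction matrix [K] for nodes 1-2-3
K = np.zeros((3, 3))
for nodes in [(0, 1), (1, 2)]:
    for i_loc, i_glob in enumerate(nodes):
        for j_loc, j_glob in enumerate(nodes):
            K[i_glob, j_glob] += k_e[i_loc, j_loc]

# Partition into prescribed (node 1) and free (nodes 2, 3) temperatures,
# move the known temperature to the right-hand side, and solve.
free, fixed = [1, 2], [0]
P_free = np.array([0.0, P3]) - K[np.ix_(free, fixed)] @ np.array([T1])
T_free = np.linalg.solve(K[np.ix_(free, free)], P_free)
print(T_free)   # temperatures at nodes 2 and 3, K  ->  [400. 500.]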

4.4 Application of the Finite Element Method using NX Advanced Simulation

The following aspects relating to solid mechanics modeling in NX Advanced Simulation are
equivalently applicable in heat conduction modeling and require no further discussion:

1. File Types
2. Units
3. Coordinate Systems
4. Geometry Definition
5. Meshing Functions

The particulars of other aspects concerning heat conduction modeling will now be
discussed.

4.4.1 Material Specification

The number of material properties that need to be specified in a conduction model depends
on the time dependence of its solution:
1. Steady-State Analysis – thermal conductivity.
2. Transient Analysis – thermal conductivity, specific heat and density.

The thermal properties of a material may also be temperature dependent, particularly over
wide temperature ranges. NX Advanced Simulation allows for the temperature dependence
of thermal conductivity and specific heat to be specified, but not that of density.
Furthermore, thermal conduction within a material can also be directionally-dependent. As
in the case of material stiffness, thermal conductivity can be:

1. Isotropic (k)
2. Orthotropic (kx, ky, kz)
3. Anisotropic (kxx, kxy, kxz, kyy, kyz, kzz)

So-called ‘material specification’ is also required to define convection and radiation
boundary condition coefficients:

• Convection – heat transfer coefficient (can be a function of time and temperature).
• Radiation – surface absorptivity and surface emissivity (can be functions of
temperature).

4.4.2 Elements

The 1D, 2D and 3D conduction elements available in the Thermal module of NX Advanced
Simulation have essentially the same configurations as those available in the
Structures Module.

As in the case of the finite element modeling of solid mechanics problems, element
properties need to be defined to specify:

1. What type of element is required – are we trying to model a 3D, 2D or 1D domain?
2. Elemental geometry characteristics such as cross-sectional area (1D element) and
thickness (2D element).

The element property specification is used in conjunction with the material model
specification to determine the elemental conduction matrix. The selection of what
dimension of element to use in a model is based on how heat is anticipated to flow within
the model:

• 1D element – Heat flow is only in the direction of the centerline of the element, not
normal to the centerline of the element.
• 2D element – Heat flow is only in the plane of the element, not normal to the plane
of the element.
• 3D element – Heat flow is in all three directions of the element.
• Axisymmetric element – Heat flow is only in the radial or centerline direction of the
element, not in the circumferential direction.

As far as the type of element to use is concerned:

• The performance of linear elements is as good as that of quadratic/parabolic
elements in 2D and 3D conduction modeling.
• It is therefore recommended that linear elements are used unless the domain being
meshed features regions of high curvature, or the number of elements to be used
must be minimised.
• The CQUAD and CHEXA elements are most suitable for general meshing, whilst the
CTRIA and CTETRA/CPENTA elements are useful to apply in mesh transition regions.

4.4.3 Loading and Boundary Conditions

The thermal loading conditions available in NX Advanced Simulation include:

1. Heat Flux Load
   a) Normal Flux – Applied directly to model surface.
   b) Radiant Flux – Applied from a distant radiation source.
   c) Nodal Heat Load – Power input applied to a node or nodes.
2. Volumetric Heat Load – Internal heat generation within elements.

The thermal boundary conditions available in NX Advanced Simulation include:

1. Temperature – Constant or time-varying prescription of temperature to a set of
   boundary nodes.
2. MPC – Multipoint Constraint, constrains an arbitrary set of nodal temperatures.
3. Convection – Prescription of forced or free convective heat transfer along
   boundaries (can be a boundary condition or load).
4. Radiation – Prescription of radiative heat transfer along boundaries:
   a) Ambient (radiation to open space)
   b) Enclosure (radiation within a closed space)
5. Temperature Initialization – To perform nonlinear steady-state or transient analysis
   it is necessary to specify initialization temperatures:
   a) Steady-State Analysis – An estimated starting temperature must be defined
      at all nodes to give the iterative solver a temperature field to start solving
      with.
   b) Transient Analysis – A realistic distribution of initial temperatures at all nodes
      must be provided to represent the state from which the solution evolves.

4.4.4 Symmetry

The concept of symmetry can also be exploited in thermal modeling to reduce a model’s
computational size. Symmetrical boundary conditions can be incorporated into a model if its
geometry and the loading and boundary conditions applied to it are symmetrical about one
or more planes. This process is analogous to symmetry in solid mechanics modeling, where
material is prevented from moving across the plane/s of symmetry by constraining
displacements. In thermal modeling, symmetry boundary conditions are applied by simply
preventing the flow of heat across the surface cut by the symmetry plane, i.e. prescribing an
adiabatic boundary condition.

4.4.5 Solution Process

Since the majority of thermal conduction problems are nonlinear in nature, the discretized
equations are generally solved using an iterative approach, as opposed to directly solving
the finite element system of equations. Iteration is a procedure in which the final solution to
a problem is obtained by first guessing the solution, and in a series of corrective steps,
refining the solution until the difference between guess number i and number i-1, known as
the residual or error, decreases to within a specified value, known as the tolerance. The
phenomenon of a decreasing residual/error is known as convergence. The user is required
to specify tolerances related to the errors selected for assessment to define when the
solution has been completed.
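
As a rough illustration of this iterative procedure (a generic sketch only, not the actual algorithm
implemented in NX, and using hypothetical values), consider a 1D steady conduction problem with
prescribed end temperatures, solved by Jacobi-style iteration in Python:

import numpy as np

# Hypothetical 1D bar: fixed temperatures at both ends, uniform conductivity,
# no internal heat generation.
n_nodes = 21
T = np.full(n_nodes, 20.0)     # initial guess for the temperature field (deg C)
T[0], T[-1] = 100.0, 20.0      # prescribed boundary temperatures

tolerance = 1e-6
for iteration in range(10000):
    T_new = T.copy()
    # For steady 1D conduction, each interior node relaxes towards the
    # average of its two neighbours.
    T_new[1:-1] = 0.5 * (T[:-2] + T[2:])
    residual = np.max(np.abs(T_new - T))   # change between iterations i and i-1
    T = T_new
    if residual < tolerance:               # converged to within the tolerance
        print("Converged after", iteration + 1, "iterations")
        break

The residual here is simply the largest nodal temperature change between successive iterations;
once it falls within the specified tolerance, the solution is deemed to have converged.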

5. Computational Fluid Mechanics

5.1 Analysis Considerations

The analysis of fluid flows can be classified according to a number of flow characteristics
which have a direct impact on the numerical techniques required to solve a given problem:

a) Viscous Effects
b) Compressibility
c) Time Dependence
d) Laminar or Turbulent Flow

a) Viscous Effects

Computational fluid mechanics analyses can either be viscous or inviscid. Although all real
fluids are viscous, the degree to which fluid viscosity influences the overall nature of a
particular flow varies from problem to problem. Flows of highly viscous fluids (e.g. flow of
crude oil in pipes) and flows over rough surfaces (e.g. wind flow over urban terrain) will
result in the development of significant boundary layers. In such cases, a viscous analysis is
required where the viscous terms in the governing equations are accounted for.

Flows of low viscosity fluids (e.g. helium) and flows over smooth surfaces (e.g. polished
stainless steel) will result in thinner boundary layers. Depending on certain additional
factors, making an inviscid simplification under these conditions by leaving out the viscous
terms in the governing equations may provide an accurate enough solution at far less
computational expense. Remember however, that the physical scale of the problem must
be considered before choosing between a viscous or inviscid analysis regime. If the
thickness of the anticipated boundary layer is large in comparison to the size of the flow
domain, a viscous analysis will be necessary – even if the fluid has a low viscosity and flows
over smooth surfaces.

b) Compressibility

Computational fluid mechanics analyses can either be compressible or incompressible. A
compressible flow analysis is undertaken when a compressible fluid flowing within a domain
undergoes a significant change in density as it passes through the domain. Compressibility
effects become significant when a flow moves over or within a body at high speed (e.g. flow
around a passenger jet at cruising speed or flow within a rocket nozzle). Flows can generally
be classified as subsonic (0 < Mach Number < 0.7), transonic (0.7 < Mach Number < 1.3) or
supersonic (Mach number > 1.3).

Technically, the effects of compressibility are present at any flow speed but in reality, these
effects are negligible at low speeds. The general rule is that a compressible analysis regime
should be applied for flows of compressible fluids where the Mach Number > 0.3. This
means that density must be considered as a dependent variable in the governing equations,
leading to greater computational expense. An incompressible simplification can be made for
compressible fluids where Mach Number < 0.3 or when fluids are by nature incompressible
(e.g. liquids).
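
As a simple illustration of these rules of thumb (a sketch only; in practice the speed of sound
would be computed from the local gas state rather than assumed), a quick pre-analysis check in
Python might look like this:

def flow_regime(flow_speed, speed_of_sound):
    """Classify a flow by Mach number using the rules of thumb above."""
    mach = flow_speed / speed_of_sound
    if mach < 0.3:
        return f"Mach {mach:.2f}: incompressible analysis acceptable"
    if mach < 0.7:
        return f"Mach {mach:.2f}: subsonic, compressible analysis recommended"
    if mach < 1.3:
        return f"Mach {mach:.2f}: transonic, compressible analysis required"
    return f"Mach {mach:.2f}: supersonic, compressible analysis required"

# Example: an aircraft cruising at 250 m/s where the local speed of sound is ~295 m/s.
print(flow_regime(250.0, 295.0))   # transonic, compressible analysis required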

c) Time Dependence

Computational fluid mechanics analyses can either be steady (independent of time) or
unsteady (dependent on time). A steady analysis does not consider time as an independent
variable in the governing equations, whilst an unsteady analysis does. A steady analysis is
undertaken when a flow is defined by time-constant boundary conditions that result in an
unchanging flow field (e.g. flow of a stream over a weir).

An unsteady analysis is conducted when a flow’s boundary conditions are time-varying,
resulting in a changing flow field (e.g. the flow of air over a set of wind turbine blades during
a gust of wind). In addition to boundary conditions, unsteady analyses require initial
conditions which can sometimes be difficult to derive. Unsteady analysis regimes are
generally more expensive to implement than steady analysis regimes. In some instances
however, it can be more effective to compute a steady solution by applying an unsteady
analysis regime over a long period of time.

d) Laminar or Turbulent Flow

Flows can either be laminar (ordered state), turbulent (unordered state) or transitional
(transitioning between laminar and turbulent states). Computational fluid mechanics
analyses can therefore be classified as being either laminar or turbulent (as determined by
the Reynolds number of the flow). The physical characteristics of a flow are highly
dependent on this classification.

Turbulent flows generally result in notably different velocity profiles and higher heat
transfer coefficients, for example. Specific turbulence models that are incorporated into the
governing equations have been developed to represent the effects of flow disorder.
Conducting a turbulent analysis is generally more computationally expensive than
undertaking a laminar analysis. The topic of turbulence modelling will be discussed in
greater detail later…

5.2 The Governing Equations

The governing equations of fluid mechanics are based on three fundamental principles of
physical balance:

1. Conservation of Mass – yields the ‘continuity equation’.


2. Conservation of Momentum – yields the ‘momentum equation’.
3. Conservation of Energy – yields the ‘energy equation’.

The equations describing the conservation of fluid mass and momentum are collectively
referred to as the Navier-Stokes Equations. The above equations will now be detailed for the
most general case of an unsteady, three-dimensional, compressible, viscous flow of a
Newtonian fluid, in an infinitesimal volume with respect to Cartesian coordinates:

u v w 


The Continuity Equation    0
x y z t

The Momentum Equation: X-Direction

\[ \rho \frac{Du}{Dt} = \rho g_x - \frac{\partial p}{\partial x} + \frac{\partial}{\partial x}\left[ \mu \left( 2\frac{\partial u}{\partial x} - \frac{2}{3}\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} \right) \right) \right] + \frac{\partial}{\partial y}\left[ \mu \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right) \right] + \frac{\partial}{\partial z}\left[ \mu \left( \frac{\partial u}{\partial z} + \frac{\partial w}{\partial x} \right) \right] \]

The Momentum Equation: Y-Direction

\[ \rho \frac{Dv}{Dt} = \rho g_y - \frac{\partial p}{\partial y} + \frac{\partial}{\partial x}\left[ \mu \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right) \right] + \frac{\partial}{\partial y}\left[ \mu \left( 2\frac{\partial v}{\partial y} - \frac{2}{3}\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} \right) \right) \right] + \frac{\partial}{\partial z}\left[ \mu \left( \frac{\partial v}{\partial z} + \frac{\partial w}{\partial y} \right) \right] \]

The Momentum Equation: Z-Direction

\[ \rho \frac{Dw}{Dt} = \rho g_z - \frac{\partial p}{\partial z} + \frac{\partial}{\partial x}\left[ \mu \left( \frac{\partial u}{\partial z} + \frac{\partial w}{\partial x} \right) \right] + \frac{\partial}{\partial y}\left[ \mu \left( \frac{\partial v}{\partial z} + \frac{\partial w}{\partial y} \right) \right] + \frac{\partial}{\partial z}\left[ \mu \left( 2\frac{\partial w}{\partial z} - \frac{2}{3}\left( \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} \right) \right) \right] \]

In the above equations, u, v and w are the components of velocity in the x, y and z
directions, respectively, gx, gy and gz are the components of gravity in the x, y and z
directions, respectively, and µ is the fluid’s dynamic viscosity.

The continuity and momentum equations are the only equations needed to define the flow
of a fluid if the flow is incompressible or does not experience changes in internal energy (i.e.
no heat is added externally or generated internally). If the time dependent and viscous
terms of the above equations are omitted, and if density is set as constant, one obtains the
Euler equations for steady, inviscid, incompressible flow. If however the flow is
compressible and/or experiences changes in its internal energy, a third governing equation
known as the energy equation must be applied:

The Energy Equation

( i) iu  iv  iw   u v w    T    T    T 


     p                   Si
t x y z  x y z  x  x  y  y  z  z 

In the above, i, p, κ and T are the internal energy, pressure, thermal conductivity and
temperature of the fluid, respectively, whilst Φ is the viscous dissipation term and Si is a source term. If the
energy equation is applied, the system of governing equations (continuity, momentum &
energy) is not closed, i.e. we have more unknowns (ρ, i, u, v, w, T, p) than the five equations
we have above. To solve this problem, we need to introduce two equations of state, relating
two of the flow variables to others.

Equations of State

\[ p = p(\rho, T) = \rho R T \quad \text{(for an ideal gas)} \]

\[ i = i(\rho, T) = c_v T \quad \text{(for an ideal gas)} \]
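
As a minimal sketch of how these state relations close the system (using the standard specific gas
constant and specific heat at constant volume of air as illustrative values):

R_AIR = 287.05    # specific gas constant for air, J/(kg.K)
CV_AIR = 718.0    # specific heat at constant volume for air, J/(kg.K)

def pressure(rho, T, R=R_AIR):
    """Ideal gas equation of state: p = rho * R * T."""
    return rho * R * T

def internal_energy(T, cv=CV_AIR):
    """Calorically perfect gas: i = cv * T."""
    return cv * T

# Example: air at approximately sea-level conditions.
print(pressure(1.225, 288.15))        # ~101 kPa
print(internal_energy(288.15))        # ~207 kJ/kg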

But what if you want to model more complex flow phenomena such as mass transfer,
chemical reactions and multi-phase flow? These phenomena are not explicitly captured by
the governing equations shown above. Additional equations based on more advanced
mathematical models that govern such processes are therefore needed to ‘interlink’ with
the continuity, momentum and energy equations. It is also important to note that the most
general form of the continuity, momentum and energy equations are nonlinear PDEs, with
derivatives in both space and time – i.e. difficult to solve! Iterative solving techniques are
therefore applied to resolve these equations, which we’ll discuss in greater detail later…

We’ve talked about how useful making simplifications and assumptions in computational
models can be with respect to problems in solid mechanics and heat transfer. As CFD
models are traditionally larger and more complex systems to solve, being able to take
advantage of these shortcuts in CFD modelling is even more important! By implementing
simplifications, the governing equations can be reduced in size dramatically. Possible
simplifications include:

1. Eliminating the time dependence of the governing equations (for cases where the
   flow being modelled can be approximated as time-invariant) → Steady Flow
2. Setting density as constant (when dealing with low speed flows or flows that don’t
   experience notable changes in density) → Incompressible Flow
3. Dropping the viscous terms (when such effects aren’t significant or of interest) →
   Inviscid Flow
4. Reducing the dimension of the problem (2D, cyclic symmetry or axisymmetric
   simplifications in the case where the flow is symmetrical).

5.3 The Finite Volume Method in Brief

The finite volume method (FVM) is used by most commercial CFD codes. It is a robust,
efficient and reliable discretization technique that is suitable for use with complex domain
geometries. In the FVM, the computational domain is divided into a series of control
volumes (CVs) (these can be in a 1D, 2D or 3D sense). Flow properties are determined in an
average sense for the entire CV at a specific point in the CV, usually the central point or
centroid, although sometimes at the boundaries of the CV. As mentioned earlier, these
control volumes are often referred to as cells. The behaviour of the fluid flowing within the
CVs in a finite volume mesh is governed by the conservation laws (i.e. continuity,
momentum and energy equations) expressed in integral form. Let’s take a brief look at how
the method is applied…

For the case of mass conservation in steady flow through a control volume Ω bounded by a
continuous surface, we have:

\[ \int_{\Omega} \nabla \cdot \vec{F} \, d\Omega = 0 \]
Here, the flux vector, F, is the product of density and the u, v and w components of the
velocity vector. Using Gauss’s Divergence Theorem, the above equation can be expressed in
semi-discrete form as:

\[ \oint_{\partial\Omega} \vec{F} \cdot \hat{n} \, d\Gamma = 0 \]

This expression simply says that for steady conditions, the net mass flux across the discrete
boundaries (points, edges or faces) of a control volume is zero. In a one-dimensional steady
flow scenario, the mass flux through a control volume can be represented as follows:

Here, FL and FR are the mass fluxes passing through the left and right points of the control
volume, whilst FX is the net mass flux through the control volume. Applying the divergence
theorem in this case, we get:

\[ \int_{x_L}^{x_R} \frac{\partial F_x}{\partial x} \, dx = F_R - F_L = 0 \]

In a general sense, the flux term in the continuity equation is discretized as follows:

\[ \oint_{\partial\Omega} \vec{F} \cdot \hat{n} \, d\Gamma \approx \sum_{\text{boundaries}} \vec{F}_k^{\,*} \cdot \hat{n}_k \]

This equation states that the total net mass flux flowing through the control volume can be
approximated as the sum of the discrete normal fluxes flowing across each of the CV’s
boundaries. By working with the integral form of the governing equations, the FVM has an
important inherent characteristic (and strength!): it ensures that flow variables are
conserved across the CV. This property makes the FVM very effective at capturing
discontinuities in a flow (e.g. sonic shocks). Conservation is a characteristic that isn’t
naturally present in the general finite difference and finite element schemes.
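
To make this conservation property concrete, the following sketch (a simplified 1D example with
made-up face flux values, not a full flow solver) sums the net flux over each control volume; every
interior face flux appears once as an inflow and once as an outflow, so the cell-by-cell sums
telescope down to the difference between the two domain boundary fluxes:

# Mass fluxes evaluated at the faces of four adjacent 1D control volumes
# (illustrative values only, units of kg/s). Five faces bound four cells.
face_fluxes = [1.00, 1.02, 0.99, 1.01, 1.00]

# Net flux out of each cell = flux at its right face minus flux at its left face.
cell_net_flux = [face_fluxes[i + 1] - face_fluxes[i] for i in range(4)]

# Summing over all cells, the interior face fluxes cancel in pairs, leaving only
# the two boundary faces - the discrete statement of conservation.
total = sum(cell_net_flux)
print(cell_net_flux)                               # per-cell imbalance
print(total, face_fluxes[-1] - face_fluxes[0])     # both are zero to within round-off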

To derive values for flow variables on the boundaries of the control volume from the value
computed at the cell centroid, a variety of numerical interpolation schemes are available.
The following schemes can be specified for use in Star-CCM+:

1. First-Order Upwind – flow solution reached more easily, but with lower accuracy.

2. Second-Order Upwind – flow solution reached less easily than in the case of first-
order upwind, but with greater accuracy. Generally offers a good compromise.
3. Central Differencing – as accurate as second-order upwind but typically presents
solution stability problems. Useful in large eddy turbulence modelling.
4. Hybrid Second-Order Upwind/Central – second-order accuracy, useful in detached
eddy turbulence modelling.

The type of interpolation scheme you choose can profoundly impact the accuracy of the
overall solution. This is because each scheme has strengths and weaknesses, making them
work better in some modelling scenarios whilst worse in others. Each scheme also exhibits
unique numerical behaviour during the solution process; sometimes stable, sometimes not.
The ability to select the most appropriate scheme for a particular flow scenario is best
developed from experience.
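
As a rough sketch of what these interpolation choices mean for a single face lying between two
cell centroids (a schematic illustration only, not Star-CCM+’s internal implementation):

def face_value_first_order_upwind(phi_left, phi_right, velocity):
    """Take the value carried from the cell the flow is coming from (the upwind cell)."""
    return phi_left if velocity >= 0.0 else phi_right

def face_value_central(phi_left, phi_right):
    """Linear (central) interpolation midway between the two cell centroids."""
    return 0.5 * (phi_left + phi_right)

# Second-order upwind would additionally extrapolate from the upwind cell using its
# reconstructed gradient, improving accuracy at some cost to stability.

# Example: a transported scalar of 300 upstream and 320 downstream, flow left to right.
print(face_value_first_order_upwind(300.0, 320.0, velocity=2.0))   # 300.0
print(face_value_central(300.0, 320.0))                            # 310.0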

5.4 Computational Fluid Mechanics using Star-CCM+

5.4.1 Star-CCM+ File Types

In Star-CCM+, pre-processing, solving and post-processing activities are all undertaken using
the same program. This means that only one main working file needs to be dealt with: the
*.sim or simulation database file.

This file can therefore comprise anything from an empty template at the beginning of the
modelling process, to the fully meshed and solved model incorporating analysis results at
the end of it. Flow domain ‘surfaces’ can be imported into Star-CCM+ in a variety of
geometry file formats including Parasolid, STL and IGES. Surface and volume meshes in
various third-party formats can also be imported.

5.4.2 Units

Once again, it’s crucial that consistency of units is observed when defining the parameters
of a CFD model to avoid generating significant errors. Two primary unit systems are
available in Star-CCM+: Système International (SI) and the United States Customary System
(USCS). These built-in unit systems allow for the automatic conversion of units entered in
arbitrary form into the primary unit system form, ensuring basic consistency. For
convenience, custom forms of units can also be specified in Star-CCM+.

5.4.3 Coordinate Systems

Three coordinate system types are available in Star-CCM+:

1. Laboratory Coordinate System: the default global coordinate system in Cartesian form.
2. Local Coordinate System: user-defined to specify specific model parameters at a local
level, available in Cartesian, cylindrical and spherical forms.
3. Block-Mapped Coordinate System: user-defined to specify section surfaces in
periodic flow scenarios.

5.4.4 Geometry Definition

Here, the geometry that we are referring to is that of the volume occupied by the fluid lying
within the boundaries of the flow domain. In Star-CCM+, this geometry can be defined by
importing a CAD-based solid model, or by importing a ready-made mesh from a separate
meshing program. Note that when importing a solid model, Star-CCM+ will automatically
attempt to ‘surface mesh’ the exposed surfaces of the model with a mesh seeded by the
facets of the geometry. Once again, to optimise the solution efficiency of a CFD model, all
unnecessary detail in the flow domain geometry should be removed or simplified.

5.4.5 Mesh Generation

STAR-CCM+ employs two overall mesh classes: surface meshes and volume meshes.

Surface Meshes

• Positioned on the surface boundary of the flow domain.


• Aren’t meshes in the true sense of the word. They merely provide the framework
from which a volume mesh (the mesh actually discretizing the flow domain) can be
grown.
• Can be constructed with one mesh type only: a triangular mesh.
• Because the volume mesh is ‘grown’ from the surface mesh, the surface mesh
should be of the best possible quality.
• Ideally, a surface mesh should consist of near-equilateral cells and should feature
smooth cell density variation (a simple quality check of this kind is sketched below).
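
One simple way of quantifying how close a triangular face is to being equilateral (a common style
of quality metric in general, not necessarily the exact measure reported by Star-CCM+) is to
compare its area with that of an equilateral triangle of the same perimeter:

import math

def triangle_quality(a, b, c):
    """Return 1.0 for an equilateral triangle, approaching 0 for a sliver cell.

    Ratio of the triangle's area (Heron's formula) to the area of an
    equilateral triangle having the same perimeter.
    """
    s = 0.5 * (a + b + c)                                      # semi-perimeter
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    equilateral_area = math.sqrt(3.0) / 4.0 * ((a + b + c) / 3.0) ** 2
    return area / equilateral_area

print(triangle_quality(1.0, 1.0, 1.0))     # 1.0 (equilateral)
print(triangle_quality(1.0, 1.0, 1.99))    # ~0.13 (a sliver cell scores poorly)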

Volume Meshes

• Occupy the physical flow domain.


• Can be constructed using four mesh types: tetrahedral, polyhedral, trim and prism
layer meshes:

Tetrahedral Meshes

• Built from cells shaped as tetrahedra (4 faces per cell).


• Lowest sophistication, yielding lowest accuracy per given mesh density.
• Shortest generation time, but longest solution time.
• Not recommended for general use.

Polyhedral Meshes

• Built from cells that are shaped as arbitrary polyhedra (average of 14 cell faces).
• Efficient to build, more numerically stable and require fewer cells than tetrahedral
meshes for the same accuracy.
• Generally recommended if surface mesh is of sufficient quality.

Trim Meshes

• Built from hexahedral (6 cell faces) and trimmed hexahedral shaped cells (trimmed
cells at surface).
• Robust and efficient mesh with minimal cell skewness.
• Least demanding on surface quality.
• Recommended if surface quality is poor, or tight control on cell size variation is
needed.

Prism Layer Meshes

• Built from orthogonal prism shaped cells positioned next to wall boundaries.
• Should be included with the ‘core’ meshes described above, to accurately simulate
boundary layers.
• Sizing and distribution parameters are dependent on flow characteristics.

The procedure by which a flow domain is meshed in Star-CCM+ is as follows:

1. CAD model of flow domain is imported into STAR-CCM+ (in general, a separate CAD
package is used to create a solid model of the flow domain).
2. CAD model is automatically surface meshed as it is imported, using the individual
model geometry facets as a framework.
3. If mesh quality is unacceptable, the Surface Remesher tool is used to remesh surface
according to specified mesh parameters.

4. A volume mesh (core mesh and prism layer mesh if specified) is then grown into flow
domain from surface mesh.
5. Any volume mesh quality adjustments required are made by adjusting mesh
parameters and remeshing.

To minimise the number of cells required in a model, the golden rule is:

• High mesh density in flow regions likely to experience high variable gradients (e.g.
boundary layers and recirculation zones in the case of velocity, sonic shocks in the
case of pressure and density).
• Low mesh density in flow regions likely to experience low variable gradients.

The quality of a cell’s geometric composition (mesh quality) has a direct effect on the
accuracy and robustness of the flow variable solutions of that cell, and hence neighbouring
cells. Conditions that can yield meshes of poor quality include:

• Inappropriate locally-prescribed mesh densities.


• Sharp mesh density variations.
• Flow geometries with small included angles (such as gaps).

5.4.6 Definition of Flow Physics

In addition to discretizing the flow domain, the set of equations governing the anticipated
flow characteristics needs to be defined. This is carried out in Star-CCM+ by sequentially
specifying what physics models need to feature in the solution process and hence what
analysis regimes need to be considered. The following characteristics generally need to be
defined (a conceptual example capturing these choices is sketched after the list):

1. Space – is the flow 2D, 3D or axisymmetric?


2. Time – is the flow steady or unsteady?
3. Motion – are flow boundaries stationary or do they move?
4. Materials – is the continuum material single-component, multi-component (a
miscible mixture) or multi-phase (an immiscible mixture)? Is it a gas, liquid or solid?
What are its physical properties? Is it compressible or incompressible? If
compressible, is it an ideal gas or does it have more specific equations of state?
5. Viscous Regime – is the flow viscous or can it be approximated as inviscid? If viscous,
is it turbulent?
6. Turbulence – what turbulence model is most appropriate to use and what are the
turbulence properties (e.g. turbulent intensity, turbulent kinetic energy)?
7. Flow Coupling – do the continuity and momentum equations need to be directly
coupled during the solution?
8. Energy Coupling – does the energy equation need to be incorporated into the
solution? E.g. compressible flow, buoyant flow.
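
Purely as a conceptual illustration (this is not Star-CCM+’s scripting interface, just a plain
Python record of the decisions listed above), the physics selection for a simple low-speed
external aerodynamics case might be summarised as follows:

from dataclasses import dataclass
from typing import Optional

@dataclass
class PhysicsSelection:
    space: str                      # "2D", "3D" or "axisymmetric"
    time: str                       # "steady" or "unsteady"
    motion: str                     # "stationary" or "moving boundaries"
    material: str                   # e.g. "single-component gas (air)"
    compressible: bool
    viscous: bool
    turbulence_model: Optional[str]
    coupled_flow: bool
    solve_energy: bool

# Example: low-speed external flow over a car body (illustrative choices only).
car_aero = PhysicsSelection(
    space="3D", time="steady", motion="stationary",
    material="single-component gas (air)",
    compressible=False, viscous=True,
    turbulence_model="k-epsilon",
    coupled_flow=False,       # a segregated solver is adequate at low Mach number
    solve_energy=False,       # no heat transfer of interest in this case
)
print(car_aero)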

5.4.7 Turbulence Modelling

Turbulence modeling is a central theme of CFD, as most flows encountered in engineering
are turbulent in nature. As a general rule, laminar flow can be assumed to have become
turbulent once the flow’s Reynolds number exceeds a critical value for the configuration in
question (roughly Re > 2300 for fully developed pipe flow, and Re > 5 × 10^5 for boundary
layer flow over a flat plate).

No single type of turbulence ‘model’ can accurately describe all possible turbulent flow
scenarios. As such, different turbulence models have been developed to suit specific flow
regimes. To model turbulence properly therefore, the analyst needs to:

1. Have a general understanding of the flow conditions likely to be encountered in the
actual problem.
2. Appreciate the capabilities of the various turbulence models and their ranges of
application.

Failure to use the appropriate turbulence model, and/or realistic turbulence properties and
boundary conditions, can lead to very wrong results!

Four Notable Contemporary Turbulence Modelling Techniques:

1. Direct Numerical Simulation (DNS)


2. Reynolds-Averaged Navier-Stokes (RANS)
3. Large Eddy Simulation (LES)
4. Detached Eddy Simulation (DES)

Direct Numerical Simulation involves the direct capture of turbulent behaviour at an
extremely small scale. It therefore requires massive computing power and is not yet suitable
for application in general flow problems. STAR-CCM+ is able to implement RANS, LES and
DES, but only RANS will be discussed here as LES and DES are far more specialised
techniques.

RANS Turbulence Modelling:

• Oldest, most widely applicable technique (at present).


• Especially attractive because of its moderate computational resource requirements.
• An approximate approach that works by decomposing the Navier-Stokes equations
describing the instantaneous velocity and pressure fields, into mean and fluctuating
components.
• The fluctuating components are derived as a tensor quantity known as the Reynolds
stress tensor.

• A separate turbulence model is used to statistically compute the components of this
Reynolds stress tensor, and hence mathematically close the system of governing
equations (the decomposition is sketched below).
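
As a brief sketch of what this decomposition looks like for a velocity component (standard
Reynolds averaging, written in index notation):

\[ u_i = \bar{u}_i + u_i', \qquad \overline{u_i'} = 0 \]

Substituting this into the Navier-Stokes equations and averaging introduces additional terms of
the form \( -\rho \, \overline{u_i' u_j'} \), the Reynolds stresses, which the selected turbulence
model must approximate in order to close the averaged equations.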

The Four Classes of RANS Models Available in Star-CCM+:

1. Spalart-Allmaras – good choice where little surface separation of the flow occurs,
particularly in external flows (e.g. flow over a low angle of attack wing). Not suitable
for flows exhibiting recirculation (particularly with heat transfer) or natural
convection.
2. K-Epsilon – provides a good compromise between robustness, computational cost
and accuracy. Well suited to industrial-type applications that may feature flow
recirculation, with or without heat transfer.
3. K-Omega – recommended as an alternative to Spalart-Allmaras, finding particular
application in the aerospace industry.
4. Reynolds Stress Transport – the most complex and computationally expensive.
Recommended for situations where the turbulence is strongly anisotropic (e.g.
swirling flow in a cyclone separator).

Each of the models above is itself an equation or system of equations. As such, turbulence
properties and boundary conditions of some sort must be specified in each case, to allow the
turbulence model, and hence overall model, to be solved.

5.4.8 Initial and Boundary Conditions

Initial and boundary conditions are used to mathematically represent the way in which the
flow being modelled interacts with the universe around it. Initial conditions specify the value
of fluid variables throughout the flow domain at the starting point of a simulation (beyond
the scope of this course). Boundary conditions define the value of fluid variables at the
boundaries of the flow domain (e.g. closed boundary form - pipe wall, open boundary form –
rocket nozzle outlet), during the entire course of the simulation.

In unsteady flows, both initial and boundary conditions are mathematically required. In
steady flows only boundary conditions are strictly necessary, but approximate ‘initialization’
conditions can be used to aid the solution of complex steady flow problems.

The proper specification of boundary conditions can be tricky, but is a crucial element of
successful CFD modelling. Boundary conditions can either be under- or over-specified,
leading to unrealistic models and/or solution failures. E.g. specification of different velocities at
the inlet and outlet of a uniform pipe for incompressible flow. As a general rule, a set of
boundary conditions is well posed if they represent a configuration that can be physically
recreated in a fluids laboratory. So in general, boundary conditions:

1. must not violate the laws of conservation,


2. must be as physically representative of reality as possible,
3. must directly or indirectly prescribe a value for each flow variable being solved for at
some point in the domain.

Closed Boundary Conditions:

• Act to physically confine flow.


• Most common: ‘wall’ type, where flow through the confining surface is disallowed.
• Either specified as slip (in the case of inviscid analyses) or no-slip (i.e. velocity at the
wall = 0, in the case of viscous flow analyses).

Open Boundary Conditions:

• Allow for fluid to enter or leave the flow domain.


• Can be defined in a variety of ways, but are more complicated to define and are
strongly dependent on the nature of flow and geometry domain.

The boundary condition options offered by Star-CCM+ are summarised below, listed as
Type – Location of Application – Imposed Conditions – Comments:

• Velocity Inlet – Domain Inlets – Imposed: user-specified velocity vector and static
temperature. Comments: static pressure extrapolated from adjacent cells.

• Symmetry Plane – Planes of Flow Symmetry – Imposed: shear stress and heat flux at the
symmetry face set to zero. Comments: static pressure and static temperature extrapolated
from adjacent cells.

• Wall (Slip) – Physical Surfaces – Imposed: zero fluid flux through the wall. Comments:
interface velocity vector, static pressure and static temperature extrapolated from adjacent
cells.

• Wall (Non-Slip) – Physical Surfaces – Imposed: zero fluid flux through the wall, interface
velocity vector set to zero (or to the velocity of the wall), user-specified static temperature
or heat flux. Comments: static pressure extrapolated from adjacent cells.

• Stagnation Inlet – Domain Inlets – Imposed: user-specified total pressure, total
temperature and velocity direction vector. Comments: velocity magnitude and static
temperature computed algebraically, static pressure extrapolated from adjacent cells.

• Pressure Outlet – Domain Outlets/Inlets – Imposed: user-specified static pressure.
Comments: velocity vector and static temperature extrapolated from adjacent cells.

• Mass Flow Inlet – Domain Inlets – Imposed: user-specified mass flow rate and flow angle.
Comments: static pressure extrapolated from adjacent cells, static temperature computed
algebraically.

• Free Stream – External Flow Boundaries – Imposed: dependent on characteristics of flow.
Comments: dependent on characteristics of flow.

• Flow-Split Outlet – Split Outlets – Imposed: user-specified total mass flow rate and split
fraction. Comments: static pressure and static temperature extrapolated from adjacent cells.

A note on symmetry: Just as in solid mechanics and heat transfer problems, instances of
symmetry are often encountered in fluid flows. In such cases, the flow in the entire domain
is ‘repeated’ in each of the imaginary ‘sub-domains’ formed by the planes of symmetry. By
modelling just one of these flow sub-domains, the flow within the entire domain can be
solved for by a much smaller, and more computationally efficient model. Once again, the
orientation of symmetry planes can be established on the basis of any geometric and
boundary condition symmetry that exists in a flow scenario.

The solution obtained using a segmented model with symmetry boundary conditions would
be identical to that obtained using a whole model with a mesh generated by mirroring the
mesh of the segmented model about the symmetry planes. A symmetry plane is treated as a
boundary having zero shear stress at its surface (i.e. non-zero tangential velocity component
allowed). In addition, no mass flux and no heat flux is permitted across a symmetry
boundary (i.e. zero normal velocity and heat flux components).

5.4.9 Solution Process

In engineering flow problems, the discretized system of equations governing the flow is
generally nonlinear. To be solved numerically, these equations need to be linearised.
Iterative solving techniques are normally used to solve the linearised form of the equations
as opposed to direct techniques. This is because iterative techniques are more efficient at
solving large systems of equations, such as those encountered in CFD analysis.

The overall solution process is governed by a computer program known as a solver. The
choice of what solver to use is based upon the characteristics of the flow being modeled.
Star-CCM+ offers two general types of solvers, each based on fundamentally different flow
models:

1. Segregated Flow Solver

• Solves the governing equations separately in an uncoupled fashion.


• The continuity and momentum equations are synthetically linked using the SIMPLE
pressure-corrector algorithm, which in turn controls additional velocity and pressure
solvers.
• Suitable for solving flows that exhibit incompressibility or mild compressibility
(where slight errors are allowable). E.g. flow around a submarine, flow through an
air-conditioning duct.

2. Coupled Flow Solvers

• These solvers solve the directly coupled, unsteady form of the governing equations
using a time-marching approach.
• Steady solutions are derived by driving the unsteady form of the governing
equations to a steady state.
• Either a coupled implicit solver (implicit integration) or a coupled explicit solver
(explicit integration) can be used.
• In comparison to explicit integration, implicit integration yields a solution more
quickly but at the cost of requiring significantly more memory.

• For solution accuracy, a coupled flow solver must be used (in conjunction with the
coupled energy model) in instances where the flow is strongly compressible or
where changes to the fluid’s internal energy occur. E.g. flow around the space
shuttle, flow within a steam turbine, convection (free and forced).

The iterative solution process works by sequentially minimizing the initial error incurred
when the values of the flow variables are guessed at the start of the solution, either
automatically by the software or manually by the user (i.e. initialization values). This solving
process inherently generates an iterative error known as a ‘residual’. To guide the solution
process, the residual is compared to a desired degree of error known as the ‘tolerance’ after
each iteration cycle takes place. The phenomenon of the residual approaching the specified
tolerance value is known as convergence, and once it reaches the value, the solution is said
to have converged. If instead of becoming smaller and smaller, the residual continues to
grow (over an extended period of time) a solution cannot be obtained, and the solution
process is known to have diverged. The reduction of residuals in a converging Star-CCM+
simulation is demonstrated by the following plot:

The general requirement for convergence is that the residuals must be reduced by an order
of magnitude or more during iteration. HOWEVER, a residual reaching its tolerance value is
in itself not a true indication that a solution has been reached. Even if tolerance values are
reached, the actual flow solution may still be in the process of reaching a state of
equilibrium.

It is therefore just as important that key engineering metrics related to the simulation are
shown to reach a state of steadiness as iterations progress. E.g. surface averaged mass flow
rate, heat transfer coefficient, lift coefficient, surface averaged temperature. In Star-CCM+,
such parameters can be evaluated during the solution process by setting up ‘monitors’,
which graphically trace the solutions of these metrics during the simulation run. Only once
both of these criteria have been met is it safe to conclude that a solution has been reached.
As a demonstration of the assessment of such a metric, the stabilization of an aerofoil’s lift
coefficient during the course of a Star-CCM+ simulation is shown below:
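
In addition to inspecting such plots, a simple programmatic version of this steadiness check
(a generic sketch with made-up monitor values, not a built-in Star-CCM+ feature) might declare
a monitored quantity steady once its variation over the last N iterations is a small fraction
of its mean:

def is_steady(monitor_history, window=50, relative_tolerance=1e-3):
    """True if the monitored quantity has stopped changing appreciably."""
    if len(monitor_history) < window:
        return False
    recent = monitor_history[-window:]
    mean = sum(recent) / window
    spread = max(recent) - min(recent)
    return abs(spread) <= relative_tolerance * abs(mean)

# Example: a lift coefficient monitor that levels off around 0.82 after a transient.
history = [0.5 + 0.32 * (1 - 0.95 ** i) for i in range(400)]
print(is_steady(history))   # True once the exponential transient has died out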

In complex flow cases, convergence can take a very long time to achieve, which is
inconvenient and often very costly. To some extent, the convergence of flow solutions can
be accelerated by:

1. Specifying ‘bulk’ initial conditions that approximate as closely as is reasonably
possible the final solution of the simulation (i.e. initialization conditions).
2. Adjusting parameters known as relaxation factors that are used by the solver to
condition the solution process.
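
As a brief sketch of what a relaxation factor does (a generic illustration rather than the
specific solver settings used by Star-CCM+): after each iteration the newly computed value of a
flow variable is blended with its previous value, which damps oscillations at the cost of slower
convergence when the factor is small.

def under_relax(old_value, provisional_value, relaxation_factor):
    """Blend the newly computed value with the previous iteration's value.

    relaxation_factor = 1.0 accepts the new value in full; smaller values make
    the update more cautious (more stable, but slower to converge).
    """
    return old_value + relaxation_factor * (provisional_value - old_value)

# Example: a cell pressure was 101000 Pa and the solver proposes 101400 Pa.
print(under_relax(101000.0, 101400.0, relaxation_factor=0.3))   # 101120.0 Pa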

5.4.10 The Importance of Dimensionless Numbers

Dimensionless numbers are useful to use in CFD for determining what flow characteristics
and boundary conditions should be accounted for in a prospective model, by making use of
mean flow parameters which may be known or can be roughly calculated. Examples include:

1. Reynolds Number

\[ Re = \frac{\rho U L}{\mu} = \frac{\text{Inertial Forces}}{\text{Viscous Forces}} \]

• Indicates whether the flow is laminar, in transition or turbulent.

2. Mach Number

\[ M = \frac{U}{c} = \frac{\text{Mean Flow Speed}}{\text{Speed of Sound in Flow Medium}} \]

• If M > 0.3, then model needs to account for flow compressibility.


• If M < 0.3, but changes in fluid density are expected, model flow as compressible.
However, if no changes in fluid density are expected, model flow as incompressible.

3. Biot Number

\[ Bi = \frac{h L}{\kappa} = \frac{\text{Resistance of Conduction}}{\text{Resistance of Convection}} \]

• If Bi >> 1, wall side thermal resistance is large, therefore boundary can be treated as
adiabatic.
• If Bi << 1, wall side thermal resistance is small, therefore conduction into boundary
walls should be modelled.
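
As a short sketch of how these numbers might be estimated before setting up a model (using
illustrative property values for water flowing in a heated steel pipe, with an assumed
convection coefficient), a quick pre-analysis calculation could be:

def reynolds_number(density, speed, length, dynamic_viscosity):
    """Re = rho * U * L / mu (inertial forces / viscous forces)."""
    return density * speed * length / dynamic_viscosity

def biot_number(heat_transfer_coeff, wall_thickness, wall_conductivity):
    """Bi = h * L / k (conduction resistance / convection resistance)."""
    return heat_transfer_coeff * wall_thickness / wall_conductivity

# Illustrative values: water at ~20 deg C flowing at 1 m/s in a 50 mm pipe with a
# 3 mm steel wall; h = 2000 W/(m^2.K) is an assumed convection coefficient.
Re = reynolds_number(density=998.0, speed=1.0, length=0.05,
                     dynamic_viscosity=1.0e-3)
Bi = biot_number(heat_transfer_coeff=2000.0, wall_thickness=0.003,
                 wall_conductivity=45.0)
print(f"Re = {Re:.0f}  -> turbulent (well above ~2300 for pipe flow)")
print(f"Bi = {Bi:.2f}  -> small, so conduction into the wall is worth modelling")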

