
Appendix O

The Origins of the Finite Element Method

TABLE OF CONTENTS

O.1 Introduction
O.2 Who Invented Finite Elements?
    O.2.1 G1: The Pioneers
    O.2.2 G2: The Golden Age
    O.2.3 G3: Consolidation
    O.2.4 G4: Back to Basics
O.3 Precursors
O.4 Clough: The Right Idea at the Right Time
O.5 Turner: On Flutter Prediction as Motivator for FEM


(Note: for the cited references, go to Appendix R.)

O.1. Introduction

This Appendix summarizes the history of structural finite elements in the early days. It functions
as a hub for chapter-dispersed historical references.
For exposition convenience, structural finitelementology may be divided into four generations
that span 10 to 15 years each. There are no sharp intergenerational breaks, but a noticeable change
of emphasis. The following summary does not cover the conjoint evolution of Matrix Structural
Analysis into the Direct Stiffness Method from 1934 through 1970. This was the subject of a
separate essay [256], which is also reproduced in Appendix H.
The story ends at around 1990, at which point FEM becomes an enabling commodity labeled as
mature by US funding agencies. Specialized variants, however, have continued to evolve since.

O.2. Who Invented Finite Elements?

Not just one individual, as this historical sketch will make clear. But if the question is tweaked to
"who created the FEM in everyday use?", there is no question in the writer's mind: M. Jonathan (Jon)
Turner at Boeing over the period 1952–1964. He generalized and perfected the Direct Stiffness
Method (DSM), and got Boeing to commit resources to it while other aerospace companies (Douglas,
Lockheed, Rockwell, . . .) were mired in the Force Method swamp. He oversaw the development
of the first continuum-based finite elements, the consequent creation and expansion of the DSM,
and its first application to nonlinear problems. In addition to Turner, major contributors to current
practice include: B. M. Irons, inventor of isoparametric models, shape functions, the patch test
and frontal solvers; R. J. Melosh, who recognized the Rayleigh-Ritz link and systematized the
variational derivation of stiffness elements; and E. L. Wilson, who developed the first open source
(and widely imitated and distributed) FEM and matrix software.
All of these pioneers were in the aerospace industry at least during part of their careers. That is
no accident. FEM is the confluence of three ingredients, one of which is digital computation. And
only large industrial companies (as well as some government agencies, notably those in the defense
and spy business) were able to afford mainframe computers during the 1950s.1
Who were the popularizers? Four academicians: J. H. Argyris, R. W. Clough, H. C. Martin, and
O. C. Zienkiewicz are largely responsible for the technology transfer from the aerospace industry
to a wider range of engineering applications during the 1950s and 1960s. The first three learned the
method from Turner directly or indirectly. As a consultant to Boeing in the early 1950s, Argyris, a
Force Method expert then at Imperial College, received reports from Turner's group, and wove
the material into his influential 1954 serial [26]. To Argyris goes the credit of being the first
to construct a displacement-assumed continuum element [26, p. 62].
Clough and Martin, then junior professors at U.C. Berkeley and U. Washington, respectively,
spent faculty internship summers at Turner's group during 1952–53. The result of this seminal
collaboration was a celebrated paper [834], widely considered the start of the present FEM. Clough
baptized the method in 1960 [145] and went on to form at Berkeley the first research group to

1 A 1950s mainframe cost the equivalent of $200M today (2017) and required dedicated and expensive infrastructure.


propel the idea into Civil Engineering applications.2 Olek Zienkiewicz, originally an expert in
finite difference methods who learned the trade from Southwell, was convinced in 1958 by Clough
to try FEM. He went on to write the first textbook on the subject [902] and to organize another
important Civil Engineering research group in the University of Wales at Swansea.
O.2.1. G1: The Pioneers

The 1956 paper by Turner, Clough, Martin and Topp [834], henceforth abbreviated to TCMT, is
recognized as the start of the current FEM, as used in the overwhelming majority of commercial
codes. Along with Argyris' serial [26] they prototype the first generation, which spans 1950
through 1962. A panoramic picture of this period is available in two textbooks [637,665] and a
book-formatted survey report [311]. Przemieniecki's text is still available through Dover, whereas
Pestel-Leckie's is out of print. The survey by Gallagher, confined to the aerospace industry, was
influential at the time but is now difficult to access outside libraries.
The pioneers were structural engineers, schooled in classical mechanics. They followed a century
of tradition in regarding structural elements as devices to transmit forces. This "element as force
transducer" outlook was the standard view in pre-computer structural analysis. It explains the use of stress
flux assumptions to derive stiffness equations in TCMT. Element developers worked in, or interacted
closely with, the aircraft industry. (As noted previously, only large aerospace companies were then
able to afford mainframe computers.) Accordingly they focused on thin structures built up with bars,
ribs, spars, stiffeners and panels. Although the Classical Force Method dominated stress analysis
during the 1950s [256], stiffness methods were kept alive by use in dynamics and vibration. It is
no coincidence that Turner was a world-class expert in aeroelasticity.
One interesting fact from those early days: the 1952-53 Boeing team that created FEM/DSM
was interested in vibration and flutter predictions, not statics; see O.5 for the motivation. Not an
accident: in aerospace vehicle design, structural dynamics (vibrations, transients, aeroelasticity,
...) is crucial whereas statics is secondary. Civil engineering is the opposite: static analysis is
paramount except for extreme events (hurricanes, earthquakes, ...). When Ray Clough tried to
sell FEM back at Berkeley, his Civil colleagues doubted that it would be competitive against the
century-old Force Method, in which statics is bread and butter. The proof that FEM could hold its
own came in the crack analysis of the Norfork Dam, as narrated in Chapter 7.
O.2.2. G2: The Golden Age

The next period spans the golden age of FEM: 1962–1972. This is the variational generation.
Melosh showed [535] that conforming displacement models are a form of Rayleigh-Ritz based
on the minimum potential energy principle. This influential paper, based on his thesis under
Harold Martin [534], marks the confluence of three lines of research: Argyris' dual formulation of
energy methods [26], the DSM of Turner [835,837], and early ideas of interelement compatibility
as a basis for error bounding and convergence [295,534]. G1 workers thought of finite elements
as idealizations of structural components. From 1962 onward a two-step interpretation emerges:
discrete elements approximate continuum models, which in turn approximate real structures.
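In present-day notation, Melosh's Rayleigh-Ritz link can be sketched as follows (a schematic restatement, not his original derivation). A conforming displacement model renders stationary the total potential energy

$$
\Pi[\mathbf{u}] = \tfrac{1}{2}\int_{V} \boldsymbol{\varepsilon}^{T}\mathbf{E}\,\boldsymbol{\varepsilon}\,dV
- \int_{V} \mathbf{u}^{T}\mathbf{b}\,dV - \int_{S_t} \mathbf{u}^{T}\hat{\mathbf{t}}\,dS ,
$$

over piecewise-polynomial trial functions. Substituting the element interpolation $\mathbf{u}=\mathbf{N}\,\mathbf{u}^{e}$, $\boldsymbol{\varepsilon}=\mathbf{B}\,\mathbf{u}^{e}$ and setting $\partial\Pi/\partial\mathbf{u}=\mathbf{0}$ yields the stiffness equations $\mathbf{K}\,\mathbf{u}=\mathbf{f}$, with element contributions

$$
\mathbf{K}^{e}=\int_{V^{e}}\mathbf{B}^{T}\mathbf{E}\,\mathbf{B}\,dV, \qquad
\mathbf{f}^{e}=\int_{V^{e}}\mathbf{N}^{T}\mathbf{b}\,dV+\int_{S_t^{e}}\mathbf{N}^{T}\hat{\mathbf{t}}\,dS .
$$

Interelement conformity makes the assembled trial space admissible, which is what turns the discretization into a genuine Rayleigh-Ritz process and underpins the error-bounding and convergence results of [295,534].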

2 For Clough's personal account of the Boeing team's work, see [154,155]. Extracts are quoted in O.4.


By the early 1960s FEM begins to expand into Civil Engineering through Clough's Boeing-Berkeley
connection [153,155] and had been baptized [145,147,153]. Reading Fraeijs de Veubeke's famous
article [296] side by side with TCMT [834] one can sense the ongoing change in perspective
opened up by the variational framework. The first book devoted to FEM appears in 1967 [902].
Applications to nonstructural problems had started in 1965 [901], and were treated in some depth
in the textbook by Martin and Carey [519]. Applications to nonlinear problems began with the 1960
paper by Turner et al. [836] and expanded during the 1960s; see the historical summary in Chapter
1 of [274].
From 1962 onwards the displacement method dominates. This was given a big boost by the
invention of the isoparametric element formulation and related tools (numerical integration, body-
fitted natural coordinates, shape functions, patch test) by Irons and coworkers [434,904,437]. Low
order displacement models often exhibit disappointing performance. Thus there was a frenzy to
develop higher order elements. Other variational formulations, notably hybrids [639,645], mixed
[389,797] and equilibrium models [296] emerged.
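The isoparametric idea, in brief (a present-day summary, not the notation of [434]): the same shape functions interpolate both the geometry and the displacements,

$$
\mathbf{x}=\sum_{i} N_i(\xi,\eta)\,\mathbf{x}_i, \qquad
\mathbf{u}=\sum_{i} N_i(\xi,\eta)\,\mathbf{u}_i ,
$$

so elements may be body-fitted to curved boundaries, with the stiffness integrals evaluated by Gauss quadrature in the natural coordinates $(\xi,\eta)$.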
The early distribution of open-source FEM software was initiated by the practice of the SESM
Berkeley group of freely distributing code used in doctoral theses and advanced courses, starting
with Wilson's PSI program [879]. (For obvious reasons, code developed at aerospace companies
and software houses was deemed proprietary.) Part of the baggage of exiting SESM Berkeley
doctoral students were boxes of punched cards (magnetic tapes after 1970) that they took along
with their diplomas: an effective way to propagate FEM into a conservative Civil Engineering
community. Those Fortran IV vintage codes still survive in some distant lands.
G2 can be viewed as closed by the monograph of Strang and Fix [772], the first book to focus on
the mathematical foundations.
O.2.3. G3: Consolidation
The post-Vietnam economic doldrums are mirrored during this post-1972 period. Gone is the
youthful exuberance of the golden age. This is consolidation time. Substantial effort is put into
improving the stock of G2 displacement elements by tools initially labeled "variational crimes"
[771], but later justified. Textbooks by Hughes [424] and Bathe [55,58] reflect the technology
of this period. Hybrid and mixed formulations record steady progress [42]. Assumed strain
formulations appear [503]. A booming activity in error estimation and mesh adaptivity is fostered
by better understanding of the mathematical foundations; see, e.g., [788].
Commercial FEM codes gradually gain importance. They provide a reality check on what works in
the real world and what doesn't. By the mid-1980s there was gathering evidence that complex and
high order elements were commercial flops. Exotic gadgetry interwoven amidst millions of lines
of code easily breaks down in new releases. Complexity is particularly dangerous in nonlinear and
dynamic analyses conducted by novice users. A trend back toward simplicity starts [506,509].
O.2.4. G4: Back to Basics
The fourth generation begins by the early 1980s. More approaches come on the scene, notably the
Free Formulation [88,92], orthogonal hourglass control [283], Assumed Natural Strain methods
[61,765,620], stress hybrid models [639,640,641,642,643,644,645], as well as variants and
derivatives of those approaches: ANDES [243,542], EAS [744] and others. Although technically
diverse the G4 approaches share two common objectives:


(i) Elements must fit into DSM-based programs, since those comprise the vast majority of production
codes, commercial or otherwise.
(ii) Elements are kept simple but should provide answers of engineering accuracy with relatively
coarse meshes. These were collectively labeled "high performance elements" in 1989 [237].
Two more trends since the mid 1980s can be noted: increased abstraction on the mathematical
side,3 and canned recipes for running commercial software on the physical side.
"Things are always at their best in the beginning," said Pascal. Indeed. By now FEM looks like
an aggregate of largely disconnected methods and recipes. The blame should not be placed on the
method itself, but on the community split noted in the book Preface.

O.3. Precursors

As used today, FEM represents the confluence of three ingredients: Matrix Structural Analysis
(MSA), variational approximation theory, and the digital computer. The tres amigos came together
in the early 1950s. The reader should not think, however, that they simultaneously appeared on the
dining table as an alchemic pizza. MSA came on the scene in the early 1930s when flutter-prone
monoplanes and desk calculators became popular, as narrated in Appendix H. And variational
approximation schemes akin to those of modern FEM were proposed well before digital computers.
Some examples, in rough chronological order.
The historical sketch of [519] says that "Archimedes used finite elements in determining
the volume of solids." The alleged linkage is tenuous. Indeed he calculated areas, lengths
and volumes of geometrical objects by dividing them into simpler ones and adding their
contributions, passing to the limit as necessary. Where does variational approximation
come in? Well, one may argue that the volume (area, length) measure of an object is a scalar
functional of its geometry. Transmute "measure" into "energy" and "simpler objects" into
"elements" and you capture one of the FEM tenets: the energy of the system is the sum of
element energies. But for Archimedes to reach modern FEM "long is the way, and hard,"
since physical energy calculations require derivatives, and Calculus would not be invented for
another 20 centuries.
In his studies leading to the creation of variational calculus, Euler divided the interval of
definition of a one-dimensional functional into finite intervals and assumed a linear variation
over each, defined by end values; see [478, p. 53]. Passing to the limit he obtained what is now
called the Euler-Lagrange differential equation of variational calculus. Thus Euler deserves
credit for being the first to use piecewise linear functions with discontinuous derivatives at
nodes to produce, out of the hat, an ODE with second derivatives. He did not use those
functions, however, to obtain an approximate value of the functional.4
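In present-day symbols (a reconstruction, not Euler's own notation), his construction reads

$$
J[u]=\int_{a}^{b} F(x,u,u')\,dx \;\approx\; \sum_{i=0}^{n-1} F\!\left(x_i,\,u_i,\,\frac{u_{i+1}-u_i}{h}\right) h,
\qquad h=\frac{b-a}{n},
$$

with $u$ taken linear over each subinterval. Requiring $\partial J/\partial u_i=0$ at each interior node and letting $h\to 0$ yields the Euler-Lagrange equation

$$
\frac{\partial F}{\partial u}-\frac{d}{dx}\,\frac{\partial F}{\partial u'}=0 .
$$

Keeping $h$ finite and solving the stationarity conditions for the nodal values $u_i$ instead would have been the FEM-style calculation lamented in footnote 4.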
In the early 1940s Hrennikoff, then a professor at MIT, developed his "lattice analogy" to
analyze flat plates in plane stress and bending [412]. The continuum model was replaced
by a lattice of fictitious bars and/or beams. The analogy only worked correctly for some

3 "If you go too far up, abstraction-wise, you run out of oxygen." (Joel Spolsky)
4 That would have predated the invention of direct variational methods (Rayleigh-Ritz) by over one century, while also
representing the first FEM-style calculation, preceding computers by two centuries. A sorry miss indeed.


lattice configurations (such as equilateral triangles) and specific material properties. But it did
influence followers before the real FEM arrived. In particular, Ray Clough, who as an MIT
alumnus was familiar with Hrennikoff's work, tried this approach in his efforts to model a
delta wing during the summer of 1952, without success; see the narrative in O.4.
Also in the early 1940s Courant wrote an expository article [164] advocating the variational
treatment of partial differential equations. The Appendix of this article contains the first FEM-
style calculations on a triangular net for determining the torsional stiffness of a hollow shaft.
He used piecewise linear interpolation over each triangle as Rayleigh-Ritz trial functions,
and called the idea "generalized finite differences", a most unfortunate choice for selling
purposes. The proposal was not followed up by the math community, which rediscovered
Courant's contribution 30 years later and consecrated him as a FEM pioneer.
An approach similar to Courant's was continued by Synge and Prager in a functional analysis
context [660] and further developed in a book [786] as the hypercircle method.5
The seminal paper by Turner et al. [834] cites two immediate DSM precursors, both dated 1953,
by Levy [488] and Schuerch [729]. (Only the former is available as a journal article; both have
delta wings in the title.) From [834], p. 806: "In a recent paper Levy has presented a method
of analysis for highly redundant structures that is particularly suited to the use of high-speed
digital computing machines. ... The stiffness matrix for the entire structure is computed by
simple summation of the stiffness matrices of the elements of the structure." Voilà.
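That "simple summation" is the heart of the Direct Stiffness Method. A minimal sketch in Python of the merge (scatter-add) step; the two-bar example and the helper name `assemble` are illustrative, not code from [488] or [834]:

```python
import numpy as np

def assemble(n_dofs, elements):
    """Form the master stiffness matrix by simple summation of element
    stiffness matrices scattered into their global DOF positions."""
    K = np.zeros((n_dofs, n_dofs))
    for Ke, dofs in elements:            # Ke: element matrix; dofs: global DOF indices
        for a, ga in enumerate(dofs):
            for b, gb in enumerate(dofs):
                K[ga, gb] += Ke[a, b]    # the merge (scatter-add) step
    return K

# Example: two unit-stiffness bars in series, three axial DOFs 0-1-2.
k_bar = np.array([[1.0, -1.0],
                  [-1.0, 1.0]])
K = assemble(3, [(k_bar, [0, 1]), (k_bar, [1, 2])])
print(K)  # [[ 1. -1.  0.], [-1.  2. -1.], [ 0. -1.  1.]]
```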
Precursors before 1950 had no influence on the FEM developments of Generation 1 outlined above.
Two crucial pieces were missing. First, and most important, was the programmable digital computer.
Without computers FEM would be a curiosity, worth perhaps a footnote in an arcane book. Also
lacking was a driving application that could get the sustained attention of scientists and engineers
as well as industrial resources to fund R&D work. Aerospace structural mechanics provided the
driver because the necessary implementation apparatus of Matrix Structural Analysis (MSA) had been
available since the mid-1930s [304]; see Appendix H for a more detailed account.
MSA procedures had to be moved from desk calculators and punched-tape accounting machines
to digital computers, which affluent aerospace companies were able to afford amidst Cold War
paranoia. Can you imagine juicy defense funds pouring into hypercircles or Courant's triangles?
But delta wing fighters had to fly to keep air superiority over the bad guys. Once all pieces were in
place, synergy transformed the method into a product, and FEM took off.6
We conclude this "looking back from where we came"7 with quotes from two of the pioneers.

5 Curiously this 1957 book does not mention, even in passing, the use of digital computers, which had already been
commercially available for 6 years. The numerical examples, all in 2D, are done by hand via Southwell-style relaxation
methods. The overuse of function-space talk kills the mood. A comparison with Crandall's book [170], which appeared
in 1956, is instructive. The unpretentious, example-driven Engineering Analysis is remembered and credited as a key
source of Weighted Residual Methods; its Preface starts: "The advent of high-speed automatic computing is making
possible the solution of engineering problems of great complexity," and matrices are used throughout. Clearly the use
of the scary term "hypercircle" was a sure way to discourage attention from practicing scientists and engineers.
6 "Without market forces on its side, technology is impotent." (W. Kahan)
7 Joni Mitchell's "The Circle Game".


O.4. Clough: The Right Idea at the Right Time

One of the last public talks by Ray Clough was an informal address on the origins of FEM, delivered
at the Fifth World Congress on Computational Mechanics (WCCM V), held in Vienna in 2002. He
was 82 at the time, and 15 years past his retirement from UC Berkeley (he passed away in October
2016, at age 96). Here is his description of the epiphanous 1953 event, with my annotations enclosed
in square brackets:
My involvement with the FEM began when I was employed by the Boeing Airplane Company in
Seattle during summer 1952 as a member of their summer faculty program. When I had joined the
Civil Engineering faculty at Berkeley in 1949, I decided to take advantage of my MIT structural
dynamics background by taking up the field of Earthquake Engineering. Because the summer faculty
program at Boeing offered positions with their structural dynamics unit, I seized on that as
the best means of advancing my preparation for the earthquake engineering field. I was particularly
fortunate in this choice of summer work at Boeing because the head of their structural dynamics
unit was Mr. M. Jonathan (Jon) Turner, a very capable man in dealing with problems of structural
dynamics and flutter. [Turner was an applied mathematician by training, having received his M.S. in
Mathematics in 1938 at the University of Chicago.]

When I arrived in the summer of 1952, Jon Turner asked me to work on the vibration analysis of
a delta wing structure. Because of its triangular plan form, this problem could not be solved by
procedures based on standard beam theory; so I spent the summer of 1952 trying to formulate a
delta wing model built up of an assembly of one-dimensional beams and struts. [This first attempt
mimicked Hrennikoff's lattice analogy approach; see O.3.] However, the results of deflection
analysis based on this type of mathematical model were in very poor agreement with data obtained
from laboratory tests of a scale model of a delta wing. My final conclusion was that my summer's
work was a total failure; however, at least I learned what did not work.

Spurred by this disappointment, I decided to return to Boeing for the summer faculty program of 1953.
During the winter, I stayed in touch with Jon Turner so I was able to rejoin the structural dynamics
unit in June. [Here comes the FLASH FROM HEAVEN.] The most important development during the
winter was that Jon suggested we try to formulate the stiffness properties of the wing by assembling
plane stress plates of either triangular or rectangular shapes. So I developed stiffness matrices for
plates of both shapes, but I decided the triangular form was much more useful because such plates
could be assembled to approximate structures of any configuration. Moreover, the stiffness properties
of the individual triangular shapes could be calculated easily based on assumptions of uniform states
of normal stress in the X and Y directions combined with a uniform state of shear stress. Then the
stiffness of the complete structure was obtained by appropriate addition from the contributions of the
individual pieces. The Boeing group called this procedure the Direct Stiffness Method.

The remainder of the summer of 1953 was spent in demonstrating that deflections calculated for
structures formed of assemblies of triangular elements agreed well with laboratory measurements on
the actual physical models. Also it became evident that the precision of the calculated results
could be improved by refinement of the finite element mesh. The conclusions drawn from that
summer's work were presented in a paper given by Jon Turner at the annual meeting of the Institute
for Aeronautical Sciences [the predecessor of the present AIAA society] in January 1954. However,
for reasons I never understood, Jon did not submit the paper for publication until many months later.
So this paper, which often is considered the first published description of the FEM, was not published
until September 1956, more than two years after the verbal presentation [and three years after the
end of the 1953 work].


From my point of view, the next important event in the finite element history was my coining the
name "Finite Element Method". I never thought that the name used by Boeing for their procedure,
the Direct Stiffness Method, was at all descriptive of the concept involved in the method. [Clough
was right: DSM is a subset of FEM, although by far the most practically important one.] So when I
later wrote the stress analysis paper [for the 2nd ASCE Conference on Electronic Computation held at
Pittsburgh in 1960] I had to choose a new name for the procedure. On the basis that a deflection analysis
done with these new pieces (or elements) of the structure is equivalent to the formal integration
procedure of calculus, I decided to call the procedure the FEM because it deals with finite components
rather than differential slices.
A red-letter event that occurred during this very early history of FEM was my visiting Northwestern
University [in 1958] to give a seminar lecture on finite elements. When I received this invitation from
Olek Zienkiewicz, who was teaching at Northwestern at that time, I expected we would have some
arguments about the relative merits of finite elements versus finite differences because Olek had been
brought up in the tradition of Professor Southwell. It is true that we did have some such discussions,
but Olek recognized very quickly the advantages of the finite element approach. In fact I would say
that my visit to Northwestern yielded a tremendous dividend in the conversion of Olek from finite
differences to finite elements.
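In today's terminology, the triangle Clough describes, with uniform normal stresses in X and Y plus uniform shear, is the constant strain triangle (CST). A minimal plane-stress sketch of its stiffness matrix follows; it uses the later B-matrix (variational) route rather than the 1953 stress-distribution argument, and the function name and sample numbers are illustrative only:

```python
import numpy as np

def cst_stiffness(xy, E, nu, t):
    """Stiffness of a 3-node plane-stress triangle (constant strain
    triangle): uniform normal stresses in X and Y plus uniform shear."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    A = 0.5 * abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))  # area
    # Constant strain-displacement matrix B (3 x 6)
    b = np.array([y2 - y3, y3 - y1, y1 - y2])
    c = np.array([x3 - x2, x1 - x3, x2 - x1])
    B = np.zeros((3, 6))
    B[0, 0::2] = b           # eps_xx row
    B[1, 1::2] = c           # eps_yy row
    B[2, 0::2] = c           # gamma_xy row
    B[2, 1::2] = b
    B /= 2.0 * A
    # Isotropic plane-stress constitutive matrix
    D = (E / (1.0 - nu**2)) * np.array([[1.0, nu, 0.0],
                                        [nu, 1.0, 0.0],
                                        [0.0, 0.0, (1.0 - nu) / 2.0]])
    return t * A * (B.T @ D @ B)   # constant integrand: exact one-point integration

# Example: right triangle, aluminum-like properties, 2 mm sheet.
K = cst_stiffness([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], E=70e9, nu=0.3, t=0.002)
```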

O.5. Turner: On Flutter Prediction as Motivator for FEM

What motivated the invention of Matrix Structural Analysis in the 1930s? Flutter and desk calculators;
see Appendix H. What motivated the invention of the Finite Element Method in the 1950s?
If you guess: flutter and digital computers, bingo! Here are excerpts from a mid-1980s interview
with Jon Turner.

We had some fairly serious deficiencies in our dynamic analysis capabilities in the late 1940s and early
1950s, and we were having to rely almost wholly on scaled dynamic wind tunnel models for design
evaluation and verification of flutter margins in the development of large subsonic aircraft. There are
two fundamental requirements for aeroelastic analysis (in addition, of course, to a knowledge of the
mass distribution):
(1) You have to be able to predict structural deflections under load, and
(2) You have to be able to predict changes in aerodynamic loading due to structural deformation.
On the structural side there are often cutouts in wing and body structure that introduce complex local
behavior. In addition, there are complex three-dimensional effects at the wing-to-body juncture and
at engine nacelle strut-to-wing attachments. These details have an important effect on the elastic
and dynamic behavior of the structure. Obviously they cannot be handled with beam theory. There
was also growing interest in the 1960s in supersonic configurations. We were involved in the B-70
competition and later on, of course, the SST. These required a basically different kind of structure
for which the elastic properties of the wing are more plate-like than beam-like in character, and the
wing-body interaction is more complex because of the longer root chord of the wing.
So the crux of all this is that you have to be able to deal with a complex structural model consisting of
multiple connected one-, two- and three-dimensional elements. Theoretical unsteady aerodynamics
was also in a fairly primitive state in those days. In fact, unsteady three-dimensional transonic
aerodynamics is still a very active field of research. The appearance of the large
digital computer on the scene suddenly opened up a lot of new possibilities which did not exist before,
and the finite element approach to structural analysis was a rather attractive option that had to be
developed.
