Solving Differential Equations Through Means of Deep Learning


Juliane Braunsmann

February 8, 2019

Table of Contents

1 Neural Networks
2 ... and hard assignment of constraints
3 ... and soft assignment of constraints
4 Summary

Machine Learning

To do machine learning, we need three things:

▶ Input data points, e.g. images for image tagging, speech for speech recognition
▶ Examples of the expected output, e.g. tagged images, transcribed audio files
▶ A way to measure whether the algorithm is doing a good job, i.e. whether its output is close to what is expected; this feedback signal enables learning

Formalization

Training data, consisting of pairs of input data and expected output:

{(x_1, y_1), …, (x_N, y_N)} ⊆ 𝒳 × 𝒴,

assumed to be drawn independently from a distribution P.

Goal: Given a new input x ∈ 𝒳, predict an output ŷ ∈ 𝒴, i.e. find a prediction function f: x ↦ ŷ.

Assumption: (x, y) is another independent observation of P.

To measure whether the prediction function is doing a good job, we have to define a loss function

L: 𝒴 × 𝒴 → ℝ,

where L(y, ŷ) measures how close the expected output y is to the predicted output ŷ.

Typical losses

Some typical loss functions are:

▶ squared loss: L(y, ŷ) = |y − ŷ|² for 𝒴 = ℝ,
▶ zero-one loss: L(y, ŷ) = 𝟙_{y ≠ ŷ} for arbitrary 𝒴,
▶ cross-entropy loss: L(y, ŷ) = −(y log(ŷ) + (1 − y) log(1 − ŷ)) for 𝒴 = [0, 1].
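These three losses can be written down directly; a minimal sketch in plain Python, with hypothetical inputs (the function names are my own):

```python
import math

def squared_loss(y, y_hat):
    # |y - y_hat|^2 for real-valued outputs
    return (y - y_hat) ** 2

def zero_one_loss(y, y_hat):
    # 1 if the prediction is wrong, 0 if it is correct
    return 0 if y == y_hat else 1

def cross_entropy_loss(y, y_hat):
    # binary cross-entropy for y_hat in (0, 1)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

print(squared_loss(1.0, 0.5))       # 0.25
print(zero_one_loss("cat", "dog"))  # 1
print(round(cross_entropy_loss(1.0, 0.9), 4))
```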

Average loss

The expected loss of a prediction function f,

E(f) = ∫_{𝒳×𝒴} L(y, f(x)) dP(x, y),

can be approximated by the average loss over the training data:

E(f) ≈ (1/N) ∑_{i=1}^N L(y_i, f(x_i)) = ∑_{i=1}^N E_i(f), where E_i(f) = (1/N) L(y_i, f(x_i)).

Then find

f* = arg min_f E(f).

Some popular machine learning algorithms:

▶ decision trees
▶ support vector machines
▶ naive Bayes classifiers
▶ k-nearest neighbor algorithm
▶ neural networks, specifically deep learning

Deep Learning

Deep learning is a specific subfield of machine learning: an approach to learning representations from data that puts an emphasis on learning successive layers of increasingly meaningful representations. [Cho17]

Figure: A single artificial neuron: the inputs x_1, …, x_n are weighted by w_1, …, w_n, a bias b is added, and an activation function 𝜎 is applied, producing the output 𝜎(∑_{i=1}^n w_i x_i + b).
Source: https://github.com/PetarV-/TikZ/tree/master/Multilayer%20perceptron

Figure: A multilayer perceptron: neurons of this kind are arranged in an input layer, one or more hidden layers and an output layer, with the outputs of each layer serving as the inputs of the next.
Source: https://github.com/PetarV-/TikZ/tree/master/Multilayer%20perceptron

Sigmoid function

𝜎(z) = 1 / (1 + exp(−z))

Tanh function

𝜎(z) = (exp(z) − exp(−z)) / (exp(z) + exp(−z))

ReLU function

𝜎(z) = max(z, 0)

Figure: The ReLU (scaled), tanh and sigmoid activation functions.
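The three activation functions above are one-liners; a minimal Python sketch:

```python
import math

def sigmoid(z):
    # 1 / (1 + exp(-z)), maps R to (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    # (exp(z) - exp(-z)) / (exp(z) + exp(-z)), maps R to (-1, 1)
    return (math.exp(z) - math.exp(-z)) / (math.exp(z) + math.exp(-z))

def relu(z):
    # max(z, 0)
    return max(z, 0.0)

print(sigmoid(0.0))  # 0.5
print(tanh(0.0))     # 0.0
print(relu(-3.0))    # 0.0
```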

Given an input vector z^l ∈ ℝⁿ, the output of the fully connected layer l + 1 is

(𝜎(∑_{i=1}^n w^l_{i,j} z^l_i + b^l_j))_{j=1,…,m} ∈ ℝᵐ.

Such layers can be concatenated, yielding a deep neural network parametrized by weight matrices W^l and bias vectors b^l for each layer.

We denote these parameters by 𝜃 and the corresponding neural network by f_𝜃.

We write

E(𝜃) = ∑_{i=1}^N E_i(𝜃) = (1/N) ∑_{i=1}^N L(y_i, f_𝜃(x_i)).
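The forward pass of such a concatenation of fully connected layers can be sketched in a few lines of NumPy; the layer sizes (3 → 4 → 2), the random weights, and the choice of tanh as 𝜎 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(z, W, b):
    # fully connected layer: sigma(W^T z + b) with sigma = tanh
    return np.tanh(W.T @ z + b)

# hypothetical parameters theta: input 3 -> hidden 4 -> output 2
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)

def f_theta(x):
    # concatenation of two fully connected layers
    return layer(layer(x, W1, b1), W2, b2)

y = f_theta(np.array([0.5, -1.0, 2.0]))
print(y.shape)  # (2,)
```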

The next slides explain in more detail how the weights are learned via backpropagation.

Figure: The training loop: the input X is transformed by a sequence of layers (data transformations) parametrized by weights into a prediction Y′. A loss function compares Y′ with the true target Y, and the resulting loss score is used as a feedback signal to adjust the weights.
Source: [Cho17]

Initially, the weights of the network are assigned random values, so the network starts out as a sequence of random transformations.

Let E(𝜃) = ∑_{i=1}^N E_i(𝜃) be a loss function. Then stochastic gradient descent can be presented as follows:

▶ Initialize weights 𝜃 and a learning rate 𝜂.
▶ Repeat until a stopping criterion is met:
  ▶ choose an index 1 ≤ i ≤ N
  ▶ calculate E_i(𝜃) (forward pass)
  ▶ calculate ∇_𝜃 E_i(𝜃) using backpropagation and update 𝜃 ← 𝜃 − 𝜂 ∇_𝜃 E_i(𝜃) (backward pass)
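The loop above can be sketched on a toy least-squares problem with a single scalar parameter; the data, model and learning rate are illustrative assumptions, and the gradient is written by hand here instead of being computed by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy data: y = 2x + noise; model f_theta(x) = theta * x
xs = rng.uniform(-1, 1, size=100)
ys = 2.0 * xs + 0.01 * rng.normal(size=100)

theta, eta = 0.0, 0.1
for step in range(1000):
    i = rng.integers(len(xs))                    # choose an index
    e_i = (ys[i] - theta * xs[i]) ** 2           # forward pass: E_i(theta)
    grad = -2 * xs[i] * (ys[i] - theta * xs[i])  # gradient of E_i w.r.t. theta
    theta -= eta * grad                          # update step
print(round(theta, 2))  # close to 2.0
```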


Beginnings

The idea of solving differential equations by means of machine learning was introduced in the late 1990s by Isaac Lagaris, Aristidis Likas and Dimitrios Fotiadis (University of Ioannina, Greece) [LLF98].

They saw the following advantages in this approach:

▶ the solution learned by a neural network has a differentiable, closed analytic form
▶ the method is general
▶ it can be implemented efficiently on parallel architectures

They noted the following additional advantages:

▶ the neural network solution showed good generalization properties on unseen points
▶ high accuracy could be achieved with comparatively few parameters

General setup

Problem
Let Ω ⊆ ℝⁿ be bounded and let Γ ⊆ ∂Ω. Given a general differential equation of the form

G(x, 𝜓(x), ∇𝜓(x), ∇²𝜓(x)) = 0 for x ∈ Ω,
𝜓(x) = g(x) for x ∈ Γ,

where 𝜓: Ω ⊆ ℝⁿ → ℝ, G: ℝⁿ × ℝ × ℝⁿ × ℝⁿˣⁿ → ℝ and g: Γ → ℝ, find a solution 𝜓.

Approach: approximate 𝜓 by a neural network 𝜓_𝜃 with parameters 𝜃.

Loss functions

There are several possibilities for defining a loss function:

▶ use the residual error, i.e. minimize

∫_Ω G(x, 𝜓̃(x), ∇𝜓̃(x), ∇²𝜓̃(x))² dx

▶ use a variational formulation, i.e. minimize an energy functional involving 𝜓̃(x) and ∇𝜓̃(x) over a suitable function space, min_{𝜓̃ ∈ H} ∫_Ω … dx (as in the Deep Ritz method [EY18])

Both integrals then have to be discretized for training, which is done by using a Monte Carlo integration method.
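A minimal sketch of the Monte Carlo discretization of the residual loss, for a hypothetical 1D ODE 𝜓′ = 𝜓, i.e. G(x, 𝜓, 𝜓′) = 𝜓′(x) − 𝜓(x), evaluated on hand-picked candidate functions instead of a neural network:

```python
import math, random

random.seed(0)

# hypothetical ODE residual: G(x, psi, psi') = psi'(x) - psi(x),
# whose exact solutions are psi(x) = C * exp(x)
def residual(psi, dpsi, x):
    return dpsi(x) - psi(x)

def mc_residual_loss(psi, dpsi, n_points=10000):
    # Monte Carlo estimate of the integral of G(...)^2 over (0, 1)
    xs = [random.uniform(0.0, 1.0) for _ in range(n_points)]
    return sum(residual(psi, dpsi, x) ** 2 for x in xs) / n_points

# the exact solution gives zero loss ...
print(mc_residual_loss(math.exp, math.exp))  # 0.0
# ... while a wrong candidate does not
print(mc_residual_loss(math.sin, math.cos) > 0.1)  # True
```

In the actual method, 𝜓 and its derivatives come from the network 𝜓_𝜃, and this estimate is what gets minimized over 𝜃.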

For randomly sampled points x_i ∈ Ω we can write

∫_Ω G(x, 𝜓_𝜃(x), ∇𝜓_𝜃(x), ∇²𝜓_𝜃(x))² dx “=” ∑_{i=1}^∞ G(x_i, 𝜓_𝜃(x_i), ∇𝜓_𝜃(x_i), ∇²𝜓_𝜃(x_i))² = ∑_{i=1}^∞ E_i(𝜃),

i.e. freshly sampled collocation points play the role of an infinite stream of training data.

Advantage:

▶ no risk of overfitting

We can instead choose some fixed collocation points, for example a grid of equidistant points or uniformly distributed random points. These collocation points can then be seen as training data.

Advantage:

▶ fewer training points, thus faster training

In the literature:

▶ [LLF98], [McF06], [BN18] use 10-100 equidistant points to discretize the interval [0, 1]
▶ [SS18], [EY18] use “infinite” training data, up to 500 million data points in total

Problem: how can we ensure that 𝜓_𝜃 satisfies the boundary conditions? In the literature, there are two ways to deal with boundary conditions:

▶ add a penalty term to the loss function (“soft assignment”)
▶ use trial solutions that automatically satisfy the boundary conditions (“hard assignment”)


For hard assignment, the trial solution is written as

𝜓^ts_𝜃(x) = A(x) + F(x, N_𝜃(x)),

where A satisfies the boundary conditions, F does not contribute to the boundary conditions and N_𝜃 is a neural network with parameters 𝜃.

The functions A and F need to be defined “by hand” according to the problem.

Example in 1D

Consider the linear diffusion equation

Lu = d²u/dx² = f, 0 < x < 1,
u(0) = g_0, u(1) = g_1.

A suitable trial solution is

𝜓^ts_𝜃(x) = (1 − x) g_0 + x g_1 + x(1 − x) N_𝜃(x),

which satisfies 𝜓^ts_𝜃(0) = g_0 and 𝜓^ts_𝜃(1) = g_1 by construction. The loss is

E(𝜃) = ∫_0^1 ((L𝜓^ts_𝜃)(x) − f(x))² dx, where

(L𝜓^ts_𝜃)(x) = 2((1 − x) dN_𝜃/dx(x) − x dN_𝜃/dx(x) − N_𝜃(x)) + x(1 − x) d²N_𝜃/dx²(x).

Demonstration

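The key property of this trial solution, that the boundary conditions hold for any network N_𝜃, can be checked directly; here N is an arbitrary stand-in function and g_0, g_1 are made-up values (this is only a small numerical check, not the talk's actual demonstration):

```python
import math

# hypothetical setup: boundary values and an arbitrary (untrained) "network" N
g0, g1 = 1.0, 3.0
N = lambda x: math.sin(5 * x) + 2.0   # stand-in for N_theta

def psi_trial(x):
    # trial solution: satisfies the boundary conditions for ANY N,
    # because the x(1-x) factor vanishes at x = 0 and x = 1
    return (1 - x) * g0 + x * g1 + x * (1 - x) * N(x)

print(psi_trial(0.0))  # 1.0  (= g0)
print(psi_trial(1.0))  # 3.0  (= g1)
```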

It is necessary to specify A, which satisfies the boundary condition, and F, which does not contribute to the boundary condition → not trivial if the boundary is more complex.

One way to construct such a function F for irregular boundaries is proposed in [McF06; MM09] with the introduction of length factors.

Length factor
For a domain Ω and a boundary Γ ⊆ ∂Ω, a length factor is a function L: Ω → ℝ such that L(x) = 0 for all x ∈ Γ and L(x) ≠ 0 for all x ∈ Ω ∖ Γ.

The trial solution then takes the form

𝜓^ts_𝜃(x) = A(x) + L(x) N_𝜃(x),

where A extends the boundary data to the whole domain.

[BN18] keep the trial solution

𝜓^ts_𝜃(x) = A(x) + L(x) N_𝜃(x)

but propose to:

▶ learn the maps A and L
▶ use a smoothed distance function for L, i.e. L(x) ≈ d(x), where d(x) = min_{x_b ∈ Γ} ‖x − x_b‖
▶ train a network A_𝜅: Ω → ℝ such that A_𝜅 matches the boundary data g on Γ
▶ only relatively few points are needed to train these networks (only collocation points on Γ and a few in the interior for the distance function)
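The raw (unsmoothed) distance function d is straightforward to compute from a sample of boundary points; a sketch with a hypothetical boundary (the unit circle), before any network is fitted to it as in [BN18]:

```python
import math

# hypothetical boundary sample Gamma: 200 points on the unit circle
boundary = [(math.cos(t), math.sin(t))
            for t in (2 * math.pi * k / 200 for k in range(200))]

def d(x):
    # distance to the boundary: d(x) = min over x_b in Gamma of ||x - x_b||
    return min(math.dist(x, xb) for xb in boundary)

print(round(d((0.0, 0.0)), 3))  # 1.0  (center of the unit circle)
print(round(d((1.0, 0.0)), 3))  # 0.0  (a point on the boundary)
```

In [BN18], a small network is then trained on such values to obtain a smooth approximation L(x) ≈ d(x).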

Figure: Smoothed distance function on a star-shaped domain, learned using a single layer with 20 neurons and fewer than 1000 collocation points.
Source: [BN18]

Figure: From left to right: A_𝜅, the boundary function continued to the whole domain, the neural network solution, and the difference to the exact solution.
Source: Middle, Right: [BN18]

For problems with simple polygonal boundaries, high-quality meshes can be generated → it is not clear whether the ANN method is competitive with FEM. For a complex geometry, however:

▶ the authors report that mesh generation had not finished after 16 hours
▶ their method took 10 minutes on a high-end laptop, implemented in SciPy → even better performance is expected with frameworks such as PyTorch or TensorFlow

Figure: (a) Collocation and boundary points used to compute the smoothed distance function d(x). (b) Smoothed distance function, using a single hidden layer with 20 neurons.
Source: [BN18]

Figure: Solution and error for the diffusion equation in a complex 2D geometry, using five hidden layers with 10 neurons each. (a) ANN solution to the 2D stationary diffusion equation. (b) Difference between the exact and computed solution.
Source: [BN18]


Recap

▶ use trial solutions that automatically satisfy the boundary conditions (“hard

assignment”)

▶ add a penalty term to the loss function (“soft assignment”)


Boundary penalty

An alternative to defining trial functions that satisfy the boundary conditions exactly is to add penalty terms to the loss function, e.g.

E(𝜃) = ∫_Ω G(x, 𝜓_𝜃(x), ∇𝜓_𝜃(x), ∇²𝜓_𝜃(x))² dx + ∫_Γ (𝜓_𝜃(x) − g(x))² dx.

▶ Advantages: easy to formulate; different types of boundary conditions can be incorporated without further effort
▶ Disadvantages: the boundary conditions are not exactly satisfied; the interior and boundary loss terms have to be balanced during training

[SS18] call this method the “Deep Galerkin Method”.
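The soft-assignment loss can be sketched for a hypothetical 1D problem, 𝜓″ = 0 on (0, 1) with 𝜓(0) = 0 and 𝜓(1) = 1, evaluating hand-picked candidates instead of training a network:

```python
import random

random.seed(0)

# hypothetical 1D problem: psi'' = 0 on (0, 1), psi(0) = 0, psi(1) = 1
g0, g1 = 0.0, 1.0

def penalized_loss(psi, d2psi, n=5000):
    # interior residual (Monte Carlo) + boundary penalty terms
    xs = [random.random() for _ in range(n)]
    interior = sum(d2psi(x) ** 2 for x in xs) / n
    boundary = (psi(0.0) - g0) ** 2 + (psi(1.0) - g1) ** 2
    return interior + boundary

exact = lambda x: x        # the exact solution: zero interior and boundary loss
print(penalized_loss(exact, lambda x: 0.0))  # 0.0
wrong = lambda x: x ** 2   # satisfies the boundary conditions but not the PDE
print(penalized_loss(wrong, lambda x: 2.0))  # 4.0
```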

Another application of learning algorithms is finding the solution of a PDE over a range of different setups (i.e. physical conditions, boundary conditions).

Burgers’ equation
The one-dimensional (viscous) Burgers’ equation is defined as

du/dt = 𝜈 d²u/dx² − 𝛼 u du/dx, (t, x) ∈ [0, 1] × [0, 1],
u(t, x = 0) = a, t ∈ [0, 1],
u(t, x = 1) = b, t ∈ [0, 1],
u(t = 0, x) = g(x) = a + (b − a)x, x ∈ [0, 1].

Approach

Instead of solving the equation for each configuration (𝜈, 𝛼, a, b) in the parameter space separately, a single neural network is trained on the whole parameter space.

A network with 6 layers of 200 neurons each (1200 neurons in total) is used.

Result

Figure: The deep learning solution is in red, the solution found via finite differences in blue. The problem setups are (𝜈, 𝛼, a, b) = (0.01, 0.95, 0.9, −0.9) and (0.09, 0.95, 0.5, −0.5).
Source: [SS18]

[SS18] demonstrate that their method also works in high dimensions by analysing a free boundary PDE describing stock price dynamics (Black-Scholes model) → # dimensions = # stocks.

They choose parameters for which a semi-analytic solution exists (the problem can be reduced to a one-dimensional PDE which can be solved by finite difference methods).

# stocks    error
3           0.05 %
20          0.03 %
100         0.11 %
200         0.22 %

The method works well in practice, but is there any theory to support this?

Approximation Theorem

Denote by Θ_n = ℝⁿ × ℝ^{d×n} the set of parameters of neural networks with n hidden units and input dimension d. Let 𝜃_n be the parameter configuration which minimizes E(𝜃) over the set Θ_n and let f_{𝜃_n} be the corresponding neural network. Under certain growth and smoothness assumptions on the nonlinear terms, for a class of quasilinear parabolic PDEs, there exists a sequence of optimizers 𝜃_n such that

E(𝜃_n) → 0 as n → ∞,

as well as

f_{𝜃_n} → u

strongly in L^𝜌 for 𝜌 < 2.

The proof proceeds in three steps:

(1) prove that E(𝜃_n) → 0 as n → ∞, using neural network function approximation results
(2) establish that each neural network f_{𝜃_n} satisfies a PDE with a source term f_n
(3) prove the convergence f_{𝜃_n} → u in L^𝜌 as n → ∞, using the smoothness of the neural network approximations and compactness arguments

Uniformly m-dense subset
Let X ⊆ ℝᵈ be compact and for f ∈ Cᵐ(X) define the Sobolev norm

‖f‖_{m,X} = max_{|𝛼| ≤ m} sup_{x ∈ X} |D^𝛼 f(x)|,

where 𝛼 is a multi-index. Then a subset S of Cᵐ(X) is called uniformly m-dense if for every f ∈ Cᵐ(X) and all 𝜀 > 0 there is a function g = g(f, 𝜀) ∈ S such that

‖f − g‖_{m,X} < 𝜀.

Approximation theorem
If the activation function 𝜓 ∈ Cᵐ(X) is nonconstant and bounded, then the set of neural networks with one hidden layer, {f_𝜃 | 𝜃 ∈ ∪_{n≥1} Θ_n}, is uniformly m-dense in Cᵐ(X).

Consider a quasilinear parabolic PDE

𝒢[u](t, x) = ∂_t u(t, x) − div(𝛼(t, x, u(t, x), ∇u(t, x))) + 𝛾(t, x, u(t, x), ∇u(t, x)) = 0
for (t, x) ∈ Ω_T = (0, T] × Ω

for a bounded set Ω ⊆ ℝᵈ, with additional boundary conditions and a solution u ∈ C²(Ω̄_T).

We can then apply the approximation theorem to get a function f_𝜃 such that

sup_{(t,x) ∈ Ω̄_T} |∂_t u(t, x) − ∂_t f_𝜃(t, x)| + max_{|𝛼| ≤ 2} sup_{(t,x) ∈ Ω̄_T} |D^𝛼 u(t, x) − D^𝛼 f_𝜃(t, x)| < 𝜀,

from which E(𝜃) → 0 can be shown.

Further remarks

Some more work has to be done to prove step (3): it is not obvious that E(𝜃) = ‖𝒢[f_𝜃]‖²_{L²(Ω_T)} + boundary term norms → 0 implies f_𝜃 → u.

The approximation theorem is only an existence result; minimization of the functional E(𝜃) is highly non-trivial, since it is not convex. However, deep learning optimization algorithms are designed to deal with these kinds of optimization problems and work quite well in practice.


Advantages

▶ the loss function is straightforward to formulate and the problem-dependent additional effort is minimal
▶ the method is general and mesh-free
▶ the total number of sampled points might be large, but they can be processed sequentially without harming convergence
▶ transfer learning can be used to solve similar problems
▶ training can be parallelized on GPUs, using frameworks that make implementation relatively easy and that have a large community
▶ conversely: the body of literature on PDEs is very large and thus offers the possibility to study neural networks in a well-understood context ([MQH18])

Disadvantages

▶ dependence on the initialization
▶ due to the non-convex optimization there is a risk of ending up in a local minimum
▶ convergence results are still an open question

Summary

▶ neural networks approximate functions using a compositional structure instead of an additive structure
▶ there exist different methods to treat boundary conditions
▶ there is a theorem which guarantees that it is possible to express the solution of certain PDEs by neural networks, and that minimizing the L² residual error leads to a solution
▶ results concerning convergence speed and approximation accuracy are still lacking

Bibliography

[Cho17] Francois Chollet. Deep Learning with Python. 1st ed. Greenwich, CT, USA: Manning Publications Co., 2017.

[NM18] Mohammad Amin Nabian and Hadi Meidani. “A Deep Neural Network Surrogate for High-Dimensional Random Partial Differential Equations”. In: arXiv:1806.02957 [physics, stat] (June 2018).

[EY18] Weinan E and Bing Yu. “The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems”. In: Communications in Mathematics and Statistics 6.1 (Mar. 2018), pp. 1–12.

[LLF98] I. E. Lagaris, A. Likas, and D. I. Fotiadis. “Artificial neural networks for solving ordinary and partial differential equations”. In: IEEE Transactions on Neural Networks 9.5 (Sept. 1998), pp. 987–1000.

[McF06] Kevin Stanley McFall. “An artificial neural network method for solving boundary value problems with arbitrary irregular boundaries”. (Apr. 2006).

[BN18] Jens Berg and Kaj Nyström. “A unified deep artificial neural network approach to partial differential equations in complex geometries”. In: Neurocomputing 317 (Nov. 2018), pp. 28–41.

[SS18] Justin Sirignano and Konstantinos Spiliopoulos. “DGM: A deep learning algorithm for solving partial differential equations”. In: Journal of Computational Physics 375 (Dec. 2018), pp. 1339–1364.

[MM09] Kevin Stanley McFall and James Robert Mahan. “Artificial Neural Network Method for Solution of Boundary Value Problems With Exact Satisfaction of Arbitrary Boundary Conditions”. In: IEEE Transactions on Neural Networks 20.8 (Aug. 2009), pp. 1221–1233.

[MQH18] Martin Magill, Faisal Qureshi, and Hendrick W. de Haan. “Neural Networks Trained to Solve Differential Equations Learn General Representations”. In: arXiv:1807.00042 [physics, stat] (June 2018).
