
The aim of this thesis is to examine the ability of two models, GARCH and a neural network, to forecast price volatility in agricultural commodities using daily data from 2011-2016. In particular, we focus on three key cereal commodities: corn, wheat, and soybeans.

The topic of this paper was selected starting from the observation that the evolution of grain commodity prices during the last decades has diverged from the estimations generated by macroeconomic models. While macroeconomic models suggest that prices of agricultural products should fluctuate by only a few percentage points from one period to another, in reality the volatility can be very high (Ruppert, 2011). This situation occurs because macroeconomic models do not take into consideration market barriers that may appear during the period analyzed, and they are based on the assumption of a perfect market. Some researchers have even noted that agricultural commodity prices recorded a higher degree of fluctuation than the prices of services or industrial products (Thompson et al., 2012).

Given these observations, the purpose of this paper is to analyze the efficiency of two ways to model the volatility of three main grain commodities. As previously emphasized, the pattern of price evolution in agriculture has not always matched the pattern generated by macroeconomic models. Moreover, investors interested in holding portfolio investments in this field have strong reasons to assess the degree of risk in this market and to avoid uncertainty. These are only some of the reasons for identifying the most efficient models for volatility forecasting in agriculture.

At the same time, Munier (2016) points to the financialization of commodity markets, including agriculture. Financialization is a concept describing the process through which all value exchanged is transformed into a financial instrument. In other words, financialization is based on the idea that any product or service can be reduced to a financial instrument, which is easier to exchange, similar to a currency, and which is meant to facilitate transactions between investors. According to the same author, financialization occurred in the period between 1980 and 2010, when financial investors focused on investing in commodity markets and real estate derivatives (Girardi, 2010). In the case of agricultural products, financialization was initially applied only to several categories of products but, over time, came to be applied to most agricultural products (Munier, 2016).
Munier (2016) also argues that financialization leads to a series of consequences for economic modelling:

- Commodity markets are driven by expectations. Since the onset of financialization, the tâtonnement process no longer applies as much as in the past. The tâtonnement process refers to a market of perfect competition, where there is clear information about prevailing prices and the supply of goods is matched with demand. In case of excess supply prices decrease, while in case of excess demand prices increase. Through this adjustment of prices, the market is intended to reach equilibrium.

- The evolution of commodity markets started to be determined by investors, who represent a new category of players in these markets. The author notes that it has not yet been demonstrated whether speculators play any role in the price evolution in these markets, but it is highly possible that they do.

- Through financialization, commodity markets became more strongly connected with financial markets, and they are more strongly affected by monetary policies. Even though researchers acknowledge the importance of monetary policies for commodity markets, in most cases this factor is not taken into consideration by economic models.

- Financialization leads to more interconnected markets at the global level. This means that policy makers should not analyze only a single market or branch in order to make decisions, but should take into consideration the system as a whole, because the interconnections between markets worldwide are stronger nowadays than in the past.

These perspectives on the changes that occurred in the field of agriculture over the past several decades signal the increasing importance of producing accurate volatility forecasts. According to Loayza et al. (2007), the main reasons for estimating volatility are the following:

- Risk management: assessing the potential losses incurred by a portfolio of assets at a certain moment in the future. In order to make decisions related to the assessment of potential losses, investors have to make estimations about future price volatilities.

- Asset allocation: one of the most important rules of asset allocation is to minimize the risk associated with the expected return of a certain investment.

- Identifying opportunities brought by future volatility.

In order to achieve these three objectives through volatility forecasting, researchers and quantitative finance analysts have applied different models in an effort to obtain accurate forecasts and to make appropriate decisions. However, many of these models did not remain in use for very long, because they did not generate satisfactory results.

The current paper proposes a comparison between a GARCH model and a neural network. By applying these two models to volatility modeling, it aims to identify which of them better helps quantitative analysts to manage the risk of a portfolio, to make decisions regarding asset allocation, and to identify new investment opportunities. The comparison should also indicate whether the neural network, which is a more recent approach to volatility forecasting, outperforms the GARCH model, which was created several decades ago, when the economic context was different from today's.
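As a minimal illustration of the first of the two models, the conditional variance recursion of a GARCH(1,1) process can be sketched in Python. The parameter values and the simulated return series below are purely illustrative assumptions, not estimates from the thesis data; in practice omega, alpha and beta would be estimated by maximum likelihood on the actual commodity returns:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# Simulated daily returns standing in for a grain-futures return series
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.015, size=500)
sigma2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
next_day_vol = float(np.sqrt(sigma2[-1]))  # one-step-ahead volatility proxy
```

The recursion makes explicit the defining feature of GARCH that the thesis exploits: tomorrow's variance depends on yesterday's squared return shock and yesterday's variance, producing the volatility clustering observed in commodity prices.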
All in all, as shown in this section, the changes that occurred in the economy were also reflected in the field of agriculture, especially grain commodities, leading to a higher degree of complexity concerning investments in this sector. This example illustrates that financial markets become more and more complex as time passes, and that there is a need to develop more efficient models of volatility forecasting.

The research purpose established previously requires following several steps, corresponding to the chapters of this thesis. Therefore, this paper consists of two parts, each divided into two chapters, and a conclusion, reflecting the different stages of the research.

The first part is dedicated to the literature review. This part of the study presents theoretical information about the GARCH model and the neural network. Firstly, the two models are presented from the point of view of the way they are used, the advantages and disadvantages of each model, and some information regarding the history of their use. Secondly, the theoretical perspectives of researchers on the models are presented, in the sense of reporting the results of their studies and the conclusions of the tests they performed.

The second part consists of the practical tests performed to achieve the research objectives. This part provides information about the methodology proposed for the research, the data to which the two models are applied, the presentation of the tests, and the results. The conclusion comments on the results obtained after the application of the two models. These comments are accompanied by recommendations related to the efficiency of the GARCH model and the neural network when applied to the field of agriculture.

Related studies and contribution


Part 1

Chapter 1

Section 1: Overview of volatility


Every investor makes decisions regarding the investment portfolio based on the expected returns of the financial assets included in the portfolio. The spread of these returns is known as asset volatility, and it is used in various financial applications, such as estimating market risk, pricing financial derivatives, assessing risk management, and predicting the future values of assets (Ruppert, 2011). Over time, this concept has been defined by many researchers, and several of these perspectives are presented below.

Gertrude (2003) states that volatility was first defined as a coefficient of nervousness or a coefficient of instability of the price. A more modern perspective on this concept is brought by Rothbort (2007), who states that the volatility of a financial price reflects the intensity of the fluctuations that affect that particular price. Poon and Granger (2003) emphasize that volatility is usually associated only with the idea of risk. Figlewski (2004) explains this general perception by underlining the fact that high volatility is perceived by market observers as a symptom of market disruption. However, the author also emphasizes that volatility can be used as a tradable market instrument. In the latter case, volatility is traded through dynamic trading strategies, such as different types of options.

As can be seen from these definitions, volatility is associated with the ideas of risk, uncertainty, and variation in the price of financial assets. After acquiring some basic knowledge about volatility, the next step is to identify the types of volatility as they are mentioned in the scientific literature. In this regard, Rothbort (2007) provides useful information, maintaining that there are four types of volatility measures: historical volatility, implied volatility, volatility indices and intraday volatility. The following lines present the four types of volatility measures from the author's perspective.

Historical volatility is perceived by the author as the first step in understanding volatility in general, and it is measured by computing the standard deviation of the historic prices of a stock over a given period of time. The historical volatility therefore reflects the changes that occurred in stock prices or in the prices of other financial instruments. Implied volatility is the opposite of historical volatility in the sense that the implied stock volatility varies across different option strike prices, while historical volatility is static for an established period of time. The implied volatility is computed by applying the Black-Scholes formula to the prices quoted by buyers and sellers on the options market. The author also emphasizes that the options market resembles an auction environment where buyers and sellers interact in a bid-and-offer system.
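The two measures just described can be sketched in a few lines of Python. This is a minimal illustration rather than a pricing library: historical volatility is computed here as the annualized standard deviation of log returns, and implied volatility is recovered by numerically inverting the Black-Scholes call price with simple bisection. The price series and option parameters in the example are hypothetical:

```python
import math
from statistics import stdev

def historical_volatility(prices, periods_per_year=252):
    """Annualized standard deviation of log returns over a price history."""
    log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return stdev(log_returns) * math.sqrt(periods_per_year)

def bs_call_price(s, k, t, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s * N(d1) - k * math.exp(-r * t) * N(d2)

def implied_volatility(price, s, k, t, r, lo=1e-4, hi=5.0):
    """Invert Black-Scholes by bisection: the call price is increasing in sigma."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call_price(s, k, t, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical example: an at-the-money call on a $100 stock, 6 months to expiry
quoted = bs_call_price(s=100, k=100, t=0.5, r=0.01, sigma=0.30)
iv = implied_volatility(quoted, s=100, k=100, t=0.5, r=0.01)  # recovers ~0.30
hv = historical_volatility([100.0, 101.0, 100.0, 102.0, 101.0])
```

The round trip in the example (price an option at a known sigma, then invert) illustrates why implied volatility is described as the market's forward-looking counterpart to the backward-looking historical measure.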

Regarding volatility indices, these measures were created in order to compute the volatility of an index, just as volatility is computed for stocks or options. In the author's opinion, the most common volatility indices are the S&P 500 Volatility Index (VIX), the S&P 100 Volatility Index (VXO), and the Nasdaq 100 Volatility Index (VXN).

The final type of volatility measurement specified by Rothbort (2007) is the intraday volatility.
This measurement is used in order to indicate the price movements of a certain financial
instrument, which occur during a trading day.

The four types of volatility were developed for different purposes and in response to changes in financial markets. It follows that forecasting volatility is important for financial analysts for several reasons. Rothbort (2007) maintains that volatility is as important as the other inputs generally used for computing the value of an option, such as the market price, the interest rate, and dividends. The importance of volatility was underlined as well by Black and Scholes (1973). According to the authors, the significance of this concept lies in the fact that volatility is the only input which is not directly observable and must therefore be forecasted. Examples of observable measurements of financial instruments are the stock price, the interest rate, and the strike price. Even though it is possible to compute historical volatility from the historical data of a financial instrument, investors and market observers are interested in the present value of that particular instrument and in its future volatility.

Over time, investors and market observers have encountered greater difficulties in using volatility, especially in derivatives valuation models, because the maturities of these financial instruments have increased in recent decades (Figlewski, 2004). The author notes that in the 1970s the maturities of financial instruments were of several months, meaning that it was less difficult to forecast volatility on the assumption that it would not change significantly from the values recorded in the past. These days, derivatives may have maturities of over 10 years, which makes it more difficult to forecast volatility and to avoid errors due to uncertainty.
Poon and Granger (2003) analyze the importance of volatility from a different perspective. Forecasting volatility is important for investors and portfolio managers because they want to be aware of the risks of the investments they plan to make, and thus they need instruments to assess these risks. In other words, volatility is understood as uncertainty, and it lies at the basis of investment decisions and portfolio construction. According to the authors, many financial institutions perform volatility forecasting with the intention of better managing the risks involved in certain investments. Forecasting volatility is also important because financial market volatility may have a negative impact on the economy as a whole. One example in this sense, provided by the authors, refers to the terrorist attacks of 2001 in the United States, which impacted financial markets around the world. The explanation behind this phenomenon is the strong connection between uncertainty in financial markets and public confidence.

Ruppert (2011) analyzes the importance of volatility by examining the reasons for which volatility is computed by academics, policy makers and financial market participants. Firstly, these economic agents forecast volatility in order to assess the risks presented by different portfolios and to make rational decisions accordingly. Market risk is the term used for the possibility that a portfolio records decreases in returns caused by changes in the market, for example changes in interest rates, in the prices of securities, or in currency rates. Given that unpredicted changes may lead to significant losses, it is important for any institution or financial market participant to evaluate the market risk.
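The link between a volatility forecast and a concrete market-risk figure can be illustrated with a parametric Value-at-Risk calculation. This is a minimal sketch under a normal-returns assumption; the portfolio value and the 1.5% daily volatility below are hypothetical figures, not estimates from the data used in this thesis:

```python
import math

def parametric_var(portfolio_value, daily_vol, z=1.645, horizon_days=1):
    """Parametric Value-at-Risk under a normal-returns assumption:
    the loss exceeded with probability ~5% (z = 1.645) over the horizon."""
    return portfolio_value * daily_vol * z * math.sqrt(horizon_days)

# Hypothetical $1,000,000 grain-futures portfolio with 1.5% daily volatility
var_1d = parametric_var(1_000_000, 0.015)                    # roughly $24,675
var_10d = parametric_var(1_000_000, 0.015, horizon_days=10)  # scales with sqrt(10)
```

The formula makes the dependence explicit: a better forecast of daily_vol directly produces a better loss estimate, which is precisely why the volatility models compared in this thesis matter for risk management.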
Secondly, volatility is used to establish the prices of derivative securities. In other words, the price of an option is set depending on the risk characterizing that particular option from the moment it is issued until it expires. Thirdly, volatility may be used for evaluating the vulnerability of an economy.
To sum up the information presented in this section, it can be said that volatility has gained more attention with the development of financial markets. During this time, researchers and financial analysts have tried to better understand the significance of this concept, developed new types of volatility measures adapted to the needs of financial markets, and better understood how to use volatility in order to make sound investment decisions.

Section 2: Literature review on volatility


The future consists of unexpected events, or uncertainty, which in the economic environment is experienced as an inability to predict the occurrence of certain events, to forecast the evolution of a price or an interest rate, and so on. In fact, there is a wide variety of forces that may influence the evolution of the economic environment, leaving economists in uncertainty. Regardless of the amount of knowledge that economists have about the way a market works, or about the relationships between agents, they are still unable to provide accurate forecasts of the evolution of economic and financial variables (Gomes, 2014).

Over time, quantitative finance analysts have searched for various ways and solutions to help them better prepare for the uncertainty brought by the future. One particular topic in this area, which is the subject of the present research, is volatility. Volatility is a concept that has received great attention due to its role in evaluating the risks of a financial market. The analysis of financial market risks is performed in order to identify the negative effects on an economy and possible solutions for reducing these risks. A financial market with high risk will affect the expectations of economic agents, who will be more reluctant to invest in economic projects. In support of this claim, Keynes (1936) maintains that the level of investment taking place in an economy determines the performance of that economy, and ultimately the level of consumption. For instance, a high level of uncertainty will lead economic agents to reduce their investments, and in this way economic conditions will deteriorate.

Volatility is a concept discussed by researchers and used by financial market analysts over the past several decades. Poon and Granger (2003) maintain that this topic has caught the attention of researchers in the last two decades, and that these researchers have dedicated time and resources to understanding various aspects of volatility, such as assessing the forecasting performance of different volatility models, or volatility modelling in general. The extensive research on this topic is explained by the fact that volatility is important in several areas of financial markets: investments, monetary policy, risk management, etc. In other words, different parties are interested in forecasting the volatility of financial instruments as accurately as possible, such as academics, policy makers and financial market participants, and this explains the increasing interest in the topic.
Other researchers provide arguments in support of this perspective on the history of using volatility. Researchers such as Keynes (1936), Hayek (2006) and Minsky (1992) understood the importance of evaluating, through volatility forecasts, the risk derived from changes in financial markets. These authors analyzed the relationship between the risk-taking behavior displayed by economic agents and the evolution of financial markets. They reached the conclusion that financial market participants change their decisions when they are confronted with financial market volatility, because the uncertainty affects their expectations. Higher levels of volatility lead to a higher degree of uncertainty about future economic conditions, such as interest rates, the expected returns of investments, or even the occurrence of a crisis. Danielsson et al. (2016) call this cause-and-effect relationship the high-volatility channel. Conversely, the low-volatility channel denotes an environment where investments are characterized by low risk, such that economic agents are encouraged to assume more risk. This risk materializes only when a financial crisis occurs.

Danielsson et al. (2016) also bring a different perspective on the history of using volatility. The authors observed that volatility gained increasing interest from various parties after the occurrence of a financial crisis. Researchers, policy makers and financial market participants became more convinced of the cause-and-effect relationship between volatility and financial crises once they compared the levels of volatility before and after a financial crisis. One example in this sense is provided by the analysis of volatility levels in the USA before and after the financial crisis of 2008: researchers observed low volatility levels before 2008 and high levels of volatility afterwards. In this sense, Danielsson et al. (2016) note that many research papers have used such examples in order to identify the ways in which volatility may increase the likelihood of financial crises.

The same authors argued that the history of using volatility depends on the field in which the volatility is forecasted. Kose et al. (2013) give the example of macroeconomic volatility, which is of great interest to policymakers because it has negative effects on the economic growth of a country, affecting first of all the welfare of the people living there. Loayza et al. (2007) state that macroeconomic volatility makes the people of a country risk averse, a behavior that is reflected in their consumption. Risk aversion is caused by different forms of uncertainty: economic, political, investment uncertainty, and so on. Since people lack trust in the economic environment, they will be reluctant to invest their money, and this will have a negative impact on output growth and, implicitly, on future consumption. The effects of volatility on welfare are even more intense in countries that are poor, or not very well developed from a financial and institutional point of view. Given these examples of the negative effects of volatility at the macroeconomic level, Danielsson et al. (2016) maintain that after the financial crisis of 2008, policymakers searched for better ways to forecast volatility and to identify signals of the financial and economic instability that an economy may experience in the future.

Based on the results obtained from the application of different volatility forecasting models, policymakers propose and implement macroprudential and microprudential measures for mitigating the outcomes of these instabilities. Also, when the levels of volatility indicate an impending crisis, policymakers have a justification for official interventions, such as requiring banks to reduce the level of risk assumed in their operations.

As can be seen from the researchers' opinions presented above, the research on and application of volatility has been driven by the interests of the parties who use this concept to achieve their objectives. Thus, the research around volatility has been based on the need of quantitative finance analysts to evaluate the risks of financial markets, the need of policymakers interested in preventing financial crises and their negative effects, and the desire of researchers interested in developing efficient models for forecasting volatility. Regardless of the perspective from which one looks at the evolution of this concept, it can be concluded that over time people have gained more knowledge on this topic and have extended the use of volatility to more areas by improving the existing models.

Section 3: Volatility in agricultural commodities

The evolution of agricultural product prices has not always matched the pattern generated by macroeconomic models. Looking at the research on this topic, one can identify many reasons explaining the previous statement, which is why this section is dedicated to explaining price volatility in agriculture at the worldwide level, and in particular in the USA. Thompson, Smith et al. (2012) argue that one cause of agricultural price volatility might be the lack of agricultural product stocks at the international level. Girardi (2012) brings into the discussion the effects of financialization, which led to the emergence of a new category of players in commodity markets, the financial investors, who are assumed to influence to a certain degree the price evolution in these markets. Girardi (2012) analyzes this topic in greater depth by focusing on the role of investors in the fluctuation of agricultural commodity prices during the financial crisis.

The hypothesis from which Girardi starts his analysis is the following: after 2000, when the phenomenon of financialization started to become more prominent, more and more financial investors became interested in the opportunity to invest in commodity futures markets. This trend shaped the formation of futures prices, which influenced the spot prices of commodities. Towards the end of 2008, financial investors decided to sell their contracts, which led to a fall in commodity prices. The author maintains that confirmation of this hypothesis should lead policy makers to take action with a view to limiting speculation in commodity derivatives markets. Confirming the hypothesis would require conclusive evidence that financial speculation affected the price volatility of agricultural commodities.

In an attempt to demonstrate that there were no physical market fundamentals to justify the sharp increase and decrease of agricultural commodity prices in 2007-2008, the author analyzes the evolution of these market fundamentals after 2000.

The prices of agricultural commodities were characterized by stability and moderation from the 1980s until 2007. In 2007-2008 and 2010-2011 prices increased sharply, with price falls occurring between these periods. The following figure shows the evolution of agricultural commodity prices at the international level over the period 1980-2010.

Figure 1: Evolution of agricultural commodity prices at international level
Source: Girardi, 2010, p. 80

As illustrated in the graph above, the period January 2007 - June 2008 was marked by sharp increases in the prices of the main agricultural products, which reached their highest levels in three decades. The prices of agricultural commodities recorded significant changes especially in the period 2006-2011, due to a series of factors: an increasing demand for agricultural products, a decrease in the stocks of the main producers, volatility in oil prices, and the growth of the biofuel industry in the US. In fact, the evolution of oil prices and the biofuel industry are the factors with the greatest impact on the interconnection between the energy and agriculture sectors.

As shown in the graphical illustration, there was an increase of 78% in the wheat price, 75% in the corn price, 166% in the rice price and 116% in the soybean price. In the following six months, until the end of 2008, the prices of these agricultural products decreased rapidly, as follows: the wheat price by 37%, the corn price by 45%, the rice price by 34%, and the soybean price by 42%. As Etienne, Barrera et al. (2016) maintain, high levels of price volatility are particularly harmful to those countries for which the trade of agricultural commodities is very important.

Regarding the case of the USA, wheat prices have recorded high rates of price volatility since the 1960s, a situation that has occurred for other agricultural products as well. The main cause of the volatility in wheat prices was a decrease in stock levels in the main grain-exporting countries. The United States is the largest wheat-exporting country, and it holds a sixth of the world's wheat reserves.

Other factors mentioned by the author as contributing to the volatility in wheat prices are the following:

- A decrease in wheat production, which was noticed worldwide;

- An increase in the demand for wheat, due to the fact that it is used for other purposes as well, such as biofuels;

- A sharp decrease of stock levels worldwide after 2000. A decrease below the critical level, which represents a certain percentage of world consumption, leads to a destabilization of world prices.

Another perspective on wheat price volatility takes into consideration the effects of globalization. Financial analysts expected a stabilization of wheat prices at the global level, driven by an intensification of exchanges between countries and by a higher degree of liberalization of these exchanges. However, the reality did not match these expectations. For example, in 2006 wheat recorded high price volatility that affected developing and developed countries at the same time. Among the explanations offered are the climatic factor and internal malfunctions of the wheat market. Climatic factors explain wheat volatility to a certain degree in the sense that they affect the production of wheat, leading to increased risks in terms of the quantity, quality and price of the wheat. The quantity risk refers to the fact that producers cannot predict the quantity they will harvest at the end of a year. The quality risk means that producers are not able to predict the quality level of the wheat. Lastly, the price risk refers to the uncertainty of producers regarding the price at which they will sell the wheat.

Internal malfunctions of the wheat market may be caused by the actions of the players in this market (producers and policymakers), and by imperfections in the market structure.

Another interesting case of price volatility for agricultural products is the evolution of the corn price in the USA, where a large part of corn production is used for obtaining ethanol, an alternative source of energy.

In the US, the government implemented a series of policies to encourage research and development in the field of renewable energy for at least two main reasons: to reduce the impact of oil price volatility and to reduce greenhouse gas emissions. Investments in the ethanol industry led to an increase in corn production, while the consumption of corn for other purposes remained relatively stable over the past three decades. According to Trujillo-Barrera et al. (2011), ethanol consumption started to increase in the 1990s, such that after 2010 it used 25%-30% of corn production.

Trujillo-Barrera et al. (2011) argue that the connections created between the energy sector and agriculture are factors that increase price uncertainty. Price volatility in the field of energy is a source of higher risk in agriculture, making investment choices more difficult. Other areas of agriculture influenced by energy price volatility include the prices of food products, trade tariffs, agricultural policies, business decisions, etc. Taking the example of corn and ethanol production mentioned previously, Trujillo-Barrera et al. (2011) observed that price volatility in the corn and ethanol markets was determined to a certain degree by the volatility of crude oil prices. More precisely, the research performed by the authors highlighted that the volatility of the crude oil price determined volatility of up to 20% in the corn and ethanol markets, and that periods of high crude oil price volatility determined an increase of up to 50% in the volatility of corn and ethanol.

Regarding the interconnections between volatility in the corn market and volatility in the ethanol market, the authors observed that the effects of corn price volatility on ethanol price volatility are significant, but not the other way around. In other words, ethanol prices have little influence on the volatility of corn prices, because there are more factors that may have a significant impact on corn prices, such as exports or food products made from corn. In the case of the ethanol market, corn price volatility may have a significant impact because this market is protected by high tariffs in the USA, which facilitates forecasts regarding ethanol production. A better understanding of the interdependencies between price volatility in the crude oil, ethanol and corn markets may help producers and consumers of corn and ethanol to make investment and hedging decisions in these markets.

One perspective on the possible causes of agricultural price volatility is provided by Etienne,
Barrera et al (2016). These authors emphasize that most of the literature dedicated to this topic
has focused on the influence of the following factors on agricultural commodity volatility:
increased demand from developing countries, dollar depreciation, speculative effects in
commodity markets, and the growth of biofuel production, which consumes agricultural
products and reduces the stock available for consumption. Nevertheless, little attention has
been paid to the effects of input prices on fluctuations in agricultural commodity prices.

One example of an input price that might affect the prices of agricultural products is the price of
fertilizers. In US agricultural production, fertilizers are one of the main inputs used to enhance
soil quality and help crops grow more rapidly. Fertilizers recorded price spikes similar to those
of agricultural products, precious metals and energy during the period 2003-2008.

Fertilizers, in turn, are produced from several input substances, such as urea and ammonia,
which are derived from natural gas and which represent approximately 40% of a fertilizer's
price. Between 2000 and 2006, the natural gas price increased, leading to a decrease of
approximately 40% in US ammonia production. Innovations in natural gas production later
stabilized its price, which was followed by an increase in ammonia production.

Agricultural producers use three types of fertilizers for their crops: nitrogen, potash and
phosphate. The first type is used to enhance soil quality and the growth of crops. Ammonia is
one of the nitrogen fertilizers, and the one most used in agricultural production. As stated
earlier, natural gas is one of its main inputs, representing approximately 70% of the cost of
ammonia production. The other two types of fertilizers are used for plant maintenance.

Etienne, Barrera et al (2016) performed an econometric study to identify whether there is a
correlation between the evolution of the prices of natural gas, fertilizers and corn. They chose
corn for this comparison because corn production uses the largest quantity of fertilizer. The
following graphical illustrations present the evolution of the three price series during the period
1994-2014.
Figure 2: The evolution of natural gas, ammonia and corn prices between 1994 and 2014
Source: Etienne, Barrera et al (2016), pp. 157
As can be seen, the graphical illustrations indicate patterns in the evolution of each variable.
Firstly, in the case of corn, increasing prices can be observed in 1994-1996, 2007-2008, and
2010-2014. Secondly, similar price increases can be observed for fertilizers in 2007-2008 and
2010-2014. In both cases, prices did not return after 2010 to their pre-crisis levels. Thirdly,
natural gas prices did not follow a pattern similar to the other two variables, except during
2007-2008. This argument emphasizes that the financial crisis of 2007-2008 had a strong
impact on the commodities market.
The authors analyse the same data series by computing the coefficient of variation per year.
The graphical illustration of the coefficient of variation allows price volatility to be compared
between different data series. The following graphical illustrations present the evolution of the
coefficients of variation for natural gas, fertilizers and corn for the period 1994-2014.

Figure 3: Coefficient of variation in the case of natural gas, ammonia and corn prices for the
period 1994-2014
Source: Etienne, Barrera et al (2016), pp. 158
As the graphical illustrations indicate, the coefficient of variation records high values for all
three commodities during the financial crisis. Of the three, natural gas experienced price
volatility most frequently: in 2000-2002, 2006-2007, 2008-2010, and 2012-2013. For ammonia
fertilizer, periods of price volatility can be observed in 2000-2003, 2008-2010, 2011 and 2013.
In the case of corn, price volatility can be noticed during almost the entire period analysed,
except for 1997-2001.

All in all, the research performed by Etienne, Barrera et al (2016) demonstrated a strong
correlation between the price patterns of fertilizers and corn between 1994 and 2014, and a
medium correlation between these two commodities and natural gas.
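The coefficient of variation used by the authors is simply the ratio of the standard deviation to the mean, computed year by year, which makes series with different price levels comparable. A minimal sketch (the price values below are illustrative, not the authors' data):

```python
import statistics

def coefficient_of_variation(prices):
    """Ratio of the standard deviation to the mean for one series of prices."""
    return statistics.stdev(prices) / statistics.mean(prices)

# Hypothetical yearly price series (illustrative values, not the authors' data)
corn_crisis_year = [4.0, 5.5, 7.0, 6.0, 4.5]
corn_calm_year = [2.0, 2.1, 1.9, 2.0, 2.0]

# A turbulent year shows a much higher coefficient of variation
assert coefficient_of_variation(corn_crisis_year) > coefficient_of_variation(corn_calm_year)
```

Because the measure is scale-free, it allows the kind of cross-commodity comparison the authors perform between natural gas, ammonia and corn.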

Many other researchers have examined the relationship between the energy market and
agriculture in the USA. According to Hertel and Beckman (2011), a low correlation was
observed between these two markets because biofuel production was not practiced at large
scale. However, in recent years, researchers such as Du and McPhail (2012) noticed an
increased linkage between the two markets, with clear evidence of a link between crude oil and
corn prices. McPhail et al (2012) note that there is a debate among researchers regarding the
link between agriculture and energy. On one side, it is considered that the use of grain and
oilseeds in biofuel production was not a determinant factor in agricultural price volatility. On
the other side, it is considered that the increasing demand for biofuels had the most significant
impact on agricultural price volatility.

As a conclusion to this part of the study, the price volatility of agricultural products may be
influenced by numerous factors, given that countries are becoming more and more
interconnected. The analysis performed above demonstrated that the evolution of agricultural
product prices is determined by major events that affect other sectors of the economy, such as
a financial crisis, but also by factors strictly connected to agriculture, such as the production of
alternative energy.
Chapter 2
Section 1: Volatility modelling

The evolution of the economic environment is usually analyzed based on data sets gathered
over several periods. The basic idea behind this analysis is that events which occurred in the
past will impact the economy in the present and in the future. Past observations also serve as a
justification for the current expectations of economic agents, and current observations are used
to formulate future expectations regarding the outcomes that economic agents will obtain. All
in all, the economic environment is characterized by a certain dynamics, which can be
addressed through models in the form of equations. The purpose of these equations is to reflect
the expectations of economic agents regarding cause-effect relations between economic
variables. These equations are also estimated in order to spot fluctuations in the connections
between variables and to identify a certain degree of predictability. (Gomes, 2014)

However, as Ruppert (2011) states, it is difficult for financial analysts to make accurate
estimations of financial asset returns due to unexpected connections between these assets.
Therefore, forecasting volatility is a challenging task. Mathematical modeling represents a
solution for predicting a future market trend or for estimating the future expected values of
financial assets. Even though some financial analysts consider that future events cannot be
predicted, some researchers demonstrated the contrary. One example is the demonstration
that financial volatility is characterized by a tendency to cluster and by autocorrelation, meaning
that future values are determined to a certain degree by past values. These characteristics
justified financial analysts' efforts to create models for volatility forecasting, and thus to
formalize the concept.

As established at the beginning of this paper, the intention is to apply two such volatility
forecasting models to data from the agricultural field. The selected models are the GARCH
model and the neural network. The rest of this section presents these two models.

The Autoregressive Conditional Heteroskedasticity (ARCH) model was created by Engle in
1982. The name chosen for this model reflects two properties. Firstly, the word autoregressive
indicates that the conditional variance at moment t is influenced by the previous values
recorded for the residuals (at moment t-1). Secondly, the expression conditional
heteroskedasticity underlines the fact that the conditional variance changes permanently.
Engle (1982) emphasizes that this model distinguishes between unconditional and conditional
variances, where the conditional variance may change over time due to past errors.
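The mechanism described above can be sketched with the simplest member of the family, ARCH(1), where the conditional variance at t is a constant plus a multiple of the squared residual at t-1 (the parameter values below are illustrative, not estimated):

```python
def arch1_variances(residuals, omega=0.1, alpha=0.4):
    """One-step conditional variances under ARCH(1):
    sigma2_t = omega + alpha * eps_{t-1}^2.
    omega and alpha are assumed values, not fitted ones."""
    sigma2 = [omega / (1 - alpha)]          # seed with the unconditional variance
    for eps in residuals[:-1]:
        sigma2.append(omega + alpha * eps ** 2)
    return sigma2

# A large residual at t-1 raises the conditional variance at t,
# after which the variance falls back once residuals are small again
eps = [2.0, 0.0, 0.0]
vars_ = arch1_variances(eps)
assert vars_[1] > vars_[0] and vars_[1] > vars_[2]
```

This shows the sense in which the conditional variance "changes permanently": it is recomputed every period from the most recent squared error.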

According to Bollerslev (1986), the ARCH model was successfully used in forecasting the
volatility of different economic phenomena. For example, Engle (1982) used the model for
inflation, starting from the assumption that uncertainty in this case changes over time.

Over time, various extensions of this model were developed, such as GARCH, EGARCH,
TARCH, and OGARCH. Since its introduction, the ARCH model and its extensions have been
the most frequently used models for volatility forecasting. (Ruppert, 2011)

The GARCH model was developed by Bollerslev (1986), who was a student of Engle, the
author of the Autoregressive Conditional Heteroskedasticity model. These two methods
attracted plenty of attention from researchers and financial analysts, which explains their wide
use and the development of many extensions. The GARCH model is used to estimate volatility
in financial markets; financial analysts rely on it for volatility modelling because it forecasts the
prices of financial instruments in a context closer to reality.

Financial analysts use GARCH models in trading, investing, hedging and dealing. This
statistical model is applied by financial institutions to forecast the volatility of stock returns. The
information obtained with the help of this model helps financial institutions identify the stocks
expected to generate higher returns, and allocate the available budget between different types
of investments according to the estimated returns.

This type of volatility modeling is appropriate when asset returns are analyzed. In these cases,
volatility may record different values from one period to another, and its evolution depends on
past variance. Based on this assumption, financial analysts established that a homoskedastic
model would not be appropriate for predicting the volatility of asset returns, because such a
model assumes constant volatility. GARCH models are preferred for forecasting the volatility of
asset returns because they are autoregressive, meaning that they rely on past variances to
deliver possible values for current variances. By analyzing the errors recorded in previous
forecasts, the GARCH model reduces the errors that may occur in ongoing predictions, leading
to a higher degree of forecasting accuracy.
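The autoregressive structure described above can be made concrete with the standard GARCH(1,1) variance recursion, in which today's conditional variance combines a constant, the last squared residual, and yesterday's conditional variance. The parameter values below are illustrative; in practice they are estimated by maximum likelihood (for example with the `arch` Python package):

```python
def garch11_variances(residuals, omega=0.05, alpha=0.1, beta=0.85):
    """Conditional variance recursion for GARCH(1,1):
    sigma2_t = omega + alpha * eps_{t-1}^2 + beta * sigma2_{t-1}.
    omega, alpha and beta are assumed values, not fitted ones."""
    sigma2 = [omega / (1 - alpha - beta)]   # seed with the unconditional variance
    for eps in residuals[:-1]:
        sigma2.append(omega + alpha * eps ** 2 + beta * sigma2[-1])
    return sigma2

# A large shock raises next period's variance, which then decays geometrically
shocks = [3.0, 0.0, 0.0, 0.0]
vars_ = garch11_variances(shocks)
assert vars_[1] > vars_[0]             # variance jumps after the shock
assert vars_[1] > vars_[2] > vars_[3]  # and decays back toward the long-run level
```

The slow geometric decay governed by beta is exactly the persistence and volatility clustering that makes GARCH preferable to a constant-variance model for asset returns.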

A better perspective on the concept of heteroskedasticity rests on the following argument.
Homoskedastic models are based on the assumption that the variance of all disturbance terms
is the same. Nevertheless, in reality this situation rarely occurs: variance is not always
constant. According to a homoskedastic model, all data in a statistical test would have equal
weights, while in reality these may vary. As a result of using such a model, the statistical
analysis would be inaccurate because the confidence intervals and standard errors would be
smaller than they should be. This would lead a financial analyst to consider the model more
precise than it is in reality.

On the other side, heteroskedastic models such as ARCH/GARCH aim to model the
disturbance terms. The variance reflected by the disturbance terms is perceived as the
volatility or risk of the financial asset analyzed.

The GARCH model also has a number of extensions, such as Integrated GARCH, Exponential
GARCH, and GJR GARCH.

ARCH/GARCH models are used only for time series data. Given that time series data has a
single order in which it can be processed, these models cannot be used for cross-sectional
data. Cross-sectional data may be used when comparing two or several factors. Another
difference between time series and cross-sectional data relates to the regressors. In time
series models, there is a certain degree of correlation between the regressors at moments t
and t+1, whereas in cross-sectional data the regressors are selected randomly from
populations.

Loayza et al (2007) also underline the importance of ARCH/GARCH models in assessing the
accuracy of a model created from a data series and in establishing whether the forecast is valid
or not. Before the introduction of the ARCH model, specialists analyzed the variances of
models by applying a rolling standard deviation. This technique uses equal weights over a
certain number of previous observations. The ARCH model, on the other side, treats the
weights as parameters to be estimated. The authors consider this approach more realistic for
two reasons. Firstly, an analysis based on the assumption that all observations have equal
weights would be inaccurate, because more recent observations are more relevant for the
forecast. Secondly, the forecast should not be restricted to the weights of a finite number of
observations.
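The contrast between the two weighting schemes can be sketched as follows: the pre-ARCH rolling standard deviation gives every observation in the window the same weight, while an exponentially weighted version lets recent observations matter more. The smoothing value 0.94 is an assumption here (a common choice in practice, e.g. in RiskMetrics), not something fixed by the source:

```python
import math

def rolling_std(returns, window):
    """Pre-ARCH approach: equal weights over the last `window` observations."""
    recent = returns[-window:]
    mean = sum(recent) / len(recent)
    return math.sqrt(sum((r - mean) ** 2 for r in recent) / len(recent))

def exp_weighted_std(returns, lam=0.94):
    """Exponentially declining weights: recent observations count more.
    lam=0.94 is an assumed smoothing value."""
    weights = [lam ** i for i in range(len(returns))]   # newest gets weight 1
    total = sum(weights)
    newest_first = list(reversed(returns))
    mean = sum(w * r for w, r in zip(weights, newest_first)) / total
    var = sum(w * (r - mean) ** 2 for w, r in zip(weights, newest_first)) / total
    return math.sqrt(var)

returns = [0.01] * 8 + [0.05, -0.06]   # calm period, then a volatile spell
# Because the spike is recent, the exponentially weighted estimate exceeds
# the equal-weight estimate over the same observations
assert exp_weighted_std(returns) > rolling_std(returns, len(returns))
```

This illustrates the first of the two arguments above: when volatility has just risen, an equal-weight window reacts more sluggishly than a scheme that emphasizes recent data.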

After presenting how the GARCH model works, the following paragraphs indicate several
advantages and disadvantages of this model. Some of these were mentioned previously, but
they are developed here in a more structured way.

In terms of advantages, the first thing to emphasize is that the GARCH model enables the
identification of a volatility measure for predicting the residuals of a model. As mentioned
earlier, a homoskedastic model assumes that the squared error is constant across a data set.
However, in the case of financial data, one cannot assume constant volatility, because there
are often periods when volatility records high values. Moreover, these high-volatility periods
are characterized by an autocorrelation effect, meaning that they tend to cluster together.
Consequently, models characterized by non-constant variance are described by the term
heteroskedasticity. The investment or portfolio risk determined by non-constant volatility is
known as value-at-risk.

A second advantage of the GARCH model is its simplicity in comparison with some of its
extensions, such as EGARCH, which provide more accurate volatility forecasts.

A third advantage rests on the fact that the GARCH model enables the identification of
features of a data set such as heteroskedasticity, volatility clustering, or excess kurtosis.

In terms of disadvantages of the GARCH model, there are several aspects to mention. Firstly,
the GARCH model provides only part of the solution, in the sense that financial decisions take
into consideration information about volatilities and expected returns, but other types of
information as well. Secondly, the GARCH model achieves maximum effectiveness in relatively
stable markets, even though it is aimed at processing time-varying conditional variances. It has
been demonstrated that the GARCH model often fails at modeling large market fluctuations,
such as crashes or other unanticipated events that may lead to structural changes. Thirdly, the
GARCH model may not capture the fat tails that characterize asset return series. Often,
accounting for fat tails in GARCH models is achieved through trial and error.

All in all, these are the main aspects to emphasize about the GARCH model: its history, the
way it works, and its advantages and disadvantages. The discussion continues by covering the
same aspects for the neural network, the second model selected for volatility forecasting in this
study.

The neural network is a volatility model developed to provide more accurate forecasts of
different market variables as financial markets became more and more interconnected and
interdependent. This model enables the analysis of interrelationships between several market
variables at the same time.

Neural networks were created in the 1960s, but they started to receive more attention two
decades later, when specialists from different fields began to apply them for various purposes.
Among the areas where neural networks have been applied are economics and finance,
medical diagnosis, manufacturing and process control. In economics and finance, neural
networks have been used for purposes such as predictions in economic sectors, selection of
financial assets for investment portfolios, evaluation of bond risks, exchange rate forecasts,
option valuation, and credit or insurance risk assessment. More information regarding the
fields of application for neural networks is provided in Annex 1.

The reason why neural networks are widely used in economics and finance rests in their ability
to capture complex relationships, which is not possible with linear models. However, Ruppert
(2011) states that neural networks have not generated consistent performance across all their
applications.
The neural network is a computational technique built to function in a way similar to the human
brain. More exactly, the purpose of this technique is to process data and information in order to
identify patterns. A neural network consists of simple processing elements, or neurons, that
operate in parallel. The performance of a neural network depends on the following aspects: its
structure, the strength of the connections between processing elements, and the processing
performed by the computing elements.

Neural networks present several advantages. Firstly, a neural network enables identifying
patterns across a large number of variables without being overwhelmed by details. On the
contrary, neural networks use details that appear irrelevant in order to extract important
features of the data set. A neural network can also discover patterns that cannot be reduced to
precise rules.

Secondly, a neural network can perform several operations simultaneously, such as identifying
patterns and detecting correlations between variables at the same time. Also, given that neural
networks are used for understanding the relations between several variables, these models
can be viewed as a form of multiple regression.

Thirdly, neural networks can adapt to changes in market behavior or in the data in general.
Predictions are usually performed with time series models. However, one disadvantage of time
series relates to identifying the proper model to apply to the data (autoregressive or moving
average). This issue is encountered especially with financial market data, in the sense that
quantitative analysts have difficulties identifying a pattern in the data series. Neural networks,
however, adapt to the data, making it possible to forecast variables characterized by
nonstationarity.

As mentioned above, the performance of a neural network depends on its structure, the
strength of the connections between processing elements, and the processing performed by
the computing elements. Each of these aspects is presented below.

The network architecture, or structure, refers to the way neurons are organized in layers and to
the connections that arise between layers. The architecture depends primarily on the type of
problem for which a solution is sought. Examples of solutions that can be obtained through
neural networks are prediction, classification, pattern identification, and conceptualization.
Each of these problems may be solved through one or several network architectures, whose
variations are determined by network parameters. The parameters are the following:

- Number of layers and number of neurons. The layers consist of neurons, which process
the input variables and provide information about the relationships between the inputs
and outputs of the process. The number of layers is established according to the type
and complexity of the problem.
- Connections between neurons. The neurons in layers are fully or partially connected. In
the first case, all the neurons from one layer are connected to all the neurons from the
following layer. At the beginning of a process, the connections are assigned initial
weights, which are adjusted during the process. The network also receives seed values,
which represent hints for the network and which adjust during the process.
- Transfer function. This function aims at making sense of the input data. The
interconnected neurons send and receive data to and from other neurons. The input
data, denoted I1, I2, ..., In, are multiplied by weights specific to the type of connection
established between the neurons. In the following step, a transfer function is applied to
the results, producing a value between -1 and 1 or between 0 and 1. The process is
repeated for each neuron, until the data reaches the output layer, where it is not
multiplied again, because it represents the output of the process.
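The transfer-function step described above can be sketched for a single processing element: the inputs I1, I2, ..., In are multiplied by their connection weights, summed, and passed through a squashing function. Using tanh (one common choice; the input and weight values here are hypothetical) the result lands in the interval (-1, 1):

```python
import math

def neuron_output(inputs, weights, transfer=math.tanh):
    """One processing element: weighted sum of the inputs, then a transfer
    function; tanh squashes the sum into the interval (-1, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return transfer(total)

# Hypothetical inputs I1, I2, I3 and connection weights
out = neuron_output([0.5, -1.0, 2.0], [0.2, 0.4, 0.1])
assert -1.0 < out < 1.0
```

In a full network this computation is repeated layer by layer, the output of one layer's neurons becoming the input of the next, until the output layer is reached.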

Neural networks also present disadvantages. Firstly, the process performed by a neural
network cannot be decomposed. In other words, the connection weights involved in the
process cannot be reduced to an algorithm that can be understood outside the network.

Secondly, the neural network may fit the data more closely than is necessary. For example,
with financial market data, the network may retain idiosyncratic patterns during data
processing which do not generalize beyond the sample. In other words, the network fails to
reduce the error by identifying the relationships that are truly significant between input and
output variables, and instead memorizes meaningless relationships. However, the network will
consider these relationships significant and will generate forecasts from them. Moreover,
training takes longer than needed, which affects the output.
To sum up the presentation of neural networks, these models are gaining increasing
importance in the field of finance for at least two reasons. Firstly, financial analysts have
access to large volumes of data on market variables. Secondly, neural networks enable the
identification of relationships between several variables. Nevertheless, the effective use of
neural networks requires a series of trials and combinations regarding the structure of the
networks. In the finance field, analysts continually research ways to improve the effectiveness
of neural networks.

Section 2: Volatility forecasting

The previous sections mentioned that volatility forecasting is central to investment decisions
and portfolio construction. Given that investors are able to assume only certain levels of risk,
they have to assess the risks involved in certain investments by forecasting the volatility of
asset prices.

According to Poon and Granger (2003), volatility forecasting can be performed using historical
information or the prices of traded options. In the first case, which refers to making predictions
based on historical information, the two authors emphasize that these volatility forecasting
models use time series. These models start from the assumption that past standard deviations,
which are used for volatility forecasting, are known or can be estimated. Some examples of
time series models are the Random Walk model, the Historical Average method, the Moving
Average method, the Exponential Smoothing method, and the Exponentially Weighted Moving
Average method (EWMA).

These methods make volatility predictions by using past standard deviations in different ways.
The Random Walk model uses the past value of the standard deviation as the forecast for the
current standard deviation. The Historical Average and Exponential Smoothing methods use
all past standard deviations. The Moving Average and Exponentially Weighted Moving
Average methods discard older values of the standard deviation. Poon and Granger (2003)
mention that the Exponential Smoothing and Exponentially Weighted Moving Average methods
differ from the other methods in that they place greater weight on the more recent data used
as input in volatility forecasting.
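The differences between these estimators can be sketched directly; each function below takes a list of past standard deviations and returns a forecast. The window size and the smoothing value 0.94 are assumptions for illustration, not values prescribed by Poon and Granger:

```python
def random_walk(sds):
    """Random Walk: last period's standard deviation is the forecast."""
    return sds[-1]

def historical_average(sds):
    """Historical Average: equal weight on all past standard deviations."""
    return sum(sds) / len(sds)

def moving_average(sds, window=3):
    """Moving Average: discard observations older than `window` periods."""
    recent = sds[-window:]
    return sum(recent) / len(recent)

def ewma(sds, lam=0.94):
    """EWMA: weights decline geometrically, so recent data count more.
    lam=0.94 is an assumed smoothing value."""
    weights = [lam ** i for i in range(len(sds))]        # newest gets weight 1
    newest_first = list(reversed(sds))
    return sum(w * s for w, s in zip(weights, newest_first)) / sum(weights)

past_sd = [0.10, 0.12, 0.30]      # volatility spiked in the latest period
assert random_walk(past_sd) == 0.30
assert ewma(past_sd) > historical_average(past_sd)  # EWMA reacts faster to the spike
```

The comparison at the end shows the point made above: after a recent spike, the methods that weight recent data more heavily produce higher forecasts than the equal-weight Historical Average.
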
Other volatility forecasting models are the Smooth Transition Exponential Smoothing model
and Simple Regression. According to James Taylor (2001), the first model is similar to EWMA,
but the weights are established according to the size and sign of past standard deviations. The
Simple Regression method is based on the assumption that volatility is a function of past
values of the standard deviation and error terms.

Poon and Granger (2003) make several general observations about the models enumerated
above. All of them, except the Random Walk and Historical Average, require identifying the
optimal weighting scheme. This is done by minimizing the errors in the sample on which the
volatility forecast is performed. An alternative would be to permanently update the parameter
estimates based on new information identified during the estimation period.

Up to this point, simpler volatility forecasting models, which use past standard deviations, have
been presented. However, researchers such as Bollerslev et al (1994) bring into discussion
more sophisticated time series models belonging to the ARCH family. These models are based
not on sample standard deviations but on formulating conditional return variances through a
maximum likelihood procedure. In this case, the authors speak about one-step-ahead
forecasts. This class consists of two main categories of models, ARCH and GARCH, and
various extensions of these, such as Exponential GARCH (EGARCH), Threshold GARCH
(TGARCH), and Quadratic GARCH. Given that this class of models is discussed later in this
thesis, no further information about it is provided in this section.

Finally, a third category of time series volatility forecasting models is the category of stochastic
volatility models. The main idea behind this category is that volatility is driven by its own
innovations, which may or may not be correlated with returns. This perspective on volatility
produces fat-tailed return distributions. According to Hull and White (1988), stochastic volatility
models have two main characteristics. Firstly, the authors speak about persistence, given by
the autoregressive term specific to the volatility process. Secondly, they speak about volatility
asymmetry, induced by the correlation between returns and innovations.
The presentation delivered in this section shows the wide variety of volatility forecasting
models developed or analyzed by researchers. Nevertheless, a more comprehensive review of
the scientific papers dedicated to this topic would reveal further models for volatility
forecasting, but that is beyond the purpose of the current paper.

After this introduction to the history of volatility modelling and to the different volatility
forecasting models used by quantitative analysts, the following sub-chapters of this paper
provide information about the direction of research: problem statement, research objective and
dissertation structure.

Section 3: Related studies on GARCH and neural network

It was emphasized earlier in this paper that the actors in financial markets (portfolio managers,
option traders, etc.) are interested in forecasting volatility as accurately as possible in order to
make wise investment decisions. In the literature, GARCH and neural networks have often
been used, either separately or in combination, to forecast volatility. Arneric et al (2014) affirm
that GARCH was used in combination with neural networks because of some limitations of the
GARCH model: non-stationary behaviour, nonlinearity, and persistence in the conditional
variance.

One example of research that uses both the GARCH model and a neural network is that of Hu
and Tsoukalas (1999). The two researchers use a data set consisting of exchange rates of the
European Monetary System (EMS). The starting point of their study is previous research on
the volatility forecasting of EMS exchange rates. During the 1990s, a series of studies applied
GARCH models in order to study the positive or negative impact of the EMS on reducing
exchange rate instability. Some researchers, such as Bollerslev (1990), demonstrated with the
help of GARCH that the European Monetary System led to reduced volatility of the European
exchange rates.

Hu and Tsoukalas (1999) underline that these studies have limitations, in the sense that the
use of the GARCH model alone is not sufficient to capture all aspects of volatility. Therefore,
the authors decided to use several GARCH models to forecast conditional volatility, and
improved the results by applying an artificial neural network. This research led to several
conclusions. Firstly, it was confirmed that the European Monetary System reduces the volatility
of exchange rates. Secondly, the authors emphasize that the EGARCH model used in the
study provided superior performance due to the type of data set. Regarding the performance
of the neural network, it delivered better results in predicting absolute errors than in predicting
squared errors.

Bildirici and Ersin (2009) performed research aiming to forecast the evolution of stock returns
of the Istanbul Stock Exchange ISE 100 Index over the period 1987-2008. To perform the
forecasts, the two authors applied several models from the GARCH family, such as GARCH,
EGARCH, TGARCH, and NGARCH, together with an artificial neural network. The reason for
combining these categories of models was to obtain improved forecasting results. The authors
argued that, based on their study, the combination of GARCH models with neural networks
delivers improved forecasting results as long as the models capture the volatility efficiently.
They also emphasize that performing the study in two stages allowed them to notice that the
neural networks played an important role in their research, because they significantly improved
the forecasts.

Arneric et al. (2014) likewise compare the performance of GARCH models and neural networks.
The authors use the stock returns of the Zagreb Stock Exchange recorded over the period
2011-2014. In the first stage of the study, the researchers apply a GARCH model to the data
set, but, given the weaknesses of this model, they select a neural network to continue the
research. Their results demonstrate that neural networks outperform the GARCH model.
Nevertheless, according to the authors, neural networks pose challenges of their own: they are
difficult to adapt to forecasting the conditional variance of time series. Arneric et al. (2014)
highlight the need for further research on the use of neural networks, as their study presents
some limitations regarding the data used as input for volatility forecasting.

One last example illustrating the difference in forecasting performance between GARCH models
and neural networks is provided by Kristjanpoller and Minutolo (2016). The authors forecast
oil price volatility using a combination of a GARCH model and an artificial neural network.
Previously, researchers interested in forecasting oil price volatility had mostly relied on the
GARCH model alone, fitted to historical data. In other words, the GARCH model explained the
past behavior of oil prices, while financial analysts were primarily interested in predicting
future prices.

Kristjanpoller and Minutolo (2016) note that there is a great deal of research on models for
forecasting oil prices because of the impact these prices have on an economy: increased
inflation, decreased investment, and price volatility in other energy sectors. Most of these
studies used either the GARCH model or the artificial neural network in isolation. In the
research performed by these two authors, however, the combination of a GARCH model and
an artificial neural network was shown to deliver better volatility forecasts than traditional
forecasting models.

As these studies show, there is a substantial body of work in which researchers applied both
GARCH models and neural networks to forecast volatility in different sectors of the financial
markets. In some cases, the researchers clearly stated that combining these methods led to
better volatility forecasts. Other studies, however, found that the combination improved
forecasting capability only under certain conditions.
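
The two-stage hybrid approach described in these studies can be sketched in a minimal form: compute the GARCH(1,1) conditional variance (here with assumed, not estimated, parameters), then feed it, together with the lagged squared return, into a small feed-forward network that predicts the next squared return. All parameters and the synthetic data below are illustrative assumptions, not results from any of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stage 1: GARCH(1,1) conditional variance with assumed parameters
omega, alpha, beta = 0.05, 0.10, 0.85
r = rng.normal(0.0, 1.0, 600)                    # synthetic daily returns
sigma2 = np.empty_like(r)
sigma2[0] = r.var()
for t in range(1, len(r)):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]

# Stage 2: a small MLP refines the GARCH variance into a volatility forecast
X = np.column_stack([sigma2[:-1], r[:-1] ** 2])  # features known at t-1
y = r[1:] ** 2                                   # proxy for realized variance at t
W1 = rng.normal(0, 0.1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.1, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(500):                             # plain batch gradient descent on MSE
    h = np.tanh(X @ W1 + b1)                     # hidden layer
    pred = (h @ W2 + b2).ravel()                 # predicted squared return
    g_pred = 2.0 * (pred - y)[:, None] / len(y)  # d(MSE)/d(pred)
    g_W2 = h.T @ g_pred;  g_b2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    g_W1 = X.T @ g_h;     g_b1 = g_h.sum(0)
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

mse_hybrid = float(np.mean((pred - y) ** 2))
mse_garch = float(np.mean((sigma2[:-1] - y) ** 2))  # GARCH variance used directly
```

In the cited studies the first stage is fitted by maximum likelihood and the network is trained on a training window and evaluated out-of-sample; the sketch above collapses both stages onto one synthetic sample purely to show how the pieces connect.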

The vast majority of recent papers that attempt to forecast volatility out-of-sample have been entirely
univariate in nature, using past realisations of volatility to predict its future path. Akgiray (1989), for
example, finds the GARCH model superior to ARCH, exponentially weighted moving average, and historical
mean models for forecasting monthly US stock index volatility. A similar result concerning the apparent
superiority of GARCH is observed by West and Cho (1995) using one-step-ahead forecasts of Dollar
exchange rate volatility, although for longer horizons the model behaves no better than their alternatives.
Pagan and Schwert (1990) compare GARCH, EGARCH, Markov switching regime and three non-parametric
models for forecasting monthly US stock return volatilities. The EGARCH, followed by the GARCH models,
performs moderately; the remaining models produce very poor predictions. Franses and van Dijk (1996)
compare three members of the GARCH family (standard GARCH, QGARCH and the GJR model) for
forecasting the weekly volatility of various European stock market indices. They find that the non-linear
GARCH models are unable to beat the standard GARCH model. Finally, Brailsford and Faff (1996) find
GJR and GARCH models slightly superior to various simpler models for predicting Australian monthly
stock index volatility. The conclusion arising from this growing body of research is that forecasting volatility
is "a notoriously difficult task" (Brailsford and Faff, 1996, p. 419), although it appears that conditional
heteroscedasticity models are among the best currently available. In particular, more complex non-linear
and non-parametric models are inferior in prediction to simpler models, a result echoed in an earlier
paper by Dimson and Marsh (1990) in the context of relatively complex versus parsimonious linear models.

Part 2
References

1. Bildirici, M., Ersin, O., O., (2009), Improving forecasts of GARCH family models with
the artificial neural networks: an application to the daily returns in Istanbul Stock
Exchange. Expert Systems with Applications. Vol. 36, pp. 7355-7362.
2. Bollerslev, T., (1986), Generalized Autoregressive Conditional Heteroskedasticity.
Journal of Econometrics, Vol. 31, pp. 307-327.
3. Bollerslev, T., Engle, R. F., Nelson, D.B., (1994), ARCH Models. Handbook of
Econometrics. Vol. 4, pp. 2959-3038.
4. Danielsson, J., Valenzuela, M., Zer, I., (2016), Learning from history: volatility and
financial crises. FEDS Working Paper No. 2016-093.
5. Du, X., McPhail, L., L., (2012), Inside the Black Box: The Price Linkage and
Transmission between Energy and Agricultural Markets. The Energy Journal. Vol. 33,
pp. 171-194.
6. Etienne, X., Barrera, A., Wiggins, S., (2016), Price and volatility transmissions between
natural gas, fertilizer, and corn markets. Agricultural Finance Review. Vol. 76, pp. 15-
26.
7. Girardi, D., (2012), Do financial investors affect the price of wheat. PSL Quarterly
Review. Vol. 65, pp. 79-109.
8. Gomes, O., (2014), Complexity in Economics: Cutting Edge Research. Springer
International Publishing.
9. Hayek, F., (2006), The Constitution of Liberty (Routledge Classics). Routledge, New
Edition.
10. Hertel, T., W., Beckman, J., (2011), Commodity Price Volatility in the Biofuel Era: An
Examination of the Linkage between Energy and Agricultural Markets. Working Papers:
16824. National Bureau of Economic Research, Inc.
11. Hull, J., White, A., (1988), An Analysis of the Bias in Option Pricing Caused by a
Stochastic Volatility. Advanced Futures Options Research. Vol. 3, pp. 27-61.
12. Keynes, J. M., (1936), The General Theory of Interest, Employment and Money.
London: Macmillan.
13. Kose, M. A., Prasad, E. S., Terrones, M. E., (2003), Financial Integration and
Macroeconomic Volatility. IMF Staff Papers. Vol. 50, pp. 119-142.
14. Kristjanpoller, W., Minutolo, M., (2016), Forecasting volatility of oil price using an
artificial neural network-GARCH model. Expert Systems With Applications. Vol. 65,
pp. 233-241.
15. Loayza, N.V., Ranciere, R., Serven, L., Ventura, J., (2007) Macroeconomic Volatility
and Welfare in Developing Countries: An Introduction. The World Bank Economic
Review. Vol. 21, pp. 343-357.
16. McPhail, L., L., Du, X., Muhammad, A., (2012), Disentangling Corn Price Volatility:
The Role of Global Demand, Speculation and Energy. Journal of Agricultural and
Applied Economics. Vol. 44, pp. 401-410.
17. Minsky, H., (1992), The Financial Instability Hypothesis. Working Paper 74, Jerome
Levy Economics Institute, Annandale on Hudson, NY.
18. Munier, B., (2016), Commodity price volatility: causes and impact on the EU
agricultural markets. Retrieved November 18, 2016. Available at:
http://www.momagri.org/UK/momagri-model/Commodity-Price-Volatility-Causes-and-Impact-on-the-EU-Agricultural-Markets_695.html
19. Poon, S. H., Granger, C.W.J., (2003), Forecasting Volatility in Financial Markets: A
Review. Journal of Economic Literature, Vol. 41, pp. 478-539.
20. Rothbort, S., (2007), Understanding the four measures of volatility. The Street.
Retrieved March 1, 2017. Available at:
https://www.thestreet.com/story/10343098/1/understanding-the-four-measures-of-volatility.html
21. Ruppert, D., (2011), Statistics and Data Analysis for Financial Engineering. Springer-
Verlag, New York.
22. Thompson, W., Smith, G., Elasri, A., (2012), World Wheat Price Volatility: Selected
Scenario Analyses. OECD Food, Agriculture, and Fisheries Policy Papers. No. 59,
OECD Publishing.
23. Trujillo-Barrera, A., Mallory, M., Garcia, P., (2011), Volatility Spillovers in the U.S.
Crude Oil, Corn, and Ethanol Markets. Proceedings of the NCCC-134 Conference on
Applied Commodity Price Analysis, Forecasting, and Market Risk Management.
Annex 1

Neural networks have been applied in the following fields:
Volatility forecasting;
Evaluation of the risks of credits or insurances;
Knowledge processing;
Medical applications;
Inspections for identifying automobile malfunctions;
Pattern classification;
Database retrieval;
Sensor signal processing.
