

Author's personal copy

journal homepage: www.elsevier.com/locate/eswa

AdaBoost ensemble for financial distress prediction: An empirical comparison with data from Chinese listed companies

Jie Sun ⇑, Ming-yue Jia, Hui Li

School of Economics and Management, Zhejiang Normal University, P.O. Box 62, YingBinDaDao 688, Jinhua 321004, Zhejiang Province, PR China

Keywords: Financial distress prediction; AdaBoost ensemble; Single attribute test; Decision tree; Support vector machine

Abstract: Due to the important role of financial distress prediction (FDP) for enterprises, it is crucial to improve the accuracy of FDP models. In recent years, classifier ensembles have shown promising advantages over single classifiers, but the study of classifier ensemble methods for FDP is still not comprehensive enough and remains to be further explored. This paper constructs AdaBoost ensembles respectively with single attribute test (SAT) and decision tree (DT) for FDP, and empirically compares them with single DT and support vector machine (SVM) classifiers. After designing the framework of the AdaBoost ensemble method for FDP, the article describes the AdaBoost algorithm as well as the SAT and DT algorithms in detail, followed by the combination mechanism of multiple classifiers. On an initial sample of 692 Chinese listed companies and 41 financial ratios, 30 holdout experiments are carried out for FDP respectively one year, two years, and three years in advance. In terms of experimental results, AdaBoost ensemble with SAT outperforms AdaBoost ensemble with DT, the single DT classifier, and the single SVM classifier. In conclusion, the choice of weak learner is crucial to the performance of an AdaBoost ensemble, and AdaBoost ensemble with SAT is more suitable for FDP of Chinese listed companies.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

In recent years, with the outbreak of the global financial crisis, many economic entities have suffered great losses or even gone bankrupt. Such shocks impact financial institutions and all kinds of enterprises, as well as their related interested parties such as stockholders, creditors, auditors, clients, and employees. For this reason, it is necessary and urgent for enterprises to raise their consciousness of crisis management. As a useful tool for scientific decision making, financial distress prediction (FDP) plays an important role in the prevention of corporate failure. From this point of view, the accuracy of the FDP model is of crucial importance, and many researchers have focused on how to construct efficient FDP models. To improve model accuracy, classifier ensemble is a promising method for FDP. However, research on ensemble methods for FDP arose only recently and urgently needs to be studied comprehensively. This paper further explores AdaBoost ensemble for FDP.

The rest of the paper is divided into five parts. Section 2 makes a brief literature review. Section 3 states the contribution of this paper. Section 4 presents the methodology, which includes the framework of the AdaBoost ensemble method for FDP, the AdaBoost algorithm, the weak learners of single attribute test and decision tree, and the combination mechanism of multiple classifiers. Section 5 presents the empirical experiment, in which the experimental data is described and analysis is made according to the experimental results. Section 6 makes the conclusion.

2. Literature review

FDP is a domain which has been studied for many decades. As early as the 1930s, Fitzpatrick (1932) made a pioneering attempt to compare the financial ratios of successful industrial enterprises with those of failed companies. Beaver (1966) used univariate analysis to predict corporate bankruptcy and found that financial ratios have discriminative ability for corporate failure. From then on, FDP continued to be widely studied due to its important role in decision aiding. Altman (1968) developed the famous Z-score model using multiple discriminant analysis (MDA), which can be utilized to classify companies into two categories by a combination of five financial ratios. Ohlson (1980) applied the logistic regression (Logit) model to FDP, which is able to indicate the probability of financial distress.

FDP entered a booming stage in the late 1980s because of the rapid progress of artificial intelligence and data mining techniques. Frydman, Altman, and Kao (1985) carried out FDP through decision tree (DT), which can draw rules that are easy to understand. Odom and Sharda (1990) began to construct neural network (NN) models for FDP. Since then, NN has become one of the most widely used machine learning methods for FDP.

⇑ Corresponding author. Tel.: +86 13454942829.
E-mail addresses: sunjiehit@gmail.com (J. Sun), lihuihit@gmail.com (H. Li).

0957-4174/$ - see front matter © 2011 Elsevier Ltd. All rights reserved.
doi:10.1016/j.eswa.2011.01.042


Many studies (Carlos, 1996; Fletcher & Goss, 1993; Odom & Sharda, 1990; Pendharkar, 2005; Zhang, Hu, Patuwo, & Indro, 1999) concluded that NN outperforms traditional statistical methods. NN has the advantage of strong nonlinear mapping ability, but its black-box property makes the learned knowledge difficult for corporate managers to understand.

Developed on the basis of statistical learning theory, support vector machine (SVM) is a relatively new machine learning technique (Kim, Kim, & Lee, 2002). SVM was applied to bankruptcy prediction by Shin, Lee, and Kim (2005) and Min and Lee (2005) with Korean data, and by Hui and Sun (2006) with data of Chinese listed companies. They all used the radial basis function (RBF) as SVM's kernel function, and supported the conclusion that SVM outperforms MDA, Logit and NN in FDP. Instead of empirical risk minimization, SVM uses the principle of structural risk minimization, which well prevents SVM from over-fitting. Besides, the problem of local optima can also be avoided by the SVM algorithm, because it solves a convex optimization problem in which the local optimal solution is also the global optimal solution. Finally, SVM can provide good generalization ability and stable classification performance for relatively small samples. From this point of view, SVM is superior to NN, because NN easily over-fits when the sample is not large enough. Recently, Ding, Song, and Zen (2008) and Boyacioglu, Kara, and Baykan (2009) further investigated SVM-based FDP, and affirmed that SVM can serve as a promising FDP model.

Considering the possible limitations of a single classifier, more and more researchers began to pay attention to FDP based on multiple classifier combination, or classifier ensemble. It is expected to reduce the variance of the estimated error and improve the whole recognition performance (Kim, Min, & Han, 2006; Kim et al., 2002; Ruta & Gabrys, 2005). To construct an effective multiple classifier system, diversity is essential, which means the base classifiers to be combined should be different. By now, several methods to produce such diversity have been proposed.

Firstly, diverse base classifiers can be generated by applying different learning algorithms (with heterogeneous model representations) to a single data set. Jo and Han (1996) integrated CBR, NN, and MDA to predict bankruptcy, and concluded that the combined model is superior to each independent one. Sun and Li (2008) proposed an FDP method based on weighted majority voting combination of MDA, Logit, DT, NN, SVM, and CBR, and Cho, Kim, and Bae (2009) introduced an integration strategy with subject weight based on NN to combine MDA, Logit, NN and DT for bankruptcy prediction. They concluded that FDP based on the combination of multiple classifiers is superior to a single classifier in terms of accuracy rate or stability to some extent. Li and Sun (2009) put forward a multiple CBR system based on majority voting, which inherits the ability of producing maximum accuracy generated by its components, improves the ability of producing minimum accuracy, and achieves more stability. Sun and Li (2009) and Hung and Chen (2009) respectively studied FDP ensemble methods based on classifier selection with different inner structures. The former called it serial combination of multiple classifiers and compared it with candidate single classifiers, concluding that for the two-category FDP problem, serial combination does not show much superiority over the best base classifier. The latter called it selective ensemble and compared it with stacking ensemble by voting and weighting, concluding that selective ensemble performs better than stacking ensemble.

Besides, Tsai and Wu (2008) used an NN ensemble for bankruptcy prediction, whose diversified base classifiers were constructed on different data sets from three countries. Their experimental results showed that the NN ensemble did not outperform the single best NN classifier, based on which they considered that the proposed multiple classifier system may not be suitable for a binary classification problem such as bankruptcy prediction.

Alternatively, different training data sets can be produced by selecting from the same initial training set according to a certain mechanism, and then used to generate diverse classifiers through the same learning algorithm. Two popular methods for creating such ensembles are bagging and boosting. Alfaro, Gámez, and García (2007) showed that AdaBoost, one of the most popular boosting algorithms, decreases the generalization error and improves accuracy in its application to FDP. Alfaro, García, Gámez, and Elizondo (2008) carried out an empirical comparison for FDP and showed that AdaBoost with DT outperforms NN in both the cross-validation and test set estimations of the classification error. Kim and Kang (2010) established bagging and AdaBoost ensembles with NN and compared them with a single NN classifier, indicating that bagged and boosted NN ensembles consistently improve predictive accuracy.

3. Contribution of this paper

Former researches on AdaBoost ensemble for FDP used DT or NN as the weak learner, and both were compared with a single NN classifier. As the simplest classification method for FDP, univariate analysis, also called single attribute test (SAT) in this paper, needs lower computational cost than almost all other classification methods. In addition, and more importantly, SAT itself has real weak learning ability, which can produce a moderately accurate but not so strong classifier. As a universal principle, for two things with the same function, the simpler is the better. However, to the best of our knowledge, no literature has provided evidence on whether AdaBoost ensemble with SAT is more suitable for FDP than AdaBoost ensemble with DT. For this reason, this paper constructs AdaBoost ensembles respectively with SAT and DT for FDP, and empirically compares them with single DT and SVM classifiers. The reason why SVM, instead of NN, is chosen for comparison is that SVM has been proved to be a prominent single classifier for FDP in many studies, and it is superior to NN in terms of generalization ability, especially when the sample is not so large. Therefore, this study contributes further insight into FDP methods based on AdaBoost ensemble, particularly for situations where FDP needs to be made with limited samples. It is also believed that such empirical results can provide a useful guideline for the practice of FDP.

4. Methodology

4.1. Framework of AdaBoost ensemble method for FDP

As a commonly used technique for constructing ensemble classifiers, boosting tries to construct a classifier ensemble by developing one classifier at a time incrementally. This means each classifier that joins the ensemble is trained on a data set selectively sampled from a training data set by gradually increasing the likelihood of "difficult" data points at each step. AdaBoost, proposed by Freund and Schapire (1997), is the most well known boosting method. This paper studies the AdaBoost ensemble method for FDP, whose framework is designed as in Fig. 1. It firstly samples a training set from the initial data set according to a uniform distribution (W1), and then adaptively adjusts each example's weight in terms of whether it is difficult or easy to classify. Namely, the initial training examples which are misclassified by the weak learner trained in the most recent step are regarded as the difficult ones, and their weights are increased. The updated weight distribution is then used to sample another training set from the initial data set, on which another weak learner can be trained. After T such iterations, the AdaBoost ensemble for FDP is composed of T weak learners, whose individual outputs are combined to produce the final prediction result.
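The sampling-and-reweighting loop described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: it assumes a simple threshold stump as the single-attribute weak learner, and the function and variable names are invented for the example.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weak learner: a single attribute test, i.e. the best threshold on one
    attribute under sample weights w (labels are in {-1, +1})."""
    n, m = X.shape
    best = (0, 0.0, 1, np.inf)  # (attribute, threshold, polarity, weighted error)
    for k in range(m):
        for thr in np.unique(X[:, k]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, k] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (k, thr, pol, err)
    return best

def adaboost(X, y, T=10, seed=0):
    """AdaBoost by weighted resampling: each round draws a training set from
    the initial data according to the current weight distribution W_t."""
    rng = np.random.default_rng(seed)
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(T):
        idx = rng.choice(n, size=n, p=w)          # sample R_t from S_n using W_t
        k, thr, pol, _ = fit_stump(X[idx], y[idx], np.full(n, 1.0 / n))
        pred = np.where(pol * (X[:, k] - thr) > 0, 1, -1)
        E = w[pred != y].sum()                     # weighted error on the full set
        E = min(max(E, 1e-10), 1 - 1e-10)          # guard the logarithm
        alpha = 0.5 * np.log((1 - E) / E)          # classifier weight
        w = w * np.exp(-alpha * y * pred)          # y*pred = +1 if correct, -1 if not
        w = w / w.sum()                            # renormalize the distribution
        ensemble.append((k, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Weighted majority vote of the T weak learners."""
    score = np.zeros(len(X))
    for k, thr, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, k] - thr) > 0, 1, -1)
    return np.where(score > 0, 1, -1)
```

The resampling step is what gives each weak learner a different view of the data; misclassified examples gain weight and are drawn more often in later rounds.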


(Fig. 1, not reproduced here, schematizes the framework: the initial training dataset Sn is resampled under weight distributions W1, W2, …, WT; each sampled training dataset trains one weak learner; the T weak learners are applied to the testing dataset and their outputs are combined into the prediction result.)

Fig. 1. Framework of AdaBoost ensemble method for FDP.

Table 1
AdaBoost algorithm.

Input: Initial training set composed of n examples, denoted as Sn = {(x1, y1), (x2, y2), …, (xn, yn)};
  a weak learning algorithm, denoted as WeakLearner;
  an integer T specifying the total number of iterations.
Initialize: w_1^i = 1/n, i.e. W_1 = {w_1^1, w_1^2, …, w_1^n} = {1/n, 1/n, …, 1/n};
  the ensemble F = ∅.
For t = 1, 2, …, T:
  1. Take a sample R_t from S_n using distribution W_t.
  2. Build a classifier f_t using R_t as the training set.
  3. Compute E_t = \sum_{i: f_t(x_i) \neq y_i} w_t^i and \alpha_t = 0.5 \ln \frac{1 - E_t}{E_t}.
  4. Update the weights: w_{t+1}^i = normalize(w_t^i \exp(-\alpha_t l_t^i)).
Output: The ensemble F = {f_1, f_2, …, f_T} and A = {\alpha_1, \alpha_2, …, \alpha_T}.

The details of the AdaBoost algorithm, the weak learning algorithm, and the combination mechanism are stated respectively in the following subsections.

4.2. AdaBoost

Suppose Sn = {(x1, y1), (x2, y2), …, (xn, yn)} is a set of training samples, and y_i ∈ {1, −1} (i = 1, 2, …, n), which represents only two classes for simplification purposes. The weight distribution over these samples at the t-th boosting iteration is denoted as W_t = {w_t^1, w_t^2, …, w_t^n} (t = 1, 2, …, T), which is initially set uniformly. That is, the weight w_t^i (i = 1, 2, …, n) is given a value of 1/n at the first iteration when t = 1, and will be updated adaptively at later iterations. At iteration t, AdaBoost builds a new training data set by sampling from the initial training data set with the weight distribution W_t, and calls the WeakLearner to construct a base classifier, represented as f_t, on this new training data set. f_t is then applied to classifying the samples in the initial data set, and the error of f_t, denoted as E_t, can be calculated as follows:

E_t = \sum_{i: f_t(x_i) \neq y_i} w_t^i    (1)

According to the idea that easy samples correctly classified by f_t get lower weights and difficult samples that are misclassified get higher weights, the samples' weight distribution is updated as follows:

w'^{i}_{t+1} = w_t^i \exp(-\alpha_t l_t^i)  (i = 1, 2, …, n)    (2)

In formula (2), \alpha_t and l_t^i are calculated as follows:

\alpha_t = 0.5 \ln \frac{1 - E_t}{E_t}    (3)

l_t^i = 1 if f_t(x_i) = y_i;  l_t^i = -1 if f_t(x_i) \neq y_i    (4)

The above calculated weights are normalized so that they add up to one:

w_{t+1}^i = \frac{w'^{i}_{t+1}}{\sum_{j=1}^{n} w'^{j}_{t+1}}  (i = 1, 2, …, n)    (5)

When T iterations have been processed, the ensemble is composed of T weak classifiers. The final AdaBoost classification result is made through combination of their classification results weighted by \alpha_t. In detail, the AdaBoost algorithm is listed in Table 1.

4.3. Weak learners

AdaBoost is a method used to significantly reduce the error of a weak learning algorithm. In theory, the weak learning algorithm can be any one, as long as it can generate classifiers which need only be a little better than random guessing (Freund & Schapire, 1996). That means the weak learners should not result in over-fitting. In this paper, SAT and DT are respectively used as weak learning algorithms for the following two reasons. Firstly, SAT and DT are both non-parametric learning algorithms which need not search for optimal parameters in the training stage, and thus they have relatively faster learning ability than parametric algorithms. This property makes them especially suitable for AdaBoost ensemble, which needs time-consuming iterations repeated many times. Secondly, AdaBoost prefers weak learning algorithms over strong ones; namely, AdaBoost can only provide very limited improvement in accuracy for a strong learning algorithm. Though Alfaro et al. (2008) used DT with deep pruning as the weak learner for the FDP AdaBoost ensemble, this paper also attempts the FDP AdaBoost ensemble with the SAT weak learner, because SAT is surpassed by all other FDP methods in former researches and usually obtains a moderately but not highly accurate learner, which just meets the requirement of AdaBoost.

4.3.1. Single attribute test

SAT is the first algorithm proposed for diagnosing corporate financial distress, and it was named univariate discriminant analysis by Beaver (1966). Suppose X is a matrix composed of m rows and n columns, where m is the number of attributes and n is the number of training samples. The SAT algorithm applied in this study is listed in Table 2.

4.3.2. Decision tree

DT began to be applied to financial distress prediction by Frydman et al. (1985). It is a kind of tree-shaped decision structure learned inductively, by recursively partitioning attribute values, from sample data whose class is already known. In DT, each non-leaf node means a test of an attribute value, and each leaf node represents a class. Thus, DT can provide well understandable knowledge and aid decision making for less-experienced users. The basic algorithm of DT is stated in Table 3 (Sun & Li, 2008).

In the basic DT algorithm described in Table 3, IG represents information gain, which is most widely used as the criterion for choosing an attribute split. Its calculation is as follows. S_n is a data set consisting of n samples. The label of a sample takes two different values, namely C_1 = 1 and C_2 = −1. If s_l (l = 1, 2) is the number of samples of class C_l, then the total information entropy needed to classify the given data set is I(s_1, s_2):


I(s_1, s_2) = -\sum_{l=1}^{2} p_l \log_2(p_l)    (6)

where p_l expresses the probability that a random sample belongs to class C_l; its value can be calculated by p_l = s_l / s.

Assume attribute A has v different values {a_1, a_2, …, a_v}. The data set S can then be divided into v subsets {S_1, S_2, …, S_v}, where subset S_q (q = 1, 2, …, v) consists of the samples whose attribute values equal a_q. Let s_{lq} denote the number of samples belonging to both subset S_q and class C_l. The information entropy needed to classify the given data set according to attribute A, denoted E(A), can be calculated as follows:

E(A) = \sum_{q=1}^{v} \frac{s_{1q} + s_{2q}}{s} I(s_{1q}, s_{2q})    (7)

I(s_{1q}, s_{2q}) = -\sum_{l=1}^{2} p_{lq} \log_2(p_{lq})    (8)

p_{lq} = \frac{s_{lq}}{s_{1q} + s_{2q}}  (l = 1, 2)    (9)

The information gain of attribute A is then IG(A):

IG(A) = I(s_1, s_2) - E(A)    (10)

The decision tree obtained by the above basic algorithm is often a big one that easily over-fits, and pruning is a useful way to avoid over-fitting.

Table 2
Single attribute test algorithm.

Input: Matrix composed of the attribute values of all training samples, denoted as X_{m×n};
  label values of all training samples, denoted as Y_n.
1. For each attribute k ∈ [1, 2, …, m], do the following:
  (1) Let F_n represent the k-th row of the matrix X_{m×n}.
  (2) Let Z = unique_ascending(F_n).
  (3) Let Size_Z = length(Z) − 1.
  (4) For each j ∈ [1, 2, …, Size_Z], do the following steps:
    (a) Let f_j = [Z_j + Z_{j+1}]/2.
    (b) Compute the classification error, denoted as err_j, on the assumption that f_j is the partitioning point for attribute k.
  (5) Let Err_k represent the minimal value in err = [err_1, err_2, …, err_{Size_Z}], i.e. Err_k = min_{j=1}^{Size_Z}(err_j), and let PP_k represent the best partitioning point for attribute k corresponding to Err_k, i.e. PP_k = f(j*), where j* is the position index in the array err corresponding to the minimal value.
2. Let b = PP_{k*} if Err_{k*} = min_{k=1}^{m}(Err_k). This means attribute k* is selected to construct the single attribute model for classification.
3. If attribute k* is a positive indicator, then the single attribute discriminant model is represented as y = sgn(x_{k*} − b); else, y = sgn(−x_{k*} + b).
Output: The single attribute discriminant model.

Table 3
Basic algorithm of decision tree.

Input: Training sample data (on the assumption that all attributes have been discretized);
  candidate attribute set, denoted as attribute_list.
Algorithm: Gen_decision_tree
1. Create a node, denoted as N.
2. If all samples of the node belong to the same class C, then return N as a leaf node labeled with class C.
3. If is_empty(attribute_list), then return N as a leaf node labeled with the class having the most samples in node N.
4. Choose the attribute whose IG value is the biggest in attribute_list, and denote it as test_attribute.
5. Label the node N with test_attribute.
6. For each condition test_attribute = a_i, produce a branch from the node N, where S_i is the sample set that meets the condition.
7. If is_empty(S_i), then label the corresponding leaf with the class that has the largest number of samples in node N; else label the corresponding leaf node with the class that is recursively returned by Gen_decision_tree(S_i, attribute_list − test_attribute).
Output: decision tree.

4.4. Combination mechanism

The output of the AdaBoost algorithm in Table 1 is the ensemble composed of T base classifiers, denoted as F = {f_1, f_2, …, f_T}, and their corresponding combination weight set, denoted as A = {\alpha_1, \alpha_2, …, \alpha_T}. The combination weight \alpha_t is calculated from the error of base classifier f_t according to formula (3): the lower E_t (the error of base classifier f_t) is, the higher the value \alpha_t (the weight of base classifier f_t) gets. As the most widely used combination mechanism, the majority voting scheme is adopted to combine the prediction results of the T base classifiers. In detail, weighted majority voting, instead of simple majority voting, is applied in order to consider the different role each base classifier takes in the combination process producing the final result. For the FDP problem with two categories denoted NM (normal) and FD (financial distress), given an example x_i to be predicted by the ensemble, the combination mechanism of weighted majority voting can be described as follows:

F(x_i) = NM if P(F(x_i) = NM) > P(F(x_i) = FD);  F(x_i) = FD if P(F(x_i) = NM) \le P(F(x_i) = FD)    (11)

P(F(x_i) = NM) = \sum_{t: f_t(x_i) = NM} \alpha_t    (12)

P(F(x_i) = FD) = \sum_{t: f_t(x_i) = FD} \alpha_t    (13)

5. Empirical experiment

5.1. Data description

A financially distressed company is defined as one that has had negative net profit in two consecutive years, or whose net capital per share is lower than the face value per share because of one year's substantive loss. These are also the main criteria by which Chinese stock exchanges specially treat listed companies due to abnormal financial status (Sun & Li, 2011).¹ Healthy companies are chosen from those that have never been specially treated, by the matching method considering both industry and asset size. According to the information of Chinese listed companies from 2000 to 2008, the initial samples consist of 692 companies listed on the Shenzhen Stock Exchange and the Shanghai Stock Exchange. Therefore, the experimental data are collected from real-world information of Chinese listed companies, and our study focuses on financial distress from the perspective of profitability.

Suppose the year of financial distress is represented as year (t−0). Similarly, years (t−1), (t−2) and (t−3) respectively mean one year, two years and three years before the financial distress year. For comprehensive generalization, our study attempts to predict the financial distress of year (t−0) according to the financial ratio information of years (t−1), (t−2) and (t−3), respectively.

¹ Commonly, Chinese listed companies will be specially treated (ST) if: (1) a company has had negative net profit in two consecutive years, or (2) a firm's net capital per share is lower than its face value. This study chooses samples according to the above ST criteria. If a company is specially treated because (1) the firm purposely publishes financial statements with serious falsehoods and misstatements, or (2) other abnormal incidents described in the Chinese Stock Listing Exchange Rule appear, it is excluded.


That is, financial distress is predicted one year, two years and three years in advance, respectively. Forty-one financial ratios are utilized as input variables, which cover profitability, activity, solvency, growth, risk level, per-share ratios, and cash flow ratios, as listed in Table 4 (Sun & Li, 2011). Hence, these explanatory variables can provide a comprehensive indication of a firm's financial and operational state. Since SAT and DT are applied as weak learners in this study, no other feature selection methods are needed, because these weak learner algorithms have the ability of feature selection by themselves.

5.2. Experimental design

To obtain comparable experimental results, the same FDP problem is solved by four different classification methods, i.e. AdaBoost ensemble with SAT (denoted SA), AdaBoost ensemble with DT (denoted DTA), a single DT classifier, and a single SVM classifier. Thirty holdout tests are carried out to estimate the prediction accuracy more objectively. Each time, the total initial sample of 692 Chinese listed companies is divided into two subsets, namely a training set and a testing set; the former has the proportion of two thirds (2/3) and the latter occupies the remaining one third (1/3). For each classification method, the above described process of dividing training and testing sets is repeated 30 times, so that 30 estimated errors can ultimately be obtained for statistical analysis. This makes the comparison among different classification methods more scientific.

The individual DT classifier is pruned using the depth level at which the pruned tree has the lowest 10-fold cross-validation error; therefore, the size of the individual tree is limited to avoid over-fitting. The SVM algorithm with the RBF kernel function is applied to build the single SVM classifier, since RBF SVM has been proved effective for FDP in former researches (Hui & Sun, 2006; Min & Lee, 2005; Shin et al., 2005). Because the tuning parameter C and the kernel parameter γ are crucial to RBF SVM's classification performance, the grid search technique is used to find optimal parameter values by the criterion of 10-fold cross-validation error.

5.3. Experimental results and analysis

The thirty holdout testing errors for FDP are listed in Table 5. For direct comparison among the different FDP methods, the mean values of the 30 holdout testing errors are also calculated in the last row of Table 5. As can be seen, no matter whether FDP is made one year, two years or three years in advance, SA (AdaBoost ensemble with single attribute test) outperforms the other three methods in terms of mean testing error. That is, in the 30 holdout tests, SA has the lowest mean testing errors of 2.78%, 12.81% and 27.51% respectively at years (t−1), (t−2) and (t−3). However, DTA does not always obtain lower mean testing errors than the single classifiers DT and SVM from year (t−1) to year (t−3). In detail, the mean testing error of DTA at year (t−2) is 13.04%, which is lower than both DT (14.26%) and SVM (13.43%), but the mean testing error of DTA is higher than DT at year (t−1) and higher than SVM at year (t−3). For the two single classifiers, SVM evidently outperforms DT at years (t−2) and (t−3), but performs a little worse than DT at year (t−1). The possible reason for this phenomenon is that more linearity may exist between the explanatory variables and the output label at year (t−1) than at years (t−2) and (t−3), while the RBF SVM applied in the experiment is more suitable for non-linear problems.

For clearer illustration, the testing error curves of the 30 holdout experiments are graphed in Figs. 2–4 respectively for years (t−1), (t−2) and (t−3). As can be seen, the testing error curves of SA stay at a relatively lower position for all three years. Furthermore, the testing error curves of SA wave within smaller ranges than those of DTA, DT and SVM, indicating that the FDP method based on SA is more stable than the other three. Such an advantage of SA over the other three methods is obvious from the shape of the testing error curves. When FDP is made at year (t−1), RBF SVM tends to have the highest testing error and the largest waving range, for the possible reason mentioned above; but when FDP is made at year (t−2) or (t−3), the points on DT's testing error curve tend to be in the higher position more frequently. Therefore, FDP methods based on AdaBoost ensemble are superior to the single DT or SVM classifier as a whole.
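The repeated 2/3–1/3 holdout protocol of Section 5.2 can be sketched as follows. The splitting fractions and the number of runs follow the text, while the nearest-centroid stand-in classifier and all names are illustrative assumptions, not the classifiers actually compared in the paper.

```python
import numpy as np

def holdout_errors(X, y, fit, predict, n_runs=30, train_frac=2/3, seed=0):
    """Repeat a random 2/3 train / 1/3 test split n_runs times and return
    the testing error of each run, as in the paper's evaluation protocol."""
    rng = np.random.default_rng(seed)
    n = len(y)
    n_train = int(round(n * train_frac))
    errors = []
    for _ in range(n_runs):
        perm = rng.permutation(n)
        tr, te = perm[:n_train], perm[n_train:]
        model = fit(X[tr], y[tr])
        pred = predict(model, X[te])
        errors.append(np.mean(pred != y[te]))   # holdout testing error
    return np.array(errors)

# Toy stand-in classifier (nearest class centroid), purely for illustration:
def fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in (-1, 1)}

def predict(model, X):
    d = {c: np.linalg.norm(X - mu, axis=1) for c, mu in model.items()}
    return np.where(d[1] < d[-1], 1, -1)
```

`holdout_errors(...).mean()` then corresponds to the per-method mean testing error reported in the last row of Table 5.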

Table 4
Financial ratios used as explanatory variables.

Proﬁtability: gross income/operating revenue; net proﬁt/operating revenue; earning before interest and tax/total assets; net proﬁt/total assets; net proﬁt/current assets; net proﬁt/ﬁxed assets; proﬁt margin; net proﬁt/equity; return on invested capital.

Activity: account receivables turnover; inventory turnover; account payable turnover; working capital turnover; current assets turnover; ﬁxed assets turnover; long-term assets turnover; total assets turnover; net assets turnover.

Solvency: current ratio; quick ratio; working capital ratio; asset-liability ratio; equity/debt ratio; current assets/total assets; ﬁxed assets/total assets; equity/ﬁxed assets; current liability/total liabilities; debt/tangible assets ratio; liabilities/market value of equity.

Growth ratios: growth rate of prime operating revenue; rate of capital preservation and appreciation; growth rate of total assets; growth rate of net proﬁt.

Risk level: coefﬁcient of ﬁnancial leverage; coefﬁcient of operating leverage.

Per share ratios: operating revenue per share; earning per share; net assets per share.

Cash ﬂow ratios: cash ﬂow/current liabilities ratio; cash rate of prime operating revenue; net operating cash ﬂow per share; net cash ﬂow per share; net operating cash ﬂow/net proﬁt ratio.
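Most of the Table 4 variable names are themselves formulas over balance-sheet and income-statement items. The function below is a small stand-alone sketch of how a few of them would be computed; the item names and the sample numbers are hypothetical, and exact accounting deﬁnitions vary by data source:

```python
def distress_features(items):
    """Compute a handful of the Table 4 ratios from a dict of statement items."""
    return {
        # Profitability
        "net_profit/operating_revenue": items["net_profit"] / items["operating_revenue"],
        "net_profit/total_assets": items["net_profit"] / items["total_assets"],
        # Solvency
        "current_ratio": items["current_assets"] / items["current_liabilities"],
        "asset_liability_ratio": items["total_liabilities"] / items["total_assets"],
        # Activity
        "total_assets_turnover": items["operating_revenue"] / items["total_assets"],
    }

# Hypothetical firm-year observation (monetary units are irrelevant to the ratios).
firm = {"net_profit": 50.0, "operating_revenue": 500.0, "total_assets": 1000.0,
        "current_assets": 300.0, "current_liabilities": 150.0,
        "total_liabilities": 400.0}
print(distress_features(firm))
```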


Table 5
Holdout testing errors for FDP. The three column groups report results at year (t−1), (t−2) and (t−3), respectively.

No.  (t−1): SA (%) DTA (%) DT (%) SVM (%)   (t−2): SA (%) DTA (%) DT (%) SVM (%)   (t−3): SA (%) DTA (%) DT (%) SVM (%)

1 3.48 5.65 5.22 6.96 10.00 12.17 14.78 12.61 23.36 22.90 23.36 23.83

2 2.17 2.61 2.17 2.61 10.00 13.04 12.61 12.61 29.44 34.11 29.44 30.37

3 3.91 4.35 5.22 5.65 11.74 16.09 14.35 14.78 32.71 30.84 28.50 28.97

4 3.04 4.78 4.78 8.26 14.78 10.87 13.91 15.65 26.17 28.50 33.64 25.23

5 1.74 3.91 2.61 2.61 13.91 13.04 14.35 15.22 28.97 30.84 30.84 29.91

6 1.74 2.17 1.74 2.61 11.30 12.61 15.22 15.65 24.77 26.17 27.10 23.83

7 1.74 1.74 2.61 2.61 12.61 14.35 13.91 11.74 28.50 28.50 28.50 27.57

8 3.04 3.91 3.91 3.48 13.48 11.30 12.17 13.04 26.64 29.91 29.91 31.78

9 1.30 2.61 2.17 2.17 14.78 13.04 13.91 13.91 27.57 33.18 30.37 31.31

10 1.74 2.17 1.74 2.17 9.57 15.65 12.61 14.78 29.91 28.97 30.37 30.37

11 3.04 3.48 3.48 3.91 13.48 16.52 14.78 15.22 25.23 33.18 28.50 27.57

12 3.48 5.22 4.78 5.22 13.04 11.30 14.35 12.61 29.91 29.91 29.91 32.24

13 3.48 3.48 3.48 4.35 11.30 13.04 14.35 11.74 28.50 28.97 30.37 29.91

14 2.61 3.04 3.48 3.04 14.78 13.91 15.65 12.17 28.97 29.44 31.78 28.97

15 3.91 5.65 3.48 5.65 14.35 12.17 15.22 12.61 32.24 29.44 31.78 29.91

16 2.61 5.22 5.65 3.91 13.48 12.61 16.96 13.48 24.30 23.83 25.70 30.84

17 3.48 3.04 3.91 7.39 9.57 8.26 14.35 11.30 30.37 28.50 28.50 27.10

18 1.74 2.61 3.04 3.04 15.22 13.91 16.09 11.74 29.44 36.92 27.57 28.04

19 3.91 4.35 4.78 4.35 14.78 13.48 15.22 16.96 26.17 28.97 24.30 25.23

20 2.61 3.04 4.35 3.48 12.61 10.43 12.61 12.17 29.44 29.91 28.50 29.91

21 2.61 3.48 2.61 3.91 15.65 12.61 15.22 15.22 19.63 22.90 21.96 23.83

22 3.04 5.22 5.22 3.48 13.04 9.57 12.61 11.30 25.70 24.77 24.30 26.17

23 2.61 3.48 3.04 3.04 12.61 13.48 14.78 14.35 27.57 25.70 35.98 28.04

24 1.74 0.87 3.04 3.04 11.30 12.61 11.74 12.17 25.23 26.64 26.64 26.64

25 3.91 3.48 3.48 3.48 12.17 13.48 14.35 12.61 29.44 29.44 30.37 26.17

26 3.04 2.61 3.04 3.04 12.61 12.61 10.87 11.30 26.64 26.17 26.64 28.97

27 2.61 3.48 2.61 5.22 12.17 13.04 14.78 13.48 28.04 28.04 32.24 28.04

28 3.91 3.04 2.61 2.61 16.52 16.09 16.52 16.09 26.17 30.84 28.04 27.10

29 3.91 4.35 3.48 3.91 13.48 16.52 17.83 13.91 27.57 30.37 35.51 29.44

30 1.30 3.04 2.17 3.04 10.00 13.48 11.74 12.61 26.64 27.10 31.31 27.10

Mean 2.78 3.54 3.46 3.94 12.81 13.04 14.26 13.43 27.51 28.83 29.07 28.15
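The evaluation protocol behind Table 5, repeated random holdout splits with the error averaged over the repetitions, and the AdaBoost combination by weighted majority voting can be sketched in plain Python. Decision stumps stand in for the weak learner here (one stump per boosting round is only an illustrative reading of a simple attribute-based tree), and the data passed in would be the ratio vectors with labels +1/−1 for distressed/healthy; none of this reuses the paper's actual sample:

```python
import math
import random

def train_stump(X, y, w):
    """Weighted-error-minimizing one-feature threshold classifier."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(x[j] for x in X)):
            for pol in (1, -1):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if (pol if xi[j] >= t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, j, t, pol)
    return best

def adaboost_train(X, y, rounds=20):
    n = len(X)
    w = [1.0 / n] * n                      # uniform initial sample weights
    ensemble = []
    for _ in range(rounds):
        err, j, t, pol = train_stump(X, y, w)
        if err >= 0.5:                     # weak learner no better than chance
            break
        err = max(err, 1e-12)
        alpha = 0.5 * math.log((1.0 - err) / err)   # voting weight
        ensemble.append((alpha, j, t, pol))
        # Reweight: increase the weight of misclassified samples.
        for i in range(n):
            pred = pol if X[i][j] >= t else -pol
            w[i] *= math.exp(-alpha * y[i] * pred)
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def adaboost_predict(ensemble, x):
    """Weighted majority vote of the weak classifiers."""
    score = sum(a * (pol if x[j] >= t else -pol) for a, j, t, pol in ensemble)
    return 1 if score >= 0 else -1

def mean_holdout_error(X, y, repeats=30, test_frac=0.3, seed=0):
    """Average test error over repeated random holdout splits, as in Table 5."""
    rng = random.Random(seed)
    idx, errors = list(range(len(X))), []
    for _ in range(repeats):
        rng.shuffle(idx)
        cut = int(len(idx) * (1.0 - test_frac))
        train, test = idx[:cut], idx[cut:]
        model = adaboost_train([X[i] for i in train], [y[i] for i in train])
        wrong = sum(adaboost_predict(model, X[i]) != y[i] for i in test)
        errors.append(wrong / len(test))
    return sum(errors) / len(errors)
```

Each cell of Table 5 corresponds to one such holdout error, and each "Mean" entry to the value returned by `mean_holdout_error` for one method and one prediction year.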

[Figures 2 and 4 appear here: holdout testing error (%) plotted against the number of the holdout experiment (1–30) for the four methods SA, DTA, DT and SVM.]

Fig. 2. Testing error curve on 30 times of holdout experiments at year (t−1).

Fig. 4. Testing error curve on 30 times of holdout experiments at year (t−3).

[Figure 3 appears here: holdout testing error (%) plotted against the number of the holdout experiment for SA, DTA, DT and SVM.]

Fig. 3. Testing error curve on 30 times of holdout experiments at year (t−2).

For convincing support of the above comparison, the statistical analysis of left-tailed T test for mean comparison is carried out, and the results corresponding to year (t−1), (t−2) and (t−3) are reported in Tables 6–8.

From Table 6 at year (t−1), the testing error of SA is lower than that of DTA, DT and SVM, all at the signiﬁcance level of 1%. DTA is more accurate than SVM at the signiﬁcance level of 5%, but there exists no signiﬁcant statistical difference between DTA and DT. For the possible reason mentioned above, DT performs better than SVM at the signiﬁcance level of 5% at year (t−1).

From Table 7 at year (t−2), SA has lower testing error than DT and SVM, respectively at the signiﬁcance level of 1% and 5%. Though the mean testing error of SA is lower than that of DTA as shown in Table 5, the difference is not statistically signiﬁcant. DTA is more accurate than DT at the signiﬁcance level of 1%. However, there exists no signiﬁcant statistical difference between DTA and SVM according to the result
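The left-tailed paired T statistic used for these mean comparisons can be computed directly from the paired holdout errors of two methods; a minimal stdlib-only sketch (the resulting statistic would then be referred to a t table with n − 1 degrees of freedom, and the error lists below are hypothetical):

```python
import math

def left_tailed_paired_t(errors_a, errors_b):
    """T statistic for H1: mean error of method A < mean error of method B.

    A clearly negative value supports A being the more accurate method.
    """
    d = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical paired holdout errors for two methods over three repetitions.
t = left_tailed_paired_t([0.01, 0.02, 0.03], [0.03, 0.03, 0.05])
print(round(t, 3))  # -5.0
```

In the experiments here each comparison pairs the 30 holdout errors of one method against those of another on identical train/test splits, which is what makes the paired (rather than independent-samples) test appropriate.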


of one-tailed T testing, although Table 5 indicates that the mean testing error of DTA is lower than that of SVM at year (t−2). With more non-linearity in the data set at year (t−2), SVM predicts ﬁnancial distress better than DT at the signiﬁcance level of 1%.

From Table 8 at year (t−3), SA is superior to DTA and DT, both at the signiﬁcance level of 1%, and to SVM at the signiﬁcance level of 10%. Though the mean testing error of DTA is lower than that of DT and higher than that of SVM in Table 5, such differences are not signiﬁcant from the statistical view. Similar to year (t−2), SVM is more suitable for FDP than DT at year (t−3), and the signiﬁcance level is 10%.

Above all, when SAT is utilized as the weak learner, AdaBoost ensemble always has relatively lower error and is particularly suitable for the problem of FDP. When DT is used as the weak learner, the advantage of AdaBoost ensemble over a single classiﬁer is not stable enough. Therefore, AdaBoost ensemble with SAT outperforms AdaBoost ensemble with DT in application to FDP according to our experimental results. This may be attributed to the reason that the DT classiﬁer trained in the experiment is not weak enough, so that the advantage of the AdaBoost algorithm is not brought into play well. Hence, the choice of weak learner is crucial to the performance of AdaBoost ensemble. For FDP, AdaBoost ensemble with SAT is a better choice because of its acceptable prediction accuracy as well as its relatively low computational cost.

Table 6
Left-tailed T testing results at year (t−1). Each cell gives the T statistic with the corresponding p-value beneath it, for the test that the row method's mean testing error is lower than the column method's.

          SA        DTA       DT        SVM
SA        –         −4.419    −4.095    −4.910
                    0.000***  0.000***  0.000***
DTA       4.419     –         −0.473    −1.742
          1.000               0.680     0.046**
DT        4.095     0.473     –         −2.118
          1.000     0.320               0.021**
SVM       4.910     1.742     2.118     –
          1.000     0.954     0.979

** Signiﬁcance level of 5%.
*** Signiﬁcance level of 1%.

Table 7
Left-tailed T testing results at year (t−2).

          SA        DTA       DT        SVM
SA        –         −0.515    −4.385    −1.733
                    0.305     0.000***  0.047**
DTA       0.515     –         −3.288    −1.127
          0.695               0.001***  0.135
DT        4.385     3.288     –         2.510
          1.000     0.999               0.991
SVM       1.733     1.127     −2.510    –
          0.953     0.865     0.009***

** Signiﬁcance level of 5%.
*** Signiﬁcance level of 1%.

Table 8
Left-tailed T testing results at year (t−3).

          SA        DTA       DT        SVM
SA        –         −2.675    −2.898    −1.477
                    0.006***  0.004***  0.075*
DTA       2.675     –         −0.349    1.266
          0.994               0.365     0.892
DT        2.898     0.349     –         1.622
          0.996     0.635               0.942
SVM       1.477     −1.266    −1.622    –
          0.925     0.108     0.058*

* Signiﬁcance level of 10%.
*** Signiﬁcance level of 1%.

6. Conclusion

FDP plays an important role in the prevention of corporate failure, which makes the accuracy of FDP models a wide concern of FDP researchers. Though former researches have made comprehensive investigation on different single classiﬁers for FDP, FDP based on classiﬁer ensemble arose just in recent years and has a good prospect of application. This paper further explores AdaBoost ensemble for FDP and makes an empirical comparison. After designing the framework of the AdaBoost ensemble method for FDP, it describes the algorithms of AdaBoost as well as SAT and DT, and uses weighted majority voting as the combination mechanism. On the sample from Chinese listed companies, 30 times of holdout experiments are carried out respectively for the four FDP methods of AdaBoost ensemble with SAT, AdaBoost ensemble with DT, single DT and single SVM. Experimental results show that AdaBoost ensemble with SAT outperforms the other three methods with statistical signiﬁcance and especially suits FDP for Chinese listed companies. It is also conﬁrmed that the choice of weak learner algorithm does affect the FDP performance of AdaBoost ensemble, because the DT weak learner is inferior to the SAT weak learner in our experiments with Chinese listed companies. FDP experiments are carried out respectively one year, two years and three years in advance, which makes the above conclusion more comprehensive. Therefore, this study contributes incremental evidence for FDP research based on AdaBoost and guides the real-world practice of FDP to some extent. However, this study also has the limitation that the experimental data sets are only collected from Chinese listed companies, and further investigation can be done based on other countries' real-world data sets in future study.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (Grant No. 70801054) and the Zhejiang Provincial Natural Science Foundation of China (Grant No. Y6090392).

References

Alfaro, E., Gámez, M., & García, N. (2007). Multiclass corporate failure prediction by AdaBoost.M1. International Advances in Economic Research, 13, 301–312.
Alfaro, E., García, N., Gámez, M., & Elizondo, D. (2008). Bankruptcy forecasting: An empirical comparison of AdaBoost and neural networks. Decision Support Systems, 45, 110–122.
Altman, E. I. (1968). Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. Journal of Finance, 23, 589–609.
Beaver, W. (1966). Financial ratios as predictors of failure. Journal of Accounting Research, 4, 71–111.
Boyacioglu, M., Kara, Y., & Baykan, O. (2009). Predicting bank ﬁnancial failures using neural networks, support vector machines and multivariate statistical methods: A comparative analysis in the sample of savings deposit insurance fund transferred banks in Turkey. Expert Systems with Applications, 36, 3355–3366.
Carlos, S.-C. (1996). Self organizing neural networks for ﬁnancial diagnosis. Decision Support Systems, 17, 227–238.
Cho, S., Kim, J., & Bae, J. K. (2009). An integrative model with subject weight based on neural network learning for bankruptcy prediction. Expert Systems with Applications, 36, 403–410.
Ding, Y., Song, X., & Zen, Y. (2008). Forecasting ﬁnancial condition of Chinese listed companies based on support vector machine. Expert Systems with Applications, 34, 3081–3089.
Fitzpatrick, P. (1932). A comparison of ratios of successful industrial enterprises with those of failed companies. Certiﬁed Public Accountant, 10, 598–605; 11, 656–662; 12, 727–731.
Fletcher, D., & Goss, E. (1993). Forecasting with neural networks: An application using bankruptcy data. Information and Management, 24, 159–167.
Freund, Y., & Schapire, R. E. (1996). Experiments with a new boosting algorithm. In Proceedings of the 13th international conference on machine learning (pp. 148–156). San Francisco: Morgan Kaufmann.
Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139.
Frydman, H., Altman, E. I., & Kao, D.-L. (1985). Introducing recursive partitioning for ﬁnancial classiﬁcation: The case of ﬁnancial distress. Journal of Finance, 40(1), 269–291.
Hui, X.-F., & Sun, J. (2006). An application of support vector machine to companies' ﬁnancial distress prediction. Lecture Notes in Artiﬁcial Intelligence, 3885, 274–282.
Hung, C., & Chen, J. (2009). A selective ensemble based on expected probabilities for bankruptcy prediction. Expert Systems with Applications, 36, 5297–5303.
Jo, H., & Han, I. (1996). Integration of case-based forecasting, neural network, and discriminant analysis for bankruptcy prediction. Expert Systems with Applications, 11(4), 415–422.
Kim, E., Kim, W., & Lee, Y. (2002). Combination of multiple classiﬁers for the customer's purchase behavior prediction. Decision Support Systems, 34, 167–175.
Kim, M.-J., & Kang, D.-K. (2010). Ensemble with neural networks for bankruptcy prediction. Expert Systems with Applications, 37(4), 3373–3379.
Kim, M.-J., Min, S.-H., & Han, I. (2006). An evolutionary approach to the combination of multiple classiﬁers to predict a stock price index. Expert Systems with Applications, 37, 241–247.
Li, H., & Sun, J. (2009). Majority voting combination of multiple case-based reasoning for ﬁnancial distress prediction. Expert Systems with Applications, 36, 4363–4373.
Min, J. H., & Lee, Y.-C. (2005). Bankruptcy prediction using support vector machine with optimal choice of kernel function parameters. Expert Systems with Applications, 28(4), 128–134.
Odom, M., & Sharda, R. (1990). A neural networks model for bankruptcy prediction. Proceedings of the IEEE International Conference on Neural Networks, 2, 163–168.
Ohlson, J. (1980). Financial ratios and probabilistic prediction of bankruptcy. Journal of Accounting Research, 18, 109–131.
Pendharkar, P. C. (2005). A threshold varying artiﬁcial neural network approach for classiﬁcation and its application to bankruptcy prediction problem. Computers & Operations Research, 32, 2561–2582.
Ruta, D., & Gabrys, B. (2005). Classiﬁer selection for majority voting. Information Fusion, 6, 63–81.
Shin, K.-S., Lee, T. S., & Kim, H.-J. (2005). An application of support vector machines in bankruptcy prediction model. Expert Systems with Applications, 28(1), 127–135.
Sun, J., & Li, H. (2008). Data mining method for listed companies' ﬁnancial distress prediction. Knowledge-Based Systems, 21(1), 1–5.
Sun, J., & Li, H. (2008). Listed companies' ﬁnancial distress prediction based on weighted majority voting combination of multiple classiﬁers. Expert Systems with Applications, 35, 818–827.
Sun, J., & Li, H. (2009). Financial distress prediction based on serial combination of multiple classiﬁers. Expert Systems with Applications, 36, 8659–8666.
Sun, J., & Li, H. (2011). Dynamic ﬁnancial distress prediction using instance selection for the disposal of concept drift. Expert Systems with Applications, 38, 2566–2576.
Tsai, C.-F., & Wu, J.-W. (2008). Using neural network ensembles for bankruptcy prediction and credit scoring. Expert Systems with Applications, 34, 2639–2649.
Zhang, G., Hu, M. Y., Patuwo, B. E., & Indro, D. C. (1999). Artiﬁcial neural networks in bankruptcy prediction: General framework and cross-validation analysis. European Journal of Operational Research, 116, 16–32.
