
31/01/2015

Artificial neural network - Wikipedia, the free encyclopedia

Artificial neural network
From Wikipedia, the free encyclopedia

In machine learning, artificial neural networks (ANNs) are a family of statistical learning algorithms inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks are generally presented as systems of interconnected "neurons" which can compute values from inputs, and are capable of machine learning as well as pattern recognition thanks to their adaptive nature.

For example, a neural network for handwriting recognition is defined by a set of input neurons which may be activated by the pixels of an input image. After being weighted and transformed by a function (determined by the network's designer), the activations of these neurons are then passed on to other neurons. This process is repeated until finally an output neuron is activated, determining which character was read.

Like other machine learning methods (systems that learn from data), neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition.

An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one neuron to the input of another.

Contents

1 Background
2 History
2.1 Recent improvements
2.2 Successes in pattern recognition contests since 2009
3 Models
3.1 Network function
3.2 Learning
3.2.1 Choosing a cost function
3.3 Learning paradigms
http://en.wikipedia.org/wiki/Artificial_neural_network

3.3.1 Supervised learning
3.3.2 Unsupervised learning
3.3.3 Reinforcement learning
3.4 Learning algorithms
4 Employing artificial neural networks
5 Applications
5.1 Real-life applications
5.2 Neural networks and neuroscience
5.2.1 Types of models
6 Neural network software
7 Types of artificial neural networks
8 Theoretical properties
8.1 Computational power
8.2 Capacity
8.3 Convergence
8.4 Generalization and statistics
9 Controversies
9.1 Training issues
9.2 Hardware issues
9.3 Practical counterexamples to criticisms
9.4 Hybrid approaches
10 Gallery
11 See also
12 References
13 Bibliography
14 External links

Background

Examinations of the human central nervous system inspired the concept of neural networks. In an artificial neural network, simple artificial nodes, known as "neurons", "neurodes", "processing elements" or "units", are connected together to form a network which mimics a biological neural network.

There is no single formal definition of what an artificial neural network is. However, a class of statistical models may commonly be called "neural" if they possess the following characteristics:

1. consist of sets of adaptive weights, i.e. numerical parameters that are tuned by a learning algorithm,
and
2. are capable of approximating non-linear functions of their inputs.

The adaptive weights are conceptually connection strengths between neurons, which are activated during training and prediction.

Neural networks are similar to biological neural networks in performing functions collectively and in parallel by the units, rather than there being a clear delineation of subtasks to which various units are assigned. The term "neural network" usually refers to models employed in statistics, cognitive psychology and artificial intelligence. Neural network models which emulate the central nervous system are part of theoretical neuroscience and computational neuroscience.

In modern software implementations of artificial neural networks, the approach inspired by biology has been largely abandoned for a more practical approach based on statistics and signal processing. In some of these systems, neural networks or parts of neural networks (like artificial neurons) form components in larger systems that combine both adaptive and non-adaptive elements. While the more general approach of such systems is more suitable for real-world problem solving, it has little to do with the traditional artificial intelligence connectionist models. What they do have in common, however, is the principle of non-linear, distributed, parallel and local processing and adaptation. Historically, the use of neural network models marked a paradigm shift in the late eighties from high-level (symbolic) artificial intelligence, characterized by expert systems with knowledge embodied in if-then rules, to low-level (sub-symbolic) machine learning, characterized by knowledge embodied in the parameters of a dynamical system.

History

Warren McCulloch and Walter Pitts[1] (1943) created a computational model for neural networks based on mathematics and algorithms. They called this model threshold logic. The model paved the way for neural network research to split into two distinct approaches. One approach focused on biological processes in the brain and the other focused on the application of neural networks to artificial intelligence.

In the late 1940s psychologist Donald Hebb[2] created a hypothesis of learning based on the mechanism of neural plasticity that is now known as Hebbian learning. Hebbian learning is considered to be a 'typical' unsupervised learning rule and its later variants were early models for long-term potentiation. These ideas started being applied to computational models in 1948 with Turing's B-type machines.

Farley and Wesley A. Clark[3] (1954) first used computational machines, then called "calculators", to simulate a Hebbian network at MIT. Other neural network computational machines were created by Rochester, Holland, Habit, and Duda[4] (1956).

Frank Rosenblatt[5] (1958) created the perceptron, an algorithm for pattern recognition based on a two-layer learning computer network using simple addition and subtraction. With mathematical notation, Rosenblatt also described circuitry not in the basic perceptron, such as the exclusive-or circuit, a circuit whose mathematical computation could not be processed until after the backpropagation algorithm was created by Paul Werbos[6] (1975).
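The perceptron's learning really does reduce to simple addition and subtraction: whenever the prediction is wrong, the input vector is added to or subtracted from the weights. A minimal sketch (the function names and training data below are illustrative, not from Rosenblatt's implementation):

```python
# Minimal perceptron learning rule: weights are nudged by adding or
# subtracting the input vector whenever the prediction is wrong.
def train_perceptron(samples, epochs=20):
    # samples: list of (inputs, label) pairs with label in {0, 1}
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred          # -1, 0, or +1
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w, b

# Linearly separable example: logical AND (XOR, by contrast, would
# never converge, as Minsky and Papert later showed)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
       for x, _ in data])  # [0, 0, 0, 1], matching the AND labels
```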


Neural network research stagnated after the publication of machine learning research by Marvin Minsky and Seymour Papert[7] (1969). They discovered two key issues with the computational machines that processed neural networks. The first issue was that single-layer neural networks were incapable of processing the exclusive-or circuit. The second significant issue was that computers were not sophisticated enough to effectively handle the long run time required by large neural networks. Neural network research slowed until computers achieved greater processing power. Also key to later advances was the backpropagation algorithm, which effectively solved the exclusive-or problem (Werbos 1975).[6]

The parallel distributed processing of the mid-1980s became popular under the name connectionism. The text by David E. Rumelhart and James McClelland[8] (1986) provided a full exposition on the use of connectionism in computers to simulate neural processes.

Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function.[9]

In the 1990s, neural networks were overtaken in popularity in machine learning by support vector machines and other, much simpler methods such as linear classifiers. Renewed interest in neural nets was sparked in the 2000s by the advent of deep learning.

Recentimprovements
ComputationaldeviceshavebeencreatedinCMOS,forbothbiophysicalsimulationandneuromorphic
computing.Morerecenteffortsshowpromiseforcreatingnanodevices[10]forverylargescaleprincipal
componentsanalysesandconvolution.Ifsuccessful,theseeffortscouldusherinaneweraofneural
computing[11]thatisastepbeyonddigitalcomputing,becauseitdependsonlearningratherthan
programmingandbecauseitisfundamentallyanalogratherthandigitaleventhoughthefirstinstantiations
mayinfactbewithCMOSdigitaldevices.
Between2009and2012,therecurrentneuralnetworksanddeepfeedforwardneuralnetworksdevelopedin
theresearchgroupofJrgenSchmidhuberattheSwissAILabIDSIAhavewoneightinternational
competitionsinpatternrecognitionandmachinelearning.[12]Forexample,multidimensionallongshort
termmemory(LSTM)[13][14]wonthreecompetitionsinconnectedhandwritingrecognitionatthe2009
InternationalConferenceonDocumentAnalysisandRecognition(ICDAR),withoutanypriorknowledge
aboutthethreedifferentlanguagestobelearned.
VariantsofthebackpropagationalgorithmaswellasunsupervisedmethodsbyGeoffHintonand
colleaguesattheUniversityofToronto[15][16]canbeusedtotraindeep,highlynonlinearneural
architecturessimilartothe1980NeocognitronbyKunihikoFukushima,[17]andthe"standardarchitecture
ofvision",[18]inspiredbythesimpleandcomplexcellsidentifiedbyDavidH.HubelandTorstenWieselin
theprimaryvisualcortex.
Deeplearningfeedforwardnetworks,suchasconvolutionalneuralnetworks,alternateconvolutionallayers
andmaxpoolinglayers,toppedbyseveralpureclassificationlayers.FastGPUbasedimplementationsof
thisapproachhavewonseveralpatternrecognitioncontests,includingtheIJCNN2011TrafficSign
RecognitionCompetition[19]andtheISBI2012SegmentationofNeuronalStructuresinElectron
http://en.wikipedia.org/wiki/Artificial_neural_network

4/20

31/01/2015

ArtificialneuralnetworkWikipedia,thefreeencyclopedia

MicroscopyStackschallenge.[20]Suchneuralnetworksalsowerethefirstartificialpatternrecognizersto
achievehumancompetitiveorevensuperhumanperformance[21]onbenchmarkssuchastrafficsign
recognition(IJCNN2012),ortheMNISThandwrittendigitsproblemofYannLeCunandcolleaguesat
NYU.

Successes in pattern recognition contests since 2009

Between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international competitions in pattern recognition and machine learning.[22] For example, the bidirectional and multi-dimensional long short-term memory (LSTM)[23][24] of Alex Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages to be learned. Fast GPU-based implementations of this approach by Dan Ciresan and colleagues at IDSIA have won several pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition,[25] the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge,[20] and others. Their neural networks also were the first artificial pattern recognizers to achieve human-competitive or even superhuman performance[21] on important benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem of Yann LeCun at NYU. Deep, highly nonlinear neural architectures similar to the 1980 neocognitron by Kunihiko Fukushima[17] and the "standard architecture of vision"[18] can also be pre-trained by unsupervised methods[26][27] of Geoff Hinton's lab at University of Toronto. A team from this lab won a 2012 contest sponsored by Merck to design software to help find molecules that might lead to new drugs.[28]

Models

Neural network models in artificial intelligence are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function \( f : X \rightarrow Y \) or a distribution over \( X \) or both \( X \) and \( Y \), but sometimes models are also intimately associated with a particular learning algorithm or learning rule. A common use of the phrase "ANN model" really means the definition of a class of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons or their connectivity).

Networkfunction
Thewordnetworkintheterm'artificialneuralnetwork'referstotheinterconnectionsbetweentheneurons
inthedifferentlayersofeachsystem.Anexamplesystemhasthreelayers.Thefirstlayerhasinputneurons
whichsenddataviasynapsestothesecondlayerofneurons,andthenviamoresynapsestothethirdlayer
ofoutputneurons.Morecomplexsystemswillhavemorelayersofneuronswithsomehavingincreased
layersofinputneuronsandoutputneurons.Thesynapsesstoreparameterscalled"weights"thatmanipulate
thedatainthecalculations.
AnANNistypicallydefinedbythreetypesofparameters:
1. Theinterconnectionpatternbetweenthedifferentlayersofneurons
http://en.wikipedia.org/wiki/Artificial_neural_network

5/20

31/01/2015

ArtificialneuralnetworkWikipedia,thefreeencyclopedia

2. Thelearningprocessforupdatingtheweightsoftheinterconnections
3. Theactivationfunctionthatconvertsaneuron'sweightedinputtoitsoutputactivation.
Mathematically, a neuron's network function \( f(x) \) is defined as a composition of other functions \( g_i(x) \), which can further be defined as a composition of other functions. This can be conveniently represented as a network structure, with arrows depicting the dependencies between variables. A widely used type of composition is the nonlinear weighted sum, where \( f(x) = K\left(\sum_i w_i g_i(x)\right) \), where \( K \) (commonly referred to as the activation function[29]) is some predefined function, such as the hyperbolic tangent. It will be convenient for the following to refer to a collection of functions \( g_i \) as simply a vector \( g = (g_1, g_2, \ldots, g_n) \).
This figure depicts such a decomposition of \( f \), with dependencies between variables indicated by arrows. These can be interpreted in two ways.

The first view is the functional view: the input \( x \) is transformed into a 3-dimensional vector \( h \), which is then transformed into a 2-dimensional vector \( g \), which is finally transformed into \( f \). This view is most commonly encountered in the context of optimization.

ANN dependency graph

The second view is the probabilistic view: the random variable \( F = f(G) \) depends upon the random variable \( G = g(H) \), which depends upon \( H = h(X) \), which depends upon the random variable \( X \). This view is most commonly encountered in the context of graphical models.

The two views are largely equivalent. In either case, for this particular network architecture, the components of individual layers are independent of each other (e.g., the components of \( g \) are independent of each other given their input \( h \)). This naturally enables a degree of parallelism in the implementation.
Networks such as the previous one are commonly called feedforward, because their graph is a directed acyclic graph. Networks with cycles are commonly called recurrent. Such networks are commonly depicted in the manner shown at the top of the figure, where \( f \) is shown as being dependent upon itself. However, an implied temporal dependence is not shown.
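The layered composition described above can be sketched directly as code. This is an illustrative forward pass only: the weight values are arbitrary, untrained numbers, and the hyperbolic tangent serves as the activation function \( K \):

```python
import math

# A tiny feedforward pass implementing the nonlinear weighted sum
# f(x) = K(sum_i w_i * g_i(x)) layer by layer, with K = tanh.
# The weight values below are arbitrary illustrative choices, not trained.
def layer(inputs, weights, activation=math.tanh):
    # weights: one row of input weights per neuron in this layer
    return [activation(sum(w * v for w, v in zip(row, inputs)))
            for row in weights]

x = [0.5, -1.0]                                        # input vector x
h = layer(x, [[0.1, 0.4], [0.8, -0.2], [-0.5, 0.3]])   # 3-dimensional h
g = layer(h, [[0.6, -0.1, 0.2], [0.3, 0.9, -0.7]])     # 2-dimensional g
f = layer(g, [[1.0, -1.0]])                            # scalar output f
print(f)  # a single value in (-1, 1)
```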

Learning

What has attracted the most interest in neural networks is the possibility of learning. Given a specific task to solve, and a class of functions \( F \), learning means using a set of observations to find \( f^* \in F \) which solves the task in some optimal sense.

Two separate depictions of the recurrent ANN dependency graph

This entails defining a cost function \( C : F \rightarrow \mathbb{R} \) such that, for the optimal solution \( f^* \), \( C(f^*) \leq C(f) \) for all \( f \in F \), i.e., no solution has a cost less than the cost of the optimal solution (see mathematical optimization).

The cost function \( C \) is an important concept in learning, as it is a measure of how far away a particular solution is from an optimal solution to the problem to be solved. Learning algorithms search through the solution space to find a function that has the smallest possible cost.

For applications where the solution is dependent on some data, the cost must necessarily be a function of the observations, otherwise we would not be modelling anything related to the data. It is frequently defined as a statistic to which only approximations can be made. As a simple example, consider the problem of finding the model \( f \) which minimizes \( C = E\left[(f(x) - y)^2\right] \), for data pairs \( (x, y) \) drawn from some distribution \( \mathcal{D} \). In practical situations we would only have \( N \) samples from \( \mathcal{D} \) and thus, for the above example, we would only minimize \( \hat{C} = \frac{1}{N} \sum_{i=1}^{N} (f(x_i) - y_i)^2 \). Thus, the cost is minimized over a sample of the data rather than the entire data set.

When \( N \rightarrow \infty \) some form of online machine learning must be used, where the cost is partially minimized as each new example is seen. While online machine learning is often used when \( \mathcal{D} \) is fixed, it is most useful in the case where the distribution changes slowly over time. In neural network methods, some form of online machine learning is frequently used for finite data sets.
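The shift from the expected cost to its sample estimate, partially minimized one example at a time, can be sketched as follows. The linear model, learning rate, and synthetic distribution here are all illustrative choices:

```python
import random

# Online (per-example) minimization of the squared error (f(x_i) - y_i)^2
# for a linear model f(x) = a*x + b: each new sample nudges the
# parameters a and b down the gradient of that one example's cost.
def online_update(a, b, x, y, lr=0.05):
    err = (a * x + b) - y                           # residual f(x_i) - y_i
    return a - lr * 2 * err * x, b - lr * 2 * err   # one gradient step

random.seed(0)
a, b = 0.0, 0.0
for _ in range(2000):
    x = random.uniform(-1, 1)
    y = 3 * x + 1 + random.gauss(0, 0.1)   # noisy samples from D
    a, b = online_update(a, b, x, y)
print(a, b)  # close to the true parameters 3 and 1
```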
Choosing a cost function

While it is possible to define some arbitrary ad hoc cost function, frequently a particular cost will be used, either because it has desirable properties (such as convexity) or because it arises naturally from a particular formulation of the problem (e.g., in a probabilistic formulation the posterior probability of the model can be used as an inverse cost). Ultimately, the cost function will depend on the desired task. An overview of the three main categories of learning tasks is provided below.

Learning paradigms

There are three major learning paradigms, each corresponding to a particular abstract learning task. These are supervised learning, unsupervised learning and reinforcement learning.

Supervised learning

In supervised learning, we are given a set of example pairs \( (x, y), x \in X, y \in Y \), and the aim is to find a function \( f : X \rightarrow Y \) in the allowed class of functions that matches the examples. In other words, we wish to infer the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data, and it implicitly contains prior knowledge about the problem domain.

A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output, \( f(x) \), and the target value \( y \) over all the example pairs. When one tries to minimize this cost using gradient descent for the class of neural networks called multilayer perceptrons, one obtains the common and well-known backpropagation algorithm for training neural networks.

Tasks that fall within the paradigm of supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation). The supervised learning paradigm is also applicable to sequential data (e.g., for speech and gesture recognition). This can be thought of as learning with a "teacher," in the form of a function that provides continuous feedback on the quality of solutions obtained thus far.
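Minimizing the mean-squared error by gradient descent for a multilayer perceptron yields exactly the backpropagation updates. The sketch below trains a tiny two-input, two-hidden-unit network on XOR, the function a single-layer perceptron cannot represent. The architecture, learning rate, and starting weights are illustrative choices; the start is hand-picked (near a known solution basin) so that this small demo converges reliably:

```python
import math

# Gradient descent on the squared error for a small multilayer
# perceptron (2 inputs, 2 tanh hidden units, 1 tanh output), with
# gradients computed by backpropagation. Targets are coded as -1/+1.
W1 = [[2.0, 2.0, -1.0], [2.0, 2.0, -3.0]]   # hidden weights + bias (hand-picked start)
W2 = [1.5, -1.5, -1.0]                      # output weights + bias (hand-picked start)

def forward(x):
    h = [math.tanh(w[0]*x[0] + w[1]*x[1] + w[2]) for w in W1]
    y = math.tanh(W2[0]*h[0] + W2[1]*h[1] + W2[2])
    return h, y

data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]  # XOR
lr = 0.1
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * (1 - y*y)                               # output-layer delta
        dh = [dy * W2[i] * (1 - h[i]*h[i]) for i in range(2)]  # hidden deltas
        for i in range(2):                                     # backpropagated updates
            W2[i] -= lr * dy * h[i]
            W1[i] = [W1[i][0] - lr * dh[i] * x[0],
                     W1[i][1] - lr * dh[i] * x[1],
                     W1[i][2] - lr * dh[i]]
        W2[2] -= lr * dy

print([1 if forward(x)[1] > 0 else 0 for x, _ in data])  # [0, 1, 1, 0]
```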
Unsupervisedlearning


In unsupervised learning, some data \( x \) is given together with a cost function to be minimized, which can be any function of the data \( x \) and the network's output, \( f \).

The cost function is dependent on the task (what we are trying to model) and our a priori assumptions (the implicit properties of our model, its parameters and the observed variables).

As a trivial example, consider the model \( f(x) = a \) where \( a \) is a constant and the cost \( C = E\left[(x - f(x))^2\right] \). Minimizing this cost will give us a value of \( a \) that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between \( x \) and \( f(x) \), whereas in statistical modeling, it could be related to the posterior probability of the model given the data. (Note that in both of those examples those quantities would be maximized rather than minimized.)

Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
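The trivial example above can be checked numerically: gradient descent on the sample version of \( C \) for the constant model really does land on the mean. The data values here are arbitrary:

```python
# Gradient descent on C = (1/N) * sum_i (x_i - a)^2 for the constant
# model f(x) = a: the minimizer is the sample mean of the data.
data = [2.0, 4.0, 6.0, 8.0]
a, lr = 0.0, 0.1
for _ in range(200):
    grad = sum(2 * (a - x) for x in data) / len(data)  # dC/da
    a -= lr * grad
print(a)  # converges to the mean of the data, 5.0
```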
Reinforcement learning

In reinforcement learning, data \( x \) are usually not given, but generated by an agent's interactions with the environment. At each point in time \( t \), the agent performs an action \( y_t \) and the environment generates an observation \( x_t \) and an instantaneous cost \( c_t \), according to some (usually unknown) dynamics. The aim is to discover a policy for selecting actions that minimizes some measure of a long-term cost, i.e., the expected cumulative cost. The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated.

More formally, the environment is modelled as a Markov decision process (MDP) with states \( s_1, \ldots, s_n \in S \) and actions \( a_1, \ldots, a_m \in A \) with the following probability distributions: the instantaneous cost distribution \( P(c_t \mid s_t) \), the observation distribution \( P(x_t \mid s_t) \) and the transition \( P(s_{t+1} \mid s_t, a_t) \), while a policy is defined as the conditional distribution over actions given the observations. Taken together, the two then define a Markov chain (MC). The aim is to discover the policy that minimizes the cost, i.e., the MC for which the cost is minimal.

ANNs are frequently used in reinforcement learning as part of the overall algorithm.[30][31] Dynamic programming has been coupled with ANNs (neuro-dynamic programming) by Bertsekas and Tsitsiklis[32] and applied to multi-dimensional nonlinear problems such as those involved in vehicle routing,[33] natural resources management[34][35] or medicine,[36] because of the ability of ANNs to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of the original control problems.

Tasks that fall within the paradigm of reinforcement learning are control problems, games and other sequential decision making tasks.
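The objective structure above (pick the policy whose induced Markov chain accumulates the least cost) can be illustrated on a toy deterministic MDP. This is not a neural-network method itself, just a sketch of the policy/cost setup; the states, actions, and cost values are invented:

```python
# A toy 2-state deterministic MDP illustrating the reinforcement
# learning objective: choose the policy minimizing cumulative cost.
# transition[state][action] -> next state; cost[state][action] -> cost
transition = {0: {'stay': 0, 'move': 1}, 1: {'stay': 1, 'move': 0}}
cost = {0: {'stay': 2.0, 'move': 1.0}, 1: {'stay': 0.5, 'move': 1.0}}

def cumulative_cost(policy, steps=50):
    # Roll out the Markov chain induced by a deterministic policy
    s, total = 0, 0.0
    for _ in range(steps):
        a = policy[s]
        total += cost[s][a]
        s = transition[s][a]
    return total

# Enumerate all four deterministic policies and keep the cheapest
policies = [{0: a0, 1: a1} for a0 in ('stay', 'move') for a1 in ('stay', 'move')]
best = min(policies, key=cumulative_cost)
print(best)  # {0: 'move', 1: 'stay'}: leave the expensive state, then stay
```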

Learning algorithms

Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost criterion. There are numerous algorithms available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation.

Most of the algorithms used in training artificial neural networks employ some form of gradient descent, using backpropagation to compute the actual gradients. This is done by simply taking the derivative of the cost function with respect to the network parameters and then changing those parameters in a gradient-related direction.

Evolutionary methods,[37] gene expression programming,[38] simulated annealing,[39] expectation-maximization, non-parametric methods and particle swarm optimization[40] are some commonly used methods for training neural networks.
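The "derivative of the cost with respect to the network parameters" can even be estimated without backpropagation, by finite differences; backpropagation computes the same gradients analytically and far more cheaply, so the sketch below is purely illustrative (the one-hidden-unit model and all numbers are invented):

```python
import math

# Generic gradient-descent training loop that treats the network as a
# black-box cost function and estimates each partial derivative with a
# finite difference. Real training uses backpropagation instead.
def net(params, x):
    w1, w2, b = params              # a one-hidden-unit "network"
    return math.tanh(w1 * x + b) * w2

def cost(params, data):
    return sum((net(params, x) - y) ** 2 for x, y in data) / len(data)

data = [(k / 10, math.tanh(0.8 * (k / 10))) for k in range(-10, 11)]
params, lr, eps = [0.1, 0.1, 0.0], 0.2, 1e-6
start_cost = cost(params, data)
for _ in range(2000):
    grads = []
    for i in range(3):
        bumped = list(params)
        bumped[i] += eps            # perturb one parameter at a time
        grads.append((cost(bumped, data) - cost(params, data)) / eps)
    params = [p - lr * g for p, g in zip(params, grads)]
print(start_cost, '->', cost(params, data))  # the cost falls during training
```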

Employing artificial neural networks

Perhaps the greatest advantage of ANNs is their ability to be used as an arbitrary function approximation mechanism that 'learns' from observed data. However, using them is not so straightforward, and a relatively good understanding of the underlying theory is essential.

Choice of model: This will depend on the data representation and the application. Overly complex models tend to lead to problems with learning.

Learning algorithm: There are numerous trade-offs between learning algorithms. Almost any algorithm will work well with the correct hyperparameters for training on a particular fixed data set. However, selecting and tuning an algorithm for training on unseen data requires a significant amount of experimentation.

Robustness: If the model, cost function and learning algorithm are selected appropriately, the resulting ANN can be extremely robust.

With the correct implementation, ANNs can be used naturally in online learning and large data set applications. Their simple implementation and the existence of mostly local dependencies exhibited in the structure allows for fast, parallel implementations in hardware.

Applications

The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.

Real-life applications

The tasks artificial neural networks are applied to tend to fall within the following broad categories:

Function approximation, or regression analysis, including time series prediction, fitness approximation and modeling.
Classification, including pattern and sequence recognition, novelty detection and sequential decision making.
Data processing, including filtering, clustering, blind source separation and compression.
Robotics, including directing manipulators and prostheses.
Control, including computer numerical control.

Application areas include system identification and control (vehicle control, process control, natural resources management), quantum chemistry,[41] game-playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, object recognition and more), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications (e.g. automated trading systems), data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering.

Artificial neural networks have also been used to diagnose several cancers. An ANN-based hybrid lung cancer detection system named HLND improves the accuracy of diagnosis and the speed of lung cancer radiology.[42] These networks have also been used to diagnose prostate cancer. The diagnoses can be used to make specific models taken from a large group of patients compared to information of one given patient. The models do not depend on assumptions about correlations of different variables. Colorectal cancer has also been predicted using the neural networks. Neural networks could predict the outcome for a patient with colorectal cancer with more accuracy than the current clinical methods. After training, the networks could predict multiple patient outcomes from unrelated institutions.[43]

Neural networks and neuroscience

Theoretical and computational neuroscience is the field concerned with the theoretical analysis and the computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behavior, the field is closely related to cognitive and behavioral modeling.

The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to make a link between observed biological processes (data), biologically plausible mechanisms for neural processing and learning (biological neural network models) and theory (statistical learning theory and information theory).

Types of models

Many models are used in the field, defined at different levels of abstraction and modeling different aspects of neural systems. They range from models of the short-term behavior of individual neurons, through models of how the dynamics of neural circuitry arise from interactions between individual neurons, and finally to models of how behavior can arise from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and their relations to learning and memory, from the individual neuron to the system level.

Neural network software

Neural network software is used to simulate, research, develop and apply artificial neural networks, biological neural networks and, in some cases, a wider array of adaptive systems.

Types of artificial neural networks

Artificial neural network types vary from those with only one or two layers of single-direction logic, to complicated multi-input, many-directional feedback loops and layers. On the whole, these systems use algorithms in their programming to determine control and organization of their functions. Most systems use "weights" to change the parameters of the throughput and the varying connections to the neurons. Artificial neural networks can be autonomous, learning by input from outside "teachers" or even self-teaching from written-in rules.

Theoretical properties

Computational power

The multilayer perceptron (MLP) is a universal function approximator, as proven by the universal approximation theorem. However, the proof is not constructive regarding the number of neurons required or the settings of the weights.

Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a specific recurrent architecture with rational-valued weights (as opposed to full-precision real-number-valued weights) has the full power of a Universal Turing Machine,[44] using a finite number of neurons and standard linear connections. They have further shown that the use of irrational values for weights results in a machine with super-Turing power.[45]

Capacity

Artificial neural network models have a property called 'capacity', which roughly corresponds to their ability to model any given function. It is related to the amount of information that can be stored in the network and to the notion of complexity.

Convergence

Nothing can be said in general about convergence since it depends on a number of factors. Firstly, there may exist many local minima. This depends on the cost function and the model. Secondly, the optimization method used might not be guaranteed to converge when far away from a local minimum. Thirdly, for a very large amount of data or parameters, some methods become impractical. In general, it has been found that theoretical guarantees regarding convergence are an unreliable guide to practical application.

Generalization and statistics

In applications where the goal is to create a system that generalizes well to unseen examples, the problem of over-training has emerged. This arises in convoluted or over-specified systems when the capacity of the network significantly exceeds the needed free parameters. There are two schools of thought for avoiding this problem: The first is to use cross-validation and similar techniques to check for the presence of over-training and to optimally select hyperparameters so as to minimize the generalization error. The second is to use some form of regularization. This is a concept that emerges naturally in a probabilistic (Bayesian) framework, where the regularization can be performed by selecting a larger prior probability over simpler models, but also in statistical learning theory, where the goal is to minimize over two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error in unseen data due to overfitting.
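The regularization idea can be sketched as a penalty term added to the empirical cost; larger weights are taxed, so simpler models are preferred. The one-parameter model, data, and lambda values below are arbitrary illustrations:

```python
# Regularization sketch: the training cost is the empirical error plus
# a penalty, C = (1/N) * sum_i (w*x_i - y_i)^2 + lambda * w^2, which
# biases learning toward simpler (smaller-weight) models. The intercept
# is fixed at 0 for brevity; all numbers are invented.
def regularized_cost(w, data, lam):
    mse = sum((w * x - y) ** 2 for x, y in data) / len(data)
    return mse + lam * w * w          # L2 penalty on the weight

data = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]
results = []
for lam in (0.0, 0.1, 1.0):
    # crude grid search over candidate weights in [-3, 3]
    best_w = min((k / 100 for k in range(-300, 301)),
                 key=lambda w: regularized_cost(w, data, lam))
    results.append(best_w)
print(results)  # the selected weight shrinks toward 0 as lambda grows
```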
Supervised neural networks that use an MSE cost function can use formal statistical methods to determine the confidence of the trained model. The MSE on a validation set can be used as an estimate for variance. This value can then be used to calculate the confidence interval of the output of the network, assuming a normal distribution. A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.
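Concretely, the validation MSE serves as the variance estimate, and its square root scales a normal-distribution interval around a new prediction. The predictions and targets below are invented numbers:

```python
import math

# Using validation-set MSE as a variance estimate to form an
# approximate 95% confidence interval around a network output,
# assuming normally distributed errors (1.96 is the usual z value).
val_predictions = [2.1, 3.9, 6.2, 7.8]
val_targets     = [2.0, 4.0, 6.0, 8.0]
mse = sum((p - t) ** 2
          for p, t in zip(val_predictions, val_targets)) / len(val_targets)
sigma = math.sqrt(mse)               # standard-deviation estimate

new_output = 5.0                     # some fresh network prediction
interval = (new_output - 1.96 * sigma, new_output + 1.96 * sigma)
print(interval)  # the 95% interval around the prediction
```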
Confidence analysis of a neural network

By assigning a softmax activation function, a generalization of the logistic function, on the output layer of the neural network (or a softmax component in a component-based neural network) for categorical target variables, the outputs can be interpreted as posterior probabilities. This is very useful in classification as it gives a certainty measure on classifications.

The softmax activation function is:

\[ y_i = \frac{e^{x_i}}{\sum_{j=1}^{c} e^{x_j}} \]
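A direct implementation of this formula follows; subtracting the maximum input before exponentiating is a standard numerical-stability trick that cancels in the ratio and leaves the result unchanged:

```python
import math

# Numerically stable softmax: exponentiate shifted inputs, normalize.
def softmax(xs):
    m = max(xs)                          # shift to avoid overflow
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs, sum(probs))  # components sum to 1, usable as probabilities
```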

Controversies

Training issues

A common criticism of neural networks, particularly in robotics, is that they require a large diversity of training for real-world operation. This is not surprising, since any learning machine needs sufficient representative examples in order to capture the underlying structure that allows it to generalize to new cases. Dean Pomerleau, in his research presented in the paper "Knowledge-based Training of Artificial Neural Networks for Autonomous Robot Driving," uses a neural network to train a robotic vehicle to drive on multiple types of roads (single lane, multi-lane, dirt, etc.). A large amount of his research is devoted to (1) extrapolating multiple training scenarios from a single training experience, and (2) preserving past training diversity so that the system does not become overtrained (if, for example, it is presented with a series of right turns it should not learn to always turn right). These issues are common in neural networks that must decide from amongst a wide variety of responses, but can be dealt with in several ways, for example by randomly shuffling the training examples, by using a numerical optimization algorithm that does not take too large steps when changing the network connections following an example, or by grouping examples in so-called mini-batches.

A. K. Dewdney, a former Scientific American columnist, wrote in 1997, "Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool." (Dewdney, p. 82)

Hardware issues

To implement large and effective software neural networks, considerable processing and storage resources need to be committed. While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a most simplified form on Von Neumann technology may compel a neural network designer to fill many millions of database rows for its connections, which can consume vast amounts of computer memory and hard disk space. Furthermore, the designer of neural network systems will often need to simulate the transmission of signals through many of these connections and their associated neurons, which must often be matched with incredible amounts of CPU processing power and time. While neural networks often yield effective programs, they too often do so at the cost of efficiency (they tend to consume considerable amounts of time and money).

Computing power continues to grow roughly according to Moore's Law, which may provide sufficient resources to accomplish new tasks. Neuromorphic engineering addresses the hardware difficulty directly, by constructing non-Von-Neumann chips with circuits designed to implement neural nets from the ground up.

Practical counterexamples to criticisms

Arguments against Dewdney's position are that neural networks have been successfully used to solve many complex and diverse tasks, ranging from autonomously flying aircraft[46] to detecting credit card fraud.

Technology writer Roger Bridgman commented on Dewdney's statements about neural nets:

    Neural networks, for instance, are in the dock not only because they have been hyped to high heaven, (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table... valueless as a scientific resource".

    In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.[47]

Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network. Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering generic principles which allow a learning machine to be successful. For example, Bengio and LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep architecture.[48]

Hybrid approaches

Some other criticisms come from advocates of hybrid models (combining neural networks and symbolic approaches). They advocate the intermix of these two approaches and believe that hybrid models can better capture the mechanisms of the human mind.[49][50]
Gallery

A single-layer feedforward artificial neural network (some arrows are omitted for clarity). There are p inputs to this network and q outputs. In this system, the value of the qth output is computed as an activation function applied to a weighted sum of the inputs, as shown in the original figure.

A two-layer feedforward artificial neural network.

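The single-layer computation described in the caption can be sketched in code. This is a minimal illustration under common assumptions, not the article's own formula: the activation function K, the weights w, and the biases b below are hypothetical examples.

```python
import math

# Minimal single-layer feedforward computation (illustrative sketch).
# Each output o_j = K(sum_i x[i] * w[i][j] - b[j]), where K is an
# activation function; tanh is a common choice and is used here.

def forward(x, w, b, K=math.tanh):
    """x: p inputs; w: p-by-q weight matrix; b: q biases."""
    return [K(sum(x[i] * w[i][j] for i in range(len(x))) - b[j])
            for j in range(len(b))]

# Tiny example: 3 inputs, 2 outputs (all values hypothetical).
x = [1.0, 0.5, -0.5]
w = [[0.2, -0.1],
     [0.4,  0.3],
     [0.1,  0.5]]
b = [0.0, 0.1]
print(forward(x, w, b))
```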
See also

20Q
ADALINE
Adaptive resonance theory
Artificial life
Associative memory
Autoencoder
Backpropagation
BEAM robotics
Biological cybernetics
Biologically inspired computing
Blue Brain
Catastrophic interference
Cerebellar Model Articulation Controller
Cognitive architecture
Cognitive science
Connectionist expert system
Connectomics
Cultured neuronal networks
Digital morphogenesis
Encog
Fuzzy logic
Gene expression programming
Genetic algorithm
Group method of data handling
Habituation
In Situ Adaptive Tabulation
Memristor
Multilinear subspace learning
Neuroevolution
Neural coding
Neural gas
Neural network software
Neuroscience
Ni1000 chip
Nonlinear system identification
Optical neural network
Parallel Constraint Satisfaction Processes
Parallel distributed processing
Radial basis function network
Recurrent neural networks
Self-organizing map
Systolic array
Tensor product network
Time delay neural network (TDNN)

References

1. ^ McCulloch, Warren; Walter Pitts (1943). "A Logical Calculus of Ideas Immanent in Nervous Activity". Bulletin of Mathematical Biophysics 5 (4): 115–133. doi:10.1007/BF02478259.
2. ^ Hebb, Donald (1949). The Organization of Behavior. New York: Wiley.
3. ^ Farley, B. G.; W. A. Clark (1954). "Simulation of Self-Organizing Systems by Digital Computer". IRE Transactions on Information Theory 4 (4): 76–84. doi:10.1109/TIT.1954.1057468.
4. ^ Rochester, N.; J. H. Holland, L. H. Habit, and W. L. Duda (1956). "Tests on a cell assembly theory of the action of the brain, using a large digital computer". IRE Transactions on Information Theory 2 (3): 80–93. doi:10.1109/TIT.1956.1056810.
5. ^ Rosenblatt, F. (1958). "The Perceptron: A Probabilistic Model For Information Storage And Organization In The Brain". Psychological Review 65 (6): 386–408. doi:10.1037/h0042519. PMID 13602029.
6. ^ a b Werbos, P. J. (1975). Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences.
7. ^ Minsky, M.; S. Papert (1969). An Introduction to Computational Geometry. MIT Press. ISBN 0-262-63022-2.
8. ^ Rumelhart, D. E.; James McClelland (1986). Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge: MIT Press.
9. ^ Russell, Ingrid. "Neural Networks Module" (http://uhaweb.hartford.edu/compsci/neural-networks-definition.html). Retrieved 2012.
10. ^ Yang, J. J.; Pickett, M. D.; Li, X. M.; Ohlberg, D. A. A.; Stewart, D. R.; Williams, R. S. Nat. Nanotechnol. 2008, 3, 429–433.
11. ^ Strukov, D. B.; Snider, G. S.; Stewart, D. R.; Williams, R. S. Nature 2008, 453, 80–83.
12. ^ http://www.kurzweilai.net/how-bio-inspired-deep-learning-keeps-winning-competitions 2012 Kurzweil AI interview with Jürgen Schmidhuber on the eight competitions won by his Deep Learning team 2009–2012.
13. ^ Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552.
14. ^ A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.
15. ^ http://www.scholarpedia.org/article/Deep_belief_networks/
16. ^ Hinton, G. E.; Osindero, S.; Teh, Y. (2006). "A fast learning algorithm for deep belief nets" (http://www.cs.toronto.edu/~hinton/absps/fastnc.pdf). Neural Computation 18 (7): 1527–1554. doi:10.1162/neco.2006.18.7.1527. PMID 16764513.
17. ^ a b Fukushima, K. (1980). "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position". Biological Cybernetics 36 (4): 193–202. doi:10.1007/BF00344251. PMID 7370364.
18. ^ a b M. Riesenhuber, T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 1999.
19. ^ D. C. Ciresan, U. Meier, J. Masci, J. Schmidhuber. Multi-Column Deep Neural Network for Traffic Sign Classification. Neural Networks, 2012.
20. ^ a b D. Ciresan, A. Giusti, L. Gambardella, J. Schmidhuber. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images. In Advances in Neural Information Processing Systems (NIPS 2012), Lake Tahoe, 2012.
21. ^ a b D. C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR 2012.
22. ^ 2012 Kurzweil AI interview (http://www.kurzweilai.net/how-bio-inspired-deep-learning-keeps-winning-competitions) with Jürgen Schmidhuber on the eight competitions won by his Deep Learning team 2009–2012.
23. ^ Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks (http://www.idsia.ch/~juergen/nips2009.pdf), in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), 7–10 December 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552.
24. ^ A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition (http://www.idsia.ch/~juergen/tpami_2008.pdf). IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.
25. ^ D. C. Ciresan, U. Meier, J. Masci, J. Schmidhuber. Multi-Column Deep Neural Network for Traffic Sign Classification. Neural Networks, 2012.
26. ^ Deep belief networks (http://www.scholarpedia.org/article/Deep_belief_networks) at Scholarpedia.
27. ^ Hinton, G. E.; Osindero, S.; Teh, Y. W. (2006). "A Fast Learning Algorithm for Deep Belief Nets" (http://www.cs.toronto.edu/~hinton/absps/fastnc.pdf). Neural Computation 18 (7): 1527–1554. doi:10.1162/neco.2006.18.7.1527. PMID 16764513.
28. ^ John Markoff (November 23, 2012). "Scientists See Promise in Deep-Learning Programs" (http://www.nytimes.com/2012/11/24/science/scientists-see-advances-in-deep-learning-a-part-of-artificial-intelligence.html). New York Times.
29. ^ "The Machine Learning Dictionary" (http://www.cse.unsw.edu.au/~billw/mldict.html#activnfn).
30. ^ Dominic, S.; Das, R.; Whitley, D.; Anderson, C. (July 1991). IJCNN-91 Seattle International Joint Conference on Neural Networks. Seattle, Washington, USA: IEEE. doi:10.1109/IJCNN.1991.155315. ISBN 0-7803-0164-1. Retrieved 29 July 2012.
31. ^ Hoskins, J. C.; Himmelblau, D. M. (1992). "Process control via artificial neural networks and reinforcement learning". Computers & Chemical Engineering 16 (4): 241–251. doi:10.1016/0098-1354(92)80045-B.
32. ^ Bertsekas, D. P.; Tsitsiklis, J. N. (1996). Neuro-dynamic programming. Athena Scientific. p. 512. ISBN 1-886529-10-8.
33. ^ Secomandi, Nicola (2000). "Comparing neuro-dynamic programming algorithms for the vehicle routing problem with stochastic demands". Computers & Operations Research 27 (11–12): 1201–1225. doi:10.1016/S0305-0548(99)00146-X.
34. ^ de Rigo, D.; Rizzoli, A. E.; Soncini-Sessa, R.; Weber, E.; Zenesi, P. (2001). Proceedings of MODSIM 2001, International Congress on Modelling and Simulation (https://zenodo.org/record/7482/files/de_Rigo_etal_MODSIM2001_activelink_authorcopy.pdf). Canberra, Australia: Modelling and Simulation Society of Australia and New Zealand. doi:10.5281/zenodo.7481. ISBN 0-86740-525-2. Retrieved 29 July 2012.
35. ^ Damas, M.; Salmeron, M.; Diaz, A.; Ortega, J.; Prieto, A.; Olivares, G. (2000). Proceedings of the 2000 Congress on Evolutionary Computation. La Jolla, California, USA: IEEE. doi:10.1109/CEC.2000.870269. ISBN 0-7803-6375-2. Retrieved 29 July 2012.
36. ^ Deng, Geng; Ferris, M. C. (2008). "Neuro-dynamic programming for fractionated radiotherapy planning". Springer Optimization and Its Applications 12: 47–70. doi:10.1007/978-0-387-73299-2_3.
37. ^ de Rigo, D.; Castelletti, A.; Rizzoli, A. E.; Soncini-Sessa, R.; Weber, E. (January 2005). Pavel Zítek, ed. Proceedings of the 16th IFAC World Congress, IFAC-PapersOnLine (http://www.nt.ntnu.no/users/skoge/prost/proceedings/ifac2005/Papers/Paper4269.html). 16th IFAC World Congress (http://www.nt.ntnu.no/users/skoge/prost/proceedings/ifac2005/Index.html) 16. Prague, Czech Republic: IFAC. doi:10.3182/20050703-6-CZ-1902.02172. ISBN 978-3-902661-75-3. Retrieved 30 December 2011.
38. ^ Ferreira, C. (2006). "Designing Neural Networks Using Gene Expression Programming" (http://www.gene-expression-programming.com/webpapers/FerreiraASCT2006.pdf). In A. Abraham, B. de Baets, M. Köppen, and B. Nickolay, eds., Applied Soft Computing Technologies: The Challenge of Complexity, pages 517–536, Springer-Verlag.
39. ^ Da, Y.; Xiurun, G. (July 2005). T. Villmann, ed. "An improved PSO-based ANN with simulated annealing technique". New Aspects in Neurocomputing: 11th European Symposium on Artificial Neural Networks (http://www.dice.ucl.ac.be/esann/proceedings/electronicproceedings.htm). Elsevier. doi:10.1016/j.neucom.2004.07.002. Retrieved 30 December 2011.
40. ^ Wu, J.; Chen, E. (May 2009). Wang, H.; Shen, Y.; Huang, T.; Zeng, Z., ed. "A Novel Nonparametric Regression Ensemble for Rainfall Forecasting Using Particle Swarm Optimization Technique Coupled with Artificial Neural Network". 6th International Symposium on Neural Networks, ISNN 2009 (http://www2.mae.cuhk.edu.hk/~isnn2009/). Springer. doi:10.1007/978-3-642-01513-7_6. ISBN 978-3-642-01215-0. Retrieved 1 January 2012.
41. ^ Roman M. Balabin; Ekaterina I. Lomakina (2009). "Neural network approach to quantum-chemistry data: Accurate prediction of density functional theory energies". J. Chem. Phys. 131 (7): 074104. doi:10.1063/1.3206326. PMID 19708729.
42. ^ Ganesan, N. "Application of Neural Networks in Diagnosing Cancer Disease Using Demographic Data" (http://www.ijcaonline.org/journal/number26/pxc387783.pdf). International Journal of Computer Applications.
43. ^ Bottaci, Leonardo. "Artificial Neural Networks Applied to Outcome Prediction for Colorectal Cancer Patients in Separate Institutions" (http://www.lcc.uma.es/~jja/recidiva/042.pdf). The Lancet.
44. ^ Siegelmann, H. T.; Sontag, E. D. (1991). "Turing computability with neural nets" (http://www.math.rutgers.edu/~sontag/FTP_DIR/amlturing.pdf). Appl. Math. Lett. 4 (6): 77–80. doi:10.1016/0893-9659(91)90080-F.
45. ^ Balcazar, Jose (Jul 1997). "Computational Power of Neural Networks: A Kolmogorov Complexity Characterization" (http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=605580&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D605580). Information Theory, IEEE Transactions on 43 (4): 1175–1183. doi:10.1109/18.605580. Retrieved 3 November 2014.
46. ^ NASA Dryden Flight Research Center, News Room: News Releases: NASA NEURAL NETWORK PROJECT PASSES MILESTONE (http://www.nasa.gov/centers/dryden/news/NewsReleases/2003/03-49.html). Nasa.gov. Retrieved on 2013-11-20.
47. ^ Roger Bridgman's defence of neural networks (http://members.fortunecity.com/templarseries/popper.html)
48. ^ http://www.iro.umontreal.ca/~lisa/publications2/index.php/publications/show/4
49. ^ Sun and Bookman (1990)
50. ^ Tahmasebi; Hezarkhani (2012). "A hybrid neural networks-fuzzy logic-genetic algorithm for grade estimation" (http://www.sciencedirect.com/science/article/pii/S0098300412000398). Computers & Geosciences 42: 18–27. doi:10.1016/j.cageo.2012.02.004.

Bibliography

Bhadeshia H. K. D. H. (1999). "Neural Networks in Materials Science" (http://www.msm.cam.ac.uk/phase-trans/abstracts/neural.review.pdf). ISIJ International 39 (10): 966–979. doi:10.2355/isijinternational.39.966.
Bishop, C. M. (1995) Neural Networks for Pattern Recognition, Oxford: Oxford University Press. ISBN 0-19-853849-9 (hardback) or ISBN 0-19-853864-2 (paperback)
Cybenko, G. V. (1989). Approximation by Superpositions of a Sigmoidal function, Mathematics of Control, Signals, and Systems, Vol. 2, pp. 303–314. Electronic version (http://actcomm.dartmouth.edu/gvc/papers/approx_by_superposition.pdf)
Duda, R. O., Hart, P. E., Stork, D. G. (2001) Pattern classification (2nd edition), Wiley, ISBN 0-471-05669-3
Egmont-Petersen, M., de Ridder, D., Handels, H. (2002). "Image processing with neural networks – a review". Pattern Recognition 35 (10): 2279–2301. doi:10.1016/S0031-3203(01)00178-9.
Gurney, K. (1997) An Introduction to Neural Networks, London: Routledge. ISBN 1-85728-673-1 (hardback) or ISBN 1-85728-503-4 (paperback)
Haykin, S. (1999) Neural Networks: A Comprehensive Foundation, Prentice Hall, ISBN 0-13-273350-1
Fahlman, S., Lebiere, C. (1991). The Cascade-Correlation Learning Architecture, created for National Science Foundation, Contract Number EET-8716324, and Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976 under Contract F33615-87-C-1499. Electronic version (http://www.cs.iastate.edu/~honavar/fahlman.pdf)
Hertz, J., Palmer, R. G., Krogh, A. S. (1990) Introduction to the theory of neural computation, Perseus Books. ISBN 0-201-51560-1
Lawrence, Jeanette (1994) Introduction to Neural Networks, California Scientific Software Press. ISBN 1-883157-00-5
Masters, Timothy (1994) Signal and Image Processing with Neural Networks, John Wiley & Sons, Inc. ISBN 0-471-04963-8
Ripley, Brian D. (1996) Pattern Recognition and Neural Networks, Cambridge
Siegelmann, H. T. and Sontag, E. D. (1994). Analog computation via neural networks, Theoretical Computer Science, v. 131, no. 2, pp. 331–360. Electronic version (http://www.math.rutgers.edu/~sontag/FTP_DIR/nets-real.pdf)
Sergios Theodoridis, Konstantinos Koutroumbas (2009) "Pattern Recognition", 4th Edition, Academic Press, ISBN 978-1-59749-272-0.
Smith, Murray (1993) Neural Networks for Statistical Modeling, Van Nostrand Reinhold, ISBN 0-442-01310-8
Wasserman, Philip (1993) Advanced Methods in Neural Computing, Van Nostrand Reinhold, ISBN 0-442-00461-3
Computational Intelligence: A Methodological Introduction by Kruse, Borgelt, Klawonn, Moewes, Steinbrecher, Held, 2013, Springer, ISBN 9781447150121
Neuro-Fuzzy-Systeme (3rd edition) by Borgelt, Klawonn, Kruse, Nauck, 2003, Vieweg, ISBN 9783528252656

External links

Wikibooks has a book on the topic of: Artificial Neural Networks

Neural Networks (https://www.dmoz.org/Computers/Artificial_Intelligence/Neural_Networks) at DMOZ
A brief introduction to Neural Networks (http://www.dkriesel.com/en/science/neural_networks) (PDF), illustrated 250-page textbook covering the common kinds of neural networks (CC license).
Retrieved from "http://en.wikipedia.org/w/index.php?title=Artificial_neural_network&oldid=644503870"
Categories: Computational statistics | Artificial neural networks | Classification algorithms | Computational neuroscience | Mathematical psychology
This page was last modified on 28 January 2015, at 03:42.
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
