
Good morning. My name is Ru, and I work with Prof. Rakesh Kumar at UIUC.

My talk will also be about algorithmic error tolerance. Specifically, I will talk about the error tolerance of Markov chain sampling algorithms.
1
- Now, why do we focus on error-tolerant algorithms? We do that because we would like to leverage such algorithms to implement applications on future low-power systems that use error-prone hardware, or on approximate hardware. We call this approach application robustification.
- Previous work from our group focused on one such class of robust algorithms: namely, numerical optimization algorithms. We showed that when applications are cast as numerical optimization problems and then solved using robust solvers like gradient descent, they can be robust to high error rates. For example, when graph matching was cast as an optimization problem, it still produced correct results at very high error rates.
- In this work we look at casting applications as Markov chain sampling algorithms. We believe that these algorithms are tolerant to transition errors.
- Therefore, future SW and HW can be implemented using these algorithms as a template.
- In this talk, I will present Markov chain algorithms and the intuition behind their robustness.
2
- Before I discuss MC algorithms, let's talk about how we can think about applications in this context.
- Imagine an application that has certain possible solutions, or states, and one of those is actually the correct solution, or goal state. To give you an example, consider sorting. Any permutation of the input numbers is a state, and one such permutation is the goal state.
- Now, if I have a way to check whether a particular solution is the correct solution, one way to find it is to generate samples from the state space and check whether any of them is indeed the correct solution. Now think about what happens if these samples are generated completely randomly. That scenario corresponds to generating samples from a uniform distribution over the states. For practical applications, the state space is generally large, so if samples are generated for checking completely at random, it might take a long time before we find the correct solution. Clearly this scheme is not efficient.
- A MC algorithm does this sampling in a much more intelligent way. A MC algorithm is an iterative algorithm that in every iteration produces a sample from the sample space. So, for example, if at time 0 the algorithm produced a particular state, in the next iteration it moves to another state, and so on.
- Now, what is done is that for a specific application a Markov chain is constructed such that if you take a set of samples that the MC algorithm produces in steady state, and you look at the distribution over states, you will see a distribution like this. What this means is that the probability of generating the goal state is much higher than that of the other states.
- This ensures that if you use these samples to check for a solution, you will find the solution much faster than when using completely random sampling.
- Now, how does a MC algorithm ensure such a steady-state distribution over states? That is done by having specific transition probabilities between states. Say, for example, the chain is in state 3; it will pick the next state out of all the possible states. In the next iteration the chain can visit the different states with certain fixed probabilities, and in a Markov chain these depend only on the current state. These probabilities also depend on the specific application and the application inputs.
- So, this is how Markov chain algorithms sample intelligently. When such an algorithm is implemented, in each iteration it will generally perform two computations. The first is the calculation of these transition probabilities, or of some heuristic that is directly related to these probabilities. The second is to actually generate a sample given the current state and the application inputs. (A small sketch of this loop follows these notes.)
- I want to emphasize that although there is randomness in this whole process, there is also some structure to it in terms of the shape of the steady-state distribution. Finding the correct solution in a reasonable amount of time depends on this structured randomness.
- Given this background, let us now try to understand why Markov chain algorithms are expected to be robust.
- These state changes are not completely random, as there is a goal that you want to achieve. There is a structured randomness in the process. You do not want to be wandering around the state space; you want to reach certain states of interest, or solution states. This is best understood in terms of the steady-state distribution over states.
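- To make these two per-iteration computations concrete, here is a minimal Python sketch of the generic sampling loop. This is only an illustration: the helper names `transition_probs` and `is_goal` are placeholders I am introducing here, not part of any particular implementation.

```python
import random

def mc_search(initial_state, transition_probs, is_goal, max_iters=100_000):
    """Generic Markov chain sampling loop (illustrative sketch).

    transition_probs(state) returns (next_state, probability) pairs that depend
    only on the current state and the application inputs; is_goal(state) checks
    whether a sample is the correct solution (the goal state).
    """
    state = initial_state
    for _ in range(max_iters):
        if is_goal(state):                               # check the current sample
            return state
        nxt, probs = zip(*transition_probs(state))       # computation 1: transition probabilities
        state = random.choices(nxt, weights=probs)[0]    # computation 2: sample the next state
    return None                                          # give up after the iteration budget
```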
3
- As I mentioned, in every iteration the algorithm does two operations: calculating the probability distribution and sampling from it. First: even if there is an error in calculating the probabilities, that does not necessarily mean the algorithm will pick a state different from the one it would have picked in the absence of errors.
- Second: even if it did generate a different sample in a particular iteration, what that means when we look across iterations at the overall algorithm is that the steady-state distribution over states will be different. But from the point of view of the application, we do not care about the exact distribution here. What we care about is the structure of the distribution, namely that it has a peak at the correct solution. It is due to this structure that we are able to arrive at a solution in a reasonable amount of time.
- We believe it is entirely possible that, even in the presence of errors, the structure does not change so much that we cannot efficiently find a solution; in other words, enough of the structure remains.
- This is what makes MC algorithms robust to errors.
- However, there are potential effects of changes in the steady-state distribution. First, and most importantly, it can affect the runtime. Second, for some applications, it can also affect specific output-quality metrics that matter for those applications. This is important to keep in mind, as we will eventually have to deal with the question of what error rates are reasonable in a system that implements applications as MC algorithms. That really depends on what runtime and output quality are acceptable for different applications. I will illustrate these cases when I show robustness results for three specific applications later in the talk.
- We can also imagine how this is beneficial for approximate computing by functional under-design. Because of these properties, it might be possible to do these transition-probability calculations with very low precision and still get correct outputs. (A toy illustration of this idea follows these notes.)
- This concludes the first part of my talk. Up until this point our discussion has been very general. Now let me change gears and talk about specific applications.
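- As a toy illustration of this intuition, one could perturb the computed transition probabilities before sampling from them, and the chain still runs. Note that the perturbation model below is purely an assumption I am making for illustration; it is not the fault model used in our experiments, which I present later in the talk.

```python
import random

def perturb(pairs, error_rate=0.2, rel_noise=0.5):
    """Illustrative (assumed) error model: each computed transition probability
    is corrupted with probability `error_rate` by up to +/- `rel_noise` of its value."""
    noisy = []
    for state, prob in pairs:
        if random.random() < error_rate:
            prob = max(prob * (1.0 + random.uniform(-rel_noise, rel_noise)), 1e-12)
        noisy.append((state, prob))
    return noisy

# Usage with the mc_search sketch above:
#   mc_search(s0, lambda s: perturb(transition_probs(s)), is_goal)
# The sampled steady-state distribution shifts, but as long as it keeps its peak
# near the goal state, the search still terminates, possibly just more slowly.
```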
3
- To characterize the robustness of Markov chain algorithms, we picked three applications that can be implemented as Markov chain algorithms. These three applications are Boolean satisfiability (SAT), LDPC decoding, and clustering.
- In picking these applications we considered two things.
- First, we wanted to pick the applications from three different domains so that we can illustrate the generality of Markov chain sampling algorithms.
- Second, we picked applications that exhibit different effects of errors in terms of runtime and output quality.
- Now let us go into more detail on these applications.
6
- First, consider SAT. The SAT problem consists of some Boolean variables and some clauses specified in terms of those variables. The goal is to find an assignment of the variables that satisfies all clauses.
- In the MC setting for SAT, we can think of any assignment of the variables as a state. Thus we go from one assignment to another in each iteration.
- Also, there already exists a MC algorithm for SAT called WalkSAT, although it is generally not highlighted as such.
- This algorithm starts with a random assignment of the variables and, in each iteration, flips one variable until it reaches an assignment where all clauses are satisfied.
- Now, given an assignment, how does it decide which variable to flip next? Given an assignment, it evaluates the satisfiability of all the clauses; in other words, it figures out which clauses are satisfied and which are unsatisfied. It then randomly picks an unsatisfied clause and looks at the variables in that clause.
- For example, here this clause has variables x1, x3, and x6. It then computes a heuristic called the break count, which is basically the number of clauses that will break, or go from satisfied to unsatisfied, if a particular variable is flipped. It then chooses the variable with the lowest break count with probability p, and chooses a random variable with probability (1 - p).
- So we see how WalkSAT nicely fits our model of a MC algorithm; a sketch of the procedure follows these notes.
- One thing to note here is that in each iteration, the algorithm …
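- For reference, here is a compact Python sketch of this procedure, written in the style of WalkSAT as described above. The clause encoding and the parameter names are my own illustrative choices, not taken from any specific implementation.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000):
    """WalkSAT-style sketch. Each clause is a list of non-zero ints: literal v
    refers to variable |v|, and a negative sign means the variable is negated."""
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] if lit > 0 else not assign[abs(lit)]

    for _ in range(max_flips):
        unsat = [i for i, c in enumerate(clauses) if not any(sat(l) for l in c)]
        if not unsat:
            return assign                          # goal state: every clause satisfied
        clause = clauses[random.choice(unsat)]     # pick a random unsatisfied clause

        def break_count(var):
            # satisfied clauses that would become unsatisfied if `var` were flipped
            assign[var] = not assign[var]
            broken = sum(1 for i, c in enumerate(clauses)
                         if i not in unsat and not any(sat(l) for l in c))
            assign[var] = not assign[var]
            return broken

        if random.random() < p:
            var = min({abs(l) for l in clause}, key=break_count)   # lowest break count
        else:
            var = abs(random.choice(clause))                       # random variable in the clause
        assign[var] = not assign[var]                              # transition: flip one variable
    return None
```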
7
- Now let us look at LDPC decoding. LDPC codes arise in communication systems where people want to send messages over noisy channels. So what they do is append some extra parity bits to the message; overall, the message bits and the parity bits satisfy a set of fixed parity constraints. For example, in the figure here, the code appends M parity bits to a message of length K.
- A message is encoded in this fashion at the transmitter and travels to the receiver over the channel. At the receiver, the decoding algorithm checks whether all the parity constraints hold. If they do, the message is assumed to be error free and is passed on to the receiver. If not, an error is assumed, and given this erroneous message the decoder tries to come up with the most likely message that was sent from the transmitter.
- There already exists a MC algorithm that does this. It is called the Weighted Bit Flip (WBF) algorithm. The algorithm starts with the assignment of the bits that it received and, in each iteration, flips one bit until it finds an assignment where all parity constraints are satisfied.
- Again, how does it decide which bit to flip? It uses a heuristic that looks at the number of unsatisfied parity checks that each bit appears in. The original WBF algorithm picks the bit with the highest value of this heuristic, so to make it a MC we added a randomness step, just like in WalkSAT.
- With that, we have a MC algorithm that does LDPC decoding; a sketch follows these notes.
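- Again for reference, a small Python sketch of this randomized bit-flipping decoder. It is simplified: the heuristic here is the plain count of unsatisfied checks each bit participates in, as described above, so the channel-reliability weighting of the full weighted bit-flip algorithm is omitted, and the data layout and names are my own choices.

```python
import random

def randomized_bit_flip(H, received, p=0.8, max_flips=10_000):
    """Bit-flipping LDPC decoder sketch with a WalkSAT-like randomness step.
    H is the parity-check matrix (a list of rows of 0/1 ints) and `received`
    is the list of hard-decision bits coming off the channel."""
    bits = list(received)

    def unsat_checks():
        # parity rows whose checksum over the current bits is odd (violated)
        return [row for row in H
                if sum(b for b, h in zip(bits, row) if h) % 2 == 1]

    for _ in range(max_flips):
        unsat = unsat_checks()
        if not unsat:
            return bits                            # all parity constraints satisfied
        # heuristic: how many unsatisfied checks each bit participates in
        score = [sum(row[i] for row in unsat) for i in range(len(bits))]
        if random.random() < p:
            i = max(range(len(bits)), key=score.__getitem__)              # most-implicated bit
        else:
            i = random.choice([j for j, s in enumerate(score) if s > 0])  # random step
        bits[i] ^= 1                               # transition: flip one bit
    return bits                                    # give up; return the current estimate
```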
8
- Talk about clustering, the DPMM model, and Gibbs sampling.
- So, this is how we get a MC algorithm that does clustering.
- Now, people in different domains might use clustering as part of a larger application where they care about the output quality of that application. We chose a metric, mean squared error, that is agnostic of such application-level metrics and just characterizes the goodness of the clustering. Mean squared error is simply the mean of the squared distance of each data point from its cluster's mean; a clustering result with a lower mean squared error is assumed to be better in this case. (A small sketch of this metric appears at the end of these notes.)
- This concludes the second part of my talk. So far I have talked about the general intuition for why Markov chain algorithms are robust to errors, and have presented three applications from three domains along with their Markov chain implementations.
- Now let me move on to some results from our error injection experiments. I will focus on two things:
- First, these algorithms can indeed be tolerant to transition errors, as we hypothesized.
- Second, depending on the application, errors can have different effects on the application in terms of runtime and output-quality metrics.
- But before I go there, let me present our fault model.
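- Referring back to the clustering quality metric above, it can be computed as in this small sketch; the argument names here are hypothetical.

```python
def clustering_mse(points, labels, means):
    """Mean squared error of a clustering: the mean, over all data points, of the
    squared distance from each point to the mean of its assigned cluster."""
    total = 0.0
    for x, k in zip(points, labels):              # k is the cluster index of point x
        total += sum((xi - mi) ** 2 for xi, mi in zip(x, means[k]))
    return total / len(points)
```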
9
- With this error model in mind, let us now look at our error injection results.
- Given these results, in future work we want to investigate the possibility of using Markov chains as a template for robust execution.
- Student after 30.
10
- Here I want to draw your attention to two things.
- Gradual degradation in runtime.
- It looks reasonable up until roughly a 20% error rate, which is a high error rate.
11
- Given these results, in future work we want to investigate the possibility of using Markov chains as a template for robust execution.
- Student after 30.
12
- So these results show that there is an effect only on output quality and not on runtime.
- So, to summarize and conclude:
13
- The takeaway from all of this, we believe, is that:
- To conclude the talk, let me give you an overview of our current work, which will take us another step closer to this goal.
14
- Since we are claiming that… we are building prototypes.
- Finally, our longer-term goals are:
15
-
16
