Playing Atari with Deep Reinforcement Learning

Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller
DeepMind Technologies
{vlad, koray, david, alex.graves, ioannis, daan, martin.riedmiller} @
arXiv:1312.5602v1 [cs.LG] 19 Dec 2013

Abstract

We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.

1 Introduction

Learning to control agents directly from high-dimensional sensory inputs like vision and speech is one of the long-standing challenges of reinforcement learning (RL).
Most successful RL applications that operate on these domains have relied on hand-crafted features combined with linear value functions or policy representations. Clearly, the performance of such systems heavily relies on the quality of the feature representation.

Recent advances in deep learning have made it possible to extract high-level features from raw sensory data, leading to breakthroughs in computer vision [11, 22, 16] and speech recognition [6, 7]. These methods utilise a range of neural network architectures, including convolutional networks, multilayer perceptrons, restricted Boltzmann machines and recurrent neural networks, and have exploited both supervised and unsupervised learning. It seems natural to ask whether similar techniques could also be beneficial for RL with sensory data.

However, reinforcement learning presents several challenges from a deep learning perspective. Firstly, most successful deep learning applications to date have required large amounts of hand-labelled training data. RL algorithms, on the other hand, must be able to learn from a scalar reward signal that is frequently sparse, noisy and delayed. The delay between actions and resulting rewards, which can be thousands of timesteps long, seems particularly daunting when compared to the direct association between inputs and targets found in supervised learning. Another issue is that most deep learning algorithms assume the data samples to be independent, while in reinforcement learning one typically encounters sequences of highly correlated states. Furthermore, in RL the data distribution changes as the algorithm learns new behaviours, which can be problematic for deep learning methods that assume a fixed underlying distribution.

This paper demonstrates that a convolutional neural network can overcome these challenges to learn successful control policies from raw video data in complex RL environments.
The network is trained with a variant of the Q-learning [26] algorithm, with stochastic gradient descent to update the weights. To alleviate the problems of correlated data and non-stationary distributions, we use an experience replay mechanism which randomly samples previous transitions, thereby smoothing the training distribution over many past behaviours.

2 Background

The optimal action-value function obeys an important identity known as the Bellman equation. This is based on the following intuition: if the optimal value Q*(s', a') of the sequence at the next time-step was known for all possible actions a', then the optimal strategy is to select the action a' maximising the expected value of r + γQ*(s', a'),

Q^*(s, a) = \mathbb{E}_{s' \sim \mathcal{E}}\left[\, r + \gamma \max_{a'} Q^*(s', a') \,\middle|\, s, a \,\right]    (1)
The basic idea behind many reinforcement learning algorithms is to estimate the action-value function by using the Bellman equation as an iterative update, Q_{i+1}(s, a) = \mathbb{E}\left[ r + \gamma \max_{a'} Q_i(s', a') \mid s, a \right]. Such value iteration algorithms converge to the optimal action-value function, Q_i → Q* as i → ∞ [23].
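To make the iterative update concrete, the sketch below applies the Bellman backup Q_{i+1}(s, a) = R(s, a) + γ Σ_{s'} P(s'|s, a) max_{a'} Q_i(s', a') to a tiny two-state MDP with a known model until the iterates stop changing. The MDP, its rewards, discount and tolerance are made-up illustrative values, not anything from the paper.

```python
import numpy as np

# Toy MDP (hypothetical, for illustration only): 2 states, 2 actions.
# P[s, a, s'] is the transition probability, R[s, a] the expected reward.
P = np.array([[[0.9, 0.1],    # state 0, action 0
               [0.2, 0.8]],   # state 0, action 1
              [[0.0, 1.0],    # state 1, action 0
               [0.5, 0.5]]])  # state 1, action 1
R = np.array([[0.0, 1.0],
              [2.0, 0.0]])
gamma = 0.9

Q = np.zeros((2, 2))          # Q_0, initialised arbitrarily
for i in range(1000):
    # Bellman backup: Q_{i+1}(s,a) = R(s,a) + gamma * E_{s'}[max_{a'} Q_i(s',a')]
    Q_next = R + gamma * P @ Q.max(axis=1)
    if np.max(np.abs(Q_next - Q)) < 1e-8:   # iterates have converged: Q_i -> Q*
        break
    Q = Q_next

print("Q* estimate:\n", Q)
```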
In practice, this basic approach is totally impractical, because the action-value function is estimated separately for each sequence, without any generalisation. Instead, it is common to use a function approximator to estimate the action-value function, Q(s, a; θ) ≈ Q*(s, a). In the reinforcement learning community this is typically a linear function approximator, but sometimes a non-linear function approximator is used instead, such as a neural network. We refer to a neural network function approximator with weights θ as a Q-network. A Q-network can be trained by minimising a sequence of loss functions L_i(θ_i) that changes at each iteration i,

L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho(\cdot)}\left[ \left( y_i - Q(s, a; \theta_i) \right)^2 \right]    (2)

where y_i = \mathbb{E}_{s' \sim \mathcal{E}}\left[ r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) \mid s, a \right] is the target for iteration i and ρ(s, a) is a probability distribution over sequences s and actions a that we refer to as the behaviour distribution. The parameters from the previous iteration θ_{i−1} are held fixed when optimising the loss function L_i(θ_i). Note that the targets depend on the network weights; this is in contrast with the targets used for supervised learning, which are fixed before learning begins. Differentiating the loss function with respect to the weights we arrive at the following gradient,

\nabla_{\theta_i} L_i(\theta_i) = \mathbb{E}_{s, a \sim \rho(\cdot);\, s' \sim \mathcal{E}}\left[ \left( r + \gamma \max_{a'} Q(s', a'; \theta_{i-1}) - Q(s, a; \theta_i) \right) \nabla_{\theta_i} Q(s, a; \theta_i) \right].    (3)
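The following sketch instantiates equations (2) and (3) for the simplest case the text mentions, a linear function approximator Q(s, a; θ) = θ_a · φ(s): the target y_i is computed with the previous-iteration parameters θ_{i−1} held fixed, the squared error is averaged over a batch of transitions as a Monte Carlo estimate of the expectation over ρ(·) and E, and one step is taken along the direction given by (3). The feature dimension, synthetic batch and learning rate are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions, gamma, lr = 4, 3, 0.99, 0.1

theta      = rng.normal(size=(n_actions, n_features))  # theta_i, being optimised
theta_prev = theta.copy()                              # theta_{i-1}, held fixed

def q(phi, params):
    """Linear action-value estimate Q(s, a; params) for every action a."""
    return params @ phi

# A batch of transitions (phi, a, r, phi') standing in for samples from the
# behaviour distribution rho and the emulator E (all values are synthetic).
batch = [(rng.normal(size=n_features), int(rng.integers(n_actions)),
          float(rng.normal()), rng.normal(size=n_features)) for _ in range(32)]

grad, loss = np.zeros_like(theta), 0.0
for phi, a, r, phi_next in batch:
    y = r + gamma * q(phi_next, theta_prev).max()      # target y_i uses theta_{i-1}
    td = y - q(phi, theta)[a]                          # y_i - Q(s, a; theta_i)
    loss += td ** 2 / len(batch)                       # Monte Carlo estimate of L_i
    grad[a] += td * phi / len(batch)                   # for linear Q, grad_theta Q(s,a) is phi on row a
theta += lr * grad                                     # step along eq. (3)'s direction

print(f"batch loss estimate: {loss:.3f}")
```

Stepping along this direction reduces the squared error; the constant factor from differentiating the square is simply absorbed into the learning rate.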
Rather than computing the full expectations in the above gradient, it is often computationally expedient to optimise the loss function by stochastic gradient descent. If the weights are updated after every time-step, and the expectations are replaced by single samples from the behaviour distribution ρ and the emulator E respectively, then we arrive at the familiar Q-learning algorithm [26].
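Replacing the expectations by a single sampled transition, with a table in place of the approximator, gives that familiar single-sample update. In the sketch below the sampling routine is a purely synthetic stand-in for the behaviour distribution ρ and the emulator E; the state and action counts, step size and rewards are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.5
Q = np.zeros((n_states, n_actions))

def sample_transition():
    """Stand-in for one interaction: (s, a) from the behaviour distribution, (r, s') from the emulator."""
    s = int(rng.integers(n_states))
    a = int(rng.integers(n_actions))
    r = float(rng.normal())
    s_next = int(rng.integers(n_states))
    return s, a, r, s_next

for _ in range(10_000):
    s, a, r, s_next = sample_transition()
    # Single-sample Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

print(Q)
```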
Note that this algorithm is model-free: it solves the reinforcement learning task directly using samples from the emulator E, without explicitly constructing an estimate of E. It is also off-policy: it learns about the greedy strategy a = max_a Q(s, a; θ), while following a behaviour distribution that ensures adequate exploration of the state space. In practice, the behaviour distribution is often selected by an ε-greedy strategy that follows the greedy strategy with probability 1 − ε and selects a random action with probability ε.
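A minimal sketch of the ε-greedy rule just described (the Q-values and ε below are made-up numbers): follow the greedy action with probability 1 − ε, otherwise pick an action uniformly at random.

```python
import numpy as np

rng = np.random.default_rng(2)

def epsilon_greedy(q_values, epsilon=0.1):
    """Return a greedy action with probability 1 - epsilon, else a random action."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))   # explore: uniformly random action
    return int(np.argmax(q_values))               # exploit: a = argmax_a Q(s, a; theta)

# Example: Q-values of one state's actions (illustrative numbers only).
print(epsilon_greedy(np.array([0.1, 0.5, -0.2])))
```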
3 Related Work

Perhaps the best-known success story of reinforcement learning is TD-gammon, a backgammon-playing program which learnt entirely by reinforcement learning and self-play, and achieved a super-human level of play [24]. TD-gammon used a model-free reinforcement learning algorithm similar to Q-learning, and approximated the value function using a multi-layer perceptron with one hidden layer¹. However, early attempts to follow up on TD-gammon, including applications of the same method to chess, Go and checkers, were less successful. This led to a widespread belief that the TD-gammon approach was a special case that only worked in backgammon, perhaps because the stochasticity in the dice rolls helps explore the state space and also makes the value function particularly smooth [19].

Furthermore, it was shown that combining model-free reinforcement learning algorithms such as Q-learning with non-linear function approximators [25], or indeed with off-policy learning [1], could cause the Q-network to diverge. Subsequently, the majority of work in reinforcement learning focused on linear function approximators with better convergence guarantees [25].

¹ In fact TD-Gammon approximated the state value function V(s) rather than the action-value function Q(s, a), and learnt on-policy directly from the