LSTM: A Search Space Odyssey

Klaus Greff, Rupesh Kumar Srivastava, Jan Koutník, Bas R. Steunebrink, Jürgen Schmidhuber

The Swiss AI Lab IDSIA
Istituto Dalle Molle di Studi sull'Intelligenza Artificiale
Università della Svizzera italiana (USI)
Scuola universitaria professionale della Svizzera italiana (SUPSI)
Galleria 2, 6928 Manno-Lugano, Switzerland

arXiv:1503.04069v1 [cs.NE] 13 Mar 2015

Abstract

Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful fANOVA framework. In total, we summarize the results of 5400 experimental runs (≈ 15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment.

1. Introduction

Recurrent neural networks with Long Short-Term Memory (which we will concisely refer to as LSTMs) have emerged as an effective and scalable model for several learning problems related to sequential data. Earlier methods for attacking these problems were usually hand-designed workarounds to deal with the sequential nature of data such as language and audio signals. Since LSTMs are effective at capturing long-term temporal dependencies without suffering from the optimization hurdles that plague simple recurrent networks (SRNs) (Hochreiter, 1991; Bengio et al., 1994), they have been used to advance the state of the art for many difficult problems. This includes handwriting recognition (Graves et al., 2009; Pham et al., 2013; Doetsch et al., 2014) and generation (Graves et al., 2013), language modeling (Zaremba et al., 2014), translation (Luong et al., 2014), acoustic modeling of speech (Sak et al., 2014), speech synthesis (Fan et al., 2014), and protein secondary structure prediction (Sønderby & Winther, 2014).

However, LSTMs are now applied to many learning problems which differ significantly in scale and nature from the problems that these improvements were initially tested on. A systematic study of the utility of the various computational components which comprise LSTMs (see Figure 1) was missing. This paper fills that gap and systematically addresses the open question of improving the LSTM architecture. We evaluate the most popular LSTM architecture (vanilla LSTM, Section 2) and eight different variants thereof on three benchmark problems: acoustic modeling, handwriting recognition, and polyphonic music modeling.

[…]

…the gradient for the other recurrent connections was truncated. Thus, that study did not use the exact gradient for training. Another feature of that version was the use of full gate recurrence, which means that all the gates received recurrent inputs from all gates at the previous time step, in addition to the recurrent inputs from the block outputs. This feature did not appear in any of the later papers.
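To make full gate recurrence concrete, a minimal sketch of the gate preactivations it implies, in notation we assume here (x_t input, y_{t-1} previous block output, and i, f, o the gate activations) since the defining equations of Section 2 are not part of this excerpt:

    \bar{g}_t = W_g x_t + R_g y_{t-1} + R_{gi} i_{t-1} + R_{gf} f_{t-1} + R_{go} o_{t-1} + b_g,
    \qquad g \in \{i, f, o\}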

3.2. Forget Gate

The first paper to suggest a modification of the LSTM architecture introduced the forget gate (Gers et al., 1999), enabling the LSTM to reset its own state. This allowed learning of continual tasks such as embedded Reber grammar.
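In equations, the forget gate multiplies the previous cell state, so a gate activation near zero resets the state. A minimal sketch, assuming the usual vanilla-LSTM notation (z_t block input, i_t input gate, \odot pointwise product):

    f_t = \sigma(W_f x_t + R_f y_{t-1} + b_f)
    c_t = f_t \odot c_{t-1} + i_t \odot z_t

In the original architecture f_t is effectively fixed to 1, so the cell state can only accumulate.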

3.3. Peephole Connections

Gers & Schmidhuber (2000) argued that in order to learn precise timings, the cell needs to control the gates. So far this was only possible through an open output gate. Peephole connections (connections from the cell to the gates, blue in Figure 1) were added to the architecture in order to make precise timings easier to learn. Additionally, the output activation function was omitted, as there was no evidence that it was essential for solving the problems that LSTM had been tested on so far.
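A minimal sketch of the peephole terms under the same assumed notation: each gate preactivation additionally receives the cell state through a diagonal peephole weight vector p, with the output gate peeking at the current state c_t and the other gates at c_{t-1}:

    \bar{i}_t = W_i x_t + R_i y_{t-1} + p_i \odot c_{t-1} + b_i
    \bar{f}_t = W_f x_t + R_f y_{t-1} + p_f \odot c_{t-1} + b_f
    \bar{o}_t = W_o x_t + R_o y_{t-1} + p_o \odot c_t + b_o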

3.4. Full Gradient

The final modification towards the vanilla LSTM was done by Graves & Schmidhuber (2005). This study presented the full backpropagation through time (BPTT) training for LSTM networks with the architecture described in Section 2, and presented results on the TIMIT benchmark. Using full BPTT had the added advantage that LSTM gradients could be checked using finite differences, making practical implementations more reliable.
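The kind of gradient check that full BPTT makes possible can be sketched as follows; this is generic illustration code, not code from the paper, and loss_fn, grad_fn, and params are hypothetical placeholders:

    import numpy as np

    def check_gradient(loss_fn, grad_fn, params, eps=1e-5):
        """Compare an analytic gradient with central finite differences.

        loss_fn: params -> scalar loss
        grad_fn: params -> gradient array with the same shape as params
        """
        analytic = grad_fn(params)
        numeric = np.zeros_like(params)
        for i in range(params.size):
            orig = params.flat[i]
            params.flat[i] = orig + eps
            loss_plus = loss_fn(params)
            params.flat[i] = orig - eps
            loss_minus = loss_fn(params)
            params.flat[i] = orig  # restore the parameter
            numeric.flat[i] = (loss_plus - loss_minus) / (2.0 * eps)
        # relative error tolerates differences in gradient scale
        denom = np.maximum(np.abs(analytic) + np.abs(numeric), 1e-12)
        return float(np.max(np.abs(analytic - numeric) / denom))

For float64, a maximum relative error around 1e-7 or less suggests the analytic gradient is correct.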

3.5. Other Variants

Since its introduction the vanilla LSTM has been the most commonly used architecture, but other variants have been suggested too. Before the introduction of full BPTT training, Gers et al. (2002) utilized a training method based on Extended Kalman Filtering, which enabled the LSTM to be trained on some pathological cases at the cost of high computational complexity. Schmidhuber et al. (2007) proposed using a hybrid evolution-based method instead of BPTT for training, but retained the vanilla LSTM architecture. Bayer et al. (2009) evolved different LSTM block architectures that maximize fitness on context-sensitive grammars. Sak et al. (2014) introduced a linear projection layer that projects the output of the LSTM layer down before recurrent and forward connections, in order to reduce the number of parameters for LSTM networks with many blocks. By introducing a trainable scaling parameter for the slope of the gate activation functions, Doetsch et al. (2014) were able to improve the performance of LSTM on an offline handwriting recognition dataset. In what they call Dynamic Cortex Memory, Otte et al. (2014) improved convergence speed of LSTM by adding recurrent connections between the gates of a single block (but not between the blocks). Cho et al. (2014) proposed a simplification of the LSTM architecture called Gated Recurrent Unit (GRU). They used neither peephole connections nor output activation functions, and coupled the input and the forget gate into an update gate. Finally, their output gate (called reset gate) only gates the recurrent connections to the block input (W_z). Chung et al. (2014) performed an initial comparison between GRU and LSTM and reported mixed results.
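For comparison with the vanilla LSTM, the GRU computation can be sketched as follows (the standard formulation of Cho et al. (2014); the symbol names are our assumption):

    z_t = \sigma(W_z x_t + R_z h_{t-1} + b_z)    % update gate: coupled input and forget gate
    r_t = \sigma(W_r x_t + R_r h_{t-1} + b_r)    % reset gate: gates only the recurrent input
    \tilde{h}_t = \tanh(W_h x_t + R_h (r_t \odot h_{t-1}) + b_h)
    h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t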

4. Evaluation Setup

The focus of our study is to compare different LSTM variants, and not to achieve state-of-the-art results. Therefore, our experiments are designed to keep the setup simple and the comparisons fair. The vanilla LSTM is used as a baseline and evaluated together with eight of its variants. Each variant adds, removes, or modifies the baseline in exactly one aspect, which allows us to isolate its effect. Three different datasets from different domains are used to account for cross-domain variations. Since the hyperparameter space is large and impossible to traverse completely, random search was used in order to obtain the best-performing hyperparameters (Bergstra & Bengio, 2012) for every combination of variant and dataset. Thereafter, all analyses focused on the 10% best-performing trials for each combination of variant and dataset (Section 5.1), making the results representative for the case of reasonable hyperparameter tuning efforts. Random search was also chosen for the added benefit of providing enough data for analyzing the general effect of various hyperparameters on the performance of each LSTM variant (Section 5.2).
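This protocol can be sketched compactly; the sampling ranges and the train_and_evaluate function below are illustrative placeholders, not the search space used in the paper:

    import random

    def sample_hyperparameters():
        """Draw one random configuration (illustrative ranges only)."""
        return {
            "hidden_size": random.choice([50, 100, 200]),
            "learning_rate": 10 ** random.uniform(-6, -2),  # log-uniform
            "momentum": random.uniform(0.0, 0.99),
            "input_noise_std": random.uniform(0.0, 1.0),
        }

    def run_search(train_and_evaluate, num_trials=200, top_fraction=0.1):
        """Random search; keep the best-performing fraction for analysis."""
        trials = []
        for _ in range(num_trials):
            hp = sample_hyperparameters()
            val_error = train_and_evaluate(hp)  # early-stopped on validation set
            trials.append((val_error, hp))
        trials.sort(key=lambda t: t[0])  # lower validation error is better
        return trials[: max(1, int(num_trials * top_fraction))]

Sorting by validation error and keeping the best 10% of trials mirrors the analysis described for Section 5.1.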

4.1. Datasets

Each dataset is split into three parts: a training set, a validation set used for early stopping and for optimizing the hyperparameters, and a test set for the final evaluation. Details of the preprocessing for each dataset are provided in the supplementary material.

4.1.1. TIMIT

The TIMIT Speech corpus (Garofolo et al., 1993) is large enough to be a reasonable acoustic modeling benchmark for speech recognition, yet it is small enough to keep a large study such as ours manageable. Our […]
