
Convolutional Neural Networks for Sentence Classification

Yoon Kim
New York University

arXiv:1408.5882v2 [cs.CL] 3 Sep 2014

Abstract

We report on a series of experiments with convolutional neural networks (CNN) trained on top of pre-trained word vectors for sentence-level classification tasks. We show that a simple CNN with little hyperparameter tuning and static vectors achieves excellent results on multiple benchmarks. Learning task-specific vectors through fine-tuning offers further gains in performance. We additionally propose a simple modification to the architecture to allow for the use of both task-specific and static vectors. The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification.

1 Introduction

Deep learning models have achieved remarkable results in computer vision (Krizhevsky et al., 2012) and speech recognition (Graves et al., 2013) in recent years. Within natural language processing, much of the work with deep learning methods has involved learning word vector representations through neural language models (Bengio et al., 2003; Yih et al., 2011; Mikolov et al., 2013) and performing composition over the learned word vectors for classification (Collobert et al., 2011). Word vectors, wherein words are projected from a sparse, 1-of-V encoding (here V is the vocabulary size) onto a lower dimensional vector space via a hidden layer, are essentially feature extractors that encode semantic features of words in their dimensions. In such dense representations, semantically close words are likewise close, in euclidean or cosine distance, in the lower dimensional vector space.
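As a concrete illustration of closeness in cosine distance, here is a minimal sketch with made-up 3-dimensional vectors (real word2vec vectors are 300-dimensional; the words and values are purely illustrative):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical dense vectors; in practice these come from a trained model.
good  = np.array([0.9, 0.1, 0.3])
great = np.array([0.8, 0.2, 0.4])
table = np.array([0.1, 0.9, 0.0])

print(cosine(good, great))  # ~0.98: semantically close words end up close
print(cosine(good, table))  # ~0.21: unrelated words end up far apart
```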

Convolutional neural networks (CNN) utilize layers with convolving filters that are applied to local features (LeCun et al., 1998). Originally invented for computer vision, CNN models have subsequently been shown to be effective for NLP and have achieved excellent results in semantic parsing (Yih et al., 2014), search query retrieval (Shen et al., 2014), sentence modeling (Kalchbrenner et al., 2014), and other traditional NLP tasks (Collobert et al., 2011).

In the present work, we train a simple CNN with one layer of convolution on top of word vectors obtained from an unsupervised neural language model. These vectors were trained by Mikolov et al. (2013) on 100 billion words of Google News, and are publicly available.1 We initially keep the word vectors static and learn only the other parameters of the model. Despite little tuning of hyperparameters, this simple model achieves excellent results on multiple benchmarks, suggesting that the pre-trained vectors are 'universal' feature extractors that can be utilized for various classification tasks. Learning task-specific vectors through fine-tuning results in further improvements. We finally describe a simple modification to the architecture to allow for the use of both pre-trained and task-specific vectors by having multiple channels.

Our work is philosophically similar to Razavian et al. (2014), which showed that for image classification, feature extractors obtained from a pre-trained deep learning model perform well on a variety of tasks, including tasks that are very different from the original task for which the feature extractors were trained.

1: https://code.google.com/p/word2vec/

2 Model

The model architecture, shown in figure 1, is a slight variant of the CNN architecture of Collobert et al. (2011). Let x_i ∈ R^k be the k-dimensional word vector corresponding to the i-th word in the sentence. A sentence of length n (padded where necessary) is represented as the concatenation of its word vectors.
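The excerpt breaks off before the rest of the architecture is described. The following is a minimal PyTorch sketch consistent with what the excerpt does state: one layer of convolution over the word vectors, with filter windows of 3, 4, 5 and 100 feature maps each and dropout of 0.5 (the values given in section 3.1). The max-over-time pooling step and the final fully connected layer are assumptions of this sketch, not details from the excerpt.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SentenceCNN(nn.Module):
    """One convolution layer over k-dimensional word vectors."""

    def __init__(self, vocab_size, k=300, num_classes=2,
                 windows=(3, 4, 5), feature_maps=100, p_drop=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, k)
        # One Conv1d per window size h: each filter spans h consecutive
        # k-dimensional word vectors and produces `feature_maps` features.
        self.convs = nn.ModuleList(
            nn.Conv1d(k, feature_maps, h) for h in windows)
        self.dropout = nn.Dropout(p_drop)
        self.fc = nn.Linear(feature_maps * len(windows), num_classes)

    def forward(self, x):                        # x: (batch, n) token ids
        e = self.embedding(x).transpose(1, 2)    # (batch, k, n)
        # ReLU, then max-over-time pooling per window size (assumed).
        pooled = [F.relu(c(e)).max(dim=2).values for c in self.convs]
        z = self.dropout(torch.cat(pooled, dim=1))
        return self.fc(z)                        # class scores

model = SentenceCNN(vocab_size=20000)
logits = model(torch.randint(0, 20000, (50, 40)))  # mini-batch of 50
```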

Data   c  l   N      |V|    |Vpre|  Test
MR     2  20  10662  18765  16448   CV
SST-1  5  18  11855  17836  16262   2210
SST-2  2  19  9613   16185  14838   1821
Subj   2  23  10000  21323  17913   CV
TREC   6  10  5952   9592   9125    500
CR     2  19  3775   5340   5046    CV
MPQA   2  3   10606  6246   6083    CV

Table 1: Summary statistics for the datasets after tokenization. c: Number of target classes. l: Average sentence length. N: Dataset size. |V|: Vocabulary size. |Vpre|: Number of words present in the set of pre-trained word vectors. Test: Test set size (CV means there was no standard train/test split and thus 10-fold CV was used).

3 Datasets and Experimental Setup

We test our model on various benchmarks. Summary statistics of the datasets are in table 1.

• MR: Movie reviews with one sentence per review. Classification involves detecting positive/negative reviews (Pang and Lee, 2005).3
• SST-1: Stanford Sentiment Treebank, an extension of MR but with train/dev/test splits provided and fine-grained labels (very positive, positive, neutral, negative, very negative), re-labeled by Socher et al. (2013).4
• SST-2: Same as SST-1 but with neutral reviews removed and binary labels.
• Subj: Subjectivity dataset where the task is to classify a sentence as being subjective or objective (Pang and Lee, 2004).
• TREC: TREC question dataset, where the task involves classifying a question into 6 question types (whether the question is about person, location, numeric information, etc.) (Li and Roth, 2002).5
• CR: Customer reviews of various products (cameras, MP3s, etc.). The task is to predict positive/negative reviews (Hu and Liu, 2004).6
• MPQA: Opinion polarity detection subtask of the MPQA dataset (Wiebe et al., 2005).7

3: www.cs.cornell.edu/people/pabo/movie-review-data/
4: nlp.stanford.edu/sentiment/ Data is actually provided at the phrase level and hence we train the model on both phrases and sentences, but only score on sentences at test time, as in Socher et al. (2013), Kalchbrenner et al. (2014), and Le and Mikolov (2014). Thus the training set is an order of magnitude larger than listed in table 1.
5: cogcomp.cs.illinois.edu/Data/QA/QC/
6: www.cs.uic.edu/~liub/FBS/sentiment-analysis.html
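For the datasets marked CV in Table 1 there is no standard train/test split, so results are averaged over 10 folds. A minimal sketch of that protocol, assuming scikit-learn and a caller-supplied `train_and_score` function (both illustrative; the excerpt does not specify the tooling):

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(sentences, labels, train_and_score, n_splits=10):
    """Average test accuracy over a 10-fold split, as used for the
    datasets marked CV in Table 1."""
    sentences, labels = np.asarray(sentences), np.asarray(labels)
    scores = []
    for train_idx, test_idx in KFold(n_splits, shuffle=True).split(sentences):
        scores.append(train_and_score(
            sentences[train_idx], labels[train_idx],
            sentences[test_idx], labels[test_idx]))
    return float(np.mean(scores))
```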

3.1 Hyperparameters and Training

For all datasets we use: rectified linear units, filter windows (h) of 3, 4, 5 with 100 feature maps each, dropout rate (p) of 0.5, l2 constraint (s) of 3, and mini-batch size of 50. These values were chosen via a grid search on the SST-2 dev set. We do not otherwise perform any dataset-specific tuning other than early stopping on dev sets. For datasets without a standard dev set we randomly select 10% of the training data as the dev set. Training is done through stochastic gradient descent over shuffled mini-batches with the Adadelta update rule (Zeiler, 2012).
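A minimal training-loop sketch under these settings (Adadelta over shuffled mini-batches, plus the l2 constraint s = 3, which this sketch assumes to mean rescaling the output-layer weight vectors whenever their l2 norm exceeds s; `model` follows the SentenceCNN sketch above, and `loader` is a placeholder data loader):

```python
import torch
import torch.nn as nn

def train(model, loader, epochs, s=3.0):
    """SGD over shuffled mini-batches with the Adadelta update rule
    and an l2 max-norm constraint of s on the classifier weights."""
    opt = torch.optim.Adadelta(model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:            # loader yields mini-batches of 50
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
            # Rescale any output weight vector whose l2 norm exceeds s
            # (assumed interpretation of the constraint in the excerpt).
            with torch.no_grad():
                w = model.fc.weight
                norms = w.norm(dim=1, keepdim=True)
                w.mul_(torch.clamp(s / norms, max=1.0))
```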

3.2 Pre-trained Word Vectors

Initializing word vectors with those obtained from an unsupervised neural language model is a popular method to improve performance in the absence of a large supervised training set (Collobert et al., 2011; Socher et al., 2011; Iyyer et al., 2014). We use the publicly available word2vec vectors that were trained on 100 billion words from Google News. The vectors have dimensionality of 300 and were trained using the continuous bag-of-words architecture (Mikolov et al., 2013). Words not present in the set of pre-trained words are initialized randomly.
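A minimal sketch of that initialization, assuming gensim and the standard GoogleNews binary file (the file name and the uniform range for the random vectors are illustrative; the excerpt does not say how the random initialization is drawn):

```python
import numpy as np
from gensim.models import KeyedVectors

def build_embeddings(vocab, path="GoogleNews-vectors-negative300.bin", k=300):
    """One row per vocabulary word: the pre-trained word2vec vector if
    the word is present, otherwise a random initialization."""
    w2v = KeyedVectors.load_word2vec_format(path, binary=True)
    emb = np.empty((len(vocab), k), dtype=np.float32)
    for i, word in enumerate(vocab):
        if word in w2v:
            emb[i] = w2v[word]
        else:
            emb[i] = np.random.uniform(-0.25, 0.25, k)  # illustrative range
    return emb
```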

3.3 Model Variations

We experiment with several variants of the model.

• CNN-rand: Our baseline model where all words are randomly initialized and then modified during training.
• CNN-static: A model with pre-trained vectors from word2vec. All words, including the unknown ones that are randomly initialized, are kept static and only the other parameters of the model are learned.
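In implementation terms, the difference between these two variants comes down to how the embedding table is initialized and whether it receives gradient updates. A short sketch building on the SentenceCNN and build_embeddings sketches above (both illustrative, with `vocab` assumed to be the dataset vocabulary):

```python
import torch

model = SentenceCNN(vocab_size=len(vocab))       # sketch from section 2
emb = torch.from_numpy(build_embeddings(vocab))  # sketch from section 3.2

# CNN-rand: keep the default random embeddings and let them train.
# CNN-static: copy in the word2vec matrix and freeze it.
model.embedding.weight.data.copy_(emb)
model.embedding.weight.requires_grad = False     # word vectors kept static
```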
