
Model Assessment & Selection
Dept. Computer Science & Engineering, Shanghai Jiaotong University

Outline
- Bias, Variance and Model Complexity
- The Bias-Variance Decomposition
- Optimism of the Training Error Rate
- Estimates of In-Sample Prediction Error
- The Effective Number of Parameters
- The Bayesian Approach and BIC
- Minimum Description Length
- Vapnik-Chervonenkis Dimension
- Cross-Validation
- Bootstrap Methods

Bias, Variance & Model Complexity
- The standard of model assessment: the generalization performance of a learning method.
- Ingredients: a target Y and inputs X, a prediction model f̂(X), and a loss function L(Y, f̂(X)).
- Errors of interest: the training error and the generalization (test) error.
- Typical loss functions.
- Model selection: estimating the performance of different models in order to choose the best one.
- Model assessment: having chosen a final model, estimating its prediction error on new samples.
- Practical approach: split the data set into a training set, a validation set, and a test set.
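The loss-function formulas themselves are not reproduced in this extract; as a sketch of the standard choices (assumed, not copied from the slide), the typical losses for a quantitative response and for classification are:

```latex
% Typical loss functions (standard choices; assumed, not reproduced from the slide)
L(Y, \hat f(X)) = (Y - \hat f(X))^2             % squared error (quantitative response)
L(Y, \hat f(X)) = |Y - \hat f(X)|               % absolute error (quantitative response)
L(G, \hat G(X)) = I\big(G \neq \hat G(X)\big)   % 0-1 loss (categorical response)
```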

Bias-Variance Decomposition
- Basic model: an additive-error model Y = f(X) + ε.
- The expected prediction error of a regression fit decomposes into irreducible error, squared bias, and variance.
- The more complex the model, the lower the (squared) bias but the higher the variance.
- For the k-NN regression fit, the prediction error can be written explicitly as a function of k.
- For the linear model fit, the in-sample error shows that model complexity is directly related to the number of parameters p.
- For ridge regression, the squared bias can be split further.
- Schematic of the behavior of bias and variance (figure).
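The decomposition formulas are elided in this extract; assuming the usual additive-error model with Var(ε) = σ_ε², the standard form of the decomposition and its k-NN special case are presumably:

```latex
% Expected prediction error at a point x_0 (standard decomposition; assumed form)
\mathrm{Err}(x_0) = E\big[(Y - \hat f(x_0))^2 \mid X = x_0\big]
                  = \sigma_\varepsilon^2
                    + \big[E\hat f(x_0) - f(x_0)\big]^2
                    + E\big[\hat f(x_0) - E\hat f(x_0)\big]^2
                  = \text{irreducible error} + \mathrm{Bias}^2\big(\hat f(x_0)\big) + \mathrm{Var}\big(\hat f(x_0)\big).

% Special case: k-nearest-neighbor regression fit
\mathrm{Err}(x_0) = \sigma_\varepsilon^2
                    + \Big[f(x_0) - \frac{1}{k}\sum_{\ell=1}^{k} f\big(x_{(\ell)}\big)\Big]^2
                    + \frac{\sigma_\varepsilon^2}{k}.
```

Increasing k lowers the variance term but typically raises the bias term, which is the complexity trade-off stated above.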

Optimism of the Training Error Rate
- The training error is typically smaller than the true error, because the same data are used both to fit and to assess the method.
- Err is the extra-sample error; Err_in is the in-sample error.
- The optimism is the difference between the in-sample error and the training error.
- For squared error, 0-1, and other loss functions, the optimism can be expressed through the covariance between fitted and observed values.
- If the fit is a linear fit with d inputs or basis functions, a simplification is available.
- The optimism grows as the number of inputs or basis functions increases, and decreases as the number of training samples increases.
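The optimism formulas are elided; the standard expressions they presumably refer to (with err̄ the training error and Err_in the in-sample error) are:

```latex
% Optimism of the training error rate (standard expressions; assumed form)
\mathrm{op} \;\equiv\; \mathrm{Err_{in}} - \overline{\mathrm{err}},
\qquad
\omega \;\equiv\; E_{\mathbf y}(\mathrm{op}) \;=\; \frac{2}{N}\sum_{i=1}^{N} \mathrm{Cov}(\hat y_i, y_i)
\quad \text{(squared-error, 0-1 and other losses).}

% Simplification when \hat{\mathbf y} is obtained by a linear fit with d inputs or basis functions:
\omega = 2\,\frac{d}{N}\,\sigma_\varepsilon^2 .
```

The simplified form makes the bullet above explicit: the optimism grows linearly in d and shrinks as N grows.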

Estimates of In-sample Prediction Error
- The general form of the in-sample estimates is the training error plus an estimate of the optimism.
- Cp: the estimate obtained when d parameters are fit under squared error loss.
- Using a log-likelihood function to estimate the in-sample error leads to the Akaike Information Criterion.

Akaike Information Criterion
- AIC is a similar but more generally applicable estimate of Err_in.
- For a set of models indexed by a tuning parameter α, AIC(α) provides an estimate of the test error curve; we choose the tuning parameter that minimizes it.
- For the logistic regression model, the binomial log-likelihood is used.
- For the Gaussian model, the AIC statistic equals the Cp statistic.
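As a hedged illustration of picking a tuning parameter by AIC (not code from the lecture; the data, the polynomial-degree family, and the helper names are invented for the example), the sketch below fits polynomials of increasing degree and selects the degree with minimal AIC. For a Gaussian model this is equivalent, up to constants, to minimizing Cp.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = f(x) + noise (illustrative only, not the lecture's example)
N = 200
x = rng.uniform(-1, 1, N)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=N)

def fit_poly(x, y, degree):
    """Least-squares polynomial fit; returns fitted values and number of parameters d."""
    X = np.vander(x, degree + 1)          # design matrix with d = degree + 1 columns
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X @ beta, X.shape[1]

def aic_gaussian(y, y_hat, d):
    """AIC for a Gaussian model with unknown variance: N*log(RSS/N) + 2*d (up to constants)."""
    N = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return N * np.log(rss / N) + 2 * d

scores = {deg: aic_gaussian(y, *fit_poly(x, y, deg)) for deg in range(1, 11)}
best = min(scores, key=scores.get)
print("AIC by degree:", {k: round(v, 1) for k, v in scores.items()})
print("degree chosen by AIC:", best)
```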

Akaike Information Criterion
- Phoneme recognition example (figure).

Effective Number of Parameters
- A linear fitting method can be written as ŷ = S y.
- Effective number of parameters: df(S) = trace(S).
- If S is an orthogonal projection matrix onto a basis set spanned by M features, then trace(S) = M.
- trace(S) is the correct quantity to replace d in the Cp statistic.
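A minimal sketch (not from the lecture; the ridge example and the function name are assumptions) of the effective number of parameters df(λ) = trace(S_λ) for the ridge smoother S_λ = X(XᵀX + λI)⁻¹Xᵀ:

```python
import numpy as np

def effective_df_ridge(X, lam):
    """Effective number of parameters df(lambda) = trace(S_lambda)
    for the ridge smoother S_lambda = X (X^T X + lambda I)^{-1} X^T."""
    p = X.shape[1]
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return np.trace(S)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

# lambda = 0 reduces to ordinary least squares: df equals the number of columns (10);
# increasing lambda shrinks the effective number of parameters toward 0.
for lam in [0.0, 1.0, 10.0, 100.0]:
    print(lam, round(effective_df_ridge(X, lam), 2))
```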

Bayesian Approach & BIC
- The Bayesian Information Criterion (BIC) is defined from the maximized log-likelihood and the number of parameters d.
- For a Gaussian model with variance σ_ε², −2·loglik reduces to the residual sum of squares divided by σ_ε².
- BIC is therefore proportional to AIC (Cp), with the factor 2 replaced by log N.
- BIC tends to favor simpler models and penalizes complex models more heavily.
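The BIC formulas are elided in this extract; the standard definitions presumably intended are:

```latex
% BIC (standard form; assumed, not reproduced from the slide)
\mathrm{BIC} = -2\,\mathrm{loglik} + (\log N)\, d.

% Gaussian model with known variance \sigma_\varepsilon^2:
-2\,\mathrm{loglik} \;\propto\; \sum_{i=1}^{N} \frac{\big(y_i - \hat f(x_i)\big)^2}{\sigma_\varepsilon^2},
\qquad
\mathrm{BIC} \;\propto\; \frac{N}{\sigma_\varepsilon^2}
  \Big[\overline{\mathrm{err}} + (\log N)\,\frac{d}{N}\,\sigma_\varepsilon^2\Big].
```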

Bayesian Model Selection
- BIC can be derived from Bayesian model selection.
- Candidate models M_m, with model parameters θ_m and a prior distribution.
- Compute the posterior probability of each model given the data.
- To compare two models, form the posterior odds: if the odds are greater than 1, model m is chosen; otherwise choose model ℓ.
- Bayes factor: the contribution of the data to the posterior odds.
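The posterior and Bayes-factor expressions are elided; in standard notation (Z denoting the training data) they are presumably:

```latex
% Posterior probability of a model and posterior odds (standard expressions; assumed form)
\Pr(\mathcal{M}_m \mid \mathbf{Z}) \;\propto\; \Pr(\mathcal{M}_m)\,\Pr(\mathbf{Z}\mid\mathcal{M}_m),
\qquad
\Pr(\mathbf{Z}\mid\mathcal{M}_m) = \int \Pr(\mathbf{Z}\mid\theta_m,\mathcal{M}_m)\,
                                        \Pr(\theta_m\mid\mathcal{M}_m)\,d\theta_m .

\frac{\Pr(\mathcal{M}_m \mid \mathbf{Z})}{\Pr(\mathcal{M}_\ell \mid \mathbf{Z})}
= \frac{\Pr(\mathcal{M}_m)}{\Pr(\mathcal{M}_\ell)} \cdot
  \underbrace{\frac{\Pr(\mathbf{Z}\mid\mathcal{M}_m)}{\Pr(\mathbf{Z}\mid\mathcal{M}_\ell)}}_{\mathrm{BF}(\mathbf{Z})\ =\ \text{Bayes factor}} .
```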

Bayesian Model Selection
- If the prior over models is uniform, Pr(M_m) is constant.
- Choosing the model with minimum BIC is then equivalent to choosing the model with maximum (approximate) posterior probability.
- Advantage: if the set of candidate models includes the true model, the probability that BIC selects the correct model tends to one as the sample size tends to infinity.

Minimum Description Length (MDL)
- Origin: optimal coding.
- Messages: z1 z2 z3 z4; code 1: 0 10 110 111; code 2: 110 10 111 0.
- Principle: use the shortest codes for the most frequently sent messages.
- If message zi is sent with probability Pr(zi), Shannon's theorem says to use a code of length li = −log2 Pr(zi).
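A small sketch (not from the lecture) of the Shannon code-length rule; the message probabilities below are an assumption, chosen so that the optimal lengths reproduce code 1 above:

```python
import math

# Assumed message probabilities (illustrative; chosen so that the optimal
# lengths reproduce code 1 above: 0, 10, 110, 111).
probs = {"z1": 1/2, "z2": 1/4, "z3": 1/8, "z4": 1/8}

# Shannon: transmit message z_i with a code of length l_i = -log2 Pr(z_i).
lengths = {z: -math.log2(p) for z, p in probs.items()}
avg_length = sum(p * l for p, l in zip(probs.values(), lengths.values()))

print("optimal code lengths:", lengths)        # {'z1': 1.0, 'z2': 2.0, 'z3': 3.0, 'z4': 3.0}
print("expected message length:", avg_length)  # equals the entropy, here 1.75 bits
```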

Model Selection by MDL
- The MDL principle: we should choose the model that minimizes the total description length, i.e. the length needed to transmit the data given the model plus the length needed to transmit the model itself.

Vapnik-Chervonenkis Dimension
- Problem: how do we choose the number of parameters d of a model? This quantity represents the model's complexity.
- The VC dimension is an important measure of model complexity.
- The VC dimension of a class of functions is defined as the largest number of points that can be shattered by members of the class.
- The class of straight lines in the plane has VC dimension 3; the class sin(ax) has infinite VC dimension.
- The VC dimension of a real-valued function class is defined as the VC dimension of the corresponding indicator class.
- The VC dimension provides an estimate (bound) of the generalization error: let the VC dimension of the class be h and the number of samples be N.
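The bound itself is elided in this extract; a commonly quoted form of the VC generalization bound for binary classification (stated here as an assumption, with constants a1, a2 and confidence level 1 − η) is:

```latex
% VC generalization bound, binary classification (holds with probability at least 1 - \eta; assumed form)
\mathrm{Err}_{\mathcal T} \;\le\; \overline{\mathrm{err}}_{\mathcal T}
  + \frac{\epsilon}{2}\Big(1 + \sqrt{1 + \tfrac{4\,\overline{\mathrm{err}}_{\mathcal T}}{\epsilon}}\Big),
\qquad
\epsilon = a_1\,\frac{h\big[\log(a_2 N/h) + 1\big] - \log(\eta/4)}{N}.
```

The bound grows with the VC dimension h and shrinks with the sample size N, mirroring the role of d in the earlier criteria.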

Cross-Validation
- (figure)

Bootstrap Methods
- Basic idea: draw datasets by sampling with replacement from the training data, each bootstrap dataset having the same size as the original training set.
- Repeating this B times produces B bootstrap datasets.
- How can these bootstrap datasets be used for prediction?
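Neither resampling slide carries formulas in this extract; as a hedged illustration (not the lecture's code; the data, model, and function names are invented), the sketch below estimates the prediction error of a least-squares fit by K-fold cross-validation and by a simple bootstrap (plug-in version, not the .632 estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data and model (assumptions, not from the lecture)
N = 120
X = rng.normal(size=(N, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(scale=1.0, size=N)

def lstsq_fit(Xtr, ytr):
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return beta

def kfold_cv_error(X, y, K=10):
    """K-fold cross-validation estimate of squared prediction error."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, K)
    errs = []
    for k in range(K):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        beta = lstsq_fit(X[train], y[train])
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return np.mean(errs)

def bootstrap_error(X, y, B=100):
    """Simple bootstrap estimate: fit on each bootstrap sample, evaluate on the original data."""
    errs = []
    for _ in range(B):
        sample = rng.integers(0, len(y), size=len(y))  # draw with replacement, same size as training set
        beta = lstsq_fit(X[sample], y[sample])
        errs.append(np.mean((y - X @ beta) ** 2))
    return np.mean(errs)

print("10-fold CV error:", round(kfold_cv_error(X, y), 3))
print("bootstrap error :", round(bootstrap_error(X, y), 3))
```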
