SIAM J. MATRIX ANAL. APPL.
Vol. 9, No. 2, April 1988
© 1988 Society for Industrial and Applied Mathematics

ON MINIMIZING THE MAXIMUM EIGENVALUE OF A SYMMETRIC MATRIX*

MICHAEL L. OVERTON†

Abstract. An important optimization problem that arises in control is to minimize φ(x), the largest eigenvalue (in magnitude) of a symmetric matrix function of x. If the matrix function is affine, φ(x) is convex. However, φ(x) is not differentiable, since the eigenvalues are not differentiable at points where they coalesce. In this paper an algorithm that converges to the minimum of φ(x) at a quadratic rate is outlined. Second derivatives are not required to obtain quadratic convergence in cases where the solution is strongly unique. An important feature of the algorithm is the ability to split a multiple eigenvalue, if necessary, to obtain a descent direction. In these respects the new algorithm represents a significant improvement on the first-order methods of Polak and Wardi and of Doyle. The new method has much in common with the recent work of Fletcher on semidefinite constraints and of Friedland, Nocedal, and Overton on inverse eigenvalue problems. Numerical examples are presented.

Key words. nonsmooth optimization, nondifferentiable optimization, convex programming, semidefinite constraints, minimizing maximum singular value

AMS(MOS) subject classifications. 65F99, 65K10, 90C25

1. Introduction. Many important optimization problems involve eigenvalue constraints. For example, in structural engineering we may wish to minimize the cost of some structure subject to constraints on its natural frequencies. A particularly common problem, which arises in control engineering, is

(1.1)    min_{x ∈ ℝ^m} φ(x)

where

(1.2)    φ(x) = max_{1 ≤ i ≤ n} |λ_i(A(x))|,

A(x) is a real symmetric n × n matrix-valued affine function of x, and {λ_i(A(x)), i = 1, …, n} are its eigenvalues. Since A(x) is an affine function, it may be written

A(x) = A_0 + Σ_{k=1}^m x_k A_k.

The function φ(x) is convex, since the largest eigenvalue of a matrix is a convex function of the matrix elements. An important special case is

(1.3)    A_k = e_k e_k^T.

* Received by the editors February 1, 1987; accepted for publication October 1, 1987. This work was supported in part by the National Science Foundation under grant DCR-85-02014. Some of the computer program development was performed at Stanford University, Stanford, California with support from the Office of Naval Research under contract ONR N00014-82-K-0335. This paper was presented at the SIAM Conference on Linear Algebra in Signals, Systems, and Control, which was held in Boston, Massachusetts on August 12-14, 1986.

† Courant Institute of Mathematical Sciences, New York University, New York, New York 10012. This work was completed while the author was on sabbatical leave at the Centre for Mathematical Analysis and Mathematical Sciences Research Institute, Australian National University, Canberra, Australia.

256

Downloaded 04/28/14 to 222.205.5.133. Redistribution subject to SIAM license or copyright; see http://www.siam.org/journals/ojsa.php

258    MICHAEL L. OVERTON

where "≥" in (2.4), (2.5) indicates a matrix positive semidefinite constraint. The second formulation immediately suggests that the work of Fletcher (1985) is applicable. Fletcher gives a quadratically convergent algorithm to solve

(2.6)    max_{x ∈ ℝ^n} Σ_{i=1}^n x_i
(2.7)    s.t. A_0 − Diag(x) ≥ 0,  x ≥ 0,

and many of the components of his algorithm are therefore applicable to solving (2.3)-(2.5). However, the algorithm is not directly applicable, and there are several reasons why it is possible to improve on Fletcher's method in this case. One reason is that Fletcher's method solves a sequence of subproblems, each defined by a guess of the nullity of A_0 − Diag(x), until the correct nullity is identified. Such a strategy cannot easily be extended to the case of two (or more) semidefinite constraints. One goal of our algorithm is to be able to adjust multiplicity estimates while always obtaining a reduction of φ(x) at each iteration. We are able to do this by computing an eigenvalue-eigenvector factorization of A(x) at each iteration. By contrast, Fletcher's method uses a block Choleski factorization of A_0 − Diag(x), together with an exact penalty function to impose (2.7). Also, because of the special form of (2.6), (2.7), Fletcher's method does not require a technique for splitting eigenvalues. In other words, given a matrix A_0 − Diag(x) satisfying (2.7) with nullity t, it cannot be advantageous, in the sense of increasing (2.6), to reduce the multiplicity t. On the other hand it may be necessary to split a multiple eigenvalue in our case, and the ability to recognize this situation and obtain an appropriate descent direction is an important part of our algorithm. Because we use an eigenvalue factorization of the matrix A(x) at each iterate x, our method has much in common with the methods described by Friedland, Nocedal, and Overton (1987). In the latter paper several quadratically convergent methods are given to solve

(2.8)    λ_i(A(x)) = ω,   i = 1, …, t,
(2.9)    λ_i(A(x)) = μ_i, i = t + 1, …, t̄,

where (ω, {μ_i}) are given distinct values and t, t̄ (and m, the number of variables) are appropriately chosen. One of the contributions of that paper was to explain that the condition (2.8), although apparently only t conditions, actually generically imposes t(t + 1)/2 linearly independent constraints on the parameter space, and that effective numerical methods must be based on this consideration. The present paper may be viewed as generalizing the methods of Friedland, Nocedal,
and Overton to solve

(2.10)    min_{x ∈ ℝ^m} ω
(2.11)    s.t. λ_i(A(x)) = ω,  i = 1, …, t,
(2.12)         λ_i(A(x)) = −ω, i = n − s + 1, …, n,

where, as a product of the minimization process, it is established that φ = max(λ_1, −λ_n), with

(2.13)    ω = λ_1 = ⋯ = λ_t > λ_{t+1} ≥ ⋯ ≥ λ_{n−s} > λ_{n−s+1} = ⋯ = λ_n = −ω.

We shall subsequently refer to t and s as the upper and lower (eigenvalue) multiplicities of A(x). Note that it is possible that either t or s is zero. The following notation will be useful subsequently: let {q_1(x), …, q_n(x)} be any orthonormal set of eigenvectors of A(x) corresponding to {λ_i}, and let Q_1 = [q_1, …, q_t], Q_2 = [q_{n−s+1}, …, q_n].
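The objective (1.2) and the eigenvector blocks Q_1, Q_2 defined above are straightforward to prototype numerically. The following sketch is our own illustration, not code from the paper: the function names, the choice of NumPy, and the clustering tolerance `tol` are all assumptions. It evaluates φ(x) for an affine family A(x) = A_0 + Σ_k x_k A_k and extracts Q_1 and Q_2 by grouping eigenvalues that agree with λ_max and λ_min to within the tolerance.

```python
# Illustrative sketch (not from the paper): evaluate phi(x) = max_i |lambda_i(A(x))|
# for an affine family A(x) = A0 + sum_k x[k] * As[k], and recover the blocks
# Q1 (eigenvectors for the top eigenvalue cluster, upper multiplicity t) and
# Q2 (eigenvectors for the bottom cluster, lower multiplicity s).
import numpy as np

def phi(x, A0, As):
    """Largest eigenvalue in magnitude of the symmetric matrix A(x)."""
    A = A0 + sum(xk * Ak for xk, Ak in zip(x, As))
    lam = np.linalg.eigvalsh(A)          # eigenvalues in ascending order
    return max(abs(lam[0]), abs(lam[-1]))

def upper_lower_blocks(x, A0, As, tol=1e-8):
    """Q1: eigenvectors whose eigenvalues equal lambda_max within tol;
       Q2: eigenvectors whose eigenvalues equal lambda_min within tol.
       (NumPy orders eigenvalues ascending; the paper orders them descending.)"""
    A = A0 + sum(xk * Ak for xk, Ak in zip(x, As))
    lam, Q = np.linalg.eigh(A)           # lam ascending, Q has orthonormal columns
    Q1 = Q[:, np.isclose(lam, lam[-1], atol=tol)]   # t columns
    Q2 = Q[:, np.isclose(lam, lam[0], atol=tol)]    # s columns
    return Q1, Q2

# Special case (1.3): A_k = e_k e_k^T, so that A(x) = A0 + Diag(x).
n = 3
A0 = np.zeros((n, n))
As = [np.outer(np.eye(n)[k], np.eye(n)[k]) for k in range(n)]
x = np.array([1.0, 1.0, -2.0])           # A(x) = diag(1, 1, -2)
print(phi(x, A0, As))                    # phi = 2 here
Q1, Q2 = upper_lower_blocks(x, A0, As)
print(Q1.shape, Q2.shape)                # t = 2, s = 1
```

In this example the top eigenvalue 1 has multiplicity t = 2 and the bottom eigenvalue −2 has multiplicity s = 1, illustrating how φ(x) can be attained at either end of the spectrum and why the algorithm must track both multiplicities.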