One-Dimensional Search Algorithms
① What exactly are the MUSIC algorithm and the LMS algorithm, and what are they used for?
They are two different algorithms. MUSIC, the multiple signal classification algorithm, is a classical spatial-spectrum estimation method: it splits the received signal into a noise subspace and a signal subspace (the two subspaces are orthogonal) and thereby achieves super-resolution spectral estimation. MUSIC can perform DOA (direction-of-arrival) estimation as well as frequency estimation. In essence it is a noise-subspace algorithm built on a one-dimensional search; a minimal sketch follows.
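To make the subspace idea concrete, here is a minimal sketch of the MUSIC pseudospectrum scan (the one-dimensional search mentioned above) in Python/NumPy. It assumes a half-wavelength uniform linear array and narrowband sources; the function `music_spectrum` and all parameter names are illustrative assumptions, not from the original text.

```python
# Illustrative sketch only; array model and names are assumptions.
import numpy as np

def music_spectrum(R, n_sources, angles_deg):
    """MUSIC pseudospectrum over candidate angles (the 1-D search).

    R: sample covariance of the array output,
       e.g. R = X @ X.conj().T / n_snapshots.
    """
    n_sensors = R.shape[0]
    # Eigendecompose the covariance; the eigenvectors paired with the smallest
    # eigenvalues span the noise subspace (eigh returns ascending order).
    _, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, :n_sensors - n_sources]
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        # Steering vector of a half-wavelength-spaced uniform linear array.
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))
        # Peaks occur where a(theta) is (nearly) orthogonal to the noise subspace.
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spectrum)
```

The peak locations of the returned spectrum are the DOA estimates; the orthogonality of the signal and noise subspaces is exactly the property the answer above describes.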
The LMS algorithm is the least-mean-squares algorithm and is the foundation of adaptive filtering. It is an algorithm that drives the mean squared error between the filtered input signal and the desired signal to a minimum; a minimal sketch follows as well.
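A correspondingly minimal LMS sketch, assuming an FIR filter adapted by the stochastic-gradient update; the function name, tap count, and step size mu are assumptions for illustration only.

```python
# Illustrative sketch only; the FIR structure and step-size convention are assumptions.
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    """Adapt FIR weights w so that w @ (recent inputs) tracks the desired signal d."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]   # most recent n_taps input samples, newest first
        y = w @ u                   # filter output
        e[n] = d[n] - y             # error against the desired signal
        w += mu * e[n] * u          # steepest-descent step on the instantaneous squared error
    return w, e
```

Each update moves the weights a small step along the negative instantaneous gradient of the squared error, which is the mean-square-error minimization the answer refers to.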
② Theoretical development and applications of the MUSIC algorithm
The MUSIC (Multiple Signal Classification) algorithm was proposed in 1979 by R. O. Schmidt of the United States, and it marked the start of a flourishing period for spatial-spectrum direction finding. It brought the concept of "vector spaces" into spatial-spectrum estimation, and after three decades of development its theory can be considered fairly mature.
Since the 1980s, super-resolution spatial-spectrum estimation algorithms based on eigendecomposition have been studied widely and deeply, and a series of efficient processing methods has been proposed, the most classical being the multiple signal classification (MUSIC) algorithm. MUSIC needs only a one-dimensional search to find the source directions, so its computational load is already far lower than that of multi-dimensional search algorithms such as maximum likelihood (ML) and weighted subspace fitting (WSF). Algorithms of the MUSIC type share one weakness, however: they handle coherent signals poorly. Among the schemes developed for coherent sources, the classical ones are the spatial-smoothing techniques, such as spatial smoothing (SS) and modified spatial smoothing (MSS). Spatial smoothing, though, pays for this with a loss of effective array aperture, and it applies only to equally spaced uniform linear arrays (ULA).
In fact, spatial-spectrum estimation algorithms all assume that the number of sources is known, which is impossible in real applications; the number can only be estimated from the observed data. In his classic paper, R. O. Schmidt proposed estimating the number of sources from the distribution of the eigenvalues of the array covariance matrix. The method is theoretically sound, at least for independent and partially correlated sources, but because the data length is finite, in practice the source number must largely be determined by subjective judgment.
③ Table of contents of Optimization Theory and Algorithms
Chapter 1 Introduction
1.1 Overview of the Field
1.2 Linear and Nonlinear Programming Problems
*1.3 Some Mathematical Concepts
1.4 Convex Sets and Convex Functions
Exercises
Chapter 2 Basic Properties of Linear Programming
2.1 Standard Form and the Graphical Method
2.2 Basic Properties
Exercises
Chapter 3 The Simplex Method
3.1 Principles of the Simplex Method
3.2 The Two-Phase Method and the Big-M Method
3.3 Degeneracy
3.4 The Revised Simplex Method
*3.5 Bounded Variables
*3.6 Decomposition Algorithms
Exercises
Chapter 4 Duality and Sensitivity Analysis
4.1 Duality Theory in Linear Programming
4.2 The Dual Simplex Method
4.3 The Primal-Dual Algorithm
4.4 Sensitivity Analysis
*4.5 Parametric Linear Programming
Exercises
Chapter 5 The Transportation Problem
5.1 Mathematical Model and Basic Properties of the Transportation Problem
5.2 The Transportation Tableau Method
5.3 Unbalanced Transportation Problems
Exercises
Chapter 6 Interior-Point Methods for Linear Programming
*6.1 Karmarkar's Algorithm
*6.2 Interior-Point Methods
6.3 Path-Following Methods
Chapter 7 Optimality Conditions
7.1 Extremum Conditions for Unconstrained Problems
7.2 Optimality Conditions for Constrained Extremum Problems
*7.3 Duality and Saddle-Point Problems
Exercises
*Chapter 8 Algorithms
8.1 The Concept of an Algorithm
8.2 Convergence of Algorithms
Exercises
Chapter 9 One-Dimensional Search
9.1 The Concept of One-Dimensional Search
9.2 Trial-Point Methods
9.3 Function-Approximation Methods
Exercises
Chapter 10 Optimization Methods Using Derivatives
10.1 The Steepest-Descent Method
10.2 Newton's Method
10.3 The Conjugate Gradient Method
10.4 Quasi-Newton Methods
10.5 Trust-Region Methods
10.6 Least Squares
Exercises
Chapter 11 Direct Methods for Unconstrained Optimization
11.1 Pattern Search
11.2 Rosenbrock's Method
11.3 Simplex Search
11.4 Powell's Method
Exercises
Chapter 12 Feasible-Direction Methods
12.1 Zoutendijk's Feasible-Direction Method
12.2 Rosen's Gradient Projection Method
*12.3 Reduced-Gradient Methods
12.4 The Frank-Wolfe Method
Exercises
Chapter 13 Penalty Function Methods
13.1 Exterior Penalty Methods
13.2 Interior Penalty (Barrier) Methods
*13.3 Multiplier Methods
Exercises
Chapter 14 Quadratic Programming
14.1 The Lagrange Method
14.2 Active-Set Methods
14.3 Lemke's Method
14.4 Path-Following Methods
Exercises
*Chapter 15 Introduction to Integer Programming
15.1 Branch and Bound
15.2 Cutting-Plane Methods
15.3 Implicit Enumeration for 0-1 Programming
15.4 The Assignment Problem
Exercises
Chapter 16 Introduction to Dynamic Programming
16.1 Basic Concepts of Dynamic Programming
16.2 Fundamental Theorem and Fundamental Equations of Dynamic Programming
16.3 Backward and Forward Recursion
16.4 Relationship Between Dynamic and Static Programming
16.5 The Function Iteration Method
Exercises
References
④ The BP algorithm and its improvement
A major drawback of the traditional BP algorithm and its improved variants is that the error objective function is non-convex in the connection weights to be learned and has local minima. Once the weights fall into a local minimum of the weight space during training, these algorithms can hardly escape it, so the global minimum (the optimum) cannot be reached and network training fails. To address this defect, a new error objective function with a penalty term is constructed using the properties of convex functions and their conjugates, the Fenchel inequality, and the penalty-function method of constrained optimization theory.
When a feedforward neural network is trained with the new objective function, the hidden-layer outputs are also treated as optimization variables. The main features of this objective function are:
1. With the hidden-layer outputs fixed, the objective is convex in the connection weights; with the connection weights fixed, it is convex in the hidden-layer outputs. Thus when the weights and the hidden-layer outputs are optimized alternately, each subproblem is convex, there is no local-minimum issue, and the algorithm becomes less sensitive to the initial weights;
2. Because the penalty factor is increased gradually, the weight search space is comparatively large, so large networks can also be trained, and the likelihood of the training process getting trapped in a local minimum is reduced to some extent.
These properties largely overcome the major defect of earlier feedforward-network training algorithms, namely that training easily gets trapped in local minima and fails, and they open a new line of research on feedforward-network learning algorithms based on convex optimization theory. During training, the connection weights and hidden-layer outputs can be optimized alternately. Applied to feedforward-network training, the new algorithm outperforms traditional training algorithms such as the classical BP algorithm in learning speed, generalization ability, and training success rate. Numerical experiments also confirm its effectiveness.
By comparing the typical BP algorithm with the new algorithm, preliminary conclusions about their relationship are obtained. It is proved theoretically that as the penalty factor tends to positive infinity the new algorithm reduces to the BP algorithm, and numerical experiments illustrate the role and significance of the penalty factor in the training algorithm. For a three-layer feedforward network, when the penalty factor is small the local gradients of the hidden neurons can vary over a wide range, which favors updating the connection weights; when it is large the range is narrow, which hinders weight updating but improves training accuracy. This explains why the penalty factor is increased from small to large during training, and it also demonstrates the feasibility of the new algorithm, whereas the BP algorithm suffers from the serious defect that the connection weights sometimes cannot be updated at all.
Orebody prediction plays an important role in deposit geology, but because of the large number of input samples, earlier feedforward-network algorithms performed poorly on it. This thesis applies the new feedforward-network algorithm to orebody prediction and obtains good results, as expected.
Finally, the thesis points out the advantages of the new algorithm as well as aspects that remain to be improved.
Keywords: feedforward neural networks, convex optimization theory, training algorithms, orebody prediction, application
Feed forward Neural Networks Training Algorithm Based on Convex Optimization and Its Application in Deposit Forecasting
JIA Wen-chen (Computer Application)
Directed by YE Shi-wei
The BP Algorithm and Its Improvement
2.1 Steps of the BP algorithm
1° Randomly choose initial weights ω0;
2° Input a training sample pair (Xp, Yp), the learning rate η, and the error level ε;
3° Compute the node outputs of each layer in turn: opi, opj, opk;
4° Update the weights: ωk+1 = ωk + ηpk, where pk = -∇E(ωk) is the search direction and ωk is the weight variable at the k-th iteration;
5° If the error E < ε, stop; otherwise go to 3°.
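As a sketch only, the five steps might look as follows for a one-hidden-layer sigmoid network trained on a single sample pair (Xp, Yp); the architecture, the sigmoid activation, and the squared-error objective E are assumptions, since the text does not fix them.

```python
# Illustrative sketch of steps 1°-5°; network shape and activation are assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bp_train(Xp, Yp, n_hidden=4, eta=0.5, eps=1e-4, max_iter=100000):
    rng = np.random.default_rng(0)
    W1 = rng.standard_normal((n_hidden, Xp.size))    # 1°: random initial weights
    W2 = rng.standard_normal((Yp.size, n_hidden))
    for _ in range(max_iter):
        h = sigmoid(W1 @ Xp)                         # 3°: hidden-layer outputs
        y = sigmoid(W2 @ h)                          # 3°: output-layer outputs
        err = y - Yp
        E = 0.5 * np.sum(err ** 2)                   # squared-error objective
        if E < eps:                                  # 5°: stop at error level ε
            break
        delta2 = err * y * (1 - y)                   # output-layer local gradients
        delta1 = (W2.T @ delta2) * h * (1 - h)       # hidden-layer local gradients
        W2 -= eta * np.outer(delta2, h)              # 4°: ωk+1 = ωk + η·pk
        W1 -= eta * np.outer(delta1, Xp)             #     with pk = -∇E(ωk)
    return W1, W2
```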
2.2 Determining the optimal step size ηk
In the algorithm above, the learning rate η is essentially a step-size factor along the negative gradient direction. Determining, at each iteration, an optimal step ηk that makes the error decrease fastest is a classical one-dimensional search problem: find ηk such that E(ωk + ηkpk) = min over η ≥ 0 of E(ωk + ηpk). Let Φ(η) = E(ωk + ηpk); then Φ′(η) = dE(ωk + ηpk)/dη = ∇E(ωk + ηpk)ᵀpk. If ηk is a minimizer of Φ(η), then Φ′(ηk) = 0, that is, ∇E(ωk + ηkpk)ᵀpk = -pk+1ᵀpk = 0, where pk+1 = -∇E(ωk+1). The algorithm for determining ηk is as follows:
1° Set η0 = 0, h = 0.01, ε0 = 0.00001;
2° Compute Φ′(η0); if Φ′(η0) = 0, set ηk = η0 and stop;
3° Set h = 2h and η1 = η0 + h;
4° Compute Φ′(η1); if Φ′(η1) = 0, set ηk = η1 and stop;
if Φ′(η1) > 0, set a = η0, b = η1; if Φ′(η1) < 0, set η0 = η1 and go to 3°;
5° Compute Φ′(a); if Φ′(a) = 0, set ηk = a and stop;
6° Compute Φ′(b); if Φ′(b) = 0, set ηk = b and stop;
7° Compute Φ′((a+b)/2); if Φ′((a+b)/2) = 0, set ηk = (a+b)/2 and stop;
if Φ′((a+b)/2) < 0, set a = (a+b)/2; if Φ′((a+b)/2) > 0, set b = (a+b)/2;
8° If |a-b| < ε0, set ηk = (a+b)/2 and stop; otherwise go to 7°.
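Steps 1°-8° transcribe almost directly into code. The sketch below assumes the caller supplies Φ′ as a callable dphi(η); in BP training that is Φ′(η) = ∇E(ωk+ηpk)ᵀpk, and since pk is a descent direction, Φ′(0) ≤ 0, so the doubling phase is assumed to bracket a sign change eventually.

```python
# Direct transcription of steps 1°-8° above; dphi is Φ′, supplied by the caller.
def optimal_step(dphi, h=0.01, eps0=1e-5):
    """Find ηk with Φ′(ηk) ≈ 0: bracket a sign change by doubling h, then bisect."""
    eta0 = 0.0                       # 1°: η0 = 0
    if dphi(eta0) == 0:              # 2°: already at a stationary point
        return eta0
    while True:                      # 3°-4°: expand the step until Φ′ changes sign
        h *= 2
        eta1 = eta0 + h
        d1 = dphi(eta1)
        if d1 == 0:
            return eta1
        if d1 > 0:                   # sign change: the minimum is bracketed in [a, b]
            a, b = eta0, eta1
            break
        eta0 = eta1                  # Φ′(η1) < 0: keep expanding
    if dphi(a) == 0:                 # 5°
        return a
    if dphi(b) == 0:                 # 6°
        return b
    while abs(a - b) >= eps0:        # 8°: stop once the bracket is short enough
        m = (a + b) / 2              # 7°: bisect the bracket
        dm = dphi(m)
        if dm == 0:
            return m
        if dm < 0:
            a = m
        else:
            b = m
    return (a + b) / 2
```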
2.3 Analysis of the improved BP algorithm
In the improved BP algorithm above, the learning rate η is no longer chosen by the user; instead, at each iteration the computer automatically searches for the optimal step ηk. In the algorithm that determines ηk, we first set η0 = 0. From the definition Φ(η) = E(ωk+ηpk) we have Φ′(η) = dE(ωk+ηpk)/dη = ∇E(ωk+ηpk)ᵀpk, so Φ′(η0) = ∇E(ωk)ᵀpk = -pkᵀpk ≤ 0. If Φ′(η0) = 0, the descent direction pk is the zero vector, i.e. a local extremum has already been reached; otherwise Φ′(η0) < 0, and by the properties of the one-variable function Φ(η), Φ′(η0) < 0 means Φ is decreasing in a neighborhood of η0 = 0. Hence initializing η0 to 0 at each iteration is reasonable.
Compared with the original BP algorithm, the improved algorithm changes in two places: in step 2° the user no longer needs to supply a learning-rate value η; and before each weight update, that is, before step 4°, the optimal step ηk has already been computed.