
MATLAB Neural Network Algorithms

Published: 2022-04-02 13:47:34

⑴ A concrete algorithm for training a BP neural network in MATLAB

The transfer function of a BP neural network is usually a sigmoid, and the learning algorithm is usually gradient descent. A concrete program example follows.
Example 1: train a BP network with the momentum gradient descent algorithm.
The training samples are defined as follows:
Input vectors:
p = [-1 -2 3 1
-1 1 5 -3]
Target vector: t = [-1 -1 1 1]
Solution: the MATLAB program for this example is as follows:

close all
clear
echo on
clc
% NEWFF: creates a new feedforward network
% TRAIN: trains the BP network
% SIM: simulates the BP network
pause
% press any key to continue
clc
% define the training samples
% P is the input vector
P=[-1, -2, 3, 1; -1, 1, 5, -3];
% T is the target vector
T=[-1, -1, 1, 1];
pause;
clc
% create a new feedforward network
net=newff(minmax(P),[3,1],{'tansig','purelin'},'traingdm')
% current input-layer weights and biases
inputWeights=net.IW{1,1}
inputbias=net.b{1}
% current layer weights and biases
layerWeights=net.LW{2,1}
layerbias=net.b{2}
pause
clc
% set the training parameters
net.trainParam.show = 50;
net.trainParam.lr = 0.05;     % learning rate
net.trainParam.mc = 0.9;      % momentum coefficient
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-3;
pause
clc
% call the TRAINGDM algorithm to train the BP network
[net,tr]=train(net,P,T);
pause
clc
% simulate the BP network
A = sim(net,P)
% compute the simulation error
E = T - A
MSE=mse(E)
pause
clc
echo off

⑵ What algorithm does MATLAB's RBF neural network function newrbe use?

newrbe designs an exact radial basis network. Usage:
P = [1 2 3];        % inputs
T = [2.0 4.1 5.9];  % targets
net = newrbe(P,T);  % create the network

Its algorithm: the generated network has two layers. The first layer consists of radbas neurons that compute their weighted inputs with dist and their net inputs with netprod; the second layer consists of purelin neurons that compute their weighted inputs with dotprod and their net inputs with netsum. Both layers have biases b.
newrbe sets the first-layer weights to P' and the first-layer biases to 0.8326/spread (0.8326 with the default spread of 1). The second-layer weights IW{2,1} are then obtained from the first layer's simulated outputs A{1}, and the biases b{2} are found by solving the linear system [W{2,1} b{2}] * [A{1}; ones] = T.
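
For reference, here is a minimal sketch of that exact-design procedure, assuming the default spread of 1; the variable names are illustrative, not newrbe's internals:

P = [1 2 3];                 % inputs  (R x Q)
T = [2.0 4.1 5.9];           % targets (S x Q)
spread = 1;
W1 = P';                     % first-layer weights: one RBF centre per sample
b1 = 0.8326/spread;          % first-layer bias
Q = size(P,2);
D = zeros(Q,Q);              % Euclidean distances, as dist would compute them
for i = 1:Q
    for j = 1:Q
        D(i,j) = norm(W1(i,:)' - P(:,j));
    end
end
A1 = exp(-(b1*D).^2);        % radbas(n) = exp(-n^2)
Wb = T/[A1; ones(1,Q)];      % solve [W2 b2] * [A1; ones] = T
W2 = Wb(:,1:end-1);
b2 = Wb(:,end);
Ahat = W2*A1 + b2            % reproduces T exactly on the training data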

⑶ My BP neural network MATLAB program just keeps showing that network training is in progress. How do I fix this?

It finished running here, training for 101 epochs, without any changes to your source code. Do you get any error message? This was run under MATLAB R2012a.

⑷ The implementation of the MATLAB BP training functions (traingdm, trainlm, trainbr) and the corresponding VC source code

VC source code? You must be joking...
Here is the M-code for trainlm:

function [out1,out2] = trainlm(varargin)
%TRAINLM Levenberg-Marquardt backpropagation.
%
% <a href="matlab:doc trainlm">trainlm</a> is a network training function that updates weight and
% bias states according to Levenberg-Marquardt optimization.
%
% <a href="matlab:doc trainlm">trainlm</a> is often the fastest backpropagation algorithm in the toolbox,
% and is highly recommended as a first choice supervised algorithm,
% although it does require more memory than other algorithms.
%
% [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T) takes a network NET, input data X
% and target data T and returns the network after training it, and a
% training record TR.
%
% [NET,TR] = <a href="matlab:doc trainlm">trainlm</a>(NET,X,T,Xi,Ai,EW) takes additional optional
% arguments suitable for training dynamic networks and training with
% error weights. Xi and Ai are the initial input and layer delays states
% respectively and EW defines error weights used to indicate
% the relative importance of each target value.
%
% Training occurs according to training parameters, with default values.
% Any or all of these can be overridden with parameter name/value argument
% pairs appended to the input argument list, or by appending a structure
% argument with fields having one or more of these names.
% show                25  Epochs between displays
% showCommandLine      0  generate command line output
% showWindow           1  show training GUI
% epochs            1000  Maximum number of epochs to train
% goal                 0  Performance goal
% max_fail             6  Maximum validation failures
% min_grad          1e-5  Minimum performance gradient
% mu               0.001  Initial Mu
% mu_dec             0.1  Mu decrease factor
% mu_inc              10  Mu increase factor
% mu_max            1e10  Maximum Mu
% time               inf  Maximum time to train in seconds
%
% To make this the default training function for a network, and view
% and/or change parameter settings, use these two properties:
%
% net.<a href="matlab:doc nnproperty.net_trainFcn">trainFcn</a> = 'trainlm';
% net.<a href="matlab:doc nnproperty.net_trainParam">trainParam</a>
%
% See also trainscg, feedforwardnet, narxnet.

% Mark Beale, 11-31-97, ODJ 11/20/98
% Updated by Orlando De Jesús, Martin Hagan, Dynamic Training 7-20-05
% Copyright 1992-2010 The MathWorks, Inc.
% $Revision: 1.1.6.11.2.2 $ $Date: 2010/07/23 15:40:16 $

%% =======================================================
% BOILERPLATE_START
% This code is the same for all Training Functions.

persistent INFO;
if isempty(INFO), INFO = get_info; end
nnassert.minargs(nargin,1);
in1 = varargin{1};
if ischar(in1)
switch (in1)
case 'info'
out1 = INFO;
case 'check_param'
nnassert.minargs(nargin,2);
param = varargin{2};
err = nntest.param(INFO.parameters,param);
if isempty(err)
err = check_param(param);
end
if nargout > 0
out1 = err;
elseif ~isempty(err)
nnerr.throw('Type',err);
end
otherwise,
try
out1 = eval(['INFO.' in1]);
catch me, nnerr.throw(['Unrecognized first argument: ''' in1 ''''])
end
end
return
end
nnassert.minargs(nargin,2);
net = nn.hints(nntype.network('format',in1,'NET'));
oldTrainFcn = net.trainFcn;
oldTrainParam = net.trainParam;
if ~strcmp(net.trainFcn,mfilename)
net.trainFcn = mfilename;
net.trainParam = INFO.defaultParam;
end
[args,param] = nnparam.extract_param(varargin(2:end),net.trainParam);
err = nntest.param(INFO.parameters,param);
if ~isempty(err), nnerr.throw(nnerr.value(err,'NET.trainParam')); end
if INFO.isSupervised && isempty(net.performFcn) % TODO - fill in MSE
nnerr.throw('Training function is supervised but NET.performFcn is undefined.');
end
if INFO.usesGradient && isempty(net.derivFcn) % TODO - fill in
nnerr.throw('Training function uses derivatives but NET.derivFcn is undefined.');
end
if net.hint.zeroDelay, nnerr.throw('NET contains a zero-delay loop.'); end
[X,T,Xi,Ai,EW] = nnmisc.defaults(args,{},{},{},{},{1});
X = nntype.data('format',X,'Inputs X');
T = nntype.data('format',T,'Targets T');
Xi = nntype.data('format',Xi,'Input states Xi');
Ai = nntype.data('format',Ai,'Layer states Ai');
EW = nntype.nndata_pos('format',EW,'Error weights EW');
% Prepare Data
[net,data,tr,~,err] = nntraining.setup(net,mfilename,X,Xi,Ai,T,EW);
if ~isempty(err), nnerr.throw('Args',err), end
% Train
net = struct(net);
fcns = nn.subfcns(net);
[net,tr] = train_network(net,tr,data,fcns,param);
tr = nntraining.tr_clip(tr);
if isfield(tr,'perf')
tr.best_perf = tr.perf(tr.best_epoch+1);
end
if isfield(tr,'vperf')
tr.best_vperf = tr.vperf(tr.best_epoch+1);
end
if isfield(tr,'tperf')
tr.best_tperf = tr.tperf(tr.best_epoch+1);
end
net.trainFcn = oldTrainFcn;
net.trainParam = oldTrainParam;
out1 = network(net);
out2 = tr;
end

% BOILERPLATE_END
%% =======================================================

% TODO - MU => MU_START
% TODO - alternate parameter names (i.e. MU for MU_START)

function info = get_info()
info = nnfcnTraining(mfilename,'Levenberg-Marquardt',7.0,true,true,...
[ ...
nnetParamInfo('showWindow','Show Training Window Feedback','nntype.bool_scalar',true,...
'Display training window during training.'), ...
nnetParamInfo('showCommandLine','Show Command Line Feedback','nntype.bool_scalar',false,...
'Generate command line output during training.'), ...
nnetParamInfo('show','Command Line Frequency','nntype.strict_pos_int_inf_scalar',25,...
'Frequency to update command line.'), ...
...
nnetParamInfo('epochs','Maximum Epochs','nntype.pos_int_scalar',1000,...
'Maximum number of training iterations before training is stopped.'), ...
nnetParamInfo('time','Maximum Training Time','nntype.pos_inf_scalar',inf,...
'Maximum time in seconds before training is stopped.'), ...
...
nnetParamInfo('goal','Performance Goal','nntype.pos_scalar',0,...
'Performance goal.'), ...
nnetParamInfo('min_grad','Minimum Gradient','nntype.pos_scalar',1e-5,...
'Minimum performance gradient before training is stopped.'), ...
nnetParamInfo('max_fail','Maximum Validation Checks','nntype.strict_pos_int_scalar',6,...
'Maximum number of validation checks before training is stopped.'), ...
...
nnetParamInfo('mu','Mu','nntype.pos_scalar',0.001,...
'Mu.'), ...
nnetParamInfo('mu_dec','Mu Decrease Ratio','nntype.real_0_to_1',0.1,...
'Ratio to decrease mu.'), ...
nnetParamInfo('mu_inc','Mu Increase Ratio','nntype.over1',10,...
'Ratio to increase mu.'), ...
nnetParamInfo('mu_max','Maximum mu','nntype.strict_pos_scalar',1e10,...
'Maximum mu before training is stopped.'), ...
], ...
[ ...
nntraining.state_info('gradient','Gradient','continuous','log') ...
nntraining.state_info('mu','Mu','continuous','log') ...
nntraining.state_info('val_fail','Validation Checks','discrete','linear') ...
]);
end

function err = check_param(param)
err = '';
end

function [net,tr] = train_network(net,tr,data,fcns,param)

% Checks
if isempty(net.performFcn)
warning('nnet:trainlm:Performance',nnwarning.empty_performfcn_corrected);
net.performFcn = 'mse';
net.performParam = mse('defaultParam');
tr.performFcn = net.performFcn;
tr.performParam = net.performParam;
end
if isempty(strmatch(net.performFcn,{'sse','mse'},'exact'))
warning('nnet:trainlm:Performance',nnwarning.nonjacobian_performfcn_replaced);
net.performFcn = 'mse';
net.performParam = mse('defaultParam');
tr.performFcn = net.performFcn;
tr.performParam = net.performParam;
end

% Initialize
startTime = clock;
original_net = net;
[perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
[best,val_fail] = nntraining.validation_start(net,perf,vperf);
WB = getwb(net);
lengthWB = length(WB);
ii = sparse(1:lengthWB,1:lengthWB,ones(1,lengthWB));
mu = param.mu;

% Training Record
tr.best_epoch = 0;
tr.goal = param.goal;
tr.states = {'epoch','time','perf','vperf','tperf','mu','gradient','val_fail'};

% Status
status = ...
[ ...
nntraining.status('Epoch','iterations','linear','discrete',0,param.epochs,0), ...
nntraining.status('Time','seconds','linear','discrete',0,param.time,0), ...
nntraining.status('Performance','','log','continuous',perf,param.goal,perf) ...
nntraining.status('Gradient','','log','continuous',gradient,param.min_grad,gradient) ...
nntraining.status('Mu','','log','continuous',mu,param.mu_max,mu) ...
nntraining.status('Validation Checks','','linear','discrete',0,param.max_fail,0) ...
];
nn_train_feedback('start',net,status);

% Train
for epoch = 0:param.epochs

% Stopping Criteria
current_time = etime(clock,startTime);
[userStop,userCancel] = nntraintool('check');
if userStop, tr.stop = 'User stop.'; net = best.net;
elseif userCancel, tr.stop = 'User cancel.'; net = original_net;
elseif (perf <= param.goal), tr.stop = 'Performance goal met.'; net = best.net;
elseif (epoch == param.epochs), tr.stop = 'Maximum epoch reached.'; net = best.net;
elseif (current_time >= param.time), tr.stop = 'Maximum time elapsed.'; net = best.net;
elseif (gradient <= param.min_grad), tr.stop = 'Minimum gradient reached.'; net = best.net;
elseif (mu >= param.mu_max), tr.stop = 'Maximum MU reached.'; net = best.net;
elseif (val_fail >= param.max_fail), tr.stop = 'Validation stop.'; net = best.net;
end

% Feedback
tr = nntraining.tr_update(tr,[epoch current_time perf vperf tperf mu gradient val_fail]);
nn_train_feedback('update',net,status,tr,data, ...
[epoch,current_time,best.perf,gradient,mu,val_fail]);

% Stop
if ~isempty(tr.stop), break, end

% Levenberg Marquardt
while (mu <= param.mu_max)
% CHECK FOR SINGULAR MATRIX
[msgstr,msgid] = lastwarn;
lastwarn('MATLAB:nothing','MATLAB:nothing')
warnstate = warning('off','all');
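% Levenberg-Marquardt step: solve (J'J + mu*I)*dWB = -J'*e, where jj = J'*J and je = J'*e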
dWB = -(jj+ii*mu) \ je;
[~,msgid1] = lastwarn;
flag_inv = isequal(msgid1,'MATLAB:nothing');
if flag_inv, lastwarn(msgstr,msgid); end;
warning(warnstate)
WB2 = WB + dWB;
net2 = setwb(net,WB2);
perf2 = nntraining.train_perf(net2,data,fcns);

% TODO - possible speed enhancement
% - retain intermediate variables for Memory Reduction = 1

if (perf2 < perf) && flag_inv
WB = WB2; net = net2;
mu = max(mu*param.mu_dec,1e-20);
break
end
mu = mu * param.mu_inc;
end

% Validation
[perf,vperf,tperf,je,jj,gradient] = nntraining.perfs_jejj(net,data,fcns);
[best,tr,val_fail] = nntraining.validation(best,tr,val_fail,net,perf,vperf,epoch);
end
end

⑸ Does anyone know how to implement a variable-learning-rate BP training algorithm in MATLAB?

Well...
One heuristic improvement is to use an adaptive learning rate that depends on the error function values of successive iterations.
The adaptive-learning-rate gradient descent algorithm tries to keep training stable while taking steps as large as possible, adjusting the learning rate according to the local error surface. When the error decreases toward the goal, the update direction is correct, so the learning rate is multiplied by the increase factor lr_inc to lengthen the step. When the error instead grows by more than a set factor C, the update overshot, so the learning rate is multiplied by the decrease factor lr_dec to shorten the step. In all other cases the learning rate is left unchanged.
MATLAB has a built-in training function for this variable learning rate:
bpnet=newff(x,[60,4],{'logsig','logsig'},'traingda'); % 'traingda' = gradient descent with adaptive learning rate
bpnet.trainParam.show=50;
bpnet.trainParam.lr=0.01;            % initial learning rate
bpnet.trainParam.epochs=3000;
bpnet.trainParam.goal=0.247;
bpnet.trainParam.lr_inc=1.05;        % learning-rate increase factor, default 1.05
bpnet.trainParam.lr_dec=0.7;         % learning-rate decrease factor, default 0.7
bpnet.trainParam.max_perf_inc=1.04;  % decrease the learning rate when the error grows to 1.04x its previous value; default 1.04
[bpnet]=train(bpnet,p,t);
save bpnet;

⑹ Implementing the BP neural network algorithm in MATLAB

The BP network is the most basic and most widely used neural network, and MATLAB has dedicated functions to build, train, and run it: mainly newff(), train(), and sim(). With the normalization function mapminmax() and the other net parameters (lr, goal, and so on) set up, the network can learn from historical data and then make predictions. The attachment is a basic prediction example (originally for electric load forecasting, but the approach is generic); study it and it will become clear.
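
Since the attachment is not reproduced here, the sketch below shows the workflow just described; the data, layer sizes, and parameter values are made up for illustration:

% made-up historical data: 3 features x 20 samples, 1 output
P = rand(3,20);
T = sum(P) + 0.1*randn(1,20);
% normalize inputs and targets to [-1, 1]
[Pn, psIn] = mapminmax(P);
[Tn, psOut] = mapminmax(T);
% build and train a 3-10-1 BP network
net = newff(minmax(Pn), [10,1], {'tansig','purelin'}, 'trainlm');
net.trainParam.goal = 1e-4;
net.trainParam.epochs = 1000;
net = train(net, Pn, Tn);
% predict on new data and undo the normalization
Pnew = rand(3,5);                                % new inputs (illustrative)
PnewN = mapminmax('apply', Pnew, psIn);
Yhat = mapminmax('reverse', sim(net, PnewN), psOut)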

⑺ Which algorithms in MATLAB's neural network framework can be used to search for an optimal solution?

(Upvote if this helps.)
Once a network's structure is fixed (say 2 inputs, 3 hidden nodes, 1 output), its weights and biases have to be found. These are generally found with gradient-based search algorithms (gradient descent, Newton's method, Levenberg-Marquardt, the dogleg method, and so on). Such an algorithm first initializes a solution; from that solution it determines a search direction and a step length (the algorithms differ in how they choose these, which is why each suits different problems) such that moving the solution along that direction by that length decreases the objective function (in a neural network, the prediction error). The moved point becomes the new solution, the next direction and step length are determined, and the iteration continues; the objective (the prediction error) keeps falling until a solution with a reasonably small error is found.
During the search, too large a step makes the search coarse and may jump over good solutions, while too small a step makes the search very slow, so setting the step length appropriately matters.
The learning rate scales the raw step (in gradient descent, the length of the gradient): with lr = 0.1, each gradient descent update takes a step of 0.1 * gradient.
In the MATLAB neural network toolbox, lr is the initial learning rate. To choose a sensible step length at different stages of the search, the toolbox uses a variable learning rate: it adjusts the learning rate according to how the previous update affected the objective, and then derives the step length from the learning rate.
The mechanism is:
if newE2/E2 > maxE_inc        % if the error grew by more than the threshold
    lr = lr * lr_dec;         % decrease the learning rate
elseif newE2 < E2             % if the error decreased
    lr = lr * lr_inc;         % increase the learning rate
end
For details, see the article "[重要]寫自己的BP神經網路(traingd)" on 《神經網路之家》 (nnetinfo), which contains simplified code for the toolbox's gradient descent method.
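
To show the whole loop in one place, here is a minimal sketch of batch gradient descent with this adaptive learning rate for a one-hidden-layer network. It is illustrative only, not toolbox code; all data, sizes, and names are made up, and it relies on implicit expansion (MATLAB R2016b or later):

P = rand(2,50); T = sum(P);             % made-up data: 2 inputs, 1 output
W1 = randn(3,2); b1 = randn(3,1);       % hidden layer (3 nodes, tanh)
W2 = randn(1,3); b2 = randn;            % output layer (linear)
lr = 0.01; lr_inc = 1.05; lr_dec = 0.7; maxE_inc = 1.04;
A1 = tanh(W1*P + b1); Y = W2*A1 + b2;   % forward pass
E2 = mean((T - Y).^2);
for epoch = 1:1000
    % backpropagate the mse gradient
    dY = -2*(T - Y)/numel(T);
    dW2 = dY*A1'; db2 = sum(dY);
    dA1 = (W2'*dY).*(1 - A1.^2);        % tanh derivative
    dW1 = dA1*P'; db1 = sum(dA1,2);
    % trial step with the current learning rate
    W1n = W1 - lr*dW1; b1n = b1 - lr*db1;
    W2n = W2 - lr*dW2; b2n = b2 - lr*db2;
    A1n = tanh(W1n*P + b1n); Yn = W2n*A1n + b2n;
    newE2 = mean((T - Yn).^2);
    if newE2/E2 > maxE_inc              % error grew too much: reject step, shrink lr
        lr = lr * lr_dec;
    else                                % accept the step
        if newE2 < E2, lr = lr * lr_inc; end   % error fell: grow lr
        W1 = W1n; b1 = b1n; W2 = W2n; b2 = b2n;
        A1 = A1n; Y = Yn; E2 = newE2;
    end
end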

⑻ I implemented the BP neural network algorithm myself in MATLAB but cannot get the expected results; mainly, the error is too large

lr=0.05;         % lr is the learning rate
err_goal=0.1;    % err_goal is the minimum desired error
max_epoch=15000; % max_epoch is the maximum number of training epochs
a=0.9;           % a is the momentum (inertia) coefficient
Oi=0;
Ok=0;            % initialize the hidden-layer and output-layer neuron outputs to 0
Who provided these initial parameters to you?

Try adjusting these parameters and see.
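
In such a hand-written BP loop, the momentum coefficient a typically enters the weight update as follows (a sketch with illustrative names and made-up data; delta stands for the layer's backpropagated error term):

lr = 0.05; a = 0.9;                  % learning rate and momentum coefficient
X = rand(3,10); delta = rand(1,10);  % made-up layer input and error term
W = randn(1,3); dW_prev = zeros(1,3);
dW = lr*(delta*X') + a*dW_prev;      % momentum smooths successive updates
W = W + dW;
dW_prev = dW;                        % remembered for the next iteration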

⑼ Solving an unconstrained optimization problem with a neural network algorithm in MATLAB

Program 1: main function for training the BP weights with a genetic algorithm

function net=GABPNET(XX,YY)
% Optimize the BP network's weights and biases with a genetic algorithm,
% then train the network with the BP algorithm
% normalize the data
nntwarn off
XX=[1:19;2:20;3:21;4:22]';
YY=[1:4];
XX=premnmx(XX);
YY=premnmx(YY);
YY
% create the network
net=newff(minmax(XX),[19,25,1],{'tansig','tansig','purelin'},'trainlm');
% optimize the network with the genetic algorithm
P=XX;
T=YY;
R=size(P,1);
S2=size(T,1);
S1=25;                % number of hidden nodes
S=R*S1+S1*S2+S1+S2;   % GA encoding length
aa=ones(S,1)*[-1,1];
popu=50;              % population size
save data2 XX YY      % store the variables XX and YY in the MAT-file data2
initPpp=initializega(popu,aa,'gabpEval');   % initialize the population
gen=100;              % number of generations
% call the gaot toolbox; the objective function is defined in gabpEval
[x,endPop,bPop,trace]=ga(aa,'gabpEval',[],initPpp,[1e-6 1 1],'maxGenTerm',gen,...
    'normGeomSelect',[0.09],['arithXover'],[2],'nonUnifMutation',[2 gen 3]);
% plot the convergence curves
figure(1)
plot(trace(:,1),1./trace(:,3),'r-');
hold on
plot(trace(:,1),1./trace(:,2),'b-');
xlabel('Generation');
ylabel('Sum-Squared Error');
figure(2)
plot(trace(:,1),trace(:,3),'r-');
hold on
plot(trace(:,1),trace(:,2),'b-');
xlabel('Generation');
ylabel('Fitness');

⑽ BP neural networks in MATLAB

In principle, a neural network can predict future points.
In effect, after training, the network has fitted the functional relationship between the input and output data. If the training is good enough, the fitted relationship is accurate enough to predict what output other inputs would produce.
To predict the R values at the two points t=[6 7], first train the network with t=[1 2 3 4 5] as input and R=[12 13 14 14 15] as output. After training, feed in t=[2 3 4 5 6]; barring surprises, the output array should be [13 14 14 15 X], where X is the predicted R at t=6. Then use t=[3 4 5 6 7] as input to obtain the R value at t=7 in the same way.
According to my network's prediction, R=15 at t=6 and R=15 at t=7. I don't know whether this result is correct, because neural networks usually need a lot of training data, and the data given here seem too few to fit the correct function.
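
A minimal sketch of this procedure with the old toolbox functions (illustrative; the result depends on the random initialization):

t = [1 2 3 4 5];  R = [12 13 14 14 15];
net = newff(minmax(t), [5,1], {'tansig','purelin'}, 'trainlm');
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-4;
net = train(net, t, R);
out6 = sim(net, [2 3 4 5 6]);   % last element is the predicted R at t=6
out7 = sim(net, [3 4 5 6 7]);   % last element is the predicted R at t=7
R6 = out6(end)
R7 = out7(end)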
