Source: Internet | Editor: 程序博客网 | Time: 2024/06/02 04:14
A MATLAB/Octave implementation of a 1-10-1 BP (back-propagation) neural network, trained with a momentum term to fit sin(x) on [0, 2*pi]. The functions tansig, dtansig, and sumsqr come from MATLAB's Neural Network Toolbox.

%% construct the network
net.nIn = 1;      % the input layer has 1 neuron
net.nHidden = 10; % the hidden layer has 10 neurons
net.nOut = 1;     % the output layer has 1 neuron
w = 2*(rand(net.nHidden, net.nIn) - 1/2); % weights of the hidden layer
b = 2*(rand(net.nHidden, 1) - 1/2);       % thresholds (biases) of the hidden layer
net.w1 = [w, b];  % concatenate weights and thresholds into one matrix
W = 2*(rand(net.nOut, net.nHidden) - 1/2); % weights of the output layer
B = 2*(rand(net.nOut, 1) - 1/2);           % threshold of the output layer
net.w2 = [W, B];  % concatenate weights and threshold into one matrix

%% set the parameters
mc = 0.01;       % momentum coefficient
eta = 0.001;     % learning rate
maxiter = 50000; % number of training iterations

%% set the training samples
trainIn = 0:pi/4:2*pi;   % inputs of the training samples
trainOut = sin(trainIn); % outputs of the training samples
trainnum = 9;            % number of training samples
SampIn = [trainIn; ones(1, trainnum)]; % network input; the threshold input is a constant 1
expectedOut = trainOut;  % the expected output is the training-sample output
errRec = zeros(1, maxiter); % stores the training error at each iteration

%% set the testing samples
testIn = 0:pi/180:2*pi; % inputs of the testing samples
testOut = sin(testIn);  % outputs of the testing samples
testnum = 361;          % number of testing samples

%% the training procedure
for i = 1:maxiter
    hid_input = net.w1 * SampIn;  % weighted sum of the hidden layer
    hid_out = tansig(hid_input);  % output of the hidden layer
    ou_input1 = [hid_out; ones(1, trainnum)]; % output-layer input; the threshold input is a constant 1
    ou_input2 = net.w2 * ou_input1; % weighted sum of the output layer
    out_out = 2 * tansig(ou_input2); % network output, scaled to the range [-2, 2]
    err = expectedOut - out_out;  % error vector
    sse = sumsqr(err);            % sum of squared errors
    errRec(i) = sse;              % store the error

    %% back-propagation of the error
    DELTA = err .* dtansig(ou_input2, out_out/2); % gradient between the hidden and output layers
    delta = net.w2(:, 1:end-1)' * DELTA .* dtansig(hid_input, hid_out); % gradient between the input and hidden layers
    dWEX = DELTA * ou_input1'; % weight increment of the output layer
    dwex = delta * SampIn';    % weight increment of the hidden layer
    if i == 1 % on the first update there is no previous step, so skip the momentum term
        net.w2 = net.w2 + eta * dWEX;
        net.w1 = net.w1 + eta * dwex;
    else      % otherwise blend in the momentum term
        net.w2 = net.w2 + (1-mc)*eta*dWEX + mc*dWEXOld;
        net.w1 = net.w1 + (1-mc)*eta*dwex + mc*dwexOld;
    end
    % record the previous step (scaled by eta) for the momentum term;
    % the original stored the raw gradient here, which makes mc act as a
    % second learning rate rather than true momentum
    dWEXOld = eta * dWEX;
    dwexOld = eta * dwex;
end

%% display the results
subplot(1,2,1);
plot(errRec); % plot the training error
title('error curve');
xlabel('iteration times');
ylabel('error');

realIn = [testIn; ones(1, testnum)]; % testing-sample input with constant-1 threshold input
realhid_input = net.w1 * realIn;     % weighted sum of the hidden layer
realhid_out = tansig(realhid_input); % output of the hidden layer
realou_input1 = [realhid_out; ones(1, testnum)]; % output-layer input with constant-1 threshold input
realou_input2 = net.w2 * realou_input1; % weighted sum of the output layer
realout_out = 2 * tansig(realou_input2); % output of the output layer
realerr = testOut - realout_out; % error vector on the test set
realsse = sumsqr(realerr);       % sum of squared test errors

subplot(1,2,2);
plot(testIn, realout_out, testIn, sin(testIn)); % plot the network's test output against the true sin curve
axis([0 2*pi -1.1 1.1]); % set the coordinate range
set(gca, 'XTick', pi/4:pi/4:2*pi);
grid on;
title('the testing output and the standard output');
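For readers without MATLAB's Neural Network Toolbox, the same training loop can be sketched in plain NumPy. This is a re-implementation for illustration, not the original code: tansig is replaced by tanh (they are the same function), dtansig(n, a) by 1 - a^2, and the momentum buffers simply start at zero instead of branching on the first iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 1, 10, 1
# weight matrix with the threshold (bias) appended as a last column, as in the MATLAB code
w1 = np.hstack([2*(rng.random((n_hidden, n_in)) - 0.5),
                2*(rng.random((n_hidden, 1)) - 0.5)])
w2 = np.hstack([2*(rng.random((n_out, n_hidden)) - 0.5),
                2*(rng.random((n_out, 1)) - 0.5)])

mc, eta, maxiter = 0.01, 0.001, 50000

train_in = np.arange(9) * np.pi / 4            # 9 samples on [0, 2*pi]
samp_in = np.vstack([train_in, np.ones_like(train_in)])  # constant-1 threshold input
expected = np.sin(train_in)[None, :]

err_rec = np.zeros(maxiter)
dW_old = np.zeros_like(w2)                     # previous steps for the momentum term
dw_old = np.zeros_like(w1)

for i in range(maxiter):
    hid_out = np.tanh(w1 @ samp_in)                        # hidden-layer output
    ou_in = np.vstack([hid_out, np.ones(train_in.size)])   # add threshold input
    out = 2 * np.tanh(w2 @ ou_in)                          # network output in [-2, 2]
    err = expected - out
    err_rec[i] = np.sum(err**2)                            # sum of squared errors

    DELTA = err * (1 - (out/2)**2)                         # output-layer gradient
    delta = (w2[:, :-1].T @ DELTA) * (1 - hid_out**2)      # hidden-layer gradient
    dW = eta * (DELTA @ ou_in.T)
    dw = eta * (delta @ samp_in.T)
    w2 += (1 - mc) * dW + mc * dW_old
    w1 += (1 - mc) * dw + mc * dw_old
    dW_old, dw_old = dW, dw

print(err_rec[0], err_rec[-1])  # the SSE should drop substantially during training
```

With mc = 0.01 the momentum contribution is small; the loop is essentially plain gradient descent on the summed (not averaged) error, which is why the learning rate is as low as 0.001.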