This code imports the training data, calculates the weight updates (dW1, dW2, dW3, and dW4) using the delta rule, and adjusts the weights of the neural network. So far, the process is identical to that of the previous training codes. The only difference is that the hidden nodes employ the ReLU function in place of the sigmoid function. Of course, the use of a different activation function changes its derivative as well.
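To make the change concrete: with sigmoid hidden nodes, the hidden-layer delta was computed with the derivative y.*(1-y), whereas the ReLU derivative is 1 for positive inputs and 0 otherwise; the delta-rule weight update itself stays the same. The fragment below is only a sketch of this idea, with illustrative variable names (v1 for the weighted sum of a hidden layer, e1 for its back-propagated error, x for the layer input, and alpha for the learning rate) rather than an excerpt from DeepReLU.m.

% Sigmoid hidden node (previous chapters): delta1 = y1 .* (1 - y1) .* e1;
% ReLU hidden node: the derivative is 1 where v1 > 0 and 0 otherwise
delta1 = (v1 > 0) .* e1;

% The delta-rule weight update is unchanged
dW1 = alpha * delta1 * x';
W1  = W1 + dW1;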
Now, let's look into the ReLU function that DeepReLU calls. The listing of the ReLU function shown here is implemented in the ReLU.m file. As this is just a definition, further discussion is omitted.
function y = ReLU(x)
y = max(0, x);
end
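Since max operates element-wise in MATLAB, ReLU can be applied directly to the vector of weighted sums entering a hidden layer. A brief usage example with made-up values:

v = [-2; 0.5; 3];    % example weighted sums of a hidden layer
y = ReLU(v)          % yields [0; 0.5; 3]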
Now consider the portion of the code that adjusts the weights using the back-propagation algorithm. The following listing shows the extract of the delta calculation from the DeepReLU.m file.
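Before reading the listing, it may help to know the pattern it follows: the output error is propagated backward through the transposed weight matrices, and at each hidden layer the back-propagated error is multiplied by the ReLU derivative, expressed as (v > 0). The fragment below is a hedged sketch of that pattern, not the verbatim contents of DeepReLU.m; the variable names (d, y, W4, v3, and so on) are assumed from the conventions of the earlier chapters.

e      = d - y;            % output error
delta  = e;                % output-layer delta (assumed form)

e3     = W4' * delta;      % error propagated to the third hidden layer
delta3 = (v3 > 0) .* e3;   % ReLU derivative: 1 for positive weighted sums, 0 otherwise

% ... the same pattern repeats for the remaining hidden layers ...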