I see it's BP; if I'm not mistaken, this is neural-network material. Don't trust translation software; here is a retranslation.

Key terms: weight, hidden unit.

Learning in such a network proceeds the same way as for perceptrons: example inputs are presented to the network, and if the network computes an output vector that matches the target, nothing is done. If there is an error (a difference between the output and the target), then the weights are adjusted to reduce this error. The trick is to assess the blame for an error and divide it among the contributing weights. In perceptrons, this is easy because there is only one weight between each input and the output. But in multilayer networks, there are many weights connecting each input to an output, and each of these weights contributes to more than one output.

The back-propagation algorithm is a sensible approach to dividing the contribution of each weight. As in the perceptron learning algorithm, we try to minimize the error between each target output and the output actually computed by the network. At the output layer, the weight-update rule is very similar to the rule for the perceptron. There are two differences: the activation of the hidden unit $a_j$ is used instead of the input value, and the rule contains a term for the gradient of the activation function. If $Err_i$ is the error $(T_i - O_i)$ at the output node, then the weight-update rule for the link from unit $j$ to unit $i$ is

$$W_{j,i} \leftarrow W_{j,i} + \alpha \times a_j \times Err_i \times g'(in_i)$$

where $g'$ is the derivative of the activation function $g$. We will find it convenient to define a new term $\Delta_i$, which for output nodes is defined as $\Delta_i = Err_i \, g'(in_i)$. The update rule then becomes

$$W_{j,i} \leftarrow W_{j,i} + \alpha \times a_j \times \Delta_i$$
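Since the update rule is the whole trick here, a small numerical sketch may help. Below is a minimal NumPy illustration of the output-layer update, assuming a sigmoid activation for $g$; the layer sizes, learning rate, and variable names are my own choices for illustration, not from the textbook:

```python
import numpy as np

def g(x):
    """Sigmoid activation function."""
    return 1.0 / (1.0 + np.exp(-x))

def g_prime(x):
    """Derivative of the sigmoid: g'(x) = g(x) * (1 - g(x))."""
    s = g(x)
    return s * (1.0 - s)

rng = np.random.default_rng(0)
a = rng.random(4)                 # activations a_j of 4 hidden units (made up)
W = rng.standard_normal((4, 2))   # weight W[j, i] from hidden unit j to output unit i
T = np.array([1.0, 0.0])          # target output vector T_i
alpha = 0.1                       # learning rate

in_ = a @ W                       # weighted input in_i to each output node
O = g(in_)                        # actual output O_i computed by the network
Err = T - O                       # error Err_i = T_i - O_i at each output node
delta = Err * g_prime(in_)        # Delta_i = Err_i * g'(in_i)

# Update rule W[j,i] <- W[j,i] + alpha * a_j * Delta_i,
# applied to all (j, i) pairs at once via an outer product.
W += alpha * np.outer(a, delta)
```

Here `delta[i]` is the blame assigned to output node $i$, and the outer product spreads that blame across every hidden-to-output weight in one step, which is exactly what the per-link rule above does.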