Statistical Learning Methods, Chapter 2: Perceptron (Study Notes)

期权论坛 · Anonymous user · 2021-5-28 02:15
<h1>Perceptron</h1>
<p>The perceptron is a linear classification model for binary classification. Its input is an instance's feature vector and its output is the instance's class, taking the two values +1 and -1.</p>
<p>The perceptron learning algorithm is simple and easy to implement, and comes in two forms: the primal form and the dual form. Proposed by Rosenblatt in 1957, the perceptron is the foundation of neural networks and support vector machines.</p>
<h2>2.1 The Perceptron Model</h2>
<p>Suppose the input space (feature space) is <img alt="X\subseteq R^{n}" class="mathcode" src="https://private.codecogs.com/gif.latex?X%5Csubseteq%20R%5E%7Bn%7D"> and the output space is <img alt="Y&#61;\left \{ &#43;1,-1 \right \}" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-e978f64b11d1dd099f9da6591c974cac.latex">. The input <img alt="x\in X" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-8a6b70050ffaaa54d613d7527d1a24eb.latex"> is the instance's feature vector, corresponding to a point in the input space (feature space); the output <img alt="y\in Y" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-486c130c5db60b67b178bd7de49ea2d1.latex"> is the instance's class. The following function from the input space to the output space:</p>
<p><img alt="f(x)&#61;sign(w\cdot x&#43;b)" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-bfd1e41edc0547cc4ff4ece569e4897b.latex"></p>
<p>is called the perceptron. Here w and b are the perceptron's parameters: <img alt="w\in R^{n}" class="mathcode" src="https://private.codecogs.com/gif.latex?w%5Cin%20R%5E%7Bn%7D"> is called the weight or weight vector, <img alt="b\in R" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-e79001b36ecd8dc78511cfbfdee14a3c.latex"> the bias, and <img alt="w\cdot x" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-b2edb063c5910cc330da14c297ef8807.latex"> denotes the inner product of w and x. sign is the sign function, i.e.</p>
<p><img alt="sign(x)&#61;\left\{\begin{matrix} &#43;1,x\geq 0\\ -1,x&lt; 0 \end{matrix}\right." class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-193cfa22fb7f059d7a4ff81a4954fdbe.latex"></p>
<p>The perceptron is a linear classification model and belongs to the family of discriminative models.</p>
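<p>As a minimal sketch of the model above (the function names and the parameter values w, b are my own, chosen for illustration, not from the book):</p>

```python
import numpy as np

def sign(z):
    """Sign function as defined above: +1 for z >= 0, -1 for z < 0."""
    return 1 if z >= 0 else -1

def perceptron_predict(w, b, x):
    """Perceptron model f(x) = sign(w . x + b)."""
    return sign(np.dot(w, x) + b)

# Hypothetical parameters w = (1, -1), b = 0:
w = np.array([1.0, -1.0])
b = 0.0
print(perceptron_predict(w, b, np.array([2.0, 1.0])))  # w.x + b = 1 >= 0, so +1
print(perceptron_predict(w, b, np.array([0.0, 3.0])))  # w.x + b = -3 < 0, so -1
```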
<h2>2.2 Perceptron Learning Strategy</h2>
<h3>2.2.1 Linear Separability of a Data Set</h3>
<p>Given a data set T, if there exists some hyperplane S that divides all the positive and negative instance points of the data set completely and correctly onto the two sides of the hyperplane, then T is called a linearly separable data set; otherwise, T is said to be linearly inseparable. For example, in the plane the XOR labeling {((0,0),-1), ((1,1),-1), ((0,1),+1), ((1,0),+1)} is not linearly separable.</p>
<h3>2.2.2 Perceptron Learning Strategy</h3>
<p>The goal of perceptron learning is to find a separating hyperplane that divides the positive and negative instance points of the training set completely and correctly, i.e., to determine the parameters w and b in the model.</p>
<p>The distance from any point <img alt="x_{0}" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-126dde8513426bc79b8c7090c5a6a8ce.latex"> in the input space to the hyperplane S is <img alt="\frac{1}{||w||}|w\cdot x_{0}&#43;b|" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-e54cd660f7aaa77d44c668bc617f6fb7.latex">, where <img alt="||w||" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-0935db3a8f1df37c4d680da1c9d74d35.latex"> is the <img alt="L_{2}" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-66f22d6a3329d849b3ea31b197a47a1d.latex"> norm of w: <img alt="||w||&#61;\sqrt{w_{1}^{2}&#43;w_{2}^{2}&#43;w_{3}^{2}&#43;\cdot \cdot \cdot &#43;w_{n}^{2}}" class="mathcode" src="https://private.codecogs.com/gif.latex?%7C%7Cw%7C%7C%3D%5Csqrt%7Bw_%7B1%7D%5E%7B2%7D&amp;plus;w_%7B2%7D%5E%7B2%7D&amp;plus;w_%7B3%7D%5E%7B2%7D&amp;plus;%5Ccdot%20%5Ccdot%20%5Ccdot%20&amp;plus;w_%7Bn%7D%5E%7B2%7D%7D"></p>
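<p>The distance formula can be sketched directly in code (a minimal illustration with a hypothetical hyperplane; the function name is my own):</p>

```python
import numpy as np

def distance_to_hyperplane(w, b, x0):
    """Distance |w . x0 + b| / ||w|| from point x0 to the hyperplane w . x + b = 0."""
    return abs(np.dot(w, x0) + b) / np.linalg.norm(w)  # np.linalg.norm gives the L2 norm

# Hyperplane x1 + x2 - 1 = 0 and point (1, 1): |1 + 1 - 1| / sqrt(2) = 1/sqrt(2)
w = np.array([1.0, 1.0])
b = -1.0
print(distance_to_hyperplane(w, b, np.array([1.0, 1.0])))  # ~0.7071
```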
<p>From the definition of <img alt="sign(x)" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-9b8e3d6c8e172435c3bddc7fb9776868.latex">, for a misclassified example <img alt="(x_{i},y_{i})" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-486d9cd3d8e990f7b234700c05dcbd35.latex"> we have <img alt="-y_{i}(w\cdot x_{i}&#43;b)&gt;0" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-5ec4266a87fec7b7df14cf226391b9a8.latex">, because for such a point w·x_i + b &gt; 0 while y_i = -1, or w·x_i + b &lt; 0 while y_i = +1.</p>
<p>Therefore, the distance from a misclassified point <img alt="x_{i}" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-d665d72d0e4b2b1a7e880fb368f4c1a8.latex"> to the hyperplane S is <img alt="-\frac{1}{||w||}y_{i}(w\cdot x_{i}&#43;b)" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-52b27de6c266823c4ee9af6923119c12.latex"></p>
<p>Thus, if M denotes the set of points misclassified by the hyperplane S, the total distance from all misclassified points to S is -\frac{1}{||w||}\sum_{x_{i}\in M}y_{i}(w\cdot x_{i}+b). Dropping the factor <img alt="\frac{1}{||w||}" class="mathcode" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-d871a9601f5186da6259bbdaedb1944c.latex"> yields the perceptron loss function L(w,b)=-\sum_{x_{i}\in M}y_{i}(w\cdot x_{i}+b). (Why this factor can be dropped is explained in Chapter 7 on support vector machines.)</p>
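<p>A minimal sketch of this loss function (names and example data are my own; following the usual perceptron convention, boundary points with y_i(w·x_i+b) = 0 are counted as misclassified):</p>

```python
import numpy as np

def perceptron_loss(w, b, X, y):
    """Perceptron loss L(w, b) = - sum over misclassified i of y_i (w . x_i + b)."""
    margins = y * (X @ w + b)   # y_i (w . x_i + b) for every example
    mis = margins <= 0          # misclassified points form the set M
    return -margins[mis].sum()

# Three points with hyperplane x1 + x2 = 0; only the second point is misclassified.
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
y = np.array([1.0, -1.0, -1.0])
print(perceptron_loss(np.array([1.0, 1.0]), 0.0, X, y))  # 1.0
```

<p>Note that the loss is zero exactly when no point is misclassified, and is non-negative otherwise, which is what makes minimizing it a sensible learning strategy.</p>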