Perceptron
Concept
The perceptron was proposed by Rosenblatt in 1957 and is the foundation of neural networks and support vector machines.
The perceptron is a linear model for binary classification. Its input is the feature vector of an instance and its output is the instance's class, with +1 for the positive class and -1 for the negative class. The perceptron is a discriminative model; its goal is to separate the positive and negative instances with a separating hyperplane.
Principle
For classification tasks a plain linear regression model is not enough, but we can apply an activation function, which is non-linear, on top of the linear model; the inverse of the activation function is called the link function. There are two styles of linear classification:
Hard classification: the model outputs the class label of the observation directly. Representative models:
Linear Discriminant Analysis (Fisher discriminant)
Soft classification: the model outputs the probability of each class; depending on how the probability is modelled there are two families:
Generative (probabilistic generative models; use Bayes' theorem to compute the posterior and then infer): Gaussian Discriminant Analysis (GDA), naive Bayes, etc.
Discriminative (probabilistic discriminative models; model the conditional probability directly): logistic regression (Logistic Regression)
Model:
$$f(x)=\mathrm{sign}(w^Tx),\quad x_i\in\mathbb{R}^p,\ y_i\in\{+1,-1\},\ i=1,2,\cdots,N$$
where
$$\mathrm{sign}(a)=\begin{cases}+1, & a\ge 0\\ -1, & a<0\end{cases}$$
Here the input is augmented with a constant component, so $w_0$ plays the role of the bias $b$. In this way the output of the linear model is mapped onto the two class labels.
Define the loss function as the number of misclassified samples. The most direct choice is the indicator function, but it is not differentiable, so instead we define:
$$\begin{align} L(w)&=\sum\limits_{i=1}^NI\{y_iw^Tx_i<0\}\\ &=\sum\limits_{x_i\in\mathcal{D}_{wrong}}-y_iw^Tx_i \end{align}$$
where $y_iw^Tx_i<0$ is the condition under which an update is made.
Here $\mathcal{D}_{wrong}$ is the set of misclassified samples. In practice, training uses the stochastic gradient descent (SGD) algorithm. The partial derivative of the loss with respect to $w$ is:
$$\frac{\partial}{\partial w}L(w)=\sum\limits_{x_i\in\mathcal{D}_{wrong}}-y_ix_i$$
When there are very many samples this is expensive to compute. In fact we do not need the exact direction of steepest descent, only a direction along which the loss decreases in expectation; computing that expectation would require the true distribution, which in practice can only be estimated from the training data (the empirical risk):
The larger $N$ is, the better the sample approximates the true distribution; however, for data with standard deviation $\sigma$, the achievable standard error only shrinks in proportion to $\sqrt{N}$, while the computational cost grows in proportion to $N$. We can therefore use only a few samples per step: the loss still decreases in expectation while the computation speeds up. Using a single misclassified sample per step gives the following update rule (a step along the negative gradient, by the Taylor expansion):
$$w^{t+1}\leftarrow w^{t}+\lambda y_ix_i$$
This converges; moreover, updating with a single observation adds some randomness, which reduces the chance of getting stuck in a local minimum. On larger datasets, mini-batch stochastic gradient descent is commonly used.
Creating the data
#%%
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
# generate synthetic data
"""
* n_samples: number of samples
* n_features: number of features = n_informative + n_redundant + n_repeated
* n_classes: number of classes
* data: feature matrix; target: class labels
"""
data,target=make_classification(n_samples=200, n_features=2,n_classes=2,n_informative=1,n_redundant=0,n_repeated=0,n_clusters_per_class=1)
Plotting
复制 """
* c:指定目标值,根据目标值,进行划分类
* n_features:维度
* n_classes:划分几类
"""
plt.scatter(data[:, 0], data[:, 1], c=target,s=50)
Training
lambda_ = 0.1  # learning rate
epochs = 10  # number of passes over the data
n_samples, n_features = data.shape  # number of samples, number of features
Y = target.reshape(-1,1)  # reshape to an N x 1 column vector
Y[Y == 0] = -1  # map label 0 to -1
X = np.c_[data, np.ones(shape=(n_samples,))]  # append a constant feature x0 = 1
w = np.random.random(size=(n_features+1,1))  # n_features + 1 (bias)
XY = np.c_[X,Y]  # (x_i, y_i) pairs
for _ in range(epochs):
    np.random.shuffle(XY)
    for i in range(n_samples):
        x_i = XY[i, :-1]
        y_i = XY[i, -1:]
        if (x_i.dot(w) * y_i)[0] < 0:  # misclassified: y_i * w^T x_i < 0
            nw = (x_i * y_i).reshape(-1, 1)
            w = w + lambda_ * nw  # w <- w + lambda * y_i * x_i
w1 = w[0,0]
w2 = w[1,0]
bias = w[2,0]
w1,w2,bias
#%%
x = np.arange(np.min(X), np.max(X), 0.1)
y = -w1 / w2 * x - bias / w2
plt.scatter(X[:,0], X[:,1], c=target, s=50)
plt.plot(x, y, 'r')
plt.show()
sklearn implementation
from sklearn.linear_model import Perceptron
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
X,Y = make_classification(n_samples=200, n_features=2,n_classes=2,n_informative=1,n_redundant=0,n_repeated=0,n_clusters_per_class=1)
per = Perceptron()
per.fit(X,Y)
# scatter plot
plt.scatter(X[:,0], X[:,1], c=Y, s=50)
w = per.coef_  # weight matrix
b = per.intercept_  # intercept of the hyperplane
x = np.arange(np.min(X), np.max(X), 0.1)
y = x * (-w[0][0] / w[0][1]) - b[0] / w[0][1]  # x2 = -(w1/w2) x1 - b/w2
plt.plot(x,y)
Linear Discriminant Analysis (LDA)
Principle
Assume:
$$\begin{align} X&=(x_1,x_2,\cdots,x_N)^T = \begin{bmatrix} x_{1}^T \\ x_{2}^T \\ \vdots \\ x_{N}^T \end{bmatrix}_{N\times p} \\ Y & = \begin{bmatrix} y_{1} \\ y_{2} \\ \vdots \\ y_{N} \end{bmatrix}_{N\times 1} \end{align}$$
$$\{(x_i,y_i)\}^N_{i=1},\ x_i\in\mathbb{R}^p,\ y_i\in\{+1,-1\},\ \text{where}\ +1\rightarrow c_1,\ -1\rightarrow c_2$$
$$x_{c1} = \{x_i\mid y_i=+1\},\quad x_{c2} = \{x_i\mid y_i=-1\}$$
$$|x_{c1}| = N_1,\quad |x_{c2}|=N_2,\quad N=N_1+N_2$$
The basic idea of LDA is to choose a direction and project the samples onto it; the projected data should satisfy two conditions so that they can be separated more easily:
After projection, the within-class variance should be as small as possible and the between-class variance as large as possible.
First the projection. If a data point is the vector $x$, its projection onto the direction $w$ (taken as a unit vector) is the scalar:
$$z=w^T\cdot x\ (=|w|\cdot|x|\cos\theta)\ \Rightarrow\ z_i = w^Tx_i$$
The variance of the projections is:
$$\begin{align} z_i & = w^Tx_i\\ \overline{z}&=\frac{1}{N}\sum\limits_{i=1}^{N}z_i=\frac{1}{N}\sum\limits_{i=1}^{N}w^Tx_i \\ S_z&=\frac{1}{N}\sum\limits_{i=1}^{N}(z_i-\overline{z})(z_i-\overline{z})^T\nonumber\\ &=\frac{1}{N}\sum\limits_{i=1}^{N}(w^Tx_i-\overline{z})(w^Tx_i-\overline{z})^T\nonumber \end{align}$$
For the first condition, samples of the same class should be close to each other. Suppose the two classes contain $N_1$ and $N_2$ samples; we characterise the spread within each class by its covariance matrix, writing $S$ for the covariance of the original data:
$$\begin{align} C_1:\ Var_z[C_1]&=\frac{1}{N_1}\sum\limits_{i=1}^{N_1}(z_i-\overline{z_{c1}})(z_i-\overline{z_{c1}})^T\nonumber\\ &=\frac{1}{N_1}\sum\limits_{i=1}^{N_1}(w^Tx_i-\frac{1}{N_1}\sum\limits_{i=1}^{N_1}w^Tx_i)(w^Tx_i-\frac{1}{N_1}\sum\limits_{i=1}^{N_1}w^Tx_i)^T\nonumber\\ &=w^T\frac{1}{N_1}\sum\limits_{i=1}^{N_1}(x_i-\overline{x_{c1}})(x_i-\overline{x_{c1}})^Tw\nonumber\\ &=w^TS_1w\\ C_2:\ Var_z[C_2]&=\frac{1}{N_2}\sum\limits_{i=1}^{N_2}(z_i-\overline{z_{c2}})(z_i-\overline{z_{c2}})^T\nonumber\\ &=w^TS_2w \end{align}$$
(Note that the covariance involves outer products of vectors, not scalar products.)
The within-class scatter can therefore be written as:
$$Var_z[C_1]+Var_z[C_2]=w^T(S_1+S_2)w$$
For the second condition, the distance between the two class means represents the between-class separation:
$$\begin{align} (\overline{z_{c1}}-\overline{z_{c2}})^2&=(\frac{1}{N_1}\sum\limits_{i=1}^{N_1}w^Tx_i-\frac{1}{N_2}\sum\limits_{i=1}^{N_2}w^Tx_i)^2\nonumber\\ &=(w^T(\overline{x_{c1}}-\overline{x_{c2}}))^2\nonumber\\ &=w^T(\overline{x_{c1}}-\overline{x_{c2}})(\overline{x_{c1}}-\overline{x_{c2}})^Tw \end{align}$$
Combining the two conditions, we take the ratio of these two quantities as the objective and maximise it:
$$\begin{align} \hat{w}=\mathop{argmax}\limits_wJ(w)&=\mathop{argmax}\limits_w\frac{(\overline{z_{c1}}-\overline{z_{c2}})^2}{Var_z[C_1]+Var_z[C_2]}\nonumber\\ &=\mathop{argmax}\limits_w\frac{w^T(\overline{x_{c1}}-\overline{x_{c2}})(\overline{x_{c1}}-\overline{x_{c2}})^Tw}{w^T(S_{c1}+S_{c2})w}\nonumber\\ &=\mathop{argmax}\limits_w\frac{w^TS_bw}{w^TS_ww} \end{align}$$
where $S_b$ is the between-class scatter and $S_w$ the within-class scatter:
$$S_b=(\overline{x_{c1}}-\overline{x_{c2}})(\overline{x_{c1}}-\overline{x_{c2}})^T$$
$$S_w=S_{c1}+S_{c2}$$
This ties the objective to the data and to the parameter. We now differentiate the objective; note that only the direction of $w$ matters, not its magnitude, so a single equation suffices:
$$\begin{align} &\frac{\partial}{\partial w}J(w)=2S_bw(w^TS_ww)^{-1}-2w^TS_bw(w^TS_ww)^{-2}S_ww=0\nonumber\\ &\Longrightarrow S_bw(w^TS_ww)=(w^TS_bw)S_ww\nonumber\\ &\Longrightarrow S_ww=\frac{w^TS_ww}{w^TS_bw}S_bw\nonumber\\ &\Longrightarrow w=\frac{w^TS_ww}{w^TS_bw}S_w^{-1}S_bw\\ &\Longrightarrow w\propto S_w^{-1}S_bw=S_w^{-1}(\overline{x_{c1}}-\overline{x_{c2}})(\overline{x_{c1}}-\overline{x_{c2}})^Tw\propto S_w^{-1}(\overline{x_{c1}}-\overline{x_{c2}}) \end{align}$$
Here $w:p\times 1$, $S_w:p\times p$, $S_b:p\times p$, so $w^TS_ww$ and $w^TS_bw$ are $1\times 1$, i.e. scalars, and $(\overline{x_{c1}}-\overline{x_{c2}})^Tw$ is also a scalar. Thus $w=S_w^{-1}(\overline{x_{c1}}-\overline{x_{c2}})$ is the direction we are looking for; it can finally be normalised to a unit vector.
Fisher linear discriminant function
Once $w$ is obtained, the projected centres of $C_1$ and $C_2$ on the line are:
$$C_1:\ \mu_{1} = w^T\overline{x_{c1}}=(\overline{x_{c1}}-\overline{x_{c2}})^TS_w^{-1}\overline{x_{c1}}$$
$$C_2:\ \mu_{2} = w^T\overline{x_{c2}}=(\overline{x_{c1}}-\overline{x_{c2}})^TS_w^{-1}\overline{x_{c2}}$$
The midpoint between the two projected centres is:
$$c = \frac{\mu_1+\mu_2}{2}=\frac{(\overline{x_{c1}}-\overline{x_{c2}})^TS_w^{-1}(\overline{x_{c1}}+\overline{x_{c2}})}{2}$$
The Fisher linear discriminant function is then
$$h(x)=w^Tx-c=(\overline{x_{c1}}-\overline{x_{c2}})^TS_w^{-1}x-c,$$
assigning a sample to $c_1$ when $h(x)\ge 0$ and to $c_2$ otherwise. The discriminant function thus produces new predicted labels $y$. At the same time we obtain the squared Mahalanobis distance between the two class means:
$$d = (\overline{x_{c1}}-\overline{x_{c2}})^TS_w^{-1}(\overline{x_{c1}}-\overline{x_{c2}})$$
Creating the data
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
#%%
n_samples = 100
X, y = make_classification(n_samples=n_samples, n_features=2, n_redundant=0, n_classes=2,
n_informative=1, n_clusters_per_class=1, class_sep=0.5, random_state=10)
"""
X:2维
"""
Plotting
# original data
plt.scatter(X[:, 0], X[:, 1], marker='o', c=y)
plt.plot()
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()
Implementation
# class labels
labels = np.unique(y)
Sw = 0
for label in labels:
    x = np.array([X[i] for i in range(len(X)) if y[i] == label])
    mu = np.mean(x, axis=0)
    cov = np.dot((x - mu).T, x - mu)
    Sw += cov  # Sw: p x p within-class scatter
mus = np.mean(X[y==0], axis=0) - np.mean(X[y==1], axis=0)  # difference of the class means
mua = np.mean(X[y==0], axis=0) + np.mean(X[y==1], axis=0)  # sum of the class means
w = np.dot(np.linalg.pinv(Sw), mus.reshape(-1,1))  # w ∝ Sw^{-1} (mu_0 - mu_1)
y_new = X.dot(w)  # projections of all samples onto w
n_samples = X.shape[0]
# midpoint between the two projected class centres
c = 1/2*np.dot(np.dot(mus.reshape(1,-1),np.linalg.pinv(Sw)),mua.reshape(-1,1))
# linear discriminant
h = y_new - c
y1 = []
for i in range(n_samples):
    if h[i] >= 0:  # closer to the projected centre of the y==0 class
        y1.append(0)
    else:          # closer to the projected centre of the y==1 class
        y1.append(1)
count = 0
for a, b in zip(y1, y):  # compare with the original labels
    if a == b:
        count += 1
accuracy = count / n_samples
print("accuracy:", accuracy)
plt.scatter(X[:, 0], X[:, 1], marker='o', c=y1)
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()
Plotting the decision boundary
The decision boundary satisfies $w_1x_1+w_2x_2=c$, which gives $x_2=\frac{c-w_1x_1}{w_2}$.
x1 = np.arange(np.min(X), np.max(X), 0.1)
x2 = (c[0][0] - w[0][0] * x1) / w[1][0]  # boundary: w1*x1 + w2*x2 = c
plt.scatter(X[:, 0], X[:, 1], c=y1, s=50)
plt.plot(x1, x2, 'r')
plt.show()
sklearn implementation
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
n_samples = 100
X, y = make_classification(n_samples=n_samples, n_features=2, n_redundant=0, n_classes=2,
n_informative=1, n_clusters_per_class=1, class_sep=0.5, random_state=10)
lda = LDA(n_components=1)  # at most n_classes - 1 components for LDA
x_new = lda.fit_transform(X, y)
lda.score(X,y)
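As a hedged extra, not part of the original notes: the fitted sklearn LDA classifier also exposes coef_ and intercept_, so its decision boundary can be drawn in the same way as in the Perceptron section above.
import numpy as np
import matplotlib.pyplot as plt
w = lda.coef_        # shape (1, 2) in the binary case
b = lda.intercept_   # shape (1,)
x1 = np.arange(np.min(X), np.max(X), 0.1)
x2 = -w[0][0] / w[0][1] * x1 - b[0] / w[0][1]  # w1*x1 + w2*x2 + b = 0
plt.scatter(X[:, 0], X[:, 1], c=y, s=50)
plt.plot(x1, x2, 'r')
plt.show()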
Probabilistic discriminative model: Logistic regression
Principle
Assume the data $Data:\{(x_i,y_i)\}^N_{i=1}$, $x_i\in\mathbb{R}^p$, $y_i\in\{0,1\}$.
Sometimes we only need the probability of one class, so we need a function whose output lies in the interval $[0,1]$. Consider the binary classification problem: with a discriminative model we want to model $p(C|x)$ directly. By Bayes' theorem:
$$p(C_1|x)=\frac{p(x|C_1)p(C_1)}{p(x|C_1)p(C_1)+p(x|C_2)p(C_2)}$$
Let $z=\ln\frac{p(x|C_1)p(C_1)}{p(x|C_2)p(C_2)}$, then:
$$p(C_1|x)=\frac{1}{1+e^{-z}}$$
The expression above is the logistic sigmoid function; its argument $z$ is the logarithm of the ratio of the joint probabilities of the two classes. In the discriminative approach we do not care about the value of that ratio itself: the model assumption is placed directly on $z$.
import numpy as np
import matplotlib.pyplot as plt
def Sigmoid(x):
    return 1 / (1 + np.exp(-x))
x = np.arange(-10, 10, 0.1)
y = Sigmoid(x)
plt.plot(x,y)
The model assumption of logistic regression is:
$$p(y=1|x)=\mathrm{Sigmoid}(w^Tx)=\frac{1}{1+e^{-w^Tx}}$$
By finding the best value of $w$ we obtain the best model under this assumption. For probabilistic discriminative models, the parameters are usually determined by maximum likelihood estimation.
For a single observation, the probability of obtaining class $y$ is (taking $C_1=1$, $C_2=0$):
$$p_1 = p(y=1|x) = \mathrm{Sigmoid}(w^Tx) = \frac{1}{1+e^{-w^Tx}},\quad y=1$$
$$p_0 = p(y=0|x) = 1-p(y=1|x) = \frac{e^{-w^Tx}}{1+e^{-w^Tx}},\quad y=0$$
The two cases can be written together as:
$$\begin{align} p(y|x)&=p_1^yp_0^{1-y}\\ \Rightarrow \log p(y|x) &= y\log p_1+(1-y)\log p_0 \end{align}$$
Then for $N$ independent, identically distributed observations the MLE (maximum likelihood estimate) is:
$$\begin{align} J(\theta)&=-\frac{1}{N}\sum\limits_{i=1}^N(y_i\log p_1+(1-y_i)\log p_0)\\ MLE:\hat{w}&=\mathop{argmax}_w\log p(Y|X)\\ &=\mathop{argmax}_w\log\prod_{i=1}^N p(y_i|x_i)\\ &=\mathop{argmax}_w\sum\limits_{i=1}^N\log p(y_i|x_i)\\ &=\mathop{argmax}_w\sum\limits_{i=1}^N(y_i\log p_1+(1-y_i)\log p_0) \end{align}$$
Differentiating this objective, note that:
$$p_1'=\left(\frac{1}{1+\exp(-z)}\right)'=p_1(1-p_1)$$
so that:
$$\begin{align} \frac{\partial L}{\partial w}&=\sum\limits_{i=1}^N[y_i(1-p_1)x_i-p_1x_i+y_ip_1x_i]\\ &=\sum\limits_{i=1}^N(y_i-p_1)x_i \end{align}$$
where $L(w)=\log p(Y|X)$ is the log-likelihood. Because the probabilities depend nonlinearly on $w$ and sit inside the sum, this equation has no closed-form solution. In practice, as with the perceptron, mini-batch stochastic gradient ascent (gradient descent for the minimisation form) of various batch sizes is used to maximise this function.
Maximising the likelihood ($MLE^{max}$) is equivalent to minimising the cross-entropy loss (min cross entropy).
The update rule for $w$ is therefore $w:=w+\eta\frac{\partial L}{\partial w}$ (gradient ascent on the log-likelihood), or equivalently $w:=w-\eta\frac{\partial J}{\partial w}$ for the loss $J(\theta)$.
Creating the data
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
X,y=make_classification(n_samples=100, n_features=2,n_classes=2,n_informative=1,n_redundant=0,n_repeated=0,n_clusters_per_class=1)
X = np.c_[X,np.ones(100)]
n_samples,n_features = X.shape
w = np.zeros((n_features,1))
Y = y.reshape(-1,1)
Implementation
def calc_w(x, y, w, epochs=10000, eta=0.0001, batch_size=16):
    n_samples, n_features = x.shape
    xy = np.c_[x, y]
    for _ in range(epochs):
        np.random.shuffle(xy)
        for i in range(0, n_samples, batch_size):
            batch = xy[i:i + batch_size]
            batch_x = batch[:, :-1]
            batch_y = batch[:, -1:]
            p1 = Sigmoid(batch_x.dot(w))
            # gradient ascent on the log-likelihood: w <- w + eta * sum((y - p1) * x) / batch_size
            w = w + (eta * (batch_y - p1).T.dot(batch_x) / len(batch)).T
    return w
xy = np.c_[X,y]
w = calc_w(X,y,w)
Plotting the decision boundary
Setting $w_1x_1+w_2x_2+b=0$ gives $x_2=-\frac{w_1}{w_2}x_1-\frac{b}{w_2}$.
w1 = w[0][0]
w2 = w[1][0]
bias = w[2][0]
x1 = np.arange(np.min(X), np.max(X), 0.1)
x2 = -w1 / w2 * x1 - bias / w2
plt.scatter(X[:, 0], X[:, 1], c=y,s=50)
plt.plot(x1, x2, 'r')
plt.show()
sklearn implementation
#%%
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
X,y=make_classification(n_samples=100, n_features=2,n_classes=2,n_informative=1,n_redundant=0,n_repeated=0,n_clusters_per_class=1)
lr = LogisticRegression()
lr.fit(X,y)
w1=lr.coef_[0][0]
w2=lr.coef_[0][1]
bias=lr.intercept_[0]
x1=np.arange(np.min(X),np.max(X),0.1)
x2=-w1/w2*x1-bias/w2
plt.scatter(X[:, 0], X[:, 1], c=y,s=50)
plt.plot(x1, x2, 'r')
plt.show()
#%%
# compute the accuracy
lr.score(X,y)
Probabilistic generative model: Gaussian Discriminant Analysis (GDA)
Model definition
Premise: $p(y|x) = \frac{p(x|y)p(y)}{p(x)}$; since $p(x)$ does not depend on $y$, we have $p(y|x)\propto p(x|y)p(y)$, where $p(y)$ is the prior, $p(x|y)$ the likelihood and $p(y|x)$ the posterior.
The generative model is therefore:
$$\hat y = \mathop{argmax}_{y \in \{0,1\}}\ p(y|x)=\mathop{argmax}_y\ p(y)p(x|y)$$
where,
$$x|y=1\sim\mathcal{N}(\mu_1,\Sigma)$$
$$x|y=0\sim\mathcal{N}(\mu_2,\Sigma)$$
$$y\sim\mathrm{Bernoulli}(\phi):\ p(y)=\phi^y(1-\phi)^{1-y}$$
$$\Rightarrow p(x|y)=\mathcal{N}(\mu_1,\Sigma)^y \cdot \mathcal{N}(\mu_2,\Sigma)^{1-y}$$
For an independent, identically distributed dataset the log-likelihood can then be written as:
$$\begin{align} log\text{-}likelihood:\ L(\theta)& = \log\prod_{i=1}^N p(x_i,y_i)\\ &=\sum^{N}_{i=1}\log(p(x_i|y_i)p(y_i))\\ &=\sum^{N}_{i=1}[\log p(x_i|y_i)+\log p(y_i)] \end{align}$$
Substituting the model assumptions,
$$\begin{align} L(\theta)&=\sum^{N}_{i=1}[\log \mathcal{N}(\mu_1,\Sigma)^{y_i} \cdot \mathcal{N}(\mu_2,\Sigma)^{1-y_i}+\log \phi^{y_i} \cdot (1-\phi)^{1-y_i}]\\ &=\sum^{N}_{i=1}[\log \mathcal{N}(\mu_1,\Sigma)^{y_i} +\log \mathcal{N}(\mu_2,\Sigma)^{1-y_i}+\log \phi^{y_i} + \log(1-\phi)^{1-y_i}]\\ &=\sum^{N}_{i=1}[y_i\log \mathcal{N}(\mu_1,\Sigma) +(1-y_i)\log \mathcal{N}(\mu_2,\Sigma)+y_i\log \phi +(1-y_i)\log(1-\phi)] \end{align}$$
So,
$$\theta = (\mu_1,\mu_2,\Sigma,\phi),\quad \hat\theta = \mathop{argmax}_\theta L(\theta)$$
$$y=1:\ N_1\ \text{samples},\qquad y=0:\ N_2\ \text{samples},\qquad N=N_1+N_2$$
Solving the model
First solve for $\phi$ by taking the partial derivative of $L(\theta)$ with respect to $\phi$:
$$\begin{align} \frac{\partial}{\partial \phi}L(\theta)& = \sum_{i=1}^N[\frac{y_i}{\phi}-\frac{1-y_i}{1-\phi}]\\ &=\sum_{i=1}^N[\frac{y_i}{\phi}+\frac{y_i-1}{1-\phi}]\\ &=\frac{1}{\phi(1-\phi)}\sum_{i=1}^N[{y_i}(1-\phi)+(y_i-1)\phi]\\ &=\frac{1}{\phi(1-\phi)}\sum_{i=1}^N(y_i-\phi)\\ &=\frac{1}{\phi(1-\phi)}[\sum_{i=1}^Ny_i-N\phi]\\ &=0 \end{align}$$
This gives,
$$\phi=\frac{\sum\limits_{i=1}^Ny_i}{N}=\frac{N_1}{N}$$
(the terms with $y_i=0$ vanish, so only the $N_1$ samples with $y=1$ contribute).
Solving for the means
$$\begin{align} l(\mu_1)&=\sum\limits_{i=1}^Ny_i\log \mathcal{N}(\mu_1,\Sigma)\\ &=\sum\limits_{i=1}^Ny_i\log \frac{1}{(2\pi)^{\frac{p}{2}}|\Sigma|^{\frac{1}{2}}} e^{-\frac{1}{2}(x_i-\mu_1)^T\Sigma^{-1}(x_i-\mu_1)} \end{align}$$
So,
$$\begin{align}\hat{\mu_1}&=\mathop{argmax}_{\mu_1}\sum\limits_{i=1}^Ny_i\log\mathcal{N}(\mu_1,\Sigma)\nonumber\\ &=\mathop{argmax}_{\mu_1}\sum\limits_{i=1}^Ny_i[-\frac{1}{2}(x_i-\mu_1)^T\Sigma^{-1}(x_i-\mu_1)]\\ &=\mathop{argmin}_{\mu_1}\sum\limits_{i=1}^Ny_i(x_i-\mu_1)^T\Sigma^{-1}(x_i-\mu_1) \end{align}$$
Since:
$$\begin{align} \Delta&=\sum\limits_{i=1}^Ny_i(x_i-\mu_1)^T\Sigma^{-1}(x_i-\mu_1)\\ &=\sum\limits_{i=1}^N[y_ix_i^T\Sigma^{-1}x_i-2y_i\mu_1^T\Sigma^{-1}x_i+y_i\mu_1^T\Sigma^{-1}\mu_1] \end{align}$$
taking the derivative with respect to $\mu_1$ (and multiplying on the left by $\Sigma$) gives:
$$\begin{align} \frac{\partial}{\partial \mu_1}\Delta=\sum\limits_{i=1}^N[-2y_i\Sigma^{-1}x_i+2y_i\Sigma^{-1}\mu_1]=0\nonumber\\ \Longrightarrow\mu_1=\frac{\sum\limits_{i=1}^Ny_ix_i}{\sum\limits_{i=1}^Ny_i}=\frac{\sum\limits_{i=1}^Ny_ix_i}{N_1} \end{align}$$
For $\mu_2$, the positive and negative classes are symmetric, so:
$$\mu_2=\frac{\sum\limits_{i=1}^N(1-y_i)x_i}{N_2}$$
Solving for the covariance
The hardest part is solving for $\Sigma$. The model assumes the same covariance matrix for both classes; as the derivations above show, even using different matrices would not affect the previous three parameters. First:
$$\begin{align} \sum\limits_{i=1}^N\log\mathcal{N}(\mu,\Sigma)&=\sum\limits_{i=1}^N[\log(\frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}})-\frac{1}{2}(x_i-\mu)^T\Sigma^{-1}(x_i-\mu)]\nonumber\\ &=Const-\frac{1}{2}N\log|\Sigma|-\frac{1}{2}\sum\limits_{i=1}^N(x_i-\mu)^T\Sigma^{-1}(x_i-\mu)\nonumber\\ &=Const-\frac{1}{2}N\log|\Sigma|-\frac{1}{2}Trace(\sum\limits_{i=1}^N(x_i-\mu)(x_i-\mu)^T\Sigma^{-1})\nonumber\\ &=Const-\frac{1}{2}N\log|\Sigma|-\frac{1}{2}Trace(NS\Sigma^{-1})\nonumber\\ &=Const-\frac{1}{2}N\log|\Sigma|-\frac{1}{2}N\,Trace(S\Sigma^{-1}) \end{align}$$
where $p$ is the dimension of $x$; the trace of a matrix is the sum of its diagonal entries, and the trace of a scalar is the scalar itself.
Here $Const$ collects the terms that do not depend on $\Sigma$ (coming from the $\frac{1}{(2\pi)^{p/2}}$ factor). Writing the scalar quadratic form as a trace allows the matrices to be permuted cyclically. For derivatives involving determinants and traces we use:
$$\begin{align} &\frac{\partial}{\partial A}(|A|)=|A|A^{-1}\\ &\frac{\partial}{\partial A}Trace(AB)=B^T\\ &Trace(AB)=Trace(BA) \end{align}$$
Putting everything together:
$$\begin{align} l &= \sum\limits_{i=1}^N[(1-y_i)\log\mathcal{N}(\mu_2,\Sigma)+y_i\log \mathcal{N}(\mu_1,\Sigma)]\nonumber\\ &=Const-\frac{1}{2}N\log|\Sigma|-\frac{1}{2}N_1Trace(S_1\Sigma^{-1})-\frac{1}{2}N_2Trace(S_2\Sigma^{-1}) \end{align}$$
where $S_1,S_2$ are the covariance matrices of the data within each of the two classes. Differentiating with respect to $\Sigma$:
$$\begin{align} \frac{\partial}{\partial \Sigma}l=&N\Sigma^{-1}-N_1S_1^T\Sigma^{-2}-N_2S_2^T\Sigma^{-2}=0\nonumber\\ &\Longrightarrow\Sigma=\frac{N_1S_1+N_2S_2}{N} \end{align}$$
Here the symmetry of the class covariance matrices has been used.
We have thus obtained all the parameters of the model assumption by maximum likelihood; from the model we get the joint distribution, and hence the conditional distribution used for inference.
Classification rule
With the parameters estimated, a new sample $x$ is classified as:
$$\hat y=\mathop{argmax}_{y\in\{0,1\}}\ p(y)p(x|y),$$
i.e. compare $\phi\,\mathcal{N}(x|\mu_1,\Sigma)$ with $(1-\phi)\,\mathcal{N}(x|\mu_2,\Sigma)$ and pick the larger one.
Creating the data
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
X,y=make_classification(n_samples=100, n_features=2,n_classes=2,n_informative=1,n_redundant=0,n_repeated=0,n_clusters_per_class=1)
Implementation
Compute the values of $\mu_1,\mu_2,\Sigma,\phi$:
def gda(x, y):
    n_samples, n_features = x.shape
    x1 = x[y == 1]  # samples of class y=1
    x0 = x[y == 0]  # samples of class y=0
    n1 = x1.shape[0]
    n2 = x0.shape[0]
    phi = n1 / n_samples                     # prior p(y=1)
    mu1 = x1.mean(axis=0)                    # mean of class y=1
    mu2 = x0.mean(axis=0)                    # mean of class y=0
    s1 = (x1 - mu1).T.dot(x1 - mu1) / n1     # covariance of class y=1
    s2 = (x0 - mu2).T.dot(x0 - mu2) / n2     # covariance of class y=0
    sigma = (n1 * s1 + n2 * s2) / n_samples  # shared covariance
    return phi, mu1, mu2, sigma
phi,mu1,mu2,sigma = gda(X,y)
def Gaussian(x, mean, cov):
    """
    Probability density function of a multivariate Gaussian.
    :param x: input sample
    :param mean: mean vector
    :param cov: covariance matrix
    :return: density of x
    """
    dim = np.shape(cov)[0]
    # regularise in case cov is singular
    covdet = np.linalg.det(cov + np.eye(dim) * 0.001)
    covinv = np.linalg.inv(cov + np.eye(dim) * 0.001)
    xdiff = (x - mean).reshape((1, dim))
    # probability density
    p = 1.0 / (np.power(np.power(2 * np.pi, dim) * np.abs(covdet), 0.5)) * \
        np.exp(-0.5 * xdiff.dot(covinv).dot(xdiff.T))[0][0]
    return p
def predict(x, mu1, mu2, sigma):
    y = []
    for xi in x:
        # assign the class whose Gaussian density is larger
        if Gaussian(xi, mu1, sigma) >= Gaussian(xi, mu2, sigma):
            y.append(1)
        else:
            y.append(0)
    return y
Plotting
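The plotting cell is empty in the original notes; as a minimal sketch that reuses the predict helper and the parameters returned by gda above (all defined in the previous cells), the samples can be coloured by their predicted class:
y_pred = predict(X, mu1, mu2, sigma)  # labels predicted by the GDA model
plt.scatter(X[:, 0], X[:, 1], marker='o', c=y_pred, s=50)
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()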
Probabilistic generative model: Naive Bayes
Idea: the naive Bayes assumption (conditional independence assumption); it is also the simplest (directed) probabilistic graphical model.
Motivation: to simplify the computation.
Naive Bayes makes an assumption about the relationship between the attributes of the data. In general we need the probability $p(x|y)$; since $x$ has $p$ dimensions, we would have to estimate the joint distribution over all of them, and sampling such a high-dimensional space accurately enough requires a very large number of samples.
In general directed probabilistic graphical models, different conditional independence assumptions are made between the attribute dimensions; the simplest of them is the conditional independence assumption used by naive Bayes:
$$p(x|y)=\prod\limits_{i=1}^pp(x_i|y)$$
that is:
$$x_i\perp x_j|y,\ \forall\ i\ne j$$
Then, using Bayes' theorem, for a single observation:
$$p(y|x)=\frac{p(x|y)p(y)}{p(x)}=\frac{\prod\limits_{i=1}^pp(x_i|y)p(y)}{p(x)}$$
Further assumptions are made about the single-dimension conditional probabilities and the class prior:
If $x_i$ is a continuous variable: $p(x_i|y)=\mathcal{N}(\mu_i,\sigma_i^2)$.
If $x_i$ is a discrete variable: a categorical distribution, $p(x_i=i|y)=\theta_i,\ \sum\limits_{i=1}^K\theta_i=1$.
The class prior (for binary classification): $p(y)=\phi^y(1-\phi)^{1-y}$.
These parameters are usually estimated by MLE directly on the dataset; since the relationships between the dimensions are not needed, the required amount of data is greatly reduced. Once estimated, the parameters are substituted back into Bayes' theorem to obtain the posterior distribution over the classes.
Parameter estimation uses MLE (maximum likelihood).
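The original notes give no code for this section. As a hedged illustration (not part of the original notes, with all names invented for the example), a Gaussian naive Bayes classifier that estimates per-class priors and per-dimension Gaussians by MLE, on the same kind of synthetic data as the other sections, could look like this:
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
X, y = make_classification(n_samples=100, n_features=2, n_classes=2, n_informative=1,
                           n_redundant=0, n_repeated=0, n_clusters_per_class=1)
def naive_bayes_fit(x, y):
    # MLE under the naive Bayes assumption: a class prior plus an independent
    # Gaussian N(mu_i, sigma_i^2) for every feature dimension of every class
    params = {}
    for c in np.unique(y):
        xc = x[y == c]
        params[c] = (len(xc) / len(x),        # prior p(y=c)
                     xc.mean(axis=0),         # per-dimension means
                     xc.var(axis=0) + 1e-9)   # per-dimension variances
    return params
def naive_bayes_predict(x, params):
    preds = []
    for xi in x:
        scores = {}
        for c, (prior, mu, var) in params.items():
            # log p(y=c) + sum_i log N(x_i | mu_i, sigma_i^2)
            log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (xi - mu) ** 2 / var)
            scores[c] = np.log(prior) + log_lik
        preds.append(max(scores, key=scores.get))
    return np.array(preds)
params = naive_bayes_fit(X, y)
print("from-scratch accuracy:", np.mean(naive_bayes_predict(X, params) == y))
# for comparison, sklearn's Gaussian naive Bayes on the same data
gnb = GaussianNB().fit(X, y)
print("sklearn accuracy:", gnb.score(X, y))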