Basic Concepts
A hidden Markov model (HMM) is a probabilistic model of temporal sequences. It describes a process in which a hidden Markov chain randomly generates an unobservable sequence of states, and each state then generates an observation, producing a random sequence of observations. The sequence of states generated by the hidden Markov chain is called the state sequence; the sequence of observations, one generated per state, is called the observation sequence. Each position in a sequence can be regarded as a time step.
Uses
The hidden Markov model is a statistical learning model that can be used for tagging problems. It describes the process by which a hidden Markov chain randomly generates an observation sequence, and it belongs to the family of generative models.
Prerequisites
Random Variables
A random variable is usually denoted by an uppercase letter and represents a random event. For example:

$X$: the river water is salty; $Y$: the well water is sweet.

Clearly, the two random events $X$ and $Y$ are unrelated; that is, $X$ and $Y$ are mutually independent, written $P(X,Y) = P(X)P(Y)$.
Stochastic Processes
For some families of random variables, the variables are related to one another.
For example, let $S_t$ denote the price of a stock at time $t$. Then $S_{t+1}$ and $S_t$ are certainly related; we will not yet specify exactly how, but some relationship between them certainly exists. As time $t$ varies, we can write out the sequence

$$\cdots, S_t, S_{t+1}, S_{t+2}, \cdots$$

This produces a family of random variables with quite complex interdependencies; in other words, the variables are not mutually independent. We therefore take as the object of study the whole collection of mutually dependent random variables indexed by time or by order. This leads to another concept: the stochastic process. The object of study of a stochastic process is not a single random variable but a family of random variables that are closely related (not mutually independent), written

$$\{S_t\}_{t=1}^{\infty}$$

Markov Chain / Process
A **Markov chain**, also called a Markov process, is a special kind of stochastic process: one that has the Markov property.
**Markov property**: a process has the Markov property if it satisfies $P(S_{t+1}|S_t,S_{t-1}\cdots S_1)=P(S_{t+1}|S_t)$. Simply put, $S_{t+1}$ depends on $S_t$ but not on any earlier time step; it is related only to the "most recent" state.
A real-world illustration: **the next moment depends only on the current moment, not on the past.** For example, a teacher's lecturing form tomorrow depends mostly on their state today, and essentially not at all on their state over the past ten years.
The main motivation is to simplify computation. In $P(S_{t+1}|S_t,S_{t-1}\cdots S_1)=P(S_{t+1}|S_t)$, if $S_{t+1}$ depended on all of $S_t,S_{t-1}\cdots S_1$, the computation would become too large, possibly intractable.
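To make the Markov property concrete, here is a minimal sampling sketch in which the next state is drawn from a distribution conditioned only on the current state (the 3-state transition matrix is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical 3-state transition matrix: row i holds P(S_{t+1} | S_t = i).
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

s = 0          # initial state
chain = [s]
for _ in range(10):
    s = rng.choice(3, p=P[s])  # next state drawn conditioned only on the current state
    chain.append(s)
print(chain)
```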
State Space Model
A state space model is commonly used in HMMs, Kalman filters, and particle filters. Here it refers to a Markov chain plus observation variables, i.e. Markov chain + observation.
As the (omitted) figure illustrates, s1 → s2 → s3 form a Markov chain and a1, a2, a3 are observation variables. Taking a2 as an example, a2 depends only on s2 and is unrelated to s1 and s3. The state space model can be viewed as a model that evolves out of the Markov chain; the underlying chain still satisfies:
$$P(S_{t+1} \mid S_t) = P(S_{t+1} \mid S_1,\cdots,S_t)$$
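A small sketch of this structure, with made-up parameters: the hidden states evolve as a Markov chain, and each observation is drawn from the current hidden state alone.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up parameters: 2 hidden states, 2 observation symbols.
trans = np.array([[0.8, 0.2],   # trans[i, j] = P(s_{t+1} = j | s_t = i)
                  [0.4, 0.6]])
emit = np.array([[0.9, 0.1],    # emit[i, k] = P(a_t = k | s_t = i)
                 [0.2, 0.8]])

s, states, obs = 0, [], []
for _ in range(5):
    states.append(s)
    obs.append(rng.choice(2, p=emit[s]))  # a_t depends only on s_t
    s = rng.choice(2, p=trans[s])         # s_{t+1} depends only on s_t
print(states, obs)
```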
Markov Reward Process
A **Markov reward process** is a Markov chain plus rewards: Markov chain + reward.
For example, suppose you buy a stock; every day you then receive a "return." Return here is a generalized notion: it can be positive or negative, large or small. In any case, moving from today's state $S_t$ to tomorrow's state $S_{t+1}$ yields a reward, written:

$$R_s = E[R_{t+1} \mid S_t]$$

Model Definition
A hidden Markov model is determined by the initial probability distribution, the state transition probability distribution, and the observation probability distribution. The formal definition is as follows:

Let $Q$ be the set of all possible states and $V$ the set of all possible observations:

$$Q = \{q_1,q_2,\cdots,q_N\},\quad V = \{v_1,v_2,\cdots,v_M\}$$

where $N$ is the number of possible states and $M$ is the number of possible observations.

$I$ is a state sequence of length $T$, and $O$ is the corresponding observation sequence:

$$I = (i_1,i_2,\cdots,i_T),\quad O = (o_1,o_2,\cdots,o_T)$$

$A$ is the state transition probability matrix:

$$A = [a_{ij}]_{N\times N}$$

where

$$a_{ij} = P(i_{t+1}=q_j \mid i_t=q_i),\quad i=1,2,\cdots,N;\ j=1,2,\cdots,N$$

is the probability of moving to state $q_j$ at time $t+1$ given state $q_i$ at time $t$.

$B$ is the observation probability matrix, $B = [b_j(k)]_{N\times M}$, where

$$b_j(k)=P(o_t=v_k \mid i_t=q_j),\quad k=1,2,\cdots,M;\ j=1,2,\cdots,N$$

is the probability of generating observation $v_k$ given state $q_j$ at time $t$.

$\pi$ is the initial state probability vector:

$$\pi = (\pi_i)$$

where

$$\pi_i = P(i_1=q_i),\quad i=1,2,\cdots,N$$

is the probability of being in state $q_i$ at time $t=1$.
$(i)$ The hidden Markov model $\lambda = (\pi,A,B)$
A hidden Markov model is determined by the initial state probability vector $\pi$, the state transition probability matrix $A$, and the observation probability matrix $B$. $\pi$ and $A$ determine the state sequence; $B$ determines the observation sequence. The hidden Markov model $\lambda$ is therefore written as:

$$\lambda = (\pi,A,B)$$

$(ii)$ From the definition, the hidden Markov model makes two basic assumptions:
Homogeneous Markov assumption: the state of the hidden Markov chain at any time $t$ depends only on the state at the previous time, is independent of the states and observations at all other times, and is independent of $t$ itself.

$$P(i_{t+1} \mid i_1,\cdots,i_t,o_1,\cdots,o_t) = P(i_{t+1} \mid i_t)$$

Observation independence assumption: the observation at any time depends only on the state of the Markov chain at that time, and is independent of all other observations and states.

$$P(o_t \mid i_1,\cdots,i_t,o_1,\cdots,o_{t-1}) = P(o_t \mid i_t)$$

$(iii)$ The three problems
$(1).$ Probability computation: given $\lambda$ and an observation sequence $O$, compute $P(O|\lambda)$.
$(2).$ Learning: given an observation sequence $O$, estimate the model parameters $\lambda$ that maximize $P(O|\lambda)$.
$(3).$ Prediction (decoding): given $\lambda$ and an observation sequence $O$, find the most probable corresponding state sequence $I$.
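To keep the rest of the discussion concrete, here is the example $\lambda$ used by the code section at the end of this article, written as NumPy arrays ($N=3$ states, $M=2$ observation symbols):

```python
import numpy as np

# lambda = (pi, A, B), the example model used in the code section below.
A = np.array([[0.5, 0.2, 0.3],   # A[i, j] = P(i_{t+1} = q_j | i_t = q_i)
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5],        # B[j, k] = P(o_t = v_k | i_t = q_j)
              [0.4, 0.6],
              [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])   # pi[i] = P(i_1 = q_i)
```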
Probability Computation
Direct Computation
The probability of a state sequence $I=(i_1,i_2,\cdots,i_T)$ is:

$$P(I|\lambda)= P(i_{1\cdots T}|\lambda) = P(i_{1\cdots T-1},i_T|\lambda)=P(i_T|i_{T-1})\cdot P(i_{1\cdots T-1}|\lambda) = \pi_{i_1}a_{i_1i_2}a_{i_2i_3}\cdots a_{i_{T-1}i_T} = \pi_{i_1}\prod_{t=2}^{T}a_{i_{t-1}i_t}$$

Here the transition distribution $\{a_{i_t,i_{t+1}}\}$ of state $i_t$ produces state $i_{t+1}$.
For a fixed state sequence $I=(i_1,i_2,\cdots,i_T)$, the probability of the observation sequence $O=(o_1,o_2,\cdots,o_T)$ is $P(O|I,\lambda)$:

$$P(O|I,\lambda) = b_{i_1}(o_1)b_{i_2}(o_2)\cdots b_{i_T}(o_T)$$

The observation distribution $b_{i_t}(k)$ of state $i_t$ generates $o_t$.
Then, summing over all possible state sequences $I$ gives the probability $P(O|\lambda)$ of the observation sequence $O$:

$$P(O|\lambda)=\sum_{I} P(I,O|\lambda)=\sum_{I} P(I|\lambda)P(O|I,\lambda)=\sum_{i_1,i_2,\cdots,i_T}\pi_{i_1}\prod_{t=2}^{T}a_{i_{t-1},i_t}\prod_{t=1}^{T}b_{i_t}(o_t)$$

Since the time complexity is $O(TN^T)$, this algorithm is feasible in theory but infeasible in practice.
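For tiny $N$ and $T$ the direct method can still serve as a sanity check. A sketch using the example $\lambda$ above; the 0/1 observation coding follows the code section below, where 0 = red and 1 = white:

```python
import numpy as np
from itertools import product

A = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
O = [0, 1, 0]                      # observation sequence

# Enumerate all N^T state sequences I and sum P(I|lambda) * P(O|I,lambda).
p = 0.0
for I in product(range(3), repeat=len(O)):
    p_I = pi[I[0]] * np.prod([A[I[t - 1], I[t]] for t in range(1, len(I))])
    p_O_given_I = np.prod([B[I[t], O[t]] for t in range(len(I))])
    p += p_I * p_O_given_I
print(p)                           # ~0.130218; only 27 sequences here, but N^T grows fast
```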
Forward Algorithm
Definition (forward probability): given a hidden Markov model $\lambda$, the forward probability is the probability that the partial observation sequence up to time $t$ is $o_1,o_2,\cdots,o_t$ and the state at time $t$ is $q_i$:

$$\alpha_t(i)=P(o_1,o_2,\cdots,o_t,i_t=q_i|\lambda)$$

In particular,

$$\alpha_T(i) = P(O,i_T=q_i|\lambda)$$

Procedure:
Input: hidden Markov model $\lambda$, observation sequence $O$;
Output: the observation sequence probability $P(O|\lambda)$.
Initialization:

$$\alpha_1(i) = \pi_ib_i(o_1),\quad i=1,2,\cdots,N$$

Recursion:

$$\begin{align} \alpha_{t+1}(j) &= P(o_1,\cdots,o_t,o_{t+1},i_{t+1}=q_j|\lambda) \\ &= \sum_{i=1}^N P(o_1,\cdots,o_t,o_{t+1},i_{t+1}=q_j,i_t=q_i|\lambda) \\ &= \sum_{i=1}^N P(o_{t+1}|o_1,\cdots,o_t,i_t=q_i,i_{t+1}=q_j,\lambda)\cdot P(o_1,\cdots,o_t,i_t=q_i,i_{t+1}=q_j|\lambda) \\ &= \sum_{i=1}^N P(o_{t+1}|i_{t+1}=q_j,\lambda)\cdot P(i_{t+1}=q_j|o_1,\cdots,o_t,i_t=q_i,\lambda)\cdot P(o_1,\cdots,o_t,i_t=q_i|\lambda) \\ &= \sum_{i=1}^N P(o_{t+1}|i_{t+1}=q_j,\lambda)\cdot P(i_{t+1}=q_j|i_t=q_i,\lambda)\cdot P(o_1,\cdots,o_t,i_t=q_i|\lambda) \\ &= \sum_{i=1}^N b_j(o_{t+1})\cdot a_{ij}\cdot \alpha_t(i) \\ &= \Big[\sum_{i=1}^N \alpha_t(i)a_{ij}\Big]b_j(o_{t+1}) \end{align}$$

Termination:

$$P(O|\lambda) = \sum_{i=1}^N \alpha_T(i)$$
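The recursion vectorizes naturally. A minimal standalone sketch (a compact variant of the class method in the code section below), run on the example $\lambda$:

```python
import numpy as np

def forward_prob(pi, A, B, O):
    """Forward algorithm: returns alpha (T x N) and P(O | lambda)."""
    T, N = len(O), A.shape[0]
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, O[0]]                 # alpha_1(i) = pi_i * b_i(o_1)
    for t in range(1, T):
        # alpha_{t+1}(j) = [sum_i alpha_t(i) * a_{ij}] * b_j(o_{t+1})
        alpha[t] = (alpha[t - 1] @ A) * B[:, O[t]]
    return alpha, alpha[-1].sum()              # P(O|lambda) = sum_i alpha_T(i)

A = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
alpha, p = forward_prob(pi, A, B, [0, 1, 0])
print(p)  # ~0.130218
```

The forward pass costs $O(N^2T)$, versus $O(TN^T)$ for direct computation.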
Backward Algorithm
Definition (backward probability): given a hidden Markov model $\lambda$, the backward probability is the probability of the partial observation sequence $o_{t+1},o_{t+2},\cdots,o_T$ from $t+1$ to $T$, given that the state at time $t$ is $q_i$:

$$\beta_t(i) = P(o_{t+1},o_{t+2},\cdots,o_T|i_t=q_i,\lambda)$$

Procedure:
Input: hidden Markov model $\lambda$, observation sequence $O$;
Output: the observation sequence probability $P(O|\lambda)$.
Initialization:

$$\beta_T(i) = 1,\quad i=1,2,\cdots,N$$

Recursion:

$$\begin{align} \beta_t(i) &= P(o_{t+1},o_{t+2},\cdots,o_T|i_t=q_i,\lambda) \\ &= \sum_{j=1}^N P(o_{t+1},o_{t+2},\cdots,o_T,i_{t+1}=q_j|i_t=q_i,\lambda) \\ &= \sum_{j=1}^N P(o_{t+1},o_{t+2},\cdots,o_T|i_t=q_i,i_{t+1}=q_j,\lambda)\cdot P(i_{t+1}=q_j|i_t=q_i,\lambda) \\ &= \sum_{j=1}^N P(o_{t+2},\cdots,o_T|i_{t+1}=q_j,\lambda)\cdot P(o_{t+1}|o_{t+2},\cdots,o_T,i_{t+1}=q_j,i_t=q_i,\lambda)\cdot P(i_{t+1}=q_j|i_t=q_i,\lambda) \\ &= \sum_{j=1}^N P(o_{t+1}|i_{t+1}=q_j,\lambda)\cdot P(i_{t+1}=q_j|i_t=q_i,\lambda)\cdot P(o_{t+2},\cdots,o_T|i_{t+1}=q_j,\lambda) \\ &= \sum_{j=1}^N b_j(o_{t+1})a_{ij}\beta_{t+1}(j) \end{align}$$

Termination:

$$\begin{align} P(O|\lambda) &= P(o_1,\cdots,o_T|\lambda) \\ &= \sum_{i=1}^N P(o_1,\cdots,o_T,i_1=q_i|\lambda) \\ &= \sum_{i=1}^N P(o_1,\cdots,o_T|i_1=q_i,\lambda)\cdot P(i_1=q_i|\lambda) \\ &= \sum_{i=1}^N P(o_1|o_2,\cdots,o_T,i_1=q_i,\lambda)\cdot P(o_2,\cdots,o_T|i_1=q_i,\lambda) \cdot \pi_i \\ &= \sum_{i=1}^N P(o_1|i_1=q_i,\lambda)\cdot P(o_2,\cdots,o_T|i_1=q_i,\lambda) \cdot \pi_i \\ &= \sum_{i=1}^N b_i(o_1)\beta_1(i)\pi_i \end{align}$$
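A matching sketch of the backward recursion; run on the example $\lambda$, it should give the same $P(O|\lambda)$ as the forward pass:

```python
import numpy as np

def backward_prob(pi, A, B, O):
    """Backward algorithm: returns beta (T x N) and P(O | lambda)."""
    T, N = len(O), A.shape[0]
    beta = np.zeros((T, N))
    beta[-1] = 1.0                             # beta_T(i) = 1
    for t in range(T - 2, -1, -1):
        # beta_t(i) = sum_j a_{ij} * b_j(o_{t+1}) * beta_{t+1}(j)
        beta[t] = A @ (B[:, O[t + 1]] * beta[t + 1])
    return beta, np.sum(pi * B[:, O[0]] * beta[0])

A = np.array([[0.5, 0.2, 0.3], [0.3, 0.5, 0.2], [0.2, 0.3, 0.5]])
B = np.array([[0.5, 0.5], [0.4, 0.6], [0.7, 0.3]])
pi = np.array([0.2, 0.4, 0.4])
beta, p = backward_prob(pi, A, B, [0, 1, 0])
print(p)  # same value as the forward pass, ~0.130218
```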
Other Probabilities and Expectations
Using the forward and backward probabilities, we can derive formulas for probabilities involving a single state and two states.
From the definitions of the forward probability $\alpha_t(i)$ and the backward probability $\beta_t(i)$:

$$\begin{align} \alpha_t(i)\beta_t(i) &= P(o_1,\cdots,o_t,i_t=q_i|\lambda)\cdot P(o_{t+1},\cdots,o_T|i_t=q_i,\lambda) \\ &= P(o_1,\cdots,o_t,i_t=q_i|\lambda)\cdot P(o_{t+1},\cdots,o_T|i_t=q_i,o_1,\cdots,o_t,\lambda) \\ &= P(o_1,\cdots,o_t,o_{t+1},\cdots,o_T,i_t=q_i|\lambda) \\ &= P(O,i_t=q_i|\lambda) \end{align}$$

Given the model $\lambda$ and observation $O$, the probability of being in state $q_i$ at time $t$ is denoted:
$$\begin{align} \gamma_t(i) &= P(i_t=q_i|O,\lambda) \\ &=\frac{P(i_t=q_i,O|\lambda)}{P(O|\lambda)} \\ &=\frac{P(i_t=q_i,O|\lambda)}{\sum_{j=1}^N P(i_t=q_j,O|\lambda)} \\ &=\frac{\alpha_t(i)\beta_t(i)}{\sum_{j=1}^N \alpha_t(j)\beta_t(j)} \end{align}$$

Given the model $\lambda$ and observation $O$, the probability of being in state $q_i$ at time $t$ and in state $q_j$ at time $t+1$ is denoted:
$$\begin{align} \xi_t(i,j) &= P(i_t=q_i,i_{t+1}=q_j|O,\lambda) \\ &= \frac{P(i_t=q_i,i_{t+1}=q_j,O|\lambda)}{P(O|\lambda)} \\ &= \frac{P(i_t=q_i,i_{t+1}=q_j,O|\lambda)}{\sum_{i=1}^N\sum_{j=1}^N P(i_t=q_i,i_{t+1}=q_j,O|\lambda)} \\ &= \frac{\alpha_t(i) a_{ij} b_j(o_{t+1})\beta_{t+1}(j)}{\sum_{i=1}^N \sum_{j=1}^N \alpha_t(i) a_{ij} b_j(o_{t+1})\beta_{t+1}(j)} \end{align}$$

where
$$\begin{align} P(i_t=q_i,i_{t+1}=q_j,O|\lambda) &= P(i_t=q_i,o_1,\cdots,o_t,i_{t+1}=q_j,o_{t+1},\cdots,o_T | \lambda) \\ &= P(i_t=q_i,o_1,\cdots,o_t|\lambda)\cdot P(i_{t+1}=q_j,o_{t+1},\cdots,o_T|i_t=q_i,o_1,\cdots,o_t,\lambda) \\ &= \alpha_t(i)\cdot P(i_{t+1}=q_j|i_t=q_i,o_1,\cdots,o_t,\lambda)\cdot P(o_{t+1},\cdots,o_T|i_{t+1}=q_j,i_t=q_i,o_1,\cdots,o_t,\lambda) \\ &= \alpha_t(i)\cdot a_{ij}\cdot P(o_{t+1},o_{t+2},\cdots,o_T|i_{t+1}=q_j,i_t=q_i,o_1,\cdots,o_t,\lambda) \\ &= \alpha_t(i)\cdot a_{ij}\cdot P(o_{t+1}|i_{t+1}=q_j,i_t=q_i,o_1,\cdots,o_t)\cdot P(o_{t+2},\cdots,o_T|i_{t+1}=q_j,i_t=q_i,o_1,\cdots,o_t,o_{t+1}) \\ &= \alpha_t(i)\cdot a_{ij}\cdot P(o_{t+1}|i_{t+1}=q_j)\cdot P(o_{t+2},\cdots,o_T|i_{t+1}=q_j) \\ &= \alpha_t(i) a_{ij} b_j(o_{t+1})\beta_{t+1}(j) \end{align}$$
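Both quantities are cheap to compute once $\alpha$ and $\beta$ are available. A sketch, assuming arrays shaped like those returned by the forward_prob and backward_prob sketches above:

```python
import numpy as np

def gamma_xi(alpha, beta, A, B, O):
    """gamma[t, i] = P(i_t = q_i | O); xi[t, i, j] = P(i_t = q_i, i_{t+1} = q_j | O)."""
    T = len(O)
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)    # normalize over states
    # xi[t, i, j] is proportional to alpha_t(i) * a_{ij} * b_j(o_{t+1}) * beta_{t+1}(j)
    xi = np.array([alpha[t][:, None] * A * B[:, O[t + 1]] * beta[t + 1]
                   for t in range(T - 1)])
    xi /= xi.sum(axis=(1, 2), keepdims=True)     # normalize over state pairs
    return gamma, xi
```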
Summing $\gamma_t(i)$ and $\xi_t(i,j)$ over the time steps $t$ yields some useful expected values.
$(1).$ The expected number of times state $i$ appears under observation $O$:

$$\sum_{t=1}^T \gamma_t(i)$$

$(2).$ The expected number of transitions out of state $i$ under observation $O$:

$$\sum_{t=1}^{T-1}\gamma_t(i)$$

$(3).$ The expected number of transitions from state $i$ to state $j$ under observation $O$:

$$\sum_{t=1}^{T-1} \xi_t(i,j)$$

Learning
Depending on whether the training data consist of observation sequences together with their state sequences or of observation sequences alone, learning for a hidden Markov model can be carried out by supervised or by unsupervised learning.
Supervised Learning Algorithm
Suppose the training data contain $S$ observation sequences of the same length together with their state sequences, $\{(O_1,I_1),(O_2,I_2),\cdots,(O_S,I_S)\}$. Then the parameters of the hidden Markov model can be estimated by maximum likelihood.
Procedure:
Let $A_{ij}$ be the number of times in the samples that the state is $i$ at time $t$ and moves to state $j$ at time $t+1$. Then the estimate of the transition probability $a_{ij}$ is:

$$\hat{a}_{ij} = \frac{A_{ij}}{\sum_{j=1}^N A_{ij}},\quad i=1,2,\cdots,N;\ j=1,2,\cdots,N$$

Let $B_{jk}$ be the number of times in the samples that the state is $j$ and the observation is $k$. Then the estimate of the probability $b_j(k)$ of observing $k$ in state $j$ is:

$$\hat{b}_j(k) = \frac{B_{jk}}{\sum_{k=1}^M B_{jk}},\quad j=1,2,\cdots,N;\ k=1,2,\cdots,M$$

The estimate $\hat{\pi}_i$ of the initial state probability $\pi_i$ is the frequency with which the initial state is $q_i$ among the $S$ samples.
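A counting sketch of these three estimates (the function name and the list-of-sequences layout are my own choices, not from the original text):

```python
import numpy as np

def supervised_mle(state_seqs, obs_seqs, N, M):
    """Maximum-likelihood estimates of (pi, A, B) from labeled sequences."""
    A_cnt = np.zeros((N, N))
    B_cnt = np.zeros((N, M))
    pi_cnt = np.zeros(N)
    for I, O in zip(state_seqs, obs_seqs):
        pi_cnt[I[0]] += 1                      # frequency of each initial state
        for t in range(len(I) - 1):
            A_cnt[I[t], I[t + 1]] += 1         # transition counts A_{ij}
        for i, o in zip(I, O):
            B_cnt[i, o] += 1                   # emission counts B_{jk}
    return (pi_cnt / pi_cnt.sum(),
            A_cnt / A_cnt.sum(axis=1, keepdims=True),
            B_cnt / B_cnt.sum(axis=1, keepdims=True))
```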
Since supervised learning requires labeled training data, and manual annotation is often expensive, unsupervised methods are frequently used instead.
Unsupervised Learning: the Baum-Welch Algorithm
Suppose the training data consist of $S$ observation sequences of length $T$, $\{O_1,O_2,\cdots,O_S\}$, with no corresponding state sequences; the goal is to learn the parameters of a hidden Markov model $\lambda=(A,B,\pi)$. Treating the observation sequence data as observed data $O$ and the state sequence data as unobservable hidden data $I$, the hidden Markov model is in fact a probability model with latent variables:

$$P(O|\lambda) = \sum_I P(O|I,\lambda)P(I|\lambda)$$

Its parameters can be learned by the EM algorithm.
Procedure:
Write all the observed data as $O=(o_1,o_2,\cdots,o_T)$ and all the hidden data as $I=(i_1,i_2,\cdots,i_T)$; the complete data are $(O,I)=(o_1,o_2,\cdots,o_T,i_1,i_2,\cdots,i_T)$. The log-likelihood of the complete data is $\log P(O,I|\lambda)$.
E step of the EM algorithm: compute the $Q$ function $Q(\lambda,\lambda^{(t)})$:

$$Q(\lambda,\lambda^{(t)}) = \sum_I \log P(O,I|\lambda)\, P(O,I|\lambda^{(t)})$$

where $\lambda^{(t)}$ is the current estimate of the model parameters and $\lambda$ is the model parameter to be maximized.
$$P(O,I|\lambda) = \pi_{i_1}b_{i_1}(o_1)a_{i_1,i_2}b_{i_2}(o_2)\cdots a_{i_{T-1},i_T}b_{i_T}(o_T)=\pi_{i_1}\prod_{t=2}^{T}a_{i_{t-1},i_t}\prod_{t=1}^{T} b_{i_t}(o_t)$$

Thus:

$$\begin{align} Q(\lambda,\lambda^{(t)}) &= \sum_I\Big[\log \pi_{i_1}\prod_{t=2}^{T}a_{i_{t-1},i_t}\prod_{t=1}^{T} b_{i_t}(o_t)\Big]P(O,I|\lambda^{(t)}) \\ &= \sum_I\Big[\log \pi_{i_1}+ \sum_{t=2}^{T} \log a_{i_{t-1},i_t} + \sum_{t=1}^{T} \log b_{i_t}(o_t)\Big]P(O,I|\lambda^{(t)}) \end{align}$$

M step of the EM algorithm: maximize $Q(\lambda,\lambda^{(t)})$ to obtain the model parameters $A$, $B$, $\pi$:

$$\lambda^{(t+1)} = \mathop{\arg\max}_{\lambda}Q(\lambda,\lambda^{(t)})= \mathop{\arg\max}_{\lambda} \sum_I\Big[\log \pi_{i_1}+ \sum_{t=2}^{T} \log a_{i_{t-1},i_t} + \sum_{t=1}^{T} \log b_{i_t}(o_t)\Big]P(O,I|\lambda^{(t)})$$

The three parameters appear in three separate terms, so when solving for one of them the other two can be ignored and each can be treated on its own.
For $\pi$:

$$\sum_I \log \pi_{i_1}\,P(O,I|\lambda^{(t)}) = \sum_{i=1}^N \log \pi_i\,P(O,i_1=i|\lambda^{(t)})$$

Since $\pi$ is a probability distribution, it carries the constraint $\sum_{i=1}^N \pi_i=1$; using a Lagrange multiplier, the Lagrangian is:

$$\sum_{i=1}^N \log \pi_i\,P(O,i_1=i|\lambda^{(t)})+\gamma\Big(\sum_{i=1}^N \pi_i-1\Big)$$

Taking the partial derivative and setting it to zero:

$$\frac{\partial}{\partial\pi_i}\Big[\sum_{i=1}^N \log \pi_i\,P(O,i_1=i|\lambda^{(t)})+\gamma\Big(\sum_{i=1}^N \pi_i-1\Big)\Big] = 0$$

gives:

$$\begin{align} \frac{1}{\pi_i}P(O,i_1=i|\lambda^{(t)}) + \gamma &= 0 \\ P(O,i_1=i|\lambda^{(t)}) + \gamma\pi_i&=0 \end{align}$$

Summing over $i$ yields $\gamma$:

$$\sum_{i=1}^N \big[P(O,i_1=i|\lambda^{(t)})+\gamma\pi_i\big] = P(O|\lambda^{(t)}) + \gamma = 0$$

so:

$$\gamma = -P(O|\lambda^{(t)})$$

Substituting into $P(O,i_1=i|\lambda^{(t)}) + \gamma\pi_i=0$ gives:

$$\pi_i^{(t+1)} = \frac{P(O,i_1=i|\lambda^{(t)})}{P(O|\lambda^{(t)})}$$

For $a_{ij}$:
$$\sum_I \sum_{t=2}^{T} \log a_{i_{t-1},i_t}\,P(O,I|\lambda^{(t)}) = \sum_{i=1}^N\sum_{j=1}^N \sum_{t=1}^{T-1} \log a_{ij}\, P(O,i_t=i,i_{t+1}=j|\lambda^{(t)})$$

Since each row of $A$ is a probability distribution, the constraint is $\sum_{j=1}^N a_{ij}=1$. As above, construct the Lagrangian and solve to obtain:

$$a_{ij}^{(t+1)} = \frac{\sum_{t=1}^{T-1}P(O,i_t=i,i_{t+1}=j|\lambda^{(t)})}{\sum_{t=1}^{T-1}P(O,i_t=i|\lambda^{(t)})}$$

For $b_j(k)$:
$$\sum_I\Big[\sum_{t=1}^{T} \log b_{i_t}(o_t)\Big]P(O,I|\lambda^{(t)}) = \sum_{j=1}^N\sum_{t=1}^{T} \log b_{j}(o_t)\,P(O,i_t=j|\lambda^{(t)})$$

Since each row of $B$ is a probability distribution, the constraint is $\sum_{k=1}^M b_j(k)=1$. As above, construct the Lagrangian and differentiate:

$$\begin{align} \frac{\partial}{\partial b_j(k)}\Big[\sum_{j=1}^N\sum_{t=1}^{T} \log b_{j}(o_t)P(O,i_t=j|\lambda^{(t)})+\sum_{j=1}^N\eta_j\Big(\sum_{k=1}^M b_j(k)-1\Big)\Big] &= 0 \\ \sum_{t=1}^{T} \frac{1}{b_j(k)}P(O,i_t=j|\lambda^{(t)})\,I(o_t=v_k)+\eta_j &= 0 \end{align}$$

Note that $b_j(o_t)$ has a nonzero partial derivative with respect to $b_j(k)$ only when $o_t=v_k$; this is expressed by the indicator $I(o_t=v_k)$, which is 1 when the condition holds and 0 otherwise.
Thus:

$$b_j(k)^{(t+1)} = -\frac{\sum_{t=1}^{T} P(O,i_t=j|\lambda^{(t)})\,I(o_t=v_k)}{\eta_j}$$

where summing the previous equation over $k$ gives $\eta_j = -\sum_{t=1}^{T} P(O,i_t=j|\lambda^{(t)})$.

Baum-Welch Parameter Estimation Formulas
Using the formulas from Other Probabilities and Expectations, these updates can be rewritten as:

$$\begin{align} a_{ij} &= \frac{\sum_{t=1}^{T-1}\xi_t(i,j)}{\sum_{t=1}^{T-1}\gamma_t(i)} \\ b_j(k) &=\frac{\sum_{t=1,o_t=v_k}^{T} \gamma_t(j)}{\sum_{t=1}^{T} \gamma_t(j)} \\ \pi_i &= \gamma_1(i) \end{align}$$
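A sketch of one re-estimation step assembled from these formulas; it assumes the forward_prob, backward_prob and gamma_xi sketches defined earlier in this article. Iterating this step until convergence is the Baum-Welch algorithm:

```python
import numpy as np

def baum_welch_step(pi, A, B, O):
    """One Baum-Welch re-estimation step; returns the updated (pi, A, B)."""
    alpha, _ = forward_prob(pi, A, B, O)       # E step: forward pass
    beta, _ = backward_prob(pi, A, B, O)       # E step: backward pass
    gamma, xi = gamma_xi(alpha, beta, A, B, O)
    # a_{ij} = sum_{t<T} xi_t(i,j) / sum_{t<T} gamma_t(i)
    A_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    # b_j(k) = sum_{t: o_t = v_k} gamma_t(j) / sum_t gamma_t(j)
    obs = np.array(O)
    B_new = np.stack([gamma[obs == k].sum(axis=0) for k in range(B.shape[1])], axis=1)
    B_new /= gamma.sum(axis=0)[:, None]
    return gamma[0], A_new, B_new              # pi_i = gamma_1(i)
```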
Prediction Algorithm
The goal is to find the state sequence with the greatest probability.
Viterbi Algorithm
The Viterbi algorithm solves the HMM prediction problem by dynamic programming: it computes the maximum-probability path (the optimal path), where each path corresponds to one state sequence.
Define the maximum probability over all single paths $(i_1,i_2,\cdots,i_t)$ ending in state $i$ at time $t$ as:

$$\delta_t(i) = \max_{i_1,i_2,\cdots,i_{t-1}} P(i_t=i,i_{t-1},\cdots,i_1,o_t,\cdots,o_1|\lambda),\quad i=1,2,\cdots,N$$

From the definition, the recurrence for $\delta$ is:

$$\begin{align} \delta_{t+1}(i) &= \max_{i_1,i_2,\cdots,i_{t}} P(i_{t+1}=i,i_{t},\cdots,i_1,o_{t+1},\cdots,o_1|\lambda) \\ &= \max_{1\leq j \leq N}\big[\delta_t(j)a_{ji}\big]b_i(o_{t+1}),\quad i=1,2,\cdots,N;\ t=1,2,\cdots,T-1 \end{align}$$

Define the $(t-1)$-th node of the maximum-probability path among all single paths $(i_1,i_2,\cdots,i_{t-1},i)$ ending in state $i$ at time $t$ as:

$$\psi_t(i) = \mathop{\arg\max}_{1 \leq j \leq N}\big[\delta_{t-1}(j)a_{ji}\big],\quad i=1,2,\cdots,N$$

Procedure:
Input: model $\lambda=(A,B,\pi)$ and observations $O=(o_1,o_2,\cdots,o_T)$;
Output: the optimal path $I^*=(i^*_1,i^*_2,\cdots,i^*_T)$.
Initialization:

$$\delta_1(i) = \pi_ib_i(o_1),\quad i=1,2,\cdots,N$$
$$\psi_1(i) = 0,\quad i=1,2,\cdots,N$$

Recursion:

$$\delta_{t}(i) = \max_{1\leq j \leq N}\big[\delta_{t-1}(j)a_{ji}\big]b_i(o_{t}),\quad i=1,2,\cdots,N$$
$$\psi_t(i) = \mathop{\arg\max}_{1 \leq j \leq N}\big[\delta_{t-1}(j)a_{ji}\big],\quad i=1,2,\cdots,N$$

Termination:

$$P^* = \max_{1\leq i \leq N} \delta_T(i)$$
$$i^*_T = \mathop{\arg\max}_{1\leq i \leq N} \big[\delta_T(i)\big]$$

Backtracking along the optimal path, for $t=T-1,T-2,\cdots,1$:

$$i^*_t = \psi_{t+1}(i^*_{t+1})$$

This yields the optimal path $I^*=(i^*_1,i^*_2,\cdots,i^*_T)$.
Code Implementation
```python
import logging

import numpy as np

np.set_printoptions(suppress=True, threshold=np.inf, precision=10)
logging.basicConfig(level=logging.INFO)


class HMM:
    def __init__(self, M, N, epoches=10):
        # Random non-negative initial parameters; normalized in init_params.
        self.A = np.abs(np.random.randn(M, M))
        self.B = np.abs(np.random.randn(M, N))
        self.pi = np.ones(M)
        self.T = None
        self.O = None
        self.M = M  # number of possible states
        self.N = N  # number of possible observation symbols
        self.alpha = None
        self.beta = None
        self.epoches = epoches
        self.logger = logging.getLogger(__name__)
        self.logger.setLevel(logging.DEBUG)

    def forward(self):
        # alpha[t, i] = P(o_1, ..., o_t, i_t = q_i | lambda)
        alpha = np.zeros((self.T, self.M))
        for t in range(self.T):
            if t == 0:
                alpha[t, :] = self.B[:, self.O[t]] * self.pi  # initialization
                continue
            alpha[t, :] = np.array([self.A[:, i].dot(alpha[t - 1, :]) * self.B[i, self.O[t]]
                                    for i in range(self.M)])
        return alpha

    def backward(self):
        # beta[t, i] = P(o_{t+1}, ..., o_T | i_t = q_i, lambda)
        beta = np.zeros((self.T, self.M))
        beta[-1, :] = np.ones(self.M)  # beta_T(i) = 1
        for t in range(self.T - 2, -1, -1):
            beta[t, :] = np.array([np.sum(np.array([self.A[i, j] * self.B[j, self.O[t + 1]] * beta[t + 1, j]
                                                    for j in range(self.M)]))
                                   for i in range(self.M)])
        return beta

    def calc_gamma(self, t, i):
        # gamma_t(i) = P(i_t = q_i | O, lambda)
        gamma = (self.alpha[t, i] * self.beta[t, i]) / np.sum(
            np.array([self.alpha[t, j] * self.beta[t, j] for j in range(self.M)]))
        return gamma

    def calc_kesai(self, t, i, j):
        # xi_t(i, j) = P(i_t = q_i, i_{t+1} = q_j | O, lambda)
        kesai = (self.alpha[t, i] * self.A[i, j] * self.B[j, self.O[t + 1]] * self.beta[t + 1, j]) / np.sum(
            np.array([[self.alpha[t, i] * self.A[i, j] * self.B[j, self.O[t + 1]] * self.beta[t + 1, j]
                       for j in range(self.M)] for i in range(self.M)]))
        return kesai

    def init_params(self, O, A, B, pi):
        self.T = len(O)
        self.O = O
        if A is not None:
            self.A = A
        if B is not None:
            self.B = B
        if pi is not None:
            self.pi = pi
        self.M, self.N = self.B.shape
        if A is None:
            # Normalize the random initial parameters row by row, so that every
            # row of A and B (and pi itself) is a probability distribution.
            self.A = self.A / np.sum(self.A, axis=1, keepdims=True)
            self.B = self.B / np.sum(self.B, axis=1, keepdims=True)
            self.pi = self.pi / np.sum(self.pi)

    def cal_probality(self, select=None):
        self.alpha = self.forward()
        self.beta = self.backward()
        if select != "backward":
            self.logger.info("forward algorithm")
            p = np.sum(self.alpha[-1, :])  # P(O|lambda) = sum_i alpha_T(i)
        else:
            self.logger.info("backward algorithm")
            # P(O|lambda) = sum_i pi_i * b_i(o_1) * beta_1(i)
            p = np.sum(self.pi * self.B[:, self.O[0]] * self.beta[0, :])
        return p

    def baum_welch(self):
        for e in range(self.epoches):
            self.alpha = self.forward()
            self.beta = self.backward()
            self.logger.info("iteration {}".format(e))
            A_ = np.zeros((self.M, self.M))
            B_ = np.zeros((self.M, self.N))
            pi_ = np.zeros(self.M)
            # a_{ij}
            for i in range(self.M):
                for j in range(self.M):
                    numerator = 0.0
                    denominator = 0.0
                    for t in range(self.T - 1):
                        numerator += self.calc_kesai(t, i, j)
                        denominator += self.calc_gamma(t, i)
                    A_[i, j] = numerator / denominator
            # b_{jk}
            for j in range(self.M):
                for k in range(self.N):
                    numerator = 0.0
                    denominator = 0.0
                    for t in range(self.T):
                        if k == self.O[t]:
                            numerator += self.calc_gamma(t, j)
                        denominator += self.calc_gamma(t, j)
                    B_[j, k] = numerator / denominator
            # pi_{i}
            for i in range(self.M):
                pi_[i] = self.calc_gamma(0, i)
            # Update: every row of A_ and B_ already sums to 1, and pi_ sums
            # to 1, so the re-estimates can be assigned directly.
            self.A = A_
            self.B = B_
            self.pi = pi_
            self.logger.info("updated:\n A:\n{}\nB:\n{}\npi:\n{}\n".format(
                self.A.round(5), self.B.round(5), self.pi.round(5)))

    def viterbi(self):
        delta = np.zeros((self.T, self.M))
        delta[0, :] = np.array([self.pi[i] * self.B[i, self.O[0]] for i in range(self.M)])
        psi = np.zeros((self.T, self.M), dtype=int)
        for t in range(1, self.T):
            for i in range(self.M):
                max_delta = np.array([delta[t - 1, j] * self.A[j, i] for j in range(self.M)])
                delta[t, i] = np.max(max_delta) * self.B[i, self.O[t]]
                psi[t, i] = np.argmax(max_delta)
        print("delta:\n{},\npsi:\n{}".format(delta, psi))
        # termination
        max_delta = delta[self.T - 1, :]
        I = np.zeros(self.T, dtype=int)
        P = np.max(max_delta)
        I[-1] = np.argmax(max_delta)
        print("P*, the probability of the optimal path: {}; the path ends in state {}".format(P.round(5), I[-1]))
        # backtracking
        for t in range(self.T - 2, -1, -1):
            I[t] = psi[t + 1, I[t + 1]]
        print("optimal path I*:\n{}".format(I))

    def fit(self, O, A=None, B=None, pi=None, select=None):
        if A is None:
            select = "bw"  # no parameters given: fall back to Baum-Welch
        self.init_params(O, A, B, pi)
        self.logger.info("initialized:\n A:\n{}\nB:\n{}\npi:\n{}\nstates: {}, observation symbols: {}".format(
            self.A.round(5), self.B.round(5), self.pi.round(5), self.M, self.N))
        if select == "bw" or select == "baum_welch":
            self.baum_welch()
        self.p = self.cal_probality()
        self.logger.info("current p(O|lambda) = {}".format(self.p.round(5)))
        self.p = self.cal_probality("backward")
        self.logger.info("current p(O|lambda) = {}".format(self.p.round(5)))
        print("p(O|lambda) = {}".format(self.p))


A = np.array([  # state transition probability distribution
    [.5, .2, .3],
    [.3, .5, .2],
    [.2, .3, .5]
])
B = np.array([  # observation probability distribution
    [.5, .5],
    [.4, .6],
    [.7, .3]
])
pi = np.array([  # initial probability distribution over the states
    .2, .4, .4
])
O = [0, 1, 0]  # observation sequence: 0 = red, 1 = white
M, N = A.shape[0], B.shape[1]  # number of states, number of observation symbols
hmm = HMM(M, N)
hmm.fit(O, A, B, pi)
hmm.viterbi()
```
To train the model with Baum-Welch instead, construct the HMM with only the state and observation counts and call fit without parameters:

```python
O = [0, 1, 0]  # observation sequence: 0 = red, 1 = white
hmm = HMM(3, 2)  # random initialization; fit() falls back to Baum-Welch
hmm.fit(O)
hmm.viterbi()
```