The Bound on the Rate of Uniform Convergence for Learning Machines Based on an \alpha-mixing Sequence

  • Abstract: It has been shown by Vapnik, Cucker, and Smale that the empirical risks of learning machines based on an independent and identically distributed (i.i.d.) sequence converge uniformly to their expected risks as the number of samples approaches infinity. This paper extends these results to the case where the i.i.d. sequence is replaced by an \alpha-mixing sequence, and establishes a bound on the rate of uniform convergence for learning machines based on an \alpha-mixing sequence by applying Markov's inequality.
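For context, here is a brief sketch of the quantities the abstract refers to, written in the standard notation of the Cucker-Smale setting; the paper's own notation may differ, so this is an assumption on our part rather than a quotation of its results. For a stationary sequence $\{Z_i\}_{i \ge 1}$ with $\sigma$-fields $\mathcal{F}_1^n = \sigma(Z_1,\dots,Z_n)$ and $\mathcal{F}_{n+k}^\infty = \sigma(Z_{n+k}, Z_{n+k+1},\dots)$, the \alpha-mixing coefficient is

$$\alpha(k) = \sup_{n \ge 1} \; \sup_{A \in \mathcal{F}_1^n,\; B \in \mathcal{F}_{n+k}^\infty} \left| P(A \cap B) - P(A)\,P(B) \right|,$$

and the sequence is \alpha-mixing if $\alpha(k) \to 0$ as $k \to \infty$ (the i.i.d. case has $\alpha(k) = 0$ for all $k$). Writing $\mathcal{E}(f)$ for the expected risk and $\mathcal{E}_{\mathbf{z}}(f)$ for the empirical risk over a sample $\mathbf{z}$ of size $m$, a uniform convergence bound for a hypothesis class $\mathcal{H}$ takes the form

$$P\left\{ \sup_{f \in \mathcal{H}} \left| \mathcal{E}(f) - \mathcal{E}_{\mathbf{z}}(f) \right| \ge \varepsilon \right\} \le \delta(m, \varepsilon), \qquad \delta(m, \varepsilon) \to 0 \text{ as } m \to \infty,$$

where the tail probability is controlled via Markov's inequality, $P\{X \ge a\} \le \mathbb{E}[X]/a$ for a nonnegative random variable $X$ and $a > 0$.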

     
