1 Introduction 1
1.1 The Learning Problem and (Statistical) Inference 1
1.1.1 Supervised Learning 3
1.1.2 Unsupervised Learning 6
1.1.3 Reinforcement Learning 7
1.2 Learning Kernel Classifiers 8
1.3 The Purposes of Learning Theory 11
I LEARNING ALGORITHMS
2 Kernel Classifiers from a Machine Learning Perspective 17
2.1 The Basic Setting 17
2.2 Learning by Risk Minimization 24
2.2.1 The (Primal) Perceptron Algorithm 26
2.2.2 Regularized Risk Functionals 27
2.3 Kernels and Linear Classifiers 30
2.3.1 The Kernel Technique 33
2.3.2 Kernel Families 36
2.3.3 The Representer Theorem 47
2.4 Support Vector Classification Learning 49
2.4.1 Maximizing the Margin 49
2.4.2 Soft Margins—Learning with Training Error 53
2.4.3 Geometrical Viewpoints on Margin Maximization 56
2.4.4 The ν-Trick and Other Variants 58