【Title】 Discrete Stochastic Processes, Second Edition (revised draft)
【Author】 R.G. Gallager
【Publisher】 Springer
【Year】 2009
【Edition】 Second edition (draft manuscript)
【Language】 English
【Pages】 333
【File format】 PDF
R.G. Gallager is a legendary figure in the communications community, so no further introduction is needed here.
This book is the revised manuscript (dated 2009.8.17) that follows his 1995 first edition (ISBN: 0792395832); Gallager plans to publish it as the future second edition.
It explains the principles of discrete-time stochastic processes in detail, approaching them from an angle that gets at the essence of stochastic processes and backing the exposition with rigorous mathematical proofs. Reading it is strongly inspiring; it is a classic that should not be missed.
The writing style, like that of the classic Information Theory and Reliable Communication, is the kind that keeps you reading compulsively.
Everyone working in communications should have this book at hand!
Book Description
Stochastic processes are found in probabilistic systems that evolve with time. Discrete stochastic processes change by only integer time steps (for some time scale), or are characterized by discrete occurrences at arbitrary times. Discrete Stochastic Processes helps the reader develop the understanding and intuition necessary to apply stochastic process theory in engineering, science and operations research. The book approaches the subject via many simple examples which build insight into the structure of stochastic processes and the general effect of these phenomena in real systems. The book presents mathematical ideas without recourse to measure theory, using only minimal mathematical analysis. In the proofs and explanations, clarity is favored over formal rigor, and simplicity over generality. Numerous examples are given to show how results fail to hold when all the conditions are not satisfied. Audience: An excellent textbook for a graduate level course in engineering and operations research. Also an invaluable reference for all those requiring a deeper understanding of the subject.
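As a quick illustration of the two kinds of processes mentioned in the description (processes that change only at integer time steps, and processes with discrete occurrences at arbitrary times), here is a minimal simulation sketch in Python. It is not taken from the book; the function names and the parameters `p`, `lam`, and `t_max` are made up for this example.

```python
import random

def bernoulli_process(p, n, seed=0):
    """Discrete-time process: one 0/1 arrival indicator per integer time step."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def poisson_arrival_times(lam, t_max, seed=0):
    """Continuous-time arrival process: arrival epochs in (0, t_max] generated
    from i.i.d. exponential(lam) interarrival times (the Poisson process of Chapter 2)."""
    rng = random.Random(seed)
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)  # exponential interarrival time, mean 1/lam
        if t > t_max:
            return arrivals
        arrivals.append(t)

if __name__ == "__main__":
    steps = bernoulli_process(p=0.1, n=1000)
    print(sum(steps), "arrivals in 1000 Bernoulli steps (about 100 expected)")
    epochs = poisson_arrival_times(lam=2.0, t_max=50.0)
    print(len(epochs), "Poisson arrivals in [0, 50] at rate 2 (about 100 expected)")
```

The shrinking-Bernoulli view of Section 2.2.5 corresponds to letting the success probability per step shrink while the number of steps per unit time grows, keeping the arrival rate fixed.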
【Table of Contents】
1 INTRODUCTION AND REVIEW OF PROBABILITY 1
1.1 Probability models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 The sample space of a probability model . . . . . . . . . . . . . . . . 3
1.1.2 Assigning probabilities for finite sample spaces . . . . . . . . . . . . 4
1.2 The axioms of probability theory . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Axioms for the class of events for a sample space Ω . . . . . . . . . . 6
1.2.2 Axioms of probability . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Probability review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 Conditional probabilities and statistical independence . . . . . . . . 8
1.3.2 Repeated idealized experiments . . . . . . . . . . . . . . . . . . . . . 10
1.3.3 Random variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3.4 Multiple random variables and conditional probabilities . . . . . . . 12
1.3.5 Stochastic processes . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.3.6 Expectations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.3.7 Random variables as functions of other random variables . . . . . . 19
1.3.8 Conditional expectations . . . . . . . . . . . . . . . . . . . . . . . . 22
1.3.9 Indicator random variables . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.10 Transforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.4 The laws of large numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
1.4.1 Basic inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.4.2 Weak law of large numbers with a finite variance . . . . . . . . . . . 28
1.4.3 Relative frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.4.4 The central limit theorem . . . . . . . . . . . . . . . . . . . . . . . . 31
1.4.5 Weak law with an infinite variance . . . . . . . . . . . . . . . . . . . 33
1.4.6 Strong law of large numbers (SLLN) . . . . . . . . . . . . . . . . . . 34
1.4.7 Convergence of random variables . . . . . . . . . . . . . . . . . . . . 38
1.5 Relation of probability models to the real world . . . . . . . . . . . . . . . . 42
1.5.1 Relative frequencies in a probability model . . . . . . . . . . . . . . 42
1.5.2 Relative frequencies in the real world . . . . . . . . . . . . . . . . . . 43
1.5.3 Statistical independence of real-world experiments . . . . . . . . . 45
1.5.4 Limitations of relative frequencies . . . . . . . . . . . . . . . . . . . 46
1.5.5 Subjective probability . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
1.7 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.7.1 Table of standard random variables . . . . . . . . . . . . . . . . . . . 49
1.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
2 POISSON PROCESSES 58
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.1.1 Arrival processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
2.2 Definition and properties of the Poisson process . . . . . . . . . . . . . . . . 60
2.2.1 Memoryless property . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.2.2 Probability density of Sn and S1, . . . , Sn . . . . . . . . . . . . . . . 64
2.2.3 The PMF for N(t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.2.4 Alternate definitions of Poisson processes . . . . . . . . . . . . . . . 67
2.2.5 The Poisson process as a limit of shrinking Bernoulli processes . . . 69
2.3 Combining and splitting Poisson processes . . . . . . . . . . . . . . . . . . . 70
2.3.1 Subdividing a Poisson process . . . . . . . . . . . . . . . . . . . . . . 72
2.3.2 Examples using independent Poisson processes . . . . . . . . . . . . 73
2.4 Non-homogeneous Poisson processes . . . . . . . . . . . . . . . . . . . . . . 74
2.5 Conditional arrival densities and order statistics . . . . . . . . . . . . . . . . 77
2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3 RENEWAL PROCESSES 92
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.2 Strong Law of Large Numbers for renewal processes . . . . . . . . . . . . . 93
3.3 Expected number of renewals . . . . . . . . . . . . . . . . . . . . . . . . . . 97
3.3.1 Laplace transform approach . . . . . . . . . . . . . . . . . . . . . . . 98
3.3.2 Random stopping times . . . . . . . . . . . . . . . . . . . . . . . . . 100
3.3.3 Wald’s equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
3.4 Renewal-reward processes; time-averages . . . . . . . . . . . . . . . . . . . . 105
3.4.1 General renewal-reward processes . . . . . . . . . . . . . . . . . . . . 108
3.5 Renewal-reward processes; ensemble-averages . . . . . . . . . . . . . . . . . 112
3.6 Applications of renewal-reward theory . . . . . . . . . . . . . . . . . . . . . 117
3.6.1 Little’s theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
3.6.2 Expected queueing time for an M/G/1 queue . . . . . . . . . . . . . 120
3.7 Delayed renewal processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.7.1 Delayed renewal-reward processes . . . . . . . . . . . . . . . . . . . . 124
3.7.2 Transient behavior of delayed renewal processes . . . . . . . . . . . . 125
3.7.3 The equilibrium process . . . . . . . . . . . . . . . . . . . . . . . . . 126
3.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
4 FINITE-STATE MARKOV CHAINS 139
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
4.2 Classification of states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
4.3 The Matrix representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
4.3.1 The eigenvalues and eigenvectors of P . . . . . . . . . . . . . . . . . 149
4.4 Perron-Frobenius theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
4.5 Markov chains with rewards . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4.6 Markov decision theory and dynamic programming . . . . . . . . . . . . . . 165
4.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
4.6.2 Dynamic programming algorithm . . . . . . . . . . . . . . . . . . . . 167
4.6.3 Optimal stationary policies . . . . . . . . . . . . . . . . . . . . . . . 170
4.6.4 Policy iteration and the solution of Bellman’s equation . . . . . . . . 173
4.6.5 Stationary policies with arbitrary final rewards . . . . . . . . . . . . 177
4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
4.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5 COUNTABLE-STATE MARKOV CHAINS 197
5.1 Introduction and classification of states . . . . . . . . . . . . . . . . . . . . 197
5.2 Branching processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
5.3 Birth-death Markov chains . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
5.4 Reversible Markov chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5.5 The M/M/1 sample-time Markov chain . . . . . . . . . . . . . . . . . . . . 214
5.6 Round-robin and processor sharing . . . . . . . . . . . . . . . . . . . . . . . 217
5.7 Semi-Markov processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
5.8 Example — the M/G/1 queue . . . . . . . . . . . . . . . . . . . . . . . . . . 227
5.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
5.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
6 MARKOV PROCESSES WITH COUNTABLE STATE SPACES 235
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
6.1.1 The sampled-time approximation to a Markov process . . . . . . . . 238
6.2 Steady-state behavior of irreducible Markov processes . . . . . . . . . . . . 239
6.2.1 The number of transitions per unit time . . . . . . . . . . . . . . . . 240
6.2.2 Renewals on successive entries to a state . . . . . . . . . . . . . . . . 240
6.2.3 The strong law for time-average state probabilities . . . . . . . . . . 243
6.2.4 The equations for the steady state process probabilities . . . . . . . 244
6.2.5 The sampled-time approximation again . . . . . . . . . . . . . . . . 245
6.2.6 Pathological cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
6.3 The Kolmogorov differential equations . . . . . . . . . . . . . . . . . . . . . 246
6.4 Uniformization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.5 Birth-death processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.6 Reversibility for Markov processes . . . . . . . . . . . . . . . . . . . . . . . 253
6.7 Jackson networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6.7.1 Closed Jackson networks . . . . . . . . . . . . . . . . . . . . . . . . . 264
6.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
7 RANDOM WALKS AND MARTINGALES 279
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.1.1 Simple random walks . . . . . . . . . . . . . . . . . . . . . . . . . . 279
7.1.2 Integer-valued random walks . . . . . . . . . . . . . . . . . . . . . . 280
7.1.3 Renewal processes as special cases of random walks . . . . . . . . . . 280
7.2 The waiting time in a G/G/1 queue . . . . . . . . . . . . . . . . . . . . . . 281
7.3 Detection, Decisions, and Hypothesis testing . . . . . . . . . . . . . . . . . . 284
7.4 Threshold crossing probabilities . . . . . . . . . . . . . . . . . . . . . . . . . 286
7.5 Thresholds, stopping rules, and Wald’s identity . . . . . . . . . . . . . . . . 291
7.5.1 Stopping rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292
7.5.2 Joint distribution of N and barrier . . . . . . . . . . . . . . . . . . . 296
7.5.3 Proof of Wald’s identity . . . . . . . . . . . . . . . . . . . . . . . . . 297
7.6 Martingales and submartingales . . . . . . . . . . . . . . . . . . . . . . . . . 299
7.6.1 Simple examples of martingales . . . . . . . . . . . . . . . . . . . . . 299
7.6.2 Markov modulated random walks . . . . . . . . . . . . . . . . . . . . 300
7.6.3 Generating functions for Markov random walks . . . . . . . . . . . . 302
7.6.4 Scaled branching processes . . . . . . . . . . . . . . . . . . . . . . . 303
7.6.5 Partial isolation of past and future in martingales . . . . . . . . . . 304
7.6.6 Submartingales and supermartingales . . . . . . . . . . . . . . . . . 304
7.7 Stopped processes and stopping times . . . . . . . . . . . . . . . . . . . . . 307
7.7.1 Stopping times for martingales relative to a process . . . . . . . . . 310
7.8 The Kolmogorov inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . 312
7.8.1 The strong law of large numbers . . . . . . . . . . . . . . . . . . . . 314
7.8.2 The martingale convergence theorem . . . . . . . . . . . . . . . . . . 315
7.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
7.10 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 317