Fundamentals of Adaptive Filtering
Summary
This book is based on a graduate-level course offered by the author at UCLA and has been class tested there and at other universities over a number of years. It is the most comprehensive book on the market today, providing instructors a wide choice in designing their courses.
∗ Offers computer problems to illustrate real-life applications for students and professionals alike
∗ An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
Specifications
Table of Contents
<p>ACKNOWLEDGMENTS xxix</p>
<p>NOTATION xxxi</p>
<p>SYMBOLS xxxv</p>
<p>1 OPTIMAL ESTIMATION 1</p>
<p>1.1 Variance of a Random Variable 1</p>
<p>1.2 Estimation Given No Observations 5</p>
<p>1.3 Estimation Given Dependent Observations 6</p>
<p>1.4 Estimation in the Complex and Vector Cases 18</p>
<p>1.5 Summary of Main Results 30</p>
<p>1.6 Bibliographic Notes 31</p>
<p>1.7 Problems 33</p>
<p>1.8 Computer Project 37</p>
<p>1.A Hermitian and Positive–Definite Matrices 39</p>
<p>1.B Gaussian Random Vectors 42</p>
<p>2 LINEAR ESTIMATION 47</p>
<p>2.1 Normal Equations 48</p>
<p>2.2 Design Examples 54</p>
<p>2.3 Existence of Solutions 60</p>
<p>2.4 Orthogonality Principle 63</p>
<p>2.5 Nonzero–Mean Variables 65</p>
<p>2.6 Linear Models 66</p>
<p>2.7 Applications 68</p>
<p>2.8 Summary of Main Results 76</p>
<p>2.9 Bibliographic Notes 77</p>
<p>2.10 Problems 79</p>
<p>2.11 Computer Project 95</p>
<p>2.A Range Spaces and Nullspaces of Matrices 103</p>
<p>2.B Complex Gradients 105</p>
<p>2.C Kalman Filter 108</p>
<p>3 CONSTRAINED LINEAR ESTIMATION 114</p>
<p>3.1 Minimum–Variance Unbiased Estimation 115</p>
<p>3.2 Application: Channel and Noise Estimation 119</p>
<p>3.3 Application: Decision Feedback Equalization 120</p>
<p>3.4 Application: Antenna Beamforming 128</p>
<p>3.5 Summary of Main Results 131</p>
<p>3.6 Bibliographic Notes 131</p>
<p>3.7 Problems 133</p>
<p>3.8 Two Computer Projects 143</p>
<p>3.A Schur Complements 155</p>
<p>3.B Primer on Channel Equalization 159</p>
<p>3.C Causal Wiener–Hopf Filtering 167</p>
<p>4 STEEPEST–DESCENT ALGORITHMS 170</p>
<p>4.1 Linear Estimation Problem 171</p>
<p>4.2 Steepest–Descent Method 174</p>
<p>4.3 Transient Behavior 179</p>
<p>4.4 Iteration–Dependent Step–Sizes 187</p>
<p>4.5 Newton's Method 191</p>
<p>4.6 Summary of Main Results 193</p>
<p>4.7 Bibliographic Notes 194</p>
<p>4.8 Problems 196</p>
<p>4.9 Two Computer Projects 204</p>
<p>5 STOCHASTIC–GRADIENT ALGORITHMS 212</p>
<p>5.1 Motivation 213</p>
<p>5.2 LMS Algorithm 214</p>
<p>5.3 Application: Adaptive Channel Estimation 218</p>
<p>5.4 Application: Adaptive Channel Equalization 220</p>
<p>5.5 Application: Decision–Feedback Equalization 223</p>
<p>5.6 Normalized LMS Algorithm 225</p>
<p>5.7 Other LMS–type Algorithms 233</p>
<p>5.8 Affine Projection Algorithms 238</p>
<p>5.9 RLS Algorithm 245</p>
<p>5.10 Ensemble–Average Learning Curves 248</p>
<p>5.11 Summary of Main Results 251</p>
<p>5.12 Bibliographic Notes 252</p>
<p>5.13 Problems 256</p>
<p>5.14 Three Computer Projects 267</p>
<p>6 STEADY–STATE PERFORMANCE OF ADAPTIVE FILTERS 281</p>
<p>6.1 Performance Measure 282</p>
<p>6.2 Stationary Data Model 284</p>
<p>6.3 Fundamental Energy–Conservation Relation 287</p>
<p>6.4 Fundamental Variance Relation 290</p>
<p>6.5 Mean–Square Performance of LMS 292</p>
<p>6.6 Mean–Square Performance of ε-NLMS 300</p>
<p>6.7 Mean–Square Performance of Sign–Error LMS 305</p>
<p>6.8 Mean–Square Performance of LMF and LMMN 308</p>
<p>6.9 Mean–Square Performance of RLS 317</p>
<p>6.10 Mean–Square Performance of ε-APA 322</p>
<p>6.11 Mean–Square Performance of Other Filters 325</p>
<p>6.12 Performance Table for Small Step–Sizes 327</p>
<p>6.13 Summary of Main Results 327</p>
<p>6.14 Bibliographic Notes 329</p>
<p>6.15 Problems 332</p>
<p>6.16 Computer Project 343</p>
<p>6.A Interpretations of the Energy Relation 348</p>
<p>6.B Relating ε-NLMS to LMS 353</p>
<p>6.C Affine Projection Performance Condition 355</p>
<p>7 TRACKING PERFORMANCE OF ADAPTIVE FILTERS 357</p>
<p>7.1 Motivation 357</p>
<p>7.2 Nonstationary Data Model 358</p>
<p>7.3 Fundamental Energy–Conservation Relation 364</p>
<p>7.4 Fundamental Variance Relation 364</p>
<p>7.5 Tracking Performance of LMS 367</p>
<p>7.6 Tracking Performance of ε-NLMS 370</p>
<p>7.7 Tracking Performance of Sign–Error LMS 372</p>
<p>7.8 Tracking Performance of LMF and LMMN 374</p>
<p>7.9 Comparison of Tracking Performance 378</p>
<p>7.10 Tracking Performance of RLS 380</p>
<p>7.11 Tracking Performance of ε-APA 384</p>
<p>7.12 Tracking Performance of Other Filters 386</p>
<p>7.13 Performance Table for Small Step–Sizes 387</p>
<p>7.14 Summary of Main Results 387</p>
<p>7.15 Bibliographic Notes 389</p>
<p>7.16 Problems 391</p>
<p>7.17 Computer Project 401</p>
<p>8 FINITE PRECISION EFFECTS 408</p>
<p>8.1 Quantization Model 409</p>
<p>8.2 Data Model and Quantization Error Sources 410</p>
<p>8.3 Fundamental Energy–Conservation Relation 413</p>
<p>8.4 Fundamental Variance Relation 416</p>
<p>8.5 Performance Degradation of LMS 419</p>
<p>8.6 Performance Degradation of ε-NLMS 421</p>
<p>8.7 Performance Degradation of Sign–Error LMS 423</p>
<p>8.8 Performance Degradation of LMF and LMMN 424</p>
<p>8.9 Performance Degradation of Other Filters 425</p>
<p>8.10 Summary of Main Results 426</p>
<p>8.11 Bibliographic Notes 428</p>
<p>8.12 Problems 430</p>
<p>8.13 Computer Project 437</p>
<p>9 TRANSIENT PERFORMANCE OF ADAPTIVE FILTERS 441</p>
<p>9.1 Data Model 442</p>
<p>9.2 Data–Normalized Adaptive Filters 442</p>
<p>9.3 Weighted Energy–Conservation Relation 443</p>
<p>9.4 Weighted Variance Relation 445</p>
<p>9.5 Transient Performance of LMS 452</p>
<p>9.6 Transient Performance of ε-NLMS 471</p>
<p>9.7 Performance of Data–Normalized Filters 474</p>
<p>9.8 Summary of Main Results 477</p>
<p>9.9 Bibliographic Notes 481</p>
<p>9.10 Problems 487</p>
<p>9.11 Computer Project 516</p>
<p>9.A Stability Bound 522</p>
<p>9.B Stability of ε-NLMS 524</p>
<p>9.C Adaptive Filters with Error Nonlinearities 526</p>
<p>9.D Convergence Time of Adaptive Filters 538</p>
<p>9.E Learning Behavior of Adaptive Filters 545</p>
<p>9.F Independence and Averaging Analysis 559</p>
<p>9.G Interpretation of Weighted Energy Relation 568</p>
<p>9.H Kronecker Products 570</p>
<p>10 BLOCK ADAPTIVE FILTERS 572</p>
<p>10.1 Transform–Domain Adaptive Filters 573</p>
<p>10.2 Motivation for Block Adaptive Filters 584</p>
<p>10.3 Efficient Block Convolution 586</p>
<p>10.4 DFT–Based Block Adaptive Filters 597</p>
<p>10.5 Subband Adaptive Filters 605</p>
<p>10.6 Summary of Main Results 612</p>
<p>10.7 Bibliographic Notes 614</p>
<p>10.8 Problems 616</p>
<p>10.9 Computer Project 620</p>
<p>10.A DCT–Transformed Regressors 626</p>
<p>10.B More Constrained DFT Block Filters 628</p>
<p>10.C Overlap–Add DFT–Based Block Adaptive Filter 632</p>
<p>10.D DCT–Based Block Adaptive Filters 640</p>
<p>10.E DHT–Based Block Adaptive Filters 648</p>
<p>11 THE LEAST–SQUARES CRITERION 657</p>
<p>11.1 Least–Squares Problem 658</p>
<p>11.2 Weighted Least–Squares 666</p>
<p>11.3 Regularized Least–Squares 669</p>
<p>11.4 Weighted Regularized Least–Squares 671</p>
<p>11.5 Order–Update Relations 672</p>
<p>11.6 Summary of Main Results 688</p>
<p>11.7 Bibliographic Notes 689</p>
<p>11.8 Problems 693</p>
<p>11.9 Three Computer Projects 703</p>
<p>11.A Equivalence Results in Linear Estimation 724</p>
<p>11.B QR Decomposition 726</p>
<p>11.C Singular Value Decomposition 728</p>
<p>12 RECURSIVE LEAST–SQUARES 732</p>
<p>12.1 Motivation 732</p>
<p>12.2 RLS Algorithm 733</p>
<p>12.3 Exponentially–Weighted RLS Algorithm 739</p>
<p>12.4 General Time–Update Result 741</p>
<p>12.5 Summary of Main Results 745</p>
<p>12.6 Bibliographic Notes 745</p>
<p>12.7 Problems 748</p>
<p>12.8 Two Computer Projects 755</p>
<p>12.A Kalman Filtering and Recursive Least–Squares 763</p>
<p>12.B Extended RLS Algorithms 768</p>
<p>13 RLS ARRAY ALGORITHMS 775</p>
<p>13.1 Some Difficulties 775</p>
<p>13.2 Square–Root Factors 776</p>
<p>13.3 Norm and Angle Preservation 778</p>
<p>13.4 Motivation for Array Methods 780</p>
<p>13.5 RLS Algorithm 784</p>
<p>13.6 Inverse QR Algorithm 785</p>
<p>13.7 QR Algorithm 788</p>
<p>13.8 Extended QR Algorithm 793</p>
<p>13.9 Summary of Main Results 794</p>
<p>13.10 Bibliographic Notes 795</p>
<p>13.11 Problems 797</p>
<p>13.12 Computer Project 802</p>
<p>13.A Unitary Transformations 804</p>
<p>13.A.1 Givens Rotations 804</p>
<p>13.A.2 Householder Transformations 808</p>
<p>13.B Array Algorithms for Kalman Filtering 812</p>
<p>14 FAST FIXED–ORDER FILTERS 816</p>
<p>14.1 Fast Array Algorithm 817</p>
<p>14.2 Regularized Prediction Problems 825</p>
<p>14.3 Fast Transversal Filter 832</p>
<p>14.4 FAEST Filter 836</p>
<p>14.5 Fast Kalman Filter 838</p>
<p>14.6 Stability Issues 839</p>
<p>14.7 Summary of Main Results 845</p>
<p>14.8 Bibliographic Notes 846</p>
<p>14.9 Problems 848</p>
<p>14.10 Computer Project 857</p>
<p>14.A Hyperbolic Rotations 860</p>
<p>14.B Hyperbolic Basis Rotations 867</p>
<p>14.C Backward Consistency and Minimality 869</p>
<p>14.D Chandrasekhar Filter 871</p>
<p>15 LATTICE FILTERS 874</p>
<p>15.1 Motivation and Notation 875</p>
<p>15.2 Joint Process Estimation 878</p>
<p>15.3 Backward Estimation Problem 880</p>
<p>15.4 Forward Estimation Problem 883</p>
<p>15.5 Time and Order–Update Relations 885</p>
<p>15.6 Significance of Data Structure 891</p>
<p>15.7 A Posteriori–Based Lattice Filter 894</p>
<p>15.8 A Priori–Based Lattice Filter 895</p>
<p>15.9 A Priori Error–Feedback Lattice Filter 897</p>
<p>15.10 A Posteriori Error–Feedback Lattice Filter 902</p>
<p>15.11 Normalized Lattice Filter 904</p>
<p>15.12 Array–Based Lattice Filter 910</p>
<p>15.13 Relation Between RLS and Lattice Filters 915</p>
<p>15.14 Summary of Main Results 917</p>
<p>15.15 Bibliographic Notes 918</p>
<p>15.16 Problems 920</p>
<p>15.17 Computer Project 925</p>
<p>16 LAGUERRE ADAPTIVE FILTERS 931</p>
<p>16.1 Orthonormal Filter Structures 932</p>
<p>16.2 Data Structure 934</p>
<p>16.3 Fast Array Algorithm 936</p>
<p>16.4 Regularized Projection Problems 942</p>
<p>16.5 Extended Fast Transversal Filter 954</p>
<p>16.6 Extended FAEST Filter 957</p>
<p>16.7 Extended Fast Kalman Filter 958</p>
<p>16.8 Stability Issues 959</p>
<p>16.9 Order–Recursive Filters 960</p>
<p>16.10 A Posteriori–Based Lattice Filter 968</p>
<p>16.11 A Priori–Based Lattice Filter 970</p>
<p>16.12 A Priori Error–Feedback Lattice Filter 972</p>
<p>16.13 A Posteriori Error–Feedback Lattice Filter 976</p>
<p>16.14 Normalized Lattice Filter 978</p>
<p>16.15 Array Lattice Filter 982</p>
<p>16.16 Summary of Main Results 985</p>
<p>16.17 Bibliographic Notes 986</p>
<p>16.18 Problems 989</p>
<p>16.19 Computer Project 994</p>
<p>16.A Modeling with Orthonormal Basis Functions 999</p>
<p>16.B Efficient Matrix–Vector Multiplication 1007</p>
<p>16.C Lyapunov Equations 1009</p>
<p>17 ROBUST ADAPTIVE FILTERS 1012</p>
<p>17.1 Indefinite Least–Squares 1013</p>
<p>17.2 Recursive Minimization Algorithm 1018</p>
<p>17.3 A Posteriori–Based Robust Filters 1027</p>
<p>17.4 A Priori–Based Robust Filters 1036</p>
<p>17.5 Energy Conservation Arguments 1043</p>
<p>17.6 Summary of Main Results 1052</p>
<p>17.7 Bibliographic Notes 1052</p>
<p>17.8 Problems 1056</p>
<p>17.9 Computer Project 1072</p>
<p>17.A Arbitrary Coefficient Matrices 1078</p>
<p>17.B Total–Least–Squares 1081</p>
<p>17.C H∞ Filters 1085</p>
<p>17.D Stationary Points 1089</p>
<p>BIBLIOGRAPHY 1090</p>
<p>AUTHOR INDEX 1113</p>
<p>SUBJECT INDEX 1118</p>