000 | 16113nam a2201969 i 4500 | ||
---|---|---|---|
001 | 5237520 | ||
003 | IEEE | ||
005 | 20200421114111.0 | ||
006 | m o d | ||
007 | cr |n||||||||| | ||
008 | 090527t20152008njua ob 001 0 eng d | ||
020 | _a9780470374122 _qelectronic | ||
020 | _z9780470253885 _qpaper | ||
020 | _z0470374128 _qelectronic | ||
024 | 7 | _a10.1002/9780470374122 _2doi | |
035 | _a(CaBNVSL)mat05237520 | ||
035 | _a(IDAMS)0b000064810958b8 | ||
040 | _aCaBNVSL _beng _erda _cCaBNVSL _dCaBNVSL | ||
050 | 4 | _aTK7872.F5 _bS285 2008eb | |
082 | 0 | 4 | _a621.3815/324 _222
100 | 1 | _aSayed, Ali H., _eauthor. | |
245 | 1 | 0 | _aAdaptive filters / _cAli H. Sayed.
264 | 1 | _aHoboken, New Jersey : _bWiley-Interscience, _cc2008. | |
264 | 2 | _a[Piscataway, New Jersey] : _bIEEE Xplore, _c2008. | |
300 | _a1 PDF (xxx, 786 pages) : _billustrations. | ||
336 | _atext _2rdacontent | ||
337 | _aelectronic _2isbdmedia | ||
338 | _aonline resource _2rdacarrier | ||
504 | _aIncludes bibliographical references (p. 758-774) and indexes. | ||
505 | 0 | _aPreface and Acknowledgments -- Notation and Symbols -- BACKGROUND MATERIAL -- A. Random Variables -- A.1 Variance of a Random Variable -- A.2 Dependent Random Variables -- A.3 Complex-Valued Random Variables -- A.4 Vector-Valued Random Variables -- A.5 Gaussian Random Vectors -- B. Linear Algebra -- B.1 Hermitian and Positive-Definite Matrices -- B.2 Range Spaces and Nullspaces of Matrices -- B.3 Schur Complements -- B.4 Cholesky Factorization -- B.5 QR Decomposition -- B.6 Singular Value Decomposition -- B.7 Kronecker Products -- C. Complex Gradients -- C.1 Cauchy-Riemann Conditions -- C.2 Scalar Arguments -- C.3 Vector Arguments -- PART I: OPTIMAL ESTIMATION -- 1. Scalar-Valued Data -- 1.1 Estimation Without Observations -- 1.2 Estimation Given Dependent Observations -- 1.3 Orthogonality Principle -- 1.4 Gaussian Random Variables -- 2. Vector-Valued Data -- 2.1 Optimal Estimator in the Vector Case -- 2.2 Spherically Invariant Gaussian Variables -- 2.3 Equivalent Optimization Criterion -- Summary and Notes -- Problems and Computer Projects -- PART II: LINEAR ESTIMATION -- 3. Normal Equations -- 3.1 Mean-Square Error Criterion -- 3.2 Minimization by Differentiation -- 3.3 Minimization by Completion-of-Squares -- 3.4 Minimization of the Error Covariance Matrix -- 3.5 Optimal Linear Estimator -- 4. Orthogonality Principle -- 4.1 Design Examples -- 4.2 Orthogonality Condition -- 4.3 Existence of Solutions -- 4.4 Nonzero-Mean Variables -- 5. Linear Models -- 5.1 Estimation using Linear Relations -- 5.2 Application: Channel Estimation -- 5.3 Application: Block Data Estimation -- 5.4 Application: Linear Channel Equalization -- 5.5 Application: Multiple-Antenna Receivers -- 6. Constrained Estimation -- 6.1 Minimum-Variance Unbiased Estimation -- 6.2 Example: Mean Estimation -- 6.3 Application: Channel and Noise Estimation -- 6.4 Application: Decision Feedback Equalization -- 6.5 Application: Antenna Beamforming -- 7. Kalman Filter. | |
505 | 8 | _a7.1 Innovations Process -- 7.2 State-Space Model -- 7.3 Recursion for the State Estimator -- 7.4 Computing the Gain Matrix -- 7.5 Riccati Recursion -- 7.6 Covariance Form -- 7.7 Measurement and Time-Update Form -- Summary and Notes -- Problems and Computer Projects -- PART III: STOCHASTIC GRADIENT ALGORITHMS -- 8. Steepest-Descent Technique -- 8.1 Linear Estimation Problem -- 8.2 Steepest-Descent Method -- 8.3 More General Cost Functions -- 9. Transient Behavior -- 9.1 Modes of Convergence -- 9.2 Optimal Step-Size -- 9.3 Weight-Error Vector Convergence -- 9.4 Time Constants -- 9.5 Learning Curve -- 9.6 Contour Curves of the Error Surface -- 9.7 Iteration-Dependent Step-Sizes -- 9.8 Newton's Method -- 10. LMS Algorithm -- 10.1 Motivation -- 10.2 Instantaneous Approximation -- 10.3 Computational Cost -- 10.4 Least-Perturbation Property -- 10.5 Application: Adaptive Channel Estimation -- 10.6 Application: Adaptive Channel Equalization -- 10.7 Application: Decision-Feedback Equalization -- 10.8 Ensemble-Average Learning Curves -- 11. Normalized LMS Algorithm -- 11.1 Instantaneous Approximation -- 11.2 Computational Cost -- 11.3 Power Normalization -- 11.4 Least-Perturbation Property -- 12. Other LMS-Type Algorithms -- 12.1 Non-Blind Algorithms -- 12.2 Blind Algorithms -- 12.3 Some Properties -- 13. Affine Projection Algorithm -- 13.1 Instantaneous Approximation -- 13.2 Computational Cost -- 13.3 Least-Perturbation Property -- 13.4 Affine Projection Interpretation -- 14. RLS Algorithm -- 14.1 Instantaneous Approximation -- 14.2 Computational Cost -- Summary and Notes -- Problems and Computer Projects -- PART IV: MEAN-SQUARE PERFORMANCE -- 15. Energy Conservation -- 15.1 Performance Measure -- 15.2 Stationary Data Model -- 15.3 Energy Conservation Relation -- 15.4 Variance Relation -- 15.A Interpretations of the Energy Relation -- 16. Performance of LMS -- 16.1 Variance Relation -- 16.2 Small Step-Sizes -- 16.3 Separation Principle. | |
505 | 8 | _a16.4 White Gaussian Input -- 16.5 Statement of Results -- 16.6 Simulation Results -- 17. Performance of NLMS -- 17.1 Separation Principle -- 17.2 Simulation Results -- 17.A Relating NLMS to LMS -- 18. Performance of Sign-Error LMS -- 18.1 Real-Valued Data -- 18.2 Complex-Valued Data -- 18.3 Simulation Results -- 19. Performance of RLS and Other Filters -- 19.1 Performance of RLS -- 19.2 Performance of Other Filters -- 19.3 Performance Table for Small Step-Sizes -- 20. Nonstationary Environments -- 20.1 Motivation -- 20.2 Nonstationary Data Model -- 20.3 Energy Conservation Relation -- 20.4 Variance Relation -- 21. Tracking Performance -- 21.1 Performance of LMS -- 21.2 Performance of NLMS -- 21.3 Performance of Sign-Error LMS -- 21.4 Performance of RLS -- 21.5 Comparison of Tracking Performance -- 21.6 Comparing RLS and LMS -- 21.7 Performance of Other Filters -- 21.8 Performance Table for Small Step-Sizes -- Summary and Notes -- Problems and Computer Projects -- PART V: TRANSIENT PERFORMANCE -- 22. Weighted Energy Conservation -- 22.1 Data Model -- 22.2 Data-Normalized Adaptive Filters -- 22.3 Weighted Energy Conservation Relation -- 22.4 Weighted Variance Relation -- 23. LMS with Gaussian Regressors -- 23.1 Mean and Variance Relations -- 23.2 Mean Behavior -- 23.3 Mean-Square Behavior -- 23.4 Mean-Square Stability -- 23.5 Steady-State Performance -- 23.6 Small Step-Size Approximations -- 23.A Convergence Time -- 24. LMS with non-Gaussian Regressors -- 24.1 Mean and Variance Relations -- 24.2 Mean-Square Stability and Performance -- 24.3 Small Step-Size Approximations -- 24.A Independence and Averaging Analysis -- 25. Data-Normalized Filters -- 25.1 NLMS Filter -- 25.2 Data-Normalized Filters -- 25.A Stability Bound -- 25.B Stability of NLMS -- Summary and Notes -- Problems and Computer Projects -- PART VI: BLOCK ADAPTIVE FILTERS -- 26. Transform Domain Adaptive Filters -- 26.1 Transform-Domain Filters -- 26.2 DFT-Domain LMS. | |
505 | 8 | _a26.3 DCT-Domain LMS -- 26.A DCT-Transformed Regressors -- 27. Efficient Block Convolution -- 27.1 Motivation -- 27.2 Block Data Formulation -- 27.3 Block Convolution -- 28. Block and Subband Adaptive Filters -- 28.1 DFT Block Adaptive Filters -- 28.2 Subband Adaptive Filters -- 28.A Another Constrained DFT Block Filter -- 28.B Overlap-Add Block Adaptive Filters -- Summary and Notes -- Problems and Computer Projects -- PART VII: LEAST-SQUARES METHODS -- 29. Least-Squares Criterion -- 29.1 Least-Squares Problem -- 29.2 Geometric Argument -- 29.3 Algebraic Arguments -- 29.4 Properties of Least-Squares Solution -- 29.5 Projection Matrices -- 29.6 Weighted Least-Squares -- 29.7 Regularized Least-Squares -- 29.8 Weighted Regularized Least-Squares -- 30. Recursive Least-Squares -- 30.1 Motivation -- 30.2 RLS Algorithm -- 30.3 Regularization -- 30.4 Conversion Factor -- 30.5 Time-Update of the Minimum Cost -- 30.6 Exponentially-Weighted RLS Algorithm -- 31. Kalman Filtering and RLS -- 31.1 Equivalence in Linear Estimation -- 31.2 Kalman Filtering and Recursive Least-Squares -- 31.A Extended RLS Algorithms -- 32. Order and Time-Update Relations -- 32.1 Backward Order-Update Relations -- 32.2 Forward Order-Update Relations -- 32.3 Time-Update Relation -- Summary and Notes -- Problems and Computer Projects -- PART VIII: ARRAY ALGORITHMS -- 33. Norm and Angle Preservation -- 33.1 Some Difficulties -- 33.2 Square-Root Factors -- 33.3 Norm and Angle Preservation -- 33.4 Motivation for Array Methods -- 34. Unitary Transformations -- 34.1 Givens Rotations -- 34.2 Householder Transformations -- 35. QR and Inverse QR Algorithms -- 35.1 Inverse QR Algorithm -- 35.2 QR Algorithm -- 35.3 Extended QR Algorithm -- 35.A Array Algorithms for Kalman Filtering -- Summary and Notes -- Problems and Computer Projects -- PART IX: FAST RLS ALGORITHMS -- 36. Hyperbolic Rotations -- 36.1 Hyperbolic Givens Rotations -- 36.2 Hyperbolic Householder Transformations. | |
505 | 8 | _a36.3 Hyperbolic Basis Rotations -- 37. Fast Array Algorithm -- 37.1 Time-Update of the Gain Vector -- 37.2 Time-Update of the Conversion Factor -- 37.3 Initial Conditions -- 37.4 Array Algorithm -- 37.A Chandrasekhar Filter -- 38. Regularized Prediction Problems -- 38.1 Regularized Backward Prediction -- 38.2 Regularized Forward Prediction -- 38.3 Low-Rank Factorization -- 39. Fast Fixed-Order Filters -- 39.1 Fast Transversal Filter -- 39.2 FAEST Filter -- 39.3 Fast Kalman Filter -- 39.4 Stability Issues -- Summary and Notes -- Problems and Computer Projects -- PART X: LATTICE FILTERS -- 40. Three Basic Estimation Problems -- 40.1 Motivation for Lattice Filters -- 40.2 Joint Process Estimation -- 40.3 Backward Estimation Problem -- 40.4 Forward Estimation Problem -- 40.5 Time and Order-Update Relations -- 41. Lattice Filter Algorithms -- 41.1 Significance of Data Structure -- 41.2 A Posteriori-Based Lattice Filter -- 41.3 A Priori-Based Lattice Filter -- 42. Error-Feedback Lattice Filters -- 42.1 A Priori Error-Feedback Lattice Filter -- 42.2 A Posteriori Error-Feedback Lattice Filter -- 42.3 Normalized Lattice Filter -- 43. Array Lattice Filters -- 43.1 Order-Update of Output Estimation Errors -- 43.2 Order-Update of Backward Estimation Errors -- 43.3 Order-Update of Forward Estimation Errors -- 43.4 Significance of Data Structure -- Summary and Notes -- Problems and Computer Projects -- PART XI: ROBUST FILTERS -- 44. Indefinite Least-Squares -- 44.1 Indefinite Least-Squares -- 44.2 Recursive Minimization Algorithm -- 44.3 Time-Update of the Minimum Cost -- 44.4 Singular Weighting Matrices -- 44.A Stationary Points -- 44.B Inertia Conditions -- 45. Robust Adaptive Filters -- 45.1 A Posteriori-Based Robust Filters -- 45.2 ε-NLMS Algorithm -- 45.3 A Priori-Based Robust Filters -- 45.4 LMS Algorithm -- 45.A H∞ Filters -- 46. Robustness Properties -- 46.1 Robustness of LMS -- 46.2 Robustness of ε-NLMS. | |
505 | 8 | _a46.3 Robustness of RLS -- Summary and Notes -- Problems and Computer Projects -- REFERENCES AND INDICES -- References -- Author Index -- Subject Index. | |
506 | 1 | _aRestricted to subscribers or individual electronic text purchasers. | |
520 | _aAdaptive filtering is a topic of immense practical and theoretical value, with applications in areas ranging from digital and wireless communications to biomedical systems. This book gives readers a gradual and solid introduction to the subject, its applications to a variety of topical problems, its existing limitations, and extensions of current theories. The book consists of eleven parts; each part contains a series of focused lectures and ends with bibliographic comments, problems, and computer projects with MATLAB solutions. | ||
530 | _aAlso available in print. | ||
538 | _aMode of access: World Wide Web | ||
588 | _aDescription based on PDF viewed 12/21/2015. | ||
650 | 0 | _aAdaptive filters. | |
655 | 0 | _aElectronic books. | |
695 | _aAcoustics | ||
695 | _aAdaptation model | ||
695 | _aAdaptive algorithms | ||
695 | _aAdaptive equalizers | ||
695 | _aAdaptive filters | ||
695 | _aAdditive noise | ||
695 | _aAdditives | ||
695 | _aAlgorithm design and analysis | ||
695 | _aAntenna measurements | ||
695 | _aApproximation algorithms | ||
695 | _aApproximation methods | ||
695 | _aArrays | ||
695 | _aArtificial intelligence | ||
695 | _aArtificial neural networks | ||
695 | _aBayesian methods | ||
695 | _aBibliographies | ||
695 | _aBinary phase shift keying | ||
695 | _aBlind equalizers | ||
695 | _aBooks | ||
695 | _aChannel estimation | ||
695 | _aChebyshev approximation | ||
695 | _aComputational complexity | ||
695 | _aComputational efficiency | ||
695 | _aComputer aided software engineering | ||
695 | _aComputers | ||
695 | _aContext | ||
695 | _aConvergence | ||
695 | _aCorrelation | ||
695 | _aCost function | ||
695 | _aCovariance matrix | ||
695 | _aData models | ||
695 | _aData structures | ||
695 | _aDecision feedback equalizers | ||
695 | _aDelay | ||
695 | _aDigital filters | ||
695 | _aDiscrete Fourier transforms | ||
695 | _aDiscrete cosine transforms | ||
695 | _aEcho cancellers | ||
695 | _aEigenvalues and eigenfunctions | ||
695 | _aEnergy conservation | ||
695 | _aEqualizers | ||
695 | _aEquations | ||
695 | _aEstimation | ||
695 | _aEstimation error | ||
695 | _aEstimation theory | ||
695 | _aExtraterrestrial measurements | ||
695 | _aFace | ||
695 | _aFilter bank | ||
695 | _aFiltering algorithms | ||
695 | _aFiltering theory | ||
695 | _aFinite impulse response filter | ||
695 | _aFrequency domain analysis | ||
695 | _aGaussian distribution | ||
695 | _aGaussian noise | ||
695 | _aGaussian processes | ||
695 | _aGeometry | ||
695 | _aHistograms | ||
695 | _aHistory | ||
695 | _aIndexes | ||
695 | _aInformation filters | ||
695 | _aInspection | ||
695 | _aJoints | ||
695 | _aKalman filters | ||
695 | _aLattices | ||
695 | _aLead | ||
695 | _aLeast squares approximation | ||
695 | _aLinear algebra | ||
695 | _aLinear matrix inequalities | ||
695 | _aLinear systems | ||
695 | _aLinearity | ||
695 | _aManifolds | ||
695 | _aMaterials | ||
695 | _aMathematical model | ||
695 | _aMatrices | ||
695 | _aMatrix decomposition | ||
695 | _aMaximum likelihood estimation | ||
695 | _aMeasurement uncertainty | ||
695 | _aMedical services | ||
695 | _aNoise | ||
695 | _aNoise measurement | ||
695 | _aOFDM | ||
695 | _aOptimization | ||
695 | _aOscillators | ||
695 | _aPerformance analysis | ||
695 | _aPoles and zeros | ||
695 | _aPolynomials | ||
695 | _aPrediction algorithms | ||
695 | _aPresses | ||
695 | _aProbability density function | ||
695 | _aProjection algorithms | ||
695 | _aQualifications | ||
695 | _aRandom processes | ||
695 | _aRandom variables | ||
695 | _aReceivers | ||
695 | _aReflection | ||
695 | _aReliability theory | ||
695 | _aRobustness | ||
695 | _aSections | ||
695 | _aSignal to noise ratio | ||
695 | _aSilicon compounds | ||
695 | _aSimulation | ||
695 | _aSpeech | ||
695 | _aSpeech processing | ||
695 | _aStability analysis | ||
695 | _aStacking | ||
695 | _aSteady-state | ||
695 | _aStochastic processes | ||
695 | _aSymmetric matrices | ||
695 | _aTechnological innovation | ||
695 | _aTerminology | ||
695 | _aTime measurement | ||
695 | _aTin | ||
695 | _aTransfer functions | ||
695 | _aTransforms | ||
695 | _aTransient analysis | ||
695 | _aTransversal filters | ||
695 | _aVectors | ||
695 | _aWeight measurement | ||
695 | _aWriting | ||
710 | 2 | _aJohn Wiley & Sons, _epublisher. | |
710 | 2 | _aIEEE Xplore (Online service), _edistributor. | |
776 | 0 | 8 | _iPrint version: _z9780470253885
856 | 4 | 2 | _3Abstract with links to resource _uhttp://ieeexplore.ieee.org/xpl/bkabstractplus.jsp?bkn=5237520
942 | _cEBK | ||
999 | _c59352 _d59352 | ||