Knowledge Discovery with Support Vector Machines

Specifications
Hardcover, 262 pp. | English
John Wiley & Sons | 2009
ISBN13: 9780470371923
€ 161.77
Delivery time: approximately 8 working days

Summary

An easy-to-follow introduction to support vector machines

This book provides an in-depth, easy-to-follow introduction to support vector machines, drawing only on minimal, carefully motivated technical and mathematical background material. It begins with a cohesive discussion of machine learning and goes on to cover:

Knowledge discovery environments

Describing data mathematically

Linear decision surfaces and functions

Perceptron learning

Maximum margin classifiers

Support vector machines

Elements of statistical learning theory

Multi-class classification

Regression with support vector machines

Novelty detection

Complemented with hands-on exercises, algorithm descriptions, and data sets, Knowledge Discovery with Support Vector Machines is an invaluable textbook for advanced undergraduate and graduate courses. It is also an excellent tutorial on support vector machines for professionals pursuing research in machine learning and related areas.


Table of Contents

Preface.
PART I.
1 What is Knowledge Discovery?
1.1 Machine Learning.
1.2 The Structure of the Universe X.
1.3 Inductive Learning.
1.4 Model Representations.
Exercises.
Bibliographic Notes.
2 Knowledge Discovery Environments.
2.1 Computational Aspects of Knowledge Discovery.
2.1.1 Data Access.
2.1.2 Visualization.
2.1.3 Data Manipulation.
2.1.4 Model Building and Evaluation.
2.1.5 Model Deployment.
2.2 Other Toolsets.
Exercises.
Bibliographic Notes.
3 Describing Data Mathematically.
3.1 From Data Sets to Vector Spaces.
3.1.1 Vectors.
3.1.2 Vector Spaces.
3.2 The Dot Product as a Similarity Score.
3.3 Lines, Planes, and Hyperplanes.
Exercises.
Bibliographic Notes.
4 Linear Decision Surfaces and Functions.
4.1 From Data Sets to Decision Functions.
4.1.1 Linear Decision Surfaces through the Origin.
4.1.2 Decision Surfaces with an Offset Term.
4.2 A Simple Learning Algorithm.
4.3 Discussion.
Exercises.
Bibliographic Notes.
5 Perceptron Learning.
5.1 Perceptron Architecture and Training.
5.2 Duality.
5.3 Discussion.
Exercises.
Bibliographic Notes.
6 Maximum Margin Classifiers.
6.1 Optimization Problems.
6.2 Maximum Margins.
6.3 Optimizing the Margin.
6.4 Quadratic Programming.
6.5 Discussion.
Exercises.
Bibliographic Notes.
PART II.
7 Support Vector Machines.
7.1 The Lagrangian Dual.
7.2 Dual Maximum Margin Optimization.
7.2.1 The Dual Decision Function.
7.3 Linear Support Vector Machines.
7.4 Non-Linear Support Vector Machines.
7.4.1 The Kernel Trick.
7.4.2 Feature Search.
7.4.3 A Closer Look at Kernels.
7.5 Soft-Margin Classifiers.
7.5.1 The Dual Setting for Soft-Margin Classifiers.
7.6 Tool Support.
7.6.1 WEKA.
7.6.2 R.
7.7 Discussion.
Exercises.
Bibliographic Notes.
8 Implementation.
8.1 Gradient Ascent.
8.1.1 The Kernel-Adatron Algorithm.
8.2 Quadratic Programming.
8.2.1 Chunking.
8.3 Sequential Minimal Optimization.
8.4 Discussion.
Exercises.
Bibliographic Notes.
9 Evaluating What Has Been Learned.
9.1 Performance Metrics.
9.1.1 The Confusion Matrix.
9.2 Model Evaluation.
9.2.1 The Hold-Out Method.
9.2.2 The Leave-One-Out Method.
9.2.3 N-Fold Cross-Validation.
9.3 Error Confidence Intervals.
9.3.1 Model Comparisons.
9.4 Model Evaluation in Practice.
9.4.1 WEKA.
9.4.2 R.
Exercises.
Bibliographic Notes.
10 Elements of Statistical Learning Theory.
10.1 The VC-Dimension and Model Complexity.
10.2 A Theoretical Setting for Machine Learning.
10.3 Empirical Risk Minimization.
10.4 VC-Confidence.
10.5 Structural Risk Minimization.
10.6 Discussion.
Exercises.
Bibliographic Notes.
PART III.
11 Multi-Class Classification.
11.1 One-versus-the-Rest Classification.
11.2 Pairwise Classification.
11.3 Discussion.
Exercises.
Bibliographic Notes.
12 Regression with Support Vector Machines.
12.1 Regression as Machine Learning.
12.2 Simple and Multiple Linear Regression.
12.3 Regression with Maximum Margin Machines.
12.4 Regression with Support Vector Machines.
12.5 Model Evaluation.
12.6 Tool Support.
12.6.1 WEKA.
12.6.2 R.
Exercises.
Bibliographic Notes.
13 Novelty Detection.
13.1 Maximum Margin Machines.
13.2 The Dual Setting.
13.3 Novelty Detection in R.
Exercises.
Bibliographic Notes.
Appendix A: Notation.
Appendix B: A Tutorial Introduction to R.
B.1 Programming Constructs.
B.2 Data Constructs.
B.3 Basic Data Analysis.
Bibliographic Notes.
References.
Index.
