Table of contents

1- The ingredients of machine learning.

  • Tasks: the problems that can be solved with machine learning.
  • Looking for structure.
  • Models: the output of machine learning.
  • Geometric models.
  • Probabilistic models.
  • Logical models.
  • Grouping and grading.
  • Features: the workhorses of machine learning.
  • Many uses of features.
  • Feature construction and transformation.

2- Binary classification and related tasks.

  • Classification.
  • Assessing classification performance.
  • Visualising classification performance.
  • Scoring and ranking.
  • Assessing and visualising ranking performance.
  • Tuning rankers.
  • Class probability estimation.
  • Assessing class probability estimates.

3- Beyond binary classification.

  • Handling more than two classes.
  • Multi-class classification.
  • Multi-class scores and probabilities.
  • Regression.
  • Unsupervised and descriptive learning.
  • Predictive and descriptive clustering.
  • Other descriptive models.

4- Tree models.

  • Decision trees.
  • Ranking and probability estimation trees.
  • Sensitivity to skewed class distributions.
  • Tree learning as variance reduction.
  • Regression trees.
  • Clustering trees.

5- Rule models.

  • Learning ordered rule lists.
  • Rule lists for ranking and probability estimation.
  • Learning unordered rule sets.
  • Rule sets for ranking and probability estimation.
  • Descriptive rule learning.
  • Rule learning for subgroup discovery.
  • Association rule mining.

6- Linear models.

  • The least-squares method.
  • Multivariate linear regression.
  • The perceptron: a heuristic learning algorithm for linear classifiers.
  • Support vector machines.
  • Soft margin SVM.
  • Obtaining probabilities from linear classifiers.