6 Distance-based Algorithms

In this chapter, we introduce support-vector machines (SVMs) and other distance-based methods.

Hint: Distance-based models need scaling!

6.1 K-Nearest-Neighbor

K-nearest-neighbor (kNN) is a simple algorithm that stores all available cases and classifies new data based on a similarity measure. It is mostly used to classify a data point based on how its \(k\) nearest neighbors are classified.

Let us first see an example:

x = scale(iris[,1:4])
y = iris[,5]
plot(x[-100,1], x[-100, 3], col = y)
points(x[100,1], x[100, 3], col = "blue", pch = 18, cex = 1.3)

Which class would you decide for the blue point? What are the classes of the nearest points? This is exactly the procedure used by the k-nearest-neighbors classifier, and thus there is actually no "real" learning in a k-nearest-neighbors classification.
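To make the procedure concrete, here is a minimal sketch of what kNN does "by hand" for the blue point (row 100), assuming k = 5 and Euclidean distance on the scaled predictors (both the value of k and the distance measure are our own choices for illustration):

k = 5
dists = sqrt(rowSums(sweep(x[-100, ], 2, x[100, ])^2))  # Euclidean distance to all other points
neighbors = order(dists)[1:k]                           # indices of the k closest points
table(y[-100][neighbors])                               # majority vote over their classes

The class receiving the most votes among the k neighbors is the prediction for the blue point.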
To apply a k-nearest-neighbors classification, we first have to scale the data set, because we deal with distances and want all predictors to have the same influence. Imagine one variable has values from -10,000 to 10,000 and another from -1 to 1. Then the influence of the first variable on the distance to the other points is much stronger than the influence of the second variable (a small numerical illustration follows below). On the iris data set, we also have to split the data into training and test sets ourselves. Then we will follow the usual pipeline.
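A tiny numerical illustration with made-up values (the numbers are assumptions, chosen only to mirror the ranges mentioned above):

a = c(8000, 0.3)                       # variable 1 ranges over +-10,000, variable 2 over +-1
b = c(-4000, -0.8)
sqrt(sum((a - b)^2))                   # ~12,000: dominated entirely by variable 1
sqrt(sum(((a - b) / c(10000, 1))^2))   # ~1.6: after rescaling, both variables contribute

Now back to the iris pipeline: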
data = iris
data[,1:4] = apply(data[,1:4], 2, scale)
indices = sample.int(nrow(data), 0.7*nrow(data))
train = data[indices,]
test = data[-indices,]
Fit model and create predictions:
library(kknn)
set.seed(123)
knn = kknn(Species~., train = train, test = test)
summary(knn)
Call:
kknn(formula = Species ~ ., train = train, test = test)
Response: "nominal"
fit prob.setosa prob.versicolor prob.virginica
1 setosa 1 0.0000000 0.0000000
2 setosa 1 0.0000000 0.0000000
3 setosa 1 0.0000000 0.0000000
4 setosa 1 0.0000000 0.0000000
5 setosa 1 0.0000000 0.0000000
6 setosa 1 0.0000000 0.0000000
7 setosa 1 0.0000000 0.0000000
8 setosa 1 0.0000000 0.0000000
9 setosa 1 0.0000000 0.0000000
10 setosa 1 0.0000000 0.0000000
11 setosa 1 0.0000000 0.0000000
12 setosa 1 0.0000000 0.0000000
13 setosa 1 0.0000000 0.0000000
14 setosa 1 0.0000000 0.0000000
15 versicolor 0 0.9843084 0.0156916
16 versicolor 0 1.0000000 0.0000000
17 versicolor 0 1.0000000 0.0000000
18 versicolor 0 1.0000000 0.0000000
19 versicolor 0 1.0000000 0.0000000
20 virginica 0 0.4482986 0.5517014
21 versicolor 0 1.0000000 0.0000000
22 versicolor 0 1.0000000 0.0000000
23 versicolor 0 1.0000000 0.0000000
24 versicolor 0 1.0000000 0.0000000
25 virginica 0 0.0000000 1.0000000
26 versicolor 0 0.7783044 0.2216956
27 virginica 0 0.1339415 0.8660585
28 virginica 0 0.1008186 0.8991814
29 virginica 0 0.0156916 0.9843084
30 virginica 0 0.0000000 1.0000000
31 virginica 0 0.0000000 1.0000000
32 virginica 0 0.1257844 0.8742156
33 versicolor 0 0.8742156 0.1257844
34 virginica 0 0.0000000 1.0000000
35 virginica 0 0.3569042 0.6430958
36 virginica 0 0.4420312 0.5579688
37 virginica 0 0.0000000 1.0000000
38 versicolor 0 0.6931774 0.3068226
39 virginica 0 0.0000000 1.0000000
40 virginica 0 0.1885727 0.8114273
41 virginica 0 0.0000000 1.0000000
42 virginica 0 0.0000000 1.0000000
43 virginica 0 0.4934627 0.5065373
44 virginica 0 0.0000000 1.0000000
45 virginica 0 0.2373872 0.7626128
table(test$Species, fitted(knn))
setosa versicolor virginica
setosa 14 0 0
versicolor 0 9 1
virginica 0 3 18
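From the diagonal of this confusion matrix ((14 + 9 + 18) / 45 correct), we can also compute the accuracy directly. Note that kknn() used its default number of neighbors here; a different k can be passed explicitly (the k = 5 below is just an arbitrary choice for illustration):

mean(fitted(knn) == test$Species)   # accuracy, here about 0.91

knn5 = kknn(Species~., train = train, test = test, k = 5)
mean(fitted(knn5) == test$Species)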
6.2 Support Vector Machines (SVMs)
Support-vector machines take a different approach. They try to divide the predictor space into sectors, one for each class. To do so, a support-vector machine fits the parameters of a hyperplane (an \((n-1)\)-dimensional subspace of an \(n\)-dimensional space) in the predictor space by maximizing the distance between the hyperplane and the nearest points from each class.
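For completeness, here is a sketch of the textbook hard-margin formulation (not specific to this example): for two classes coded as \(y_i \in \{-1, +1\}\), the maximum-margin hyperplane \(w^T x + b = 0\) is found by solving

\[
\min_{w,\, b} \; \frac{1}{2} \lVert w \rVert^2
\quad \text{subject to} \quad
y_i \left( w^T x_i + b \right) \ge 1 \;\; \text{for all } i,
\]

where the margin, i.e. the distance between the hyperplane and the nearest points from each class (the support vectors), equals \(2 / \lVert w \rVert\).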
Fitting a support-vector machine:
library(e1071)
data = iris
data[,1:4] = apply(data[,1:4], 2, scale)
indices = sample.int(nrow(data), 0.7*nrow(data))
train = data[indices,]
test = data[-indices,]

sm = svm(Species~., data = train, kernel = "linear")
pred = predict(sm, newdata = test)
oldpar = par(mfrow = c(1, 2))
plot(test$Sepal.Length, test$Petal.Length,
col = pred, main = "predicted")
plot(test$Sepal.Length, test$Petal.Length,
col = test$Species, main = "observed")
par(oldpar)
mean(pred == test$Species) # Accuracy.
[1] 0.9777778
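The observations closest to the decision boundaries are the support vectors. The fitted e1071 object stores them, so we can check how many training points the model actually relies on (a quick look, assuming the model sm fitted above):

length(sm$index)   # number of support vectors
head(sm$index)     # their row indices in the training data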
Support-vector machines can only work on linearly separable problems. (A problem is called linearly separable if there exists at least one hyperplane with all points of one class on one side and all points of the other classes on the other side.)
If this is not possible, however, we can use the so-called kernel trick, which maps the predictor space into a (higher-dimensional) space in which the problem is linearly separable. After having identified the decision boundary in the higher-dimensional space, we can project it back into the original dimensions.
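The idea can be illustrated with an explicit feature map (a small sketch with assumed toy data, not part of the original example): points inside and outside a circle cannot be separated by a line in the two original dimensions, but after adding the feature \(x_1^2 + x_2^2\), a single threshold, i.e. a hyperplane in the lifted space, separates them perfectly.

set.seed(42)
pts = matrix(runif(400, -1, 1), ncol = 2)   # 200 random points in two dimensions
inside = rowSums(pts^2) < 0.5               # class: inside vs. outside a circle
z = pts[,1]^2 + pts[,2]^2                   # explicit feature map: z = x1^2 + x2^2
table(inside, z < 0.5)                      # a single threshold on z separates the classes

The following example constructs such a nonlinear, circular problem on a grid and then compares how different kernels recover it: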
x1 = seq(-3, 3, length.out = 100)
x2 = seq(-3, 3, length.out = 100)
X = expand.grid(x1, x2)
y = apply(X, 1, function(t) exp(-t[1]^2 - t[2]^2))
y = ifelse(1/(1+exp(-y)) < 0.62, 0, 1)

image(matrix(y, 100, 100))
animation::saveGIF(
  {
    for(i in c("truth", "linear", "radial", "sigmoid")){
      if(i == "truth"){
        image(matrix(y, 100, 100),
              main = "Ground truth", axes = FALSE, las = 2)
      } else{
        sv = e1071::svm(x = X, y = factor(y), kernel = i)
        image(matrix(as.numeric(as.character(predict(sv, X))), 100, 100),
              main = paste0("Kernel: ", i), axes = FALSE, las = 2)
        axis(1, at = seq(0, 1, length.out = 10),
             labels = round(seq(-3, 3, length.out = 10), 1))
        axis(2, at = seq(0, 1, length.out = 10),
             labels = round(seq(-3, 3, length.out = 10), 1), las = 2)
      }
    }
  }, movie.name = "svm.gif", autobrowse = FALSE, interval = 2
)
As you have seen, this does not work equally well with every kernel. Hence, the remaining problem is to find a suitable kernel, which is again an optimization (model-selection) problem and can therefore only be solved approximately.
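A simple and common way to approximate this choice is to compare candidate kernels on held-out data; a minimal sketch, reusing the iris train/test split from above:

for(k in c("linear", "polynomial", "radial", "sigmoid")){
  fit = e1071::svm(Species~., data = train, kernel = k)
  acc = mean(predict(fit, newdata = test) == test$Species)
  cat("Kernel:", k, " accuracy:", round(acc, 3), "\n")
}

In practice, the kernel would be tuned together with its hyperparameters (e.g. cost and gamma) via cross-validation, for example with e1071::tune().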