6  Distance-based Algorithms

In this chapter, we introduce support-vector machines (SVMs) and other distance-based methods.

Hint: Distance-based models need scaling!

6.1 K-Nearest-Neighbor

K-nearest-neighbor (kNN) is a simple algorithm that stores all available cases and classifies new data based on a similarity (distance) measure: a data point is assigned the class that is most common among its \(k\) nearest neighbors.

Let us first see an example:

x = scale(iris[,1:4])
y = iris[,5]
plot(x[-100,1], x[-100, 3], col = y)
points(x[100,1], x[100, 3], col = "blue", pch = 18, cex = 1.3)

Which class would you assign to the blue point? What are the classes of its nearest neighbors? This is exactly the procedure of the k-nearest-neighbors classifier: the prediction is a majority vote among the \(k\) closest training points, so there is actually no “real” learning in a k-nearest-neighbors classification.
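To make this explicit, here is a minimal sketch (the choice of \(k = 5\) is arbitrary) of what the classifier does for the blue point: compute the distances to all other points and take a majority vote among the \(k\) nearest ones.

k = 5
distances = sqrt(rowSums(sweep(x[-100, ], 2, x[100, ])^2))  # Euclidean distances to the blue point
neighbors = order(distances)[1:k]                           # indices of the k closest points
table(y[-100][neighbors])                                   # majority vote decides the class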

For applying a k-nearest-neighbors classification, we first have to scale the data set, because we deal with distances and want all predictors to have the same influence. Imagine one variable has values from -10,000 to 10,000 and another from -1 to 1. Then the first variable influences the distances to other points much more strongly than the second. For the iris data set, we have to split the data into training and test sets ourselves. Then we follow the usual pipeline.
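As a quick constructed illustration of this effect (toy numbers, not from the iris data):

# Two variables on very different scales: the distances are dominated by the first one.
df = data.frame(big = c(5000, -3000, 200), small = c(0.2, 0.9, -0.5))
dist(df)          # distances essentially reflect only the "big" variable
dist(scale(df))   # after scaling, both variables contribute comparably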

data = iris
data[,1:4] = apply(data[,1:4],2, scale)
indices = sample.int(nrow(data), 0.7*nrow(data))
train = data[indices,]
test = data[-indices,]

Fit model and create predictions:

library(kknn)
set.seed(123)

knn = kknn(Species~., train = train, test = test)
summary(knn)

Call:
kknn(formula = Species ~ ., train = train, test = test)

Response: "nominal"
          fit prob.setosa prob.versicolor prob.virginica
1      setosa           1       0.0000000      0.0000000
2      setosa           1       0.0000000      0.0000000
3      setosa           1       0.0000000      0.0000000
4      setosa           1       0.0000000      0.0000000
5      setosa           1       0.0000000      0.0000000
6      setosa           1       0.0000000      0.0000000
7      setosa           1       0.0000000      0.0000000
8      setosa           1       0.0000000      0.0000000
9      setosa           1       0.0000000      0.0000000
10     setosa           1       0.0000000      0.0000000
11     setosa           1       0.0000000      0.0000000
12     setosa           1       0.0000000      0.0000000
13     setosa           1       0.0000000      0.0000000
14     setosa           1       0.0000000      0.0000000
15 versicolor           0       0.9843084      0.0156916
16 versicolor           0       1.0000000      0.0000000
17 versicolor           0       1.0000000      0.0000000
18 versicolor           0       1.0000000      0.0000000
19 versicolor           0       1.0000000      0.0000000
20  virginica           0       0.4482986      0.5517014
21 versicolor           0       1.0000000      0.0000000
22 versicolor           0       1.0000000      0.0000000
23 versicolor           0       1.0000000      0.0000000
24 versicolor           0       1.0000000      0.0000000
25  virginica           0       0.0000000      1.0000000
26 versicolor           0       0.7783044      0.2216956
27  virginica           0       0.1339415      0.8660585
28  virginica           0       0.1008186      0.8991814
29  virginica           0       0.0156916      0.9843084
30  virginica           0       0.0000000      1.0000000
31  virginica           0       0.0000000      1.0000000
32  virginica           0       0.1257844      0.8742156
33 versicolor           0       0.8742156      0.1257844
34  virginica           0       0.0000000      1.0000000
35  virginica           0       0.3569042      0.6430958
36  virginica           0       0.4420312      0.5579688
37  virginica           0       0.0000000      1.0000000
38 versicolor           0       0.6931774      0.3068226
39  virginica           0       0.0000000      1.0000000
40  virginica           0       0.1885727      0.8114273
41  virginica           0       0.0000000      1.0000000
42  virginica           0       0.0000000      1.0000000
43  virginica           0       0.4934627      0.5065373
44  virginica           0       0.0000000      1.0000000
45  virginica           0       0.2373872      0.7626128
table(test$Species, fitted(knn))
            
             setosa versicolor virginica
  setosa         14          0         0
  versicolor      0          9         1
  virginica       0          3        18
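The diagonal of the confusion matrix contains the correctly classified test observations, so the accuracy can, for example, be computed like this:

cm = table(test$Species, fitted(knn))
sum(diag(cm)) / sum(cm)   # proportion of correctly classified test observations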

6.2 Support Vector Machines (SVMs)

Support-vector machines take a different approach. They try to divide the predictor space into sectors, one for each class. To do so, a support-vector machine fits the parameters of a hyperplane (an \(n-1\)-dimensional subspace of an \(n\)-dimensional space) in the predictor space by maximizing the distance (the margin) between the hyperplane and the nearest points of each class.
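For the linearly separable (hard-margin) case, this idea can be written as the following standard optimization problem, with hyperplane parameters \(\mathbf{w}\) and \(b\) and class labels \(y_i \in \{-1, 1\}\) (a textbook formulation, not tied to any particular implementation):

\[
\min_{\mathbf{w},\, b} \; \frac{1}{2}\lVert \mathbf{w} \rVert^2
\quad \text{subject to} \quad
y_i\left(\mathbf{w}^\top \mathbf{x}_i + b\right) \ge 1 \;\; \text{for all } i
\]

Because the margin width is \(2 / \lVert \mathbf{w} \rVert\), minimizing \(\lVert \mathbf{w} \rVert\) maximizes the distance between the hyperplane and the nearest points of each class (the support vectors).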

Fitting a support-vector machine:

library(e1071)

data = iris
data[,1:4] = apply(data[,1:4], 2, scale)
indices = sample.int(nrow(data), 0.7*nrow(data))
train = data[indices,]
test = data[-indices,]

sm = svm(Species~., data = train, kernel = "linear")
pred = predict(sm, newdata = test)
oldpar = par(mfrow = c(1, 2))
plot(test$Sepal.Length, test$Petal.Length,
     col =  pred, main = "predicted")
plot(test$Sepal.Length, test$Petal.Length,
     col =  test$Species, main = "observed")

par(oldpar)

mean(pred == test$Species) # Accuracy.
[1] 0.9777778

A support-vector machine with a linear kernel can only separate classes that are linearly separable. (A problem is called linearly separable if there exists at least one hyperplane with all points of one class on one side and all points of the other classes on the other side.)

If this is not possible, however, we can use the so-called kernel trick, which maps the predictor space into a (higher-dimensional) space in which the problem is linearly separable. After having identified the boundaries in the higher-dimensional space, we can project them back into the original dimensions.
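A small constructed sketch of this idea (not part of the example below): points forming a ring around the origin are not linearly separable in two dimensions, but they become linearly separable after adding the squared radius as an additional feature.

set.seed(1)
x1 = runif(200, -1, 1)
x2 = runif(200, -1, 1)
ring = factor(ifelse(x1^2 + x2^2 > 0.5, "outer", "inner"))  # not linearly separable in (x1, x2)

d2 = data.frame(x1, x2, ring)                      # original two dimensions
d3 = data.frame(x1, x2, r2 = x1^2 + x2^2, ring)    # with the squared radius as extra feature
sv2 = e1071::svm(ring~., data = d2, kernel = "linear")
sv3 = e1071::svm(ring~., data = d3, kernel = "linear")
mean(predict(sv2, newdata = d2) == ring)   # expected to be poor
mean(predict(sv3, newdata = d3) == ring)   # expected to be (nearly) perfect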

x1 = seq(-3, 3, length.out = 100)
x2 = seq(-3, 3, length.out = 100)
X = expand.grid(x1, x2)
y = apply(X, 1, function(t) exp(-t[1]^2 - t[2]^2))
y = ifelse(1/(1+exp(-y)) < 0.62, 0, 1)

image(matrix(y, 100, 100))
animation::saveGIF(
  {
    for(i in c("truth", "linear", "radial", "sigmoid")){
      if(i == "truth"){
        image(matrix(y, 100,100),
        main = "Ground truth", axes = FALSE, las = 2)
      }else{
        sv = e1071::svm(x = X, y = factor(y), kernel = i)
        image(matrix(as.numeric(as.character(predict(sv, X))), 100, 100),
        main = paste0("Kernel: ", i), axes = FALSE, las = 2)
        axis(1, at = seq(0,1, length.out = 10),
        labels = round(seq(-3, 3, length.out = 10), 1))
        axis(2, at = seq(0,1, length.out = 10),
        labels = round(seq(-3, 3, length.out = 10), 1), las = 2)
      }
    }
  },
  movie.name = "svm.gif", autobrowse = FALSE, interval = 2
)

As you have seen, this does not work equally well with every kernel. Hence, the problem is to find a suitable kernel, which is again an optimization (hyperparameter tuning) problem and is typically approached by comparing kernels, for example via cross-validation.
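A minimal sketch of such a comparison, using the built-in cross argument of e1071::svm() on the simulated data from above (the kernel names and the number of folds are chosen here purely for illustration, and the call may take a moment on the 10,000 simulated points):

# Compare kernels by 5-fold cross-validated accuracy (reported in percent).
sapply(c("linear", "polynomial", "radial", "sigmoid"), function(kern) {
  e1071::svm(x = X, y = factor(y), kernel = kern, cross = 5)$tot.accuracy
})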

6.3 Exercises

Question: Hyperparameter tuning of kNN

Coming back to the titanic dataset from the morning, we want to optimize the number of neighbors (k) and the kernel of the kNN:

Prepare the data:

library(EcoData)
library(dplyr)

library(missRanger)
data(titanic_ml)
data = titanic_ml
data = 
  data %>% select(survived, sex, age, fare, pclass)
data[,-1] = missRanger(data[,-1], verbose = 0)

data_sub =
  data %>%
    mutate(age = scales::rescale(age, c(0, 1)),
           fare = scales::rescale(fare, c(0, 1))) %>%
    mutate(sex = as.integer(sex) - 1L,
           pclass = as.integer(pclass - 1L))
data_new = data_sub[is.na(data_sub$survived),] # for which we want to make predictions at the end
data_obs = data_sub[!is.na(data_sub$survived),] # data with known response

Hints:

  • check the help of the kNN function to get an idea about the hyperparameters
library(kknn)
set.seed(42)
data_obs = data_sub[!is.na(data_sub$survived),] 
cv = 3

outer_split = as.integer(cut(1:nrow(data_obs), breaks = cv))

hyper_k = ... # must be integer vector
hyper_kernel = ... # must be character vector

results = data.frame(
  set = rep(NA, cv),
  k = rep(NA, cv),
  kernel = rep(NA, cv),
  AUC = rep(NA, cv)
)

for(i in 1:cv) {
  train_outer = data_obs[outer_split != i, ]
  test_outer = data_obs[outer_split == i, ]
  
  tuning_results = 
      sapply(1:length(hyper_k), function(k) {
        predictions = kknn(as.factor(survived)~., train = train_outer, test = test_outer, k = hyper_k[k], scale = FALSE, kernel = hyper_kernel[k])
        return(Metrics::auc(test_outer$survived, predictions$prob[,2]))
      })
  
  results[i, 1] = i
  results[i, 2] = hyper_k[which.max(tuning_results)]
  results[i, 3] = hyper_kernel[which.max(tuning_results)]  
  results[i, 4] = max(tuning_results)
}

print(results)
library(kknn)
set.seed(42)
data_obs = data_sub[!is.na(data_sub$survived),] 
cv = 3

outer_split = as.integer(cut(1:nrow(data_obs), breaks = cv))

# sample k values (must be integers) and kernels
hyper_k = sample(10, 10)
hyper_kernel = sample(c("triangular", "inv", "gaussian", "rank"), 10, replace = TRUE)

results = data.frame(
  set = rep(NA, cv),
  k = rep(NA, cv),
  kernel = rep(NA, cv),
  AUC = rep(NA, cv)
)

for(i in 1:cv) {
  train_outer = data_obs[outer_split != i, ]
  test_outer = data_obs[outer_split == i, ]
  
  tuning_results = 
      sapply(1:length(hyper_k), function(k) {
        predictions = kknn(as.factor(survived)~., train = train_outer, test = test_outer, k = hyper_k[k], scale = FALSE, kernel = hyper_kernel[k])
        return(Metrics::auc(test_outer$survived, predictions$prob[,2]))
      })
  
  results[i, 1] = i
  results[i, 2] = hyper_k[which.max(tuning_results)]
  results[i, 3] = hyper_kernel[which.max(tuning_results)]  
  results[i, 4] = max(tuning_results)
}

print(results)
  set  k   kernel       AUC
1   1 10 gaussian 0.8149876
2   2  9     rank 0.8080321
3   3  6      inv 0.8290219

Make predictions:

prediction_ensemble = 
  sapply(1:nrow(results), function(i) {
    predictions = kknn(as.factor(survived)~., train = data_obs, test = data_new, k = results$k[i], scale = FALSE, kernel = results$kernel[i])
    return(predictions$prob[,2])
  })

# Single predictions from the ensemble model:
write.csv(data.frame(y = apply(prediction_ensemble, 1, mean)), file = "Max_titanic_ensemble.csv")
Question: kNN and SVM

Fit a standard k-nearest-neighbor classifier and a support-vector machine with a linear kernel (check the help) on the Sonar dataset, and report which fitted better.

Prepare dataset:

library(mlbench)
set.seed(123)

data(Sonar)
data = Sonar
#str(data)

# Do not forget scaling! This may be done implicitly by most functions.
# Here, it's done explicitly for teaching purposes.
data = cbind.data.frame(
  scale(data[,-length(data)]),
  "class" = data[,length(data)]
)

n = length(data[,1])
indicesTrain = sample.int(n, (n+1) %/% 2) # Take (at least) 50 % of the data.

train = data[indicesTrain,]
test = data[-indicesTrain,]

Tasks:

  • Fit a svm (from the e1071 package) on the train dataset and make predictions for the test dataset
  • Fit a kNN (from the kknn package) on the train dataset and make predictions for the test dataset
  • Calculate confusion matrices to compare the performance
library(e1071)
library(kknn)

knn = kknn(class~., train = train, test = test, scale = FALSE,
           kernel = "rectangular")
predKNN = predict(knn, newdata = test)

sm = svm(class~., data = train, scale = FALSE, kernel = "linear")
predSVM = predict(sm, newdata = test)
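The confusion matrices and counts reported below can be produced, for example, like this:

labelsTest = test$class
cmKNN = table(predKNN, labelsTest)   # confusion matrix for kNN
cmSVM = table(predSVM, labelsTest)   # confusion matrix for the SVM
cat("Correctly classified: ", sum(diag(cmKNN)), " / ", length(labelsTest), "\n")
cat("Correctly classified: ", sum(diag(cmSVM)), " / ", length(labelsTest), "\n")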
K-nearest-neighbor, standard (rectangular) kernel:
       labelsTest
predKNN  M  R
      M 46 29
      R  8 21
Correctly classified:  67  /  104
Support-vector machine, linear kernel:
       labelsTest
predSVM  M  R
      M 41 15
      R 13 35
Correctly classified:  76  /  104

The support-vector machine fitted better (76 vs. 67 correctly classified out of 104).