
I'm using an iterative process to compare machine-learning models.

I was able to find the k-nearest-neighbors model in parsnip, but can I adapt it to k-means to build something similar to the code below? Do you have any information that might be useful?

To make the code easier to understand, I would like to tune the number of clusters K using tune().

library(tidyverse)
library(broom)  # for augment()

# Simulate three clusters of points in two dimensions
poi <- tibble(
  cluster = factor(1:3),
  num_points = c(100, 150, 50),
  x1 = c(5, 0, -3),
  x2 = c(-1, 1, -2)
) %>%
  mutate(
    x1 = map2(num_points, x1, rnorm),
    x2 = map2(num_points, x2, rnorm)
  ) %>%
  unnest(cols = c(x1, x2)) %>%
  select(-cluster, -num_points)

# Fit k-means for k = 1..9 and attach cluster assignments to the data
res <-
  tibble(k = 1:9) %>%
  mutate(
    kclust = map(k, ~ kmeans(poi, .x)),
    augmented = map(kclust, augment, poi)
  )
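
For reference, each row of res pairs a fitted kmeans object with the augmented data, so the assignments for a given k can be pulled out directly, e.g.:

# Cluster assignments for k = 3 (third row of res)
res$augmented[[3]]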
    k-means is an unsupervised, clustering algorithm, so we don't have support for it in parsnip, which focuses on supervised models. It's also not typically used in preprocessing, so it's not in recipes (where we have some unsupervised modeling approaches). We don't currently have support to `tune()` k, but instead [advise this approach](https://www.tidymodels.org/learn/statistics/k-means/). – Julia Silge Oct 20 '20 at 20:07
  • So parsnip was not intended for unsupervised learning. I'm glad to know. I am always referring to your blog. I'll be supporting you from Japan. Thanks for the answer. – h-y-jp Oct 20 '20 at 22:40
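
Following the comment above, here is a minimal sketch of the approach advised in the linked tidymodels article rather than a tune() workflow: summarise each fit with broom::glance() and plot the total within-cluster sum of squares against k, choosing the number of clusters by the elbow method. It assumes the res tibble built in the question.

library(tidyverse)
library(broom)

# One row per k, with model-level summaries (tot.withinss, betweenss, ...)
clusterings <-
  res %>%
  mutate(glanced = map(kclust, glance)) %>%
  unnest(cols = c(glanced))

# Total within-cluster sum of squares vs. k: look for the "elbow"
ggplot(clusterings, aes(k, tot.withinss)) +
  geom_line() +
  geom_point()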

0 Answers