# K-Means Clustering: Unsupervised Learning for Recommender Systems

Luciano Strika

*Originally published at datastuff.tech* · 8 min read

Unsupervised Learning has been called the closest thing we have to "actual" Artificial Intelligence, in the sense of General AI, and K-Means Clustering is one of its simplest, yet most powerful, applications.

I am not here to discuss whether those claims are true, as I am neither an expert nor a philosopher. I will say, however, that I am often amazed by how well unsupervised learning techniques, even the most rudimentary, capture patterns in data that I would expect only people to find.

Today we'll apply unsupervised learning to a dataset I gathered myself: a database of professional Magic: The Gathering decks that I crawled from mtgtop8.com, an awesome website if you're into Magic: The Gathering.

I scraped the MtgTop8 data from a few years of tournaments, and it's all available to be consumed in this GitHub repository.

If you're not into the game, or have never even played it, **don't worry**: it won't get in the way too much, as I will just explain the theoretical side of K-means clustering and show you how to apply it using Dask. If you are into the game, then you're going to love the examples.

## K-Means Clustering

The algorithm we will look into today is called "K-means clustering". It provides a way to characterize and categorize data when we don't really know how to separate it beforehand.

### Why do we need Unsupervised Learning?

What do I mean by Unsupervised Learning? Suppose you had a set of pictures of cats and dogs. You could train a supervised Machine Learning model to classify the pictures into either category.

However, imagine you have a big, complex dataset of things you don't know a lot about. For instance, you could have data about the light spectrum produced by different planets, and be looking for a way to group them into categories.

For another example, you could have loads of genetic data from many different organisms, and wish to define which ones belong to the same genus or families in an intuitive way.

Or, in our case, we could have 777 different Magic: The Gathering decks, using over 600 different cards (yeah, the professional meta is not *that* diverse), and want to train a machine learning model so that it understands which cards work well together and which ones donât.

Now imagine you had to do this task without knowing how to play the game. Wouldn't it be nice if someone had invented an algorithm to cluster data points that look similar, without you having to provide a definition of "similar"? That's what clustering, and K-means clustering in particular, is all about.

Now that that's out of the way, I hope you're motivated, because it's time to get our hands dirty with some theory.

### How does K-Means Clustering work?

K-means clustering takes a single hyperparameter: *k*, which specifies how many clusters we want to partition our data into.

The clusters won't necessarily all contain the same number of instances, but each should characterize a specific subset of our data. How will we achieve this? Let's find out!

First of all, the input for this algorithm needs to be a set of vectors. That is, all your features should be numerical and in the same order. If you have any categorical features, my advice is to one-hot encode them: convert each categorical variable into a vector of n elements, one for each possible category, all set to 0 except the one for the given category.
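As a quick sketch, here is what one-hot encoding a single categorical value looks like (the "rarity" feature and its categories are made up purely for illustration; our deck vectors below are numeric already):

```python
# One-hot encode a toy categorical feature. "rarity" is hypothetical,
# just to illustrate the idea.
categories = ["common", "uncommon", "rare", "mythic"]

def one_hot(value, categories):
    """Return a vector with a 1 in the slot for `value` and 0 elsewhere."""
    return [1 if c == value else 0 for c in categories]

print(one_hot("rare", categories))  # [0, 0, 1, 0]
```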

What the algorithm will do is initialize k random "centroids" (points in the space defined by the dimensions of the dataset's elements), and then it will:

- Assign each element to the centroid closest to it.
- Move each centroid to the mean of all the elements assigned to it.
- Repeat steps 1 and 2 until convergence, or until a stopping condition is met (such as completing N iterations for some fixed N).

In the end, each element will have been assigned to one of the k clusters, such that each element lies closest to its own cluster's centroid.
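The loop above can be sketched in a few lines of NumPy. This is a toy implementation, not what Dask does internally; real libraries add smarter initialization schemes such as k-means++:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Toy k-means. X is an (n_samples, n_features) array."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k random elements of the dataset.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Step 1: assign each element to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 2: move each centroid to the mean of its assigned elements
        # (keeping it in place if no element was assigned to it).
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])
        # Stop early once the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```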

### Applications for K-means clustering

Like many other unsupervised learning algorithms, K-means clustering can work wonders if used as a way to generate inputs for a supervised Machine Learning algorithm (for instance, a classifier).

The inputs could be a one-hot encoding of which cluster a given instance falls into, or the k distances to each cluster's centroid.

For this project, however, we'll develop a (somewhat rudimentary) recommender system that, given an instance, returns elements appearing in the same cluster.

## Using Dask's K-means Clustering in Python

Having defined the concepts for this project, let's begin the practical part. The code is available in the Jupyter Notebook in this repository.

### Processing the Data

I stored the decks following this format:

```
N card name
M another card name
```

in 777 different *.txt* files, where each line refers to a card, and the digits before the first space are the number of appearances of that card in the deck.

To transform them into a more manageable format (I'll be using a list of (Int, String) tuples for each deck, one tuple per card), this is what we'll do:
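A minimal sketch of that parsing step, assuming the line format above; the `decks/` directory name is an assumption for illustration:

```python
from pathlib import Path

def parse_deck(text):
    """Turn lines like '4 Cavern of Souls' into [(4, 'Cavern of Souls'), ...]."""
    deck = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        count, name = line.split(" ", 1)
        deck.append((int(count), name))
    return deck

# Hypothetical layout: one .txt file per deck, all in one directory.
deck_dir = Path("decks")
decks = [parse_deck(p.read_text()) for p in sorted(deck_dir.glob("*.txt"))]
```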

This is what a deck looks like now.

```
[(4, 'Ancient Ziggurat'), (4, 'Cavern of Souls'), (4, 'Horizon Canopy'), (1, 'Plains'), (2, 'Seachrome Coast'), (4, 'Unclaimed Territory'), (4, 'Champion of the Parish'), (1, 'Dark Confidant') ...]
```

Each tuple represents a card (yes, those are real card names) and how many times it appears.

Since we want to map each deck to a vector, it seems intuitive to just

- Turn each deck into a list, with one element for each different card in the whole MtgTop8 dataset.
- Set each component to the number of appearances of the corresponding card, with the components for cards that do not appear in the deck set to 0.

To do that, let's first gather all the different cards that appear in any deck.
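With each deck represented as a list of `(count, name)` tuples, the vocabulary is one set comprehension away (toy data here stands in for the real 777 decks):

```python
# Toy stand-in for the real dataset: each deck is a list of (count, name) tuples.
decks = [
    [(4, "Cavern of Souls"), (1, "Plains")],
    [(4, "Lightning Bolt"), (1, "Plains")],
]

# Sorted, so every deck maps card names to vector components in the same order.
all_cards = sorted({name for deck in decks for _count, name in deck})
print(all_cards)  # ['Cavern of Souls', 'Lightning Bolt', 'Plains']
```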

And now letâs use our newfound knowledge about card names to turn all decks into beautiful, consumable vectors.
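One way to sketch that vectorization (again on toy data; the real code runs over all 777 decks):

```python
# Toy stand-in: each deck is a list of (count, name) tuples.
decks = [
    [(4, "Cavern of Souls"), (1, "Plains")],
    [(4, "Lightning Bolt"), (2, "Plains")],
]
all_cards = sorted({name for deck in decks for _count, name in deck})
card_index = {name: i for i, name in enumerate(all_cards)}

def deck_to_vector(deck):
    """One component per known card: its count in this deck, 0 if absent."""
    vector = [0] * len(all_cards)
    for count, name in deck:
        vector[card_index[name]] = count
    return vector

vectors = [deck_to_vector(deck) for deck in decks]
print(vectors)  # [[4, 0, 1], [0, 4, 2]]
```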

Now all our decks can be easily fed into Dask's K-means clustering algorithm, and we can play with the output.

We could have just used "binary" vectors instead, setting each component to 1 if the card appears in the deck and 0 if it doesn't. We could try that later and see whether we get good results that way too.

### Applying K-means Clustering

Now that our data is all neatly mapped into vector space, actually using Dask's K-means clustering is pretty simple.

The most important part is the *n_clusters* argument, which I somewhat arbitrarily set to 8.

In real life, you may want to experiment with different values. For this particular case, I knew MtG has 5 different "colors" of cards, so to prevent the algorithm from just clustering the cards by color (which it didn't do at all anyway), I chose a number bigger than 5.

The algorithm returns the labels as a Dask Array. I may write an article on how to use those later, but right now I didn't want to deal with all that. Also, the MtgTop8 dataset is small enough that it doesn't matter much, so I decided to convert the labels back to a list of integers. Sue me.

### Exploratory Analysis: Let's see what we got

At first I wanted to check whether the results made any sense. This was my first time using K-means clustering on this dataset, and I wanted to make sure it had learned something valuable, so I checked which cards were most frequent in the decks from each cluster. The results were, at least to me, astounding. Here's what I did to check.
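That sanity check can be sketched with one `Counter` per cluster (the decks and labels here are toy stand-ins for the real data):

```python
from collections import Counter

# Toy stand-in: each deck is a list of (count, name) tuples,
# paired with the cluster label k-means assigned to it.
decks = [
    [(4, "Lightning Bolt"), (20, "Mountain")],
    [(3, "Lightning Bolt"), (18, "Mountain")],
    [(4, "Counterspell"), (22, "Island")],
]
labels = [0, 0, 1]

def top_cards(cluster, n=2):
    """Most frequent cards among the decks assigned to `cluster`."""
    counts = Counter()
    for deck, label in zip(decks, labels):
        if label == cluster:
            for count, name in deck:
                counts[name] += count
    return [name for name, _ in counts.most_common(n)]

print(top_cards(0))  # ['Mountain', 'Lightning Bolt']
```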

If you're interested in the results, here's a separate article about them. I didn't want to mix my M:tG findings with this tutorial, so that readers who are into Data Science but not into the game won't be bored.

In case you're interested, I later wrote about a completely different application of K-means clustering: image filters.

I also strongly encourage you to download the notebook from the GitHub project and play with it; it's honest fun!

### Card Recommendations using K-Means Clustering

Now that we've made that sanity check, we can proceed with the actual application for all the labels we generated.

There are many ways we could approach the recommendation problem: given a card, suggest other cards that go well with it, without using any data about the cards except which decks they appear in (that is, no cheating by asking for extra data about the cards, like color, price, or an expert's opinion).

Think for a moment: how would you use the cluster data to generate the recommendations? I'm sure you could come up with a few ideas.

If what you came up with is not what I'm about to do, please tell me in the comments! Creativity is more fun as a team effort, and I really want to see what my dear readers come up with.

Finally, here's what I did:

As you can see, for this part I ignore how many times a card appears in a given deck and just look at the relative number of appearances of a card in a given cluster.

I then return the cards with the most similar relative appearances (measured by Euclidean distance).
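A sketch of that idea, with made-up profiles (each card's vector holds the fraction of decks in each cluster that play it):

```python
import numpy as np

# Toy stand-in: for each card, the fraction of decks in each of three
# clusters that play it (the "relative appearances" vector).
card_profiles = {
    "Lightning Bolt": np.array([0.9, 0.1, 0.0]),
    "Lava Spike":     np.array([0.8, 0.2, 0.0]),
    "Counterspell":   np.array([0.0, 0.1, 0.9]),
}

def recommend(card, n=1):
    """Cards whose cluster profile is closest (Euclidean) to `card`'s."""
    target = card_profiles[card]
    others = [(np.linalg.norm(profile - target), name)
              for name, profile in card_profiles.items() if name != card]
    return [name for _dist, name in sorted(others)[:n]]

print(recommend("Lightning Bolt"))  # ['Lava Spike']
```

Cards that appear in the same clusters, in similar proportions, end up close together, which is why the recommendations tend to be conservative: they favor cards already played alongside the query card.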

If you're a Magic: The Gathering player, try out this code and look at the results; it makes pretty good (though somewhat conservative) recommendations!

## Conclusion

K-means clustering allowed us to approach a domain without really knowing a whole lot about it, draw conclusions, and even design a useful application around it.

It let us do that by learning the underlying patterns in the data for us, only asking that we give it the data in the correct format.

I encourage you to play with the code here and try making your own recommender system with a different dataset, or to solve some other problem. If you do, please show me your results! I want to see what you come up with.

In the future, I'd like to do this same analysis using non-professional decks. That way, I could make a recommendation engine for casual players (like me). I think it would be cool if it worked with almost any card, and not just 642 of them.

If you want to delve deeper into Unsupervised Learning, I can't recommend An Introduction to Statistical Learning enough. That's the book I learned about K-means clustering from. It's also the book thanks to which I finally understood Boosted Trees, though that's a tale for another article.

*Liked this article? Then please follow me on Medium, and consider helping me maintain my website's hosting. Let me know if you found the article helpful, or if any of it sounds wrong or doesn't work!*

Happy coding.


I'm currently taking an AI course for grad school, and one of our projects was to use k-means to implement simple image compression, clustering using Euclidean distance with R, G, and B as the axes. Results were surprisingly good! I can't release the Python code for that implementation because of the plagiarism policy in the course, but maybe I'll try implementing the same thing in JavaScript using the canvas and see how it goes... 🤔

That's awesome! I guess you were finding clusters for the most common colors, and then storing the differences? Maybe even turning similar colors to the centroid one, to get a lossy compression?

I don't really know enough about compression yet, to be honest.

Any good books on Information Theory or Compression you could recommend me?

Yeah, basically. We looked for centroids based on the three color channels, so the average Red, Green, and Blue values within each cluster. Then once the centroids converged and stopped shifting around, we rendered a new version of the image with each pixel replaced by the centroid value of its cluster. So essentially you compress the image by reducing it down to only `k` unique colors. So yes, it's definitely a lossy compression. I was very surprised at how recognizable the pictures were even with `k=5`. I mean, it's very clearly compressed, but it still looks surprisingly good. Bump it up to `k=16` or `k=24` and it'll definitely take a lot longer to run, but the output is very nice.

As for books, tbh I'm not that well read on information theory or image compression either; I basically just learned what I needed to for this project, haha. The textbook we're using for this class is the famous Artificial Intelligence: A Modern Approach (warning: large PDF) by Stuart Russell and Peter Norvig, which is an awesome introductory AI textbook, though it sounds like you're already well into AI anyway.

Thank you! That's a clever algorithm, I'll try it out sometime.

In class, I remember we tried denoising images with PCA, and it worked wonders.