
Henrik Warne


Classic Computer Science Problems in Python

I really enjoyed Classic Computer Science Problems in Python by David Kopec. It covers many different problems I hadn’t read detailed explanations of before. For example: neural networks, constraint-satisfaction problems, genetic algorithms and the minimax algorithm. Unlike many other books on algorithms and programming problems, this one builds up complete (but small) programs that are easy to explore on your own.

I like learning about algorithms. I have read several books on programming problems, such as Cracking the Coding Interview. I have also taken MOOC courses on algorithms, like Design and Analysis of Algorithms part 1 and part 2. Despite this, there were several algorithms in this book that I hadn’t come across before.

Chapters I Liked The Most

Neural networks. The best part of the book for me was the chapter on neural networks. The author builds up a complete example of a neural network. Along the way he explains how all the parts work, both in general and as implemented in the program's specific parts. He covers how neurons and layers work, how the activation function works, how backpropagation is used in training, and how the weights are adjusted.
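To give a flavor of the building blocks, here is a minimal sketch of the sigmoid function, a common choice of activation, together with its derivative (which is what backpropagation needs when adjusting the weights). This is my own illustration, not code from the book:

```python
import math

def sigmoid(x: float) -> float:
    # Squashes any input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x: float) -> float:
    # The derivative can be expressed via sigmoid itself:
    # s'(x) = s(x) * (1 - s(x)); used when backpropagating errors
    s = sigmoid(x)
    return s * (1.0 - s)
```

The fact that the derivative is so cheap to compute from the activation value itself is one reason the sigmoid was historically popular.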

Finally the neural network is used on the iris data set. This is a classic data set from the 1930s that consists of 150 samples from three different species of iris flowers (50 samples from each). Each sample has four numerical values: petal length and width, and sepal length and width (I had to look up what sepal means). First the values are normalized. Then a network with three layers is created. The input layer has four neurons (one for each input value in a sample), the hidden layer has six neurons, and the output layer has three neurons (one for each possible flower).

140 of the 150 samples are selected at random and used for training the network. The training is run 50 times over the 140 samples. Then the network is used to predict which of the three flowers the remaining 10 samples belong to. Most of the time, it correctly identifies at least eight of the samples, and it is not unusual that it gets all ten correct.

I really liked this approach of coding a complete neural network from scratch, without using any machine learning libraries. I spent quite a bit of time playing around with this program (all of the code in the book is available on GitHub). For example, I printed the weights in all the neurons after the network had been fully trained. I wanted to see if the weights were similar between different training runs. It turns out that they were wildly different between runs, even though they still had similar prediction performance. Maybe this is well known, but for me it was really interesting to see for myself.

Adversarial search. In this chapter, the minimax algorithm is introduced. It is used to find the best move in a two-player, zero-sum game with perfect information. The idea is to analyze potential moves by alternating between the two players, each responding to the previous move, until the game is decided (or the maximum recursion depth is reached). The first example is an AI that plays tic-tac-toe. In this case, the complete sequence of moves can be analyzed, since the board is only a 3 by 3 grid.

Like in the other chapters, a general framework is developed, and this is then applied to the particular cases. After the tic-tac-toe example, the framework is used for the game Connect Four. The evaluation of the moves is much more complex than in tic-tac-toe, so in the example the search depth is limited to three moves. I enjoyed this chapter a lot because I had not seen the minimax algorithm before.

Constraint-satisfaction problems. Here too a general framework is developed, and it is then used to solve several problems. The core of the framework is recursive backtracking search. The first problem solved with the framework is the Australian map-coloring problem: is it possible to use only three colors while ensuring that neighboring states don’t share a color (answer: yes). The second problem is to place eight queens on a chessboard so that no queen threatens any other queen. The third problem is to place a list of words on a grid to create a word-search square. Finally the framework is used to solve the classic cryptarithmetic puzzle SEND + MORE = MONEY. The objective is to find which digit each letter represents to make the mathematical statement true.
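The map-coloring problem is small enough that the backtracking idea fits in one function. This is a stripped-down sketch of the approach, not the book's generic CSP framework:

```python
from typing import Dict, List, Optional

# Neighboring Australian states/territories (Tasmania borders nothing)
NEIGHBOURS: Dict[str, List[str]] = {
    "WA":  ["NT", "SA"],
    "NT":  ["WA", "SA", "Q"],
    "SA":  ["WA", "NT", "Q", "NSW", "V"],
    "Q":   ["NT", "SA", "NSW"],
    "NSW": ["SA", "Q", "V"],
    "V":   ["SA", "NSW"],
    "T":   [],
}
COLOURS = ["red", "green", "blue"]

def colour_map(assignment: Dict[str, str]) -> Optional[Dict[str, str]]:
    # Pick the next unassigned region; if none are left, we are done.
    unassigned = [r for r in NEIGHBOURS if r not in assignment]
    if not unassigned:
        return assignment
    region = unassigned[0]
    for colour in COLOURS:
        # Only try colours that don't clash with assigned neighbours
        if all(assignment.get(n) != colour for n in NEIGHBOURS[region]):
            result = colour_map({**assignment, region: colour})
            if result is not None:
                return result
    return None  # dead end: backtrack to the previous region
```

Calling `colour_map({})` returns a complete three-color assignment in which no two neighbors share a color.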

I liked this chapter because of the variety of the problems that were used as examples, and because I had not read about this approach before (at least not as a systematic way of solving such problems).

Genetic algorithms. I had only a vague notion of how genetic algorithms work before I read this chapter. The code in this chapter introduces a chromosome class that is instantiated with randomly selected genes. The chromosome can assess its own fitness, and it implements crossover (combining itself with another of the same type) and mutate (small fairly random changes to itself).

The algorithm starts with a random population. In each generation, it checks (via the fitness function) if there is a solution that is good enough (fitness above a given threshold). If not, a new generation is created by repeatedly combining two random individuals (the higher their individual fitness, the likelier they are to be picked). Next, a few individuals are selected for mutation. These new individuals become the new population, and their fitness is evaluated again. This cycle is repeated for a given number of generations.

The example problems are in turn: maximization of a mathematical expression, the SEND + MORE = MONEY problem from before, and the ordering of elements in a list to minimize the compressed size of the list. The advantage here, like in many of the other chapters, is that the concepts are shown in the context of a small but complete solution.

Search. The algorithms in this chapter were not new to me (except for A*), but the chapter was still interesting because of the examples used. The author uses maze solving to illustrate how depth-first and breadth-first search work. The mazes are simple 9 by 9 grids with randomly positioned obstacles in them, printed using only ASCII characters. The algorithms are used to find a way diagonally across the grid while avoiding the obstacles. The path found is also printed in the maze.

It is easy to modify the programs to see how the algorithms perform in different scenarios. After depth-first and breadth-first search are presented, A* is introduced. The only difference is that a heuristic (the Euclidean distance) is added to guide which paths to try first. The last example in the chapter uses a search function to solve the puzzle of how to transport three missionaries and three cannibals across a river in a canoe. The constraint is that there can never be more cannibals than missionaries on either river bank, or the cannibals will eat the missionaries. I thought this was a clever and interesting application of a search algorithm.
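A breadth-first grid search, the kind that guarantees a shortest path through a maze, can be sketched in a few lines. The grid representation here (strings with `"X"` for obstacles) is my own simplification of the book's maze class:

```python
from collections import deque
from typing import Dict, List, Optional, Tuple

Cell = Tuple[int, int]

def bfs_path(grid: List[str], start: Cell, goal: Cell) -> Optional[List[Cell]]:
    # Breadth-first search; explores cells in order of distance from start,
    # so the first time we reach the goal we have a shortest path.
    frontier = deque([start])
    came_from: Dict[Cell, Optional[Cell]] = {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            node: Optional[Cell] = current
            while node is not None:       # walk the breadcrumbs back
                path.append(node)
                node = came_from[node]
            return path[::-1]             # start -> goal
        r, c = current
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "X" and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None  # no route around the obstacles
```

Turning this into A* mostly means replacing the plain queue with a priority queue ordered by path length plus the heuristic.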

Other Chapters

These chapters mostly contain algorithms I was already familiar with.

Small problems. The first chapter covers many small examples. It starts with different ways of calculating Fibonacci numbers (including recursive, with memoization, iterative, and with a Python generator). Next, bit manipulations are used for compression and XOR ciphers. Finally, recursion is used to solve the towers of Hanoi problem.
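Two of the Fibonacci variants are worth showing side by side: the recursive version with memoization, and the lazy generator. The sketch below uses `functools.lru_cache` for the memoization; the book's exact code may differ:

```python
from functools import lru_cache
from typing import Generator

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Recursive definition; the cache ensures each n is computed only once,
    # turning exponential time into linear
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_gen(limit: int) -> Generator[int, None, None]:
    # Yields the sequence lazily, one number at a time
    a, b = 0, 1
    for _ in range(limit):
        yield a
        a, b = b, a + b
```

The generator version is the most Pythonic of the book's variants: callers can iterate over as many Fibonacci numbers as they need without computing anything up front.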

Graph problems. Chapter four covers graph algorithms for finding the shortest path and a minimal spanning tree, using a map with US cities as the example. Dijkstra’s algorithm is also covered.
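The heart of Dijkstra's algorithm fits comfortably in one function. The sketch below uses a plain dict-of-dicts for the weighted graph and illustrative city names and distances, rather than the book's graph classes:

```python
import heapq
from typing import Dict

def dijkstra(graph: Dict[str, Dict[str, int]], start: str) -> Dict[str, int]:
    # Shortest distance from start to every reachable city
    distances = {start: 0}
    queue = [(0, start)]
    while queue:
        dist, city = heapq.heappop(queue)
        if dist > distances.get(city, float("inf")):
            continue  # stale queue entry; a shorter route was found already
        for neighbour, weight in graph[city].items():
            new_dist = dist + weight
            if new_dist < distances.get(neighbour, float("inf")):
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances
```

The priority queue always pops the closest unfinished city next, which is what guarantees that a city's distance is final the first time it is settled.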

K-means clustering. This chapter shows how to group data points into a predefined number of clusters, based on each point’s relative distance to the center of the cluster. The examples in this chapter were not as interesting as in the other chapters.
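The assign-then-recenter loop at the heart of k-means is easy to sketch for one-dimensional data. This is my own minimal illustration, not the book's multi-dimensional implementation:

```python
import random
from typing import List

def k_means_1d(points: List[float], k: int, iterations: int = 20) -> List[float]:
    # Start with k randomly chosen points as the initial centers
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Step 1: assign each point to its nearest center
        clusters: List[List[float]] = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Step 2: move each center to the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Real implementations normalize each dimension first (as the book does), since raw distances are meaningless when the dimensions have different scales.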

Miscellaneous problems. The last chapter covers the knapsack problem, the travelling salesman problem, and phone number mnemonics.


All the programs use Python type hints. I am ambivalent about this. On the one hand, types improve the understanding of programs. On the other hand, I am not used to seeing them in Python, and sometimes I had to figure out how the type hints worked before I could get to the logic of the programs.

If I were to complain about anything, it would be that the section on dynamic programming could have been more comprehensive. Dynamic programming is tricky, and it would have been good to see more examples of how you develop the solution using it.


One sign of how much I liked this book is that I keep recommending it to colleagues – so far two of them have ordered it for themselves. The main reasons I like it are the breadth of algorithms covered, the complete (yet small) solutions that are easy to explore on your own, and the interesting examples used when demonstrating the algorithms.
