It turns out that well-known algorithms like BFS, DFS, Dijkstra, and A* are essentially variations of the same algorithm.
In other words, it is possible to implement a universal data structure that can switch between these algorithms without changing its core components. While there are limitations to consider, the approach is worth exploring.
You can find all the working code for these algorithms in my GitHub repository here. I recommend experimenting with the code while reading this article, since practical experience enhances learning more than theory alone.
Graph representation
Let's consider a graph with 25 nodes arranged in a 5x5 grid, where we aim to find a path from Node 0 in the top left corner to Node 24 in the bottom right corner:
( 0 )---( 1 )---( 2 )---( 3 )---( 4 )
  |       |       |       |       |
( 5 )---( 6 )---( 7 )---( 8 )---( 9 )
  |       |       |       |       |
( 10 )--( 11 )--( 12 )--( 13 )--( 14 )
  |       |       |       |       |
( 15 )--( 16 )--( 17 )--( 18 )--( 19 )
  |       |       |       |       |
( 20 )--( 21 )--( 22 )--( 23 )--( 24 )
Each of the mentioned algorithms is capable of achieving this, but each has its own limitations:
- Both BFS and DFS operate on unweighted graphs, disregarding edge weights. Although they can find a path, they do not guarantee the optimal one.
- Both Dijkstra's and A* work on weighted graphs but should not be used on graphs containing negative weights. A* is usually faster thanks to its optimization, which incorporates Euclidean coordinates during pathfinding.
In this article, I do not cover the basic concepts, assuming that you are already familiar with them. If the terminology above seems daunting, you should probably learn the basics first. However, playing around with these code examples can still be exciting.
To account for these limitations, let's assign imaginary coordinates to each node (X, Y):
(0, 0)---(0, 1)---(0, 2)---(0, 3)---(0, 4)
   |        |        |        |        |
(1, 0)---(1, 1)---(1, 2)---(1, 3)---(1, 4)
   |        |        |        |        |
(2, 0)---(2, 1)---(2, 2)---(2, 3)---(2, 4)
   |        |        |        |        |
(3, 0)---(3, 1)---(3, 2)---(3, 3)---(3, 4)
   |        |        |        |        |
(4, 0)---(4, 1)---(4, 2)---(4, 3)---(4, 4)
Finally, let's assign a weight to every edge in the graph:
(0, 0) -1- (0, 1) -1- (0, 2) -1- (0, 3) -2- (0, 4)
   |          |          |          |          |
   2          1          1          2          2
   |          |          |          |          |
(1, 0) -2- (1, 1) -1- (1, 2) -2- (1, 3) -1- (1, 4)
   |          |          |          |          |
   2          1          1          1          1
   |          |          |          |          |
(2, 0) -1- (2, 1) -1- (2, 2) -1- (2, 3) -2- (2, 4)
   |          |          |          |          |
   2          1          1          1          2
   |          |          |          |          |
(3, 0) -2- (3, 1) -2- (3, 2) -1- (3, 3) -2- (3, 4)
   |          |          |          |          |
   2          1          1          2          2
   |          |          |          |          |
(4, 0) -2- (4, 1) -1- (4, 2) -2- (4, 3) -2- (4, 4)
In C++, this structure may be represented as follows:
#include <utility>
#include <vector>

using namespace std;

class GraphNode
{
public:
    int X;
    int Y;
};

class Graph
{
public:
    // Edges[from] is a list of (to, weight) pairs.
    vector<vector<pair<int, int>>> Edges;
    vector<GraphNode> Nodes;
};
The edges list in the Graph is an array of arrays, where the index corresponds to the source (exit) node of each edge in the graph. Each element then contains a pair of values:
- The destination (entering) node of the edge.
- The weight of the edge.
Using this simple construct, we can traverse every node in the graph and obtain all the necessary information about its connections:
int toNode = graph.Edges[fromNode][neighbourIndex].first;
int weight = graph.Edges[fromNode][neighbourIndex].second;
Now, let's create some custom connections within the graph to observe how our universal algorithm behaves. Since this code is not the main focus here, I will only link to the relevant methods in the repository.
Alternatively, it is also possible to lazily generate all the connections and weights in this graph with even less code. However, this approach might not provide a comprehensive understanding of the actual differences in how the algorithms traverse the graph.
Universal algorithm
At the core of the universal pathfinding algorithm lies the universal data structure, which we will refer to as the "Queue" for the purposes of this project. However, it is not a classic FIFO (First-In-First-Out) data structure. Instead, it is a general structure that lets us queue nodes during traversal while being able to change the queuing mechanism based on the algorithm being used. The interface for this "Queue" is simple:
class pathFindingBase
{
public:
    virtual void insert(int node) = 0;
    virtual int getFirst() = 0;
    virtual bool isEmpty() = 0;
};
Before we delve into the details of the Queue, let's examine the traversal algorithm itself.
Essentially, it closely resembles a typical A* or Dijkstra implementation. First, we need to initialize a set of collections that enable us to:
- Maintain a list of nodes that have not been processed yet (colored white), are currently being processed (colored gray), and have been processed/visited (colored black).
- Keep track of the current distance of the shortest path from the start node to each node in the collection.
- Store a list of previous-next node pairs that allows us to reconstruct the final path afterward.
const int INF = 1000000;
const int WHITE = 0;
const int GREY = 1;
const int BLACK = 2;

/// <summary>
/// Universal algorithm to apply path search using BFS, DFS, Dijkstra, A*.
/// </summary>
vector<int> FindPath(Graph& graph, int start, int finish, int finishX, int finishY)
{
    int verticesNumber = graph.Nodes.size();

    vector<int> nodeColor(verticesNumber, WHITE);   // All nodes are colored White initially.
    vector<int> shortestPath(verticesNumber, INF);  // The shortest path found so far from Start to i starts as a large/INFinite number.
    vector<int> previousVertex(verticesNumber, -1); // Index of the vertex/node that precedes the i-th vertex on the shortest path to it.

    // We use shared pointers here because we want to pass the data structure
    // to the queue so it receives all the updates automatically on every step.
    shared_ptr<vector<int>> ptrShortestPath = make_shared<vector<int>>(shortestPath);
    shared_ptr<Graph> ptrGraph = make_shared<Graph>(graph);
Next, we need to initialize our data structure. By using the code provided in the GitHub repository, you can simply uncomment the necessary line of code. The code is not designed to select the data structure based on a parameter because I want you to actively experiment with it to gain a better understanding (yes, I'm a tough guy :D).
//////////////////////////////////////////////////
// TODO
// UNCOMMENT DATA STRUCTURE YOU WANT TO USE:
//dfsStack customQueue;
//bfsQueue customQueue;
//dijkstraQueue customQueue(ptrShortestPath);
//aStarQueue customQueue(finishX, finishY, ptrGraph, ptrShortestPath);
// END OF TODO
/////////////////////////////////////////////////
Finally, the algorithm itself. Essentially, it is a combination of all four algorithms with some additional checks. We initialize a "customQueue" and run the algorithm until the queue becomes empty. When examining each neighboring node in the graph, we enqueue every node that potentially needs to be traversed next. Then we call the getFirst() method, which extracts the single node that should be traversed next.
    customQueue.insert(start);
    nodeColor[start] = BLACK;
    ptrShortestPath->at(start) = 0;

    while (!customQueue.isEmpty()) // Traverse nodes starting from the start node.
    {
        int current = customQueue.getFirst();

        if (current == finish) // If we found the finish node, let's print the full path.
        {
            vector<int> path;
            int cur = finish;
            path.push_back(cur);
            while (previousVertex[cur] != -1) // Recover the path node by node.
            {
                cur = previousVertex[cur];
                path.push_back(cur);
            }
            reverse(path.begin(), path.end()); // Since we are at the finish node, reverse the list so it starts at the start node.
            return path;
        }

        for (int neighbourIndex = 0; neighbourIndex < graph.Edges[current].size(); neighbourIndex++)
        {
            int to = graph.Edges[current][neighbourIndex].first;
            int weight = graph.Edges[current][neighbourIndex].second;

            if (nodeColor[to] == WHITE) // If the node is not yet visited.
            {
                nodeColor[to] = GREY; // Mark the node as "in progress".
                customQueue.insert(to);
                previousVertex[to] = current;
                ptrShortestPath->at(to) = ptrShortestPath->at(current) + weight; // Calculate the cost of moving to this node.
            }
            else // Select the more optimal route.
            {
                if (ptrShortestPath->at(to) > ptrShortestPath->at(current) + weight)
                {
                    ptrShortestPath->at(to) = ptrShortestPath->at(current) + weight;
                }
            }
        }

        nodeColor[current] = BLACK;
    }

    return {};
}
Up until this point, the implementation does not differ significantly from other examples you may find in books or on the internet. However, here is where the key aspect lies: getFirst() is the method that serves the main purpose, as it determines the exact order of node traversal.
BFS queue
Let's take a closer look at the inner workings of our queue data structure. The queue interface for BFS is the simplest one. bfsQueue.h#L11:
#include <queue>
#include "pathFindingBase.h"

class bfsQueue : public pathFindingBase
{
private:
    queue<int> _queue;

public:
    virtual void insert(int node)
    {
        _queue.push(node);
    }

    virtual int getFirst()
    {
        int value = _queue.front();
        _queue.pop();
        return value;
    }

    virtual bool isEmpty()
    {
        return _queue.empty();
    }
};
In reality, we could simply replace the custom queue interface here with the standard C++ queue provided by the STL (Standard Template Library). However, the goal here is universality. Now, you only need to uncomment the line in the main method and run this algorithm:
//bfsQueue customQueue; // UNCOMMENT TO USE BFS
As a result, BFS finds the path 24 <- 19 <- 14 <- 9 <- 8 <- 7 <- 6 <- 1 <- 0.
(0, 0)---(0, 1)---(0, 2)---(0, 3)---(0, 4)
                                       |
                                    (1, 4)
                                       |
                                    (2, 4)
                                       |
                                    (3, 4)
                                       |
                                    (4, 4)
If we consider weights, the final cost of this path will be 11. However, remember that neither BFS nor DFS consider weights. Instead, they traverse all nodes in the graph hoping to find the desired node sooner or later.
DFS queue
DFS doesn't look very different: we only replace the STL queue with a stack. dfsStack.h#L11:
#include <stack>
#include "pathFindingBase.h"

class dfsStack : public pathFindingBase
{
private:
    stack<int> _queue;

public:
    virtual void insert(int node)
    {
        _queue.push(node);
    }

    virtual int getFirst()
    {
        int value = _queue.top();
        _queue.pop();
        return value;
    }

    virtual bool isEmpty()
    {
        return _queue.empty();
    }
};
DFS finds the path 24 <- 23 <- 22 <- 21 <- 20 <- 15 <- 10 <- 5 <- 0 with a cost of 15 (it doesn't try to optimize the cost). Interestingly, it traverses in the opposite direction compared to BFS:
(0, 0)
   |
(1, 0)
   |
(2, 0)
   |
(3, 0)
   |
(4, 0)---(4, 1)---(4, 2)---(4, 3)---(4, 4)
Dijkstra queue
Now, Dijkstra's algorithm is the most well-known greedy search algorithm on graphs. Despite its limitations (inability to handle negative edge weights, negative cycles, etc.), it remains popular and efficient enough.
It is important to note that the getFirst() method in this implementation uses a greedy approach to select nodes for traversal. dijkstraQueue.h#L17:
#include <algorithm>
#include <vector>
#include "pathFindingBase.h"

class dijkstraQueue : public pathFindingBase
{
private:
    vector<int> _queue;
    shared_ptr<vector<int>> _shortestPaths;

public:
    dijkstraQueue(shared_ptr<vector<int>> shortestPaths) : _shortestPaths(shortestPaths) { }

    virtual void insert(int node)
    {
        _queue.push_back(node);
    }

    virtual int getFirst()
    {
        int minimum = INF;
        int minimumNode = -1;

        for (int i = 0; i < _queue.size(); i++)
        {
            int to = _queue[i];
            int newDistance = _shortestPaths->at(to);

            if (minimum > newDistance) // Greedy selection: pick the node with the minimum distance on every step.
            {
                minimum = newDistance;
                minimumNode = to;
            }
        }

        if (minimumNode != -1)
        {
            _queue.erase(remove(_queue.begin(), _queue.end(), minimumNode), _queue.end());
        }

        return minimumNode;
    }

    virtual bool isEmpty()
    {
        return _queue.empty();
    }
};
Dijkstra's algorithm finds the SHORTEST and most OPTIMAL path 24 <- 19 <- 18 <- 13 <- 12 <- 7 <- 6 <- 1 <- 0 with a cost of 10:
(0, 0) -1- (0, 1)
              |
              1
              |
           (1, 1) -1- (1, 2)
                         |
                         1
                         |
                      (2, 2) -1- (2, 3)
                                    |
                                    1
                                    |
                                 (3, 3) -1- (3, 4)
                                               |
                                               1
                                               |
                                            (4, 4)
A*
The A* algorithm is particularly suited for cases where a path is sought in a Euclidean space with coordinates, such as maps. This is why it is widely used in games. It does not rely only on a "blind" greedy search based on minimal weights; it also considers the Euclidean distance to the goal. As a result, it is usually much more efficient than Dijkstra's algorithm in practical scenarios (refer to my other GitHub project for more details). aStarQueue.h#L18:
#include <algorithm>
#include <cmath>
#include "pathFindingBase.h"

class aStarQueue : public pathFindingBase
{
private:
    vector<int> _queue;
    shared_ptr<vector<int>> _shortestPaths;
    shared_ptr<Graph> _graph;
    int _finishX;
    int _finishY;

    /// <summary>
    /// Euclidean distance from the specified node id to the finish node.
    /// </summary>
    int calcEuristic(int id)
    {
        return sqrt(
            pow(abs(
                _finishX > _graph->Nodes[id].X ?
                _finishX - _graph->Nodes[id].X :
                _graph->Nodes[id].X - _finishX), 2) +
            pow(abs(
                _finishY > _graph->Nodes[id].Y ?
                _finishY - _graph->Nodes[id].Y :
                _graph->Nodes[id].Y - _finishY), 2));
    }

public:
    aStarQueue(int finishX, int finishY, shared_ptr<Graph> graph, shared_ptr<vector<int>> shortestPaths)
        :
        _shortestPaths(shortestPaths),
        _graph(graph)
    {
        _finishX = finishX;
        _finishY = finishY;
    }

    virtual void insert(int node)
    {
        _queue.push_back(node);
    }

    virtual int getFirst()
    {
        int minimum = INF;
        int minimumNode = -1;

        for (int i = 0; i < _queue.size(); i++)
        {
            int to = _queue[i];
            int newDistance = _shortestPaths->at(to);
            int euristic = calcEuristic(to);

            if (minimum > newDistance + euristic)
            {
                minimum = newDistance + euristic;
                minimumNode = to;
            }
        }

        if (minimumNode != -1)
        {
            _queue.erase(remove(_queue.begin(), _queue.end(), minimumNode), _queue.end());
        }

        return minimumNode;
    }

    virtual bool isEmpty()
    {
        return _queue.empty();
    }
};
As a result, we obtain the same route as with Dijkstra's algorithm, since it is the most optimal one.
It is possible that this example might be too simplistic to demonstrate the real differences in performance among these algorithms. If you are interested in exploring the potential of these algorithms, please refer to my other project, which implements these algorithms efficiently and employs a more visual approach with a wide range of test data.
Downsides
However, there is a problem with our Dijkstra's and A* implementations...
The implementation above uses a vector (a dynamic array) inside our universal data structure. Every call to getFirst() takes O(N) time to find the required node in the vector. Consequently, given that the main algorithm takes O(N*M) time, where M is the average number of neighbors, the overall complexity can become almost cubic. This would cause significant performance degradation on large graphs.
While this sample is useful for grasping the overall idea that all four algorithms are not fundamentally different, the devil lies in the details. Implementing all four algorithms efficiently using a universal data structure is challenging.
For optimal performance (which is the primary concern in 99% of cases), more effort should go into optimizations. For example, it makes a lot of sense to use a priority queue instead of an array for both Dijkstra's and A*.
Speaking of A* optimizations, it is worth mentioning a few links that open up a deep world of further improvements: A* Optimizations and Improvements by Lucho Suaya and JPS+: Over 100x Faster than A* by Steve Rabin.
Final word
The goal of this article was to show how closely related all of these traversal algorithms are. However, the graph used in this article is definitely too simplistic to demonstrate the real differences in performance among them. Therefore, use these examples primarily to gain a conceptual understanding rather than for production purposes.
If you are interested in exploring the full potential of these algorithms, please read my next article, based on my other project, which implements them efficiently and takes a more visual approach with a wide range of test data.
Stay tuned!