Prim's Algorithm for Computing Minimum Spanning Trees


By Alex Allain
The idea behind minimum spanning trees is simple: given a graph with weighted edges, find a spanning tree whose edges have the minimum total weight. What is a spanning tree? A subgraph that includes every vertex of the original graph and satisfies three properties: it is connected, it is acyclic, and it consists of |V| - 1 edges. (In fact, any two of these conditions imply the third.) What this means in practice is that a spanning tree is a collection of edges that connects any two nodes via a single path. It's probably easiest to understand a tree using the definition that a tree is both connected and acyclic; think about binary trees--every node in the tree is reachable, so it's connected, and there are no cycles because you can't go back up the tree.



Spanning trees often come up in computer networking. For instance, if you have a large local area network with a lot of switches, it might be useful to find a minimum spanning tree so that only the minimum number of packets need to be relayed across the network and multiple copies of the same packet don't arrive via different paths (remember, any two nodes are connected via only a single path in a spanning tree). Other real-world problems include laying out electrical grids, reportedly the original motivation for Boruvka's algorithm, one of the first algorithms for finding minimum spanning trees. It shouldn't be surprising that it would be better to find a minimum spanning tree than just any old spanning tree; if one spanning tree on a network would involve taking the most congested, slowest path, it's probably not going to be ideal!

So how can you find a minimum spanning tree? It turns out that finding minimum spanning trees can be done in polynomial time. Even better, the algorithms required for fast calculations are not too complicated (one common algorithm, Prim's algorithm, is, in fact, very similar to Dijkstra's algorithm for finding shortest paths).

The Cut Property

The cut property is useful for fully understanding minimum spanning trees, how they are constructed, and why a greedy algorithm--one that always selects the next best choice--works.

The cut property states that if the vertices of a graph are divided into two sets, and no edge already chosen for the tree crosses between the two sets, then the minimum-weight edge connecting the two sets is part of some minimum spanning tree of the graph.

Intuitively, the cut property says that we can always make the choice of adding an edge to our minimum spanning tree simply by finding a way to connect two sets of vertices (note that we don't need to know whether each set is connected internally; all that matters is that we can find the cheapest bridge between the two).

The basic intuition behind why this works is that the two collections of vertices have to be linked together at some point, and picking anything other than the minimum-weight edge across the cut can never help: if a spanning tree uses a heavier crossing edge instead, we can swap that edge out for the minimum one and get a spanning tree that weighs no more (a slightly more formal sketch of this exchange argument follows below).
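
Here is that exchange argument spelled out a little more carefully; the symbols S, T, T', e, f, and w are introduced only for this illustration.

Let S and V - S be the two sets of the cut, with no tree edge crossing between
them, and let e be a minimum-weight edge crossing the cut. Suppose some
spanning tree T does not contain e. Adding e to T creates exactly one cycle,
and that cycle must cross the cut a second time through some other edge f,
with w(f) >= w(e). Swapping the two edges gives

    T' = T - {f} + {e},    w(T') = w(T) - w(f) + w(e) <= w(T)

so T' is a spanning tree that contains e and weighs no more than T. In other
words, picking the minimum-weight crossing edge can never hurt.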

Prim's Algorithm

Prim's algorithm closely resembles Dijkstra's algorithm because they both rely on a similar approach of finding the "next closest" vertex. Prim's algorithm slowly grows a minimum spanning tree, starting from a single vertex and adding in new edges that link the partial tree to a new vertex outside of the tree. Using the cut property, we know that if we have a partial tree, then to add a new vertex to the tree, all we actually need to do is find the vertex that is closest to the tree (the elements in the tree and the elements outside the tree are the two sets in the cut property, so we just choose the smallest edge bridging the two).

So how does this work in practice? We start out with an array, dist, holding each vertex's distance from the growing minimum spanning tree. Initially, all distances are set to infinity (or another very large number) except for the vertex that we will use to "seed" the minimum spanning tree (the first vertex in the tree). After that vertex is added to the tree, each of its neighbors has its distance to the tree updated appropriately. The vertex closest to the minimum spanning tree is then added to the tree, and the process of checking whether its neighbors can reach the tree more cheaply through the new vertex than through any vertex already in the tree is repeated until every vertex has been added.

The pseudocode for Prim's algorithm looks remarkably like the pseudocode for Dijkstra's algorithm.
Given a graph, G, with edges E of the form (v1, v2) and vertices V

dist : array of distances from the spanning tree to each vertex
edges: array indicating, for a given vertex, which vertex in the tree it 
       is closest to
i    : loop index
F    : list of finished vertices
U    : list or heap of unfinished vertices

/* Initialization: set every distance to INFINITY until we discover a way to
link a vertex to the spanning tree */
for i = 0 to |V| - 1
    dist[i] = INFINITY
    edges[i] = NULL
end

pick a vertex, s, to be the seed for the minimum spanning tree

/* Since no edge is needed to add s to the minimum spanning tree, its distance
from the tree is 0 */
dist[s] = 0


while(F is missing a vertex)
   pick the vertex, v, in U with the shortest edge to the group of vertices in
       the spanning tree
   remove v from U and add v to F

   /* this loop looks through every neighbor of v and checks to see if that
    * neighbor could reach the minimum spanning tree more cheaply through v
    * than by linking through a previous vertex */
   for each edge of v, (v1, v2)    /* here v1 is v itself */
        if(v2 is still in U and length(v1, v2) < dist[v2])
            dist[v2] = length(v1, v2)
            edges[v2] = v1
            possibly update U, depending on implementation
        end if
    end for
end while
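
To make the pseudocode concrete, here is a minimal C++ sketch of the linear-scan version, assuming the graph is given as an adjacency matrix of edge weights in which INF marks a missing edge; the function and variable names (primMST, parent, inTree, and so on) are chosen for this example rather than taken from the pseudocode above.

#include <iostream>
#include <vector>
#include <limits>

const int INF = std::numeric_limits<int>::max();

/* Returns, for each vertex, the vertex inside the tree it was attached to
 * (the "edges" array of the pseudocode); parent[seed] stays -1.
 * graph[i][j] is the weight of edge (i, j), or INF if there is no edge. */
std::vector<int> primMST(const std::vector<std::vector<int>>& graph, int seed)
{
    int n = (int)graph.size();
    std::vector<int> dist(n, INF);      // cheapest known edge linking each vertex to the tree
    std::vector<int> parent(n, -1);     // which tree vertex that edge comes from
    std::vector<bool> inTree(n, false); // the finished set F from the pseudocode

    dist[seed] = 0;  // no edge is needed to add the seed vertex

    for (int added = 0; added < n; ++added) {
        // Pick the unfinished vertex closest to the tree (linear scan over dist).
        int v = -1;
        for (int i = 0; i < n; ++i)
            if (!inTree[i] && (v == -1 || dist[i] < dist[v]))
                v = i;
        if (dist[v] == INF)
            break;  // the remaining vertices are unreachable from the seed
        inTree[v] = true;

        // Ask whether each neighbor can reach the tree more cheaply through v.
        for (int u = 0; u < n; ++u) {
            if (!inTree[u] && graph[v][u] != INF && graph[v][u] < dist[u]) {
                dist[u] = graph[v][u];
                parent[u] = v;
            }
        }
    }
    return parent;
}

int main()
{
    // A small 5-vertex example graph as a symmetric adjacency matrix.
    std::vector<std::vector<int>> graph = {
        { INF,   2,   3, INF, INF },
        {   2, INF,  15,   2, INF },
        {   3,  15, INF, INF,  13 },
        { INF,   2, INF, INF,   9 },
        { INF, INF,  13,   9, INF },
    };

    std::vector<int> parent = primMST(graph, 0);
    for (int v = 1; v < (int)parent.size(); ++v)
        std::cout << "tree edge (" << parent[v] << ", " << v << ")\n";
    return 0;
}

Running this on the example graph prints one chosen tree edge per line; each pass of the outer for loop corresponds to one pass of the while loop in the pseudocode.
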
A significant difference between Prim's algorithm and Dijkstra's algorithm is that in Dijkstra's algorithm, the distance being checked is the full path length back to the source node, whereas in Prim's algorithm, it is just the length of the single edge linking a vertex to the tree built so far. The running time, too, is similar to Dijkstra's algorithm--when a heap is used to pick the next vertex to consider, the running time is O(|E| * log|V|), whereas when an unsorted list of vertices is searched in linear time, the running time is O(|V|^2 + |E|).

As with Dijkstra's algorithm, there are a few tradeoffs here that suggest different implementations based on the type of graph. If the graph is fairly sparse, it might make sense to use a heap to store the vertices, keyed on each vertex's distance from the minimum spanning tree. In this way, the next vertex to add to the tree will always be at the top of the heap. On the other hand, if there are a lot of edges in the graph, then it makes more sense to keep an unsorted list of vertices and scan through it linearly each time to pick the next vertex to add to the tree: in a dense graph, the heap has to be updated so often that its per-update overhead outweighs the cost of the linear scans.
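
For the sparse-graph case, here is a similar sketch of the heap-based variant, built on the C++ standard library's std::priority_queue and an adjacency list. std::priority_queue has no decrease-key operation, so instead of updating entries in place this version simply pushes a new entry whenever a distance improves and skips stale entries when they are popped; the names Edge and primMSTHeap are invented for this example.

#include <functional>
#include <iostream>
#include <queue>
#include <vector>
#include <limits>

const int INF = std::numeric_limits<int>::max();

struct Edge { int to; int weight; };

/* graph[v] lists the edges leaving vertex v.  Returns the total weight of the
 * spanning tree grown from the seed vertex. */
int primMSTHeap(const std::vector<std::vector<Edge>>& graph, int seed)
{
    int n = (int)graph.size();
    std::vector<int> dist(n, INF);
    std::vector<bool> inTree(n, false);

    // Min-heap of (distance to tree, vertex) pairs; the smallest distance is on top.
    typedef std::pair<int, int> Entry;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;

    dist[seed] = 0;
    heap.push({0, seed});

    int total = 0;
    while (!heap.empty()) {
        auto [d, v] = heap.top();
        heap.pop();
        if (inTree[v])
            continue;          // stale entry: v was already added to the tree
        inTree[v] = true;
        total += d;

        // Offer every neighbor a cheaper connection to the tree through v.
        for (const Edge& e : graph[v]) {
            if (!inTree[e.to] && e.weight < dist[e.to]) {
                dist[e.to] = e.weight;
                heap.push({dist[e.to], e.to});
            }
        }
    }
    return total;
}

int main()
{
    // The same 5-vertex example graph as above, this time as an adjacency list.
    std::vector<std::vector<Edge>> graph(5);
    auto addEdge = [&](int a, int b, int w) {
        graph[a].push_back({b, w});
        graph[b].push_back({a, w});
    };
    addEdge(0, 1, 2); addEdge(0, 2, 3); addEdge(1, 2, 15);
    addEdge(1, 3, 2); addEdge(2, 4, 13); addEdge(3, 4, 9);

    std::cout << "MST weight: " << primMSTHeap(graph, 0) << "\n";  // prints 16
    return 0;
}

Each distance improvement pushes at most one new entry onto the heap, so the heap never holds more than |E| entries and this version runs in O(|E| * log|V|) time, which is the win on sparse graphs described above.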


Related articles

Dijkstra's Algorithm for Single-Source Shortest Paths

The Graph Data Structure in Computer Science

Algorithmic Efficiency and Big-O Notation