Graph Traversal: BFS and DFS

Recall that we have the following tree traversals: depth-first (preorder, inorder, and postorder) and breadth-first (level-order).

Similarly, we can traverse a graph using BFS and DFS. Just like in trees, both BFS and DFS color each node with one of three colors during the traversal:

- white: not yet discovered
- gray: discovered but not yet finished (on the queue/stack frontier)
- black: finished (all outgoing edges explored)

In the “waterfront” analogy, you can imagine white nodes as dry areas, gray nodes as the waterfront, and black nodes as areas completely submerged in water. Note that this analogy works slightly better for BFS (and its derivatives such as Dijkstra) than for DFS.

BFS

BFS for a graph is exactly like BFS for a tree, except that now you need to check whether a node has been visited before when you try to add it to the queue. You only want to add a “white” node that has never been seen before to the queue.
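
The BFS and DFS code in this section calls a small helper edges2adjlist to convert the edge list into an adjacency list. Its definition is not shown here, but a minimal sketch (assuming directed edges given as (u, v) pairs; the same import also provides the defaultdict used below) could be:

from collections import defaultdict

def edges2adjlist(edges): # assumed helper: list of directed (u, v) pairs -> adjacency list
    adjlist = defaultdict(list)
    for u, v in edges:
        adjlist[u].append(v) # u -> v
    return adjlist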

def bfs(n, edges): # full BFS procedure for the whole graph
    def _BFS(v): # do one BFS from v
        queue = [v] # start node
        color[v] = 1 # gray: the start node is on the frontier too
        for u in queue: # pop u logically but not physically from queue
            order.append(u) # BFS visit order 
            for w in adjlist[u]: # u->w
                if color[w] == 0: # only if gray->white (tree edge)
                    queue.append(w)
                    color[w] = 1 # gray
            color[u] = 2 # black: i'm done

    adjlist = edges2adjlist(edges) # convert edges to adjacency list
    color = defaultdict(int) # default 0: white
    order = [] # output
    for v in range(n): 
        if color[v] == 0: # only do BFS on white nodes
            _BFS(v) # another BFS tree in BFS forest
    return order

Caveats:

- Mark a node gray as soon as it is added to the queue (not when it is processed); otherwise the same node could be enqueued many times.
- The graph may be disconnected, so the outer loop restarts BFS from every node that is still white, producing a BFS forest rather than a single BFS tree.
- The queue here is a plain Python list that we iterate over without physically popping (see the comment above); a collections.deque with popleft() would work equally well.

For example,

0 --> 6 --> 2 --> 3 <-- 7 --> 8 
  /   |           v 
1     +---> 4 --> 5

>>> print(bfs(9, [(0,6), (1,6), (6,2), (2,3), (6,4), (4,5), (3,5), (7,3), (7,8)]))

[0, 6, 2, 4, 3, 5, 1, 7, 8]

Note that there are three BFS trees in this BFS forest:

BFS(0)  BFS(1)  BFS(7)
    0       1       7
    |               |
    6               8
   / \
  2   4
  |   |
  3   5

The time complexity for BFS is O(V+E) (linear in the size of the graph) because you need to visit each edge once and only once, and each node is added to the queue once and popped from the queue once.

Application of BFS: Shortest-Path in Unweighted Graphs

BFS can be used to find the single-source shortest path(s) in unweighted graphs (where each edge has a unit cost), which is also known as “uniform cost” search in AI. Basically, the first time you add a node v to the queue, it is guaranteed that v’s optimal distance from the source has been found. Therefore, in single-source, single-destination problems, you can terminate immediately once the target node t is added to the queue. BFS is indeed the fastest algorithm for shortest paths on unweighted graphs. For example, for the coins problem (minimum number of coins to make up an amount), BFS would be much faster than Viterbi (or DP).
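
As a sketch of this idea (reusing the edges2adjlist helper assumed earlier; the function name bfs_shortest is made up for illustration), single-source, single-destination shortest distance with early termination could look like:

def bfs_shortest(edges, s, t): # sketch: shortest distance from s to t with unit edge costs
    adjlist = edges2adjlist(edges)
    dist = {s: 0}            # a node's distance is final the first time it is enqueued
    queue = [s]
    for u in queue:          # same "logical pop" idiom as in bfs above
        for w in adjlist[u]: # u->w, cost 1
            if w not in dist:      # white: first discovery gives the optimal distance
                dist[w] = dist[u] + 1
                if w == t:         # target enqueued: we can stop right away
                    return dist[w]
                queue.append(w)
    return dist.get(t)       # covers s == t; None if t is unreachable

>>> print(bfs_shortest([(0,6), (1,6), (6,2), (2,3), (6,4), (4,5), (3,5), (7,3), (7,8)], 0, 5))
3 # e.g., 0->6->4->5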

By contrast, in a general weighted graph, nodes in the queue can still be updated to better distances, and therefore you need to use a priority queue instead of a queue, and this becomes the Dijkstra algorithm. So Dijkstra is a generalization of BFS on weighted graphs.
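
For comparison, here is a minimal Dijkstra sketch, assuming a weighted adjacency list where adjlist[u] is a list of (v, cost) pairs with non-negative costs (a format not used elsewhere in this section):

import heapq

def dijkstra(adjlist, s): # sketch: adjlist[u] = [(v, cost), ...], all costs >= 0
    dist = {}                          # finalized distances
    heap = [(0, s)]                    # priority queue keyed on tentative distance
    while heap:
        d, u = heapq.heappop(heap)
        if u in dist:                  # stale entry: u was already finalized earlier
            continue
        dist[u] = d                    # first pop finalizes u (like turning it black)
        for v, cost in adjlist[u]:
            if v not in dist:
                heapq.heappush(heap, (d + cost, v))
    return dist

With all edge costs equal to 1, this behaves like BFS (up to tie-breaking among equal distances), which is the sense in which Dijkstra generalizes BFS.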

DFS

DFS becomes more interesting on graphs, especially directed graphs. Unlike BFS, here we use a stack, and like BFS, we only push a node onto the stack if it’s white. In a real implementation, as with trees, we use recursion, which maintains the implicit stack for us. The following DFS code will also let us detect cycles as a by-product (with two extra lines added later in this section):

def dfs(n, edges): # DFS for the whole graph
    def _DFS(v): # recursive
        color[v] = 1 # gray
        order.append(v)
        for u in adjlist[v]: # v->u
            if color[u] == 0: # tree edge (gray->white)
                _DFS(u)
        color[v] = 2 # black

    adjlist = edges2adjlist(edges) # convert edges to adjacency list
    color = defaultdict(int) # default 0: white
    order = [] # output
    for v in range(n): # no need for this loop if there is "God" node
        if color[v] == 0:
            _DFS(v) # another DFS tree

    return order
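
Since the prose above describes DFS in terms of a stack, here is a sketch of the same traversal with an explicit stack instead of recursion (nodes are marked as soon as they are pushed, so the visit order can differ from the recursive version on some graphs, and this simplified variant does not maintain the gray/black timing needed for cycle detection below):

def dfs_iterative(n, edges): # sketch: explicit-stack variant of dfs above
    adjlist = edges2adjlist(edges)
    seen = defaultdict(int) # 0 = white, 1 = pushed
    order = []
    for s in range(n):
        if seen[s]:
            continue
        stack = [s]
        seen[s] = 1
        while stack:
            u = stack.pop()
            order.append(u)
            for w in reversed(adjlist[u]): # reversed so earlier neighbors are popped first
                if not seen[w]:
                    seen[w] = 1
                    stack.append(w)
    return order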

Note that _DFS appends each node to order as soon as it turns gray, so order records the discovery (preorder) order of the DFS, which is generally different from the BFS order.

For example,

0 --> 6 --> 2 --> 3 <-- 7 --> 8 
  /   |           v 
1     +---> 4 --> 5

>>> print(dfs(9, [(0,6), (1,6), (6,2), (2,3), (6,4), (4,5), (3,5), (7,3), (7,8)]))
[0, 6, 2, 3, 5, 4, 1, 7, 8] # different from BFS order

Here is the DFS forest; as you can see DFS trees (being “depth”-first) are indeed “deeper” than BFS ones:

DFS(0)  DFS(1)  DFS(7)
    0       1       7
    |               |
    6               8
   / \
  2   4
  |   
  3
  |
  5

Application of DFS: Cycle Detection

A very nice by-product of DFS is cycle detection. In order to form a cycle v1->v2->...->vk->v1, you must have a “back edge” to complete the loop, which means the current node v attempts to visit an active “gray” node u on the stack that is v’s ancestor (thus completing a cycle u->...->v->u); i.e., a back edge goes from a gray node to another gray node higher up in the DFS tree. This works for both undirected and directed graphs.

Small caveat: for undirected graphs, an edge back to your parent is not a back edge (so a back edge in an undirected graph must go to your grandparent or above), but for directed graphs it is possible (u->v together with v->u forms a 2-cycle).
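
As a sketch of this caveat (the function name has_cycle_undirected and the parent argument are made up for illustration; each undirected edge is stored in both directions), the parent check could look like:

def has_cycle_undirected(n, edges): # sketch: cycle detection in an undirected graph
    adjlist = defaultdict(list)
    for u, v in edges: # store each undirected edge in both directions
        adjlist[u].append(v)
        adjlist[v].append(u)
    color = defaultdict(int)

    def _DFS(v, parent):
        color[v] = 1 # gray
        for u in adjlist[v]:
            if color[u] == 0: # tree edge
                if _DFS(u, v):
                    return True
            elif color[u] == 1 and u != parent: # back edge (not the edge we came in on)
                return True
        color[v] = 2 # black
        return False

    for v in range(n):
        if color[v] == 0 and _DFS(v, -1): # -1 is a sentinel for "no parent"
            return True
    return False

For instance, has_cycle_undirected(3, [(0,1), (1,2), (2,0)]) returns True, while has_cycle_undirected(3, [(0,1), (1,2)]) returns False.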

Back to the directed DFS code above: just add these two lines (an elif branch) after the if color[u] == 0: block:

            if color[u] == 0: # tree edge (gray->white)
                _DFS(u)
            elif color[u] == 1: # gray: active; back edge (gray->gray)
                print("cycle detected %d->%d" % (v, u)) 

For example, if we add two edges, 5->1 and 5->7, to the above graph to form cycles:

0 --> 6 --> 2 --> 3 <-- 7 --> 8 
  /   |           v     ^
1     +---> 4 --> 5 ----+
^                 |
  \---------------+

>>> print(dfs(9, [(0,6), (1,6), (6,2), (2,3), (6,4), (4,5), (3,5), (7,3), (7,8), (5,1), (5,7)]))
cycle detected 1->6
cycle detected 7->3
[0, 6, 2, 3, 5, 1, 7, 8, 4]

DFS detects a cycle twice, producing this DFS forest with a single, deep tree (back edges shown as dotted edges):

  DFS(0)
      0 
      | 
....> 6 
.    / \
.   2   4
.   |   
.   3 <..
.   |   .
.   5   .
.  / \  .
..1   7..
      |
      8  

Note that our code can only detect the existence of cycles. If you also want to print a cycle (6->2->3->5->1->6 and 3->5->7->3 above), then you do need to maintain the stack (i.e., the current DFS path). Note that there are many other “related” cycles that are not detected by DFS, such as 6->4->5->1->6.
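
A minimal sketch of that bookkeeping (the path list below is made up for illustration) keeps the current chain of gray nodes and slices it when a back edge is found:

def dfs_cycles(n, edges): # sketch: DFS that also prints each detected cycle
    adjlist = edges2adjlist(edges)
    color = defaultdict(int)
    path = [] # current chain of gray nodes (the explicit DFS path)

    def _DFS(v):
        color[v] = 1 # gray
        path.append(v)
        for u in adjlist[v]: # v->u
            if color[u] == 0: # tree edge
                _DFS(u)
            elif color[u] == 1: # back edge: u is still on path
                cycle = path[path.index(u):] + [u]
                print("cycle:", "->".join(map(str, cycle)))
        path.pop()
        color[v] = 2 # black

    for v in range(n):
        if color[v] == 0:
            _DFS(v)

>>> dfs_cycles(9, [(0,6), (1,6), (6,2), (2,3), (6,4), (4,5), (3,5), (7,3), (7,8), (5,1), (5,7)])
cycle: 6->2->3->5->1->6
cycle: 3->5->7->3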

Like BFS, the time complexity for DFS is also O(V+E).

DFS Edge Classification and Applications in Connectivity

Above we used gray-to-white edges (tree edges) to construct the DFS tree and gray-to-gray edges (back edges) to detect cycles. But are there other cases such as gray-to-black edges? Well, yes and no – it all depends on the directedness of the graph.

On undirected graphs, DFS classifies each edge into two classes:

- tree edges (gray->white): the edges that form the DFS tree
- back edges (gray->gray): edges back to an ancestor, which indicate cycles

Why can you never rediscover a black node? Let’s prove it by contradiction. If the current node v rediscovers a black node u via the edge (v,u), then back when u was still active (gray), it would also have explored the edge (u,v) (each edge is bidirectional), so this edge would already have been classified as either a tree edge or a back edge, depending on the color of v at that time. Either way, the edge {u,v} is accounted for before u turns black, so v cannot later encounter it as a gray-to-black edge.

However, on directed graphs, DFS classifies each edge into four classes:

- tree edges (gray->white): the edges that form the DFS forest
- back edges (gray->gray): edges to an ancestor still on the stack, which indicate cycles
- forward edges (gray->black): edges to an already-finished descendant
- cross edges (gray->black): edges to an already-finished node that is not a descendant (in another subtree or another DFS tree)

These classifications are useful not just for detecting cycles, but also in more advanced connectivity/component problems such as finding strongly connected components in directed graphs and finding “bridges” in undirected graphs.

(Figure from Wikipedia.) Edge classification in DFS for a directed graph: tree edges (father-to-son) make up the DFS recursion tree; back edges (descendant-to-ancestor) detect cycles; forward edges go from ancestor to descendant; cross edges go from cousin to cousin.
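
To make this concrete, here is a sketch that extends the recursive DFS above with discovery times in order to label every edge (the names classify_edges, disc, and kinds are made up for illustration):

def classify_edges(n, edges): # sketch: DFS edge classification for a directed graph
    adjlist = edges2adjlist(edges)
    color = defaultdict(int)
    disc = {}  # discovery time of each node
    time = [0] # counter kept in a list so the nested function can update it
    kinds = {} # (v, u) -> "tree" | "back" | "forward" | "cross"

    def _DFS(v):
        color[v] = 1 # gray
        disc[v] = time[0]; time[0] += 1
        for u in adjlist[v]: # v->u
            if color[u] == 0:
                kinds[v, u] = "tree" # gray->white
                _DFS(u)
            elif color[u] == 1:
                kinds[v, u] = "back" # gray->gray
            else: # gray->black: forward if u is a finished descendant of v, else cross
                kinds[v, u] = "forward" if disc[v] < disc[u] else "cross"
        color[v] = 2 # black

    for v in range(n):
        if color[v] == 0:
            _DFS(v)
    return kinds

On the 9-node example above (without the extra cycle edges), for instance, (4,5), (1,6), and (7,3) all come out as cross edges, and there are no forward or back edges.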

You can also use either BFS or DFS to figure out the topological ordering, which we’ll discuss next.