'''Dijkstra's algorithm''' finds [[Shortest_path#Single-source_shortest_paths|single-source shortest paths]] in a directed graph with non-negative edge weights. (When negative-weight edges are allowed, the [[Bellman–Ford algorithm]] must be used instead.) It is the algorithm of choice for this problem because it is easy to understand, relatively easy to code, and, so far, the fastest known algorithm for the general case. In sparse graphs, running it once from every vertex to generate all-pairs shortest paths is faster than solving the same problem with the [[Floyd–Warshall algorithm]]. (The precise time complexity of Dijkstra's depends on the data structures used; read on.)
  
==Theory of the algorithm==
Dijkstra's may be characterized as a [[greedy algorithm]], which builds the shortest-paths tree one edge at a time, adding vertices in non-decreasing order of their distance from the source. That is, in each step of the algorithm, we will find the next-closest vertex to the source. (If there is a tie, it does not matter which one is chosen.) We assume below that all nodes are reachable from the source. (If you find the two sections below too difficult, skip them.)
  
===Lemma===
Suppose that we are given a set <math>T</math> of vertices, containing the source <i>s</i>. We shall call a path <i>admissible</i> if it starts at <i>s</i>, proceeds through a sequence of vertices contained within <math>T</math>, and ends with a single non-<math>T</math> vertex. We claim that there exists an admissible path such that no other path from <i>s</i> to a non-<math>T</math> vertex is shorter.
  
''Proof'': This consists of nothing but a series of observations. First, any path from <i>s</i> to a vertex <i>v</i> outside <math>T</math> contains at least one edge from a vertex in <math>T</math> to one outside, since <i>s</i> ∈ <math>T</math>. Second, if the first such edge encountered along the path from <i>s</i> is not the last edge in the path, we can "cut off" the path at that point to obtain a path from <i>s</i> out of <math>T</math> that is no longer than the original (since all edges have non-negative weights). Third, if the sub-path from <i>s</i> to the last vertex in <math>T</math>, denoted <i>u</i>, is not itself a shortest path from <i>s</i> to <i>u</i>, the length of the whole path may be decreased by substituting a shortest path from <i>s</i> to <i>u</i> for the current one. Now, suppose the opposite of what we want to prove: that every shortest path from <i>s</i> to a non-<math>T</math> vertex is inadmissible. Starting from any such shortest path, the observations above let us construct an admissible path that is no longer, a contradiction.
  
===The algorithm===
The preceding Lemma should give us an idea of how to proceed. We start with only the source vertex in the shortest-paths tree (<math>T</math> is simply the vertex set of the partial shortest-paths tree); its distance to itself is obviously zero. Then, we repeatedly apply the Lemma by considering all admissible paths and finding the shortest. To do this, we consider every edge that leads from a <math>T</math> vertex <i>u</i> to a non-<math>T</math> vertex <i>v</i>; concatenating the already-known shortest <i>s</i>-<i>u</i> path with the <i>u</i>-<i>v</i> edge yields an admissible path whose length is the sum of the length of that path and the weight of the edge. The edge and the vertex <i>v</i> at the very end of the shortest admissible path are added to the shortest-paths tree (and thus <i>v</i> is added to <math>T</math>); since no path from <i>s</i> to a non-<math>T</math> vertex can be shorter, we are justified in claiming that <i>v</i> is the closest non-<math>T</math> vertex to <i>s</i> and that its distance from <i>s</i> is the length of this shortest admissible path. This method of extending the shortest-paths tree by one vertex is repeated until all vertices have been added, and induction proves the algorithm's validity. (The extension can always be performed, because otherwise the remaining vertices would be unreachable from the source, a contradiction.)
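
As a small illustrative example (not part of the original article), consider the directed graph with edges <i>s</i>→<i>a</i> of weight 2, <i>s</i>→<i>b</i> of weight 5, <i>a</i>→<i>b</i> of weight 1, <i>a</i>→<i>c</i> of weight 6, and <i>b</i>→<i>c</i> of weight 2. The algorithm grows <math>T</math> as follows:
<pre>
start:  T = {s}           dist[s] = 0
add a   (edge s→a)        dist[a] = 2    shortest admissible path: s→a
add b   (edge a→b)        dist[b] = 3    s→a→b (3) beats s→b (5)
add c   (edge b→c)        dist[c] = 5    s→a→b→c (5) beats s→a→c (8)
</pre>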
  
==Implementation 1==
As the previous sections are a bit heavy, here is some pseudocode for Dijkstra's algorithm:
 
<pre>
input G,s
for each v ∈ V(G)
     let dist[v] = ∞
let dist[s] = 0
let T = ∅
while T ≠ V(G)
     let v ∈ V(G)\T such that dist[v] is minimal
     add v to T
     for each w ∈ V(G) such that (v,w) ∈ E(G)
          dist[w] = min(dist[w],dist[v]+wt(v,w))
</pre>
Following the completion of this code, the <code>dist</code> array will contain the minimum path lengths from <code>s</code> to each vertex, or <math>\infty</math> if no such path exists for a given vertex.
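
For concreteness, here is a short C++ rendering of Implementation 1. It is only a sketch under assumed conventions: the adjacency-matrix representation, the <code>INF</code> sentinel, and the function name <code>dijkstra_basic</code> are illustrative choices, not part of the original pseudocode.
<pre>
// A minimal C++ sketch of Implementation 1 (the O(V^2) scan-based version).
// The adjacency-matrix representation and all names are illustrative assumptions.
#include <algorithm>
#include <limits>
#include <vector>

const long long INF = std::numeric_limits<long long>::max() / 4;

// adj[u][w] holds wt(u,w), or INF if the edge (u,w) is absent.
std::vector<long long> dijkstra_basic(const std::vector<std::vector<long long> >& adj, int s) {
    int V = adj.size();
    std::vector<long long> dist(V, INF);
    std::vector<bool> inT(V, false);              // membership in the set T
    dist[s] = 0;
    for (int iter = 0; iter < V; ++iter) {
        int v = -1;                               // non-T vertex with minimal dist
        for (int i = 0; i < V; ++i)
            if (!inT[i] && (v == -1 || dist[i] < dist[v])) v = i;
        if (dist[v] == INF) break;                // remaining vertices are unreachable
        inT[v] = true;
        for (int w = 0; w < V; ++w)               // relax every edge leaving v
            if (adj[v][w] != INF)
                dist[w] = std::min(dist[w], dist[v] + adj[v][w]);
    }
    return dist;
}
</pre>
An adjacency matrix is assumed only because the <math>O(V)</math> scan for the minimum dominates the running time anyway; an adjacency list works just as well.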
  
===Analysis===
Each iteration of the ''while'' loop adds one vertex to <math>T</math>, so at most <math>V</math> iterations take place; in each one it takes <math>O(V)</math> time to find the non-<math>T</math> vertex with the minimal <code>dist</code> entry. Over the whole run, the inner loop executes at most <math>2E</math> times, since each edge is considered at most twice (once from each endpoint). A naive implementation therefore takes <math>O(E+V^2)</math> time. In a dense graph, where <math>E</math> is <math>\Theta(V^2)</math>, this is asymptotically optimal.
==Implementation 2==
This implementation follows the theory more closely and allows further optimization, but requires a data structure <code>Q</code>:
<pre>
input G,s
for each v ∈ V(G)
     let dist[v] = ∞
add (s,0) to Q
while Q is nonempty
     let (v,d) ∈ Q such that d is minimal
     remove (v,d) from Q
     if dist[v] = ∞
          dist[v] = d
          for each w ∈ V(G) such that (v,w) ∈ E(G)
               add (w,d+wt(v,w)) to Q
</pre>
Each iteration of the main loop is again an application of the Lemma. <math>T</math> is now implicit; it consists of all vertices whose current distance (<code>dist</code> value) is less than infinity. The data structure <code>Q</code> holds the candidate (vertex, distance) pairs produced by single edges out of <math>T</math>; pairs whose vertex has already been added to <math>T</math> are simply discarded when they are removed. By selecting the closest candidate at each iteration, we eventually reach every vertex reachable from the source and compute all shortest paths.
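
For reference, here is a hedged C++ sketch of Implementation 2, using <code>std::priority_queue</code> as <code>Q</code>. The adjacency-list <code>Edge</code> type and all names are illustrative assumptions rather than part of the original pseudocode.
<pre>
// A minimal C++ sketch of Implementation 2 (lazy-deletion, binary-heap version).
// The Edge struct and adjacency-list representation are assumed for illustration.
#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

struct Edge { int to; long long wt; };
const long long INF = std::numeric_limits<long long>::max() / 4;

// adj[v] lists the outgoing edges of v.
std::vector<long long> dijkstra_heap(const std::vector<std::vector<Edge> >& adj, int s) {
    int V = adj.size();
    std::vector<long long> dist(V, INF);
    typedef std::pair<long long, int> P;          // (d, v); std::greater makes a min-heap on d
    std::priority_queue<P, std::vector<P>, std::greater<P> > Q;
    Q.push(P(0, s));
    while (!Q.empty()) {
        long long d = Q.top().first;
        int v = Q.top().second;
        Q.pop();
        if (dist[v] != INF) continue;             // v is already in T; discard the stale pair
        dist[v] = d;                              // d is the length of the shortest admissible path to v
        for (size_t i = 0; i < adj[v].size(); ++i)
            Q.push(P(d + adj[v][i].wt, adj[v][i].to));
    }
    return dist;
}
</pre>
Pairs pushed for vertices that are already in <math>T</math> are simply discarded when they reach the front of the queue, exactly as in the pseudocode's <code>if dist[v] = ∞</code> test.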
 
===Analysis===
The data structure <code>Q</code> is a priority queue ADT. If we use the [[binary heap]] implementation, then insertion and removal both take <math>O(\log N)</math> time, whereas querying the minimal element takes constant time. At most <math>2E</math> pairs are inserted (each edge is considered at most twice, once from each endpoint), and every inserted pair is removed at most once, which gives a time bound of <math>O((E+V) \log (2E))</math>. We assume the graph has no duplicate edges, so that <math>E < V^2</math>, and then <math>\log (2E) < 2 \log V + \log 2</math>, giving the oft-quoted <math>O((E+V) \log V)</math> time bound. Hence this implementation outperforms the first on sparse graphs, and running it once per vertex to obtain all-pairs shortest paths outperforms the [[Floyd–Warshall algorithm]] on sparse graphs.
 
Using a [[Fibonacci heap]], which supports amortized constant-time insertion and decrease-key, we can improve this to <math>O(E + V \log V)</math>: instead of inserting a new pair for every edge, we keep at most one entry per vertex and decrease its key whenever a shorter admissible path to it is found, so that only <math>V</math> removals (each <math>O(\log V)</math>) and <math>O(E)</math> constant-time decrease-key operations are performed.
 
==Singly constrained variant==
Dijkstra's algorithm can solve the ''singly constrained shortest path problem''. In this problem, each edge has ''two'' nonnegative weights, a length and a cost, and we wish to minimize path length subject to the constraint that the total cost must not exceed <math>C</math>. The code looks very similar:
<pre>
input G,s
for each v ∈ V(G)
     let dist[v] = ∞
     let mincost[v] = ∞
add (s,0,0) to Q
while Q is nonempty
     let (v,d,c) ∈ Q such that c is minimal
     remove (v,d,c) from Q
     if d < dist[v]
          dist[v] = d
          mincost[v] = c
          for each w ∈ V(G) such that (v,w) ∈ E(G) and c + cost(v,w) ≤ C
               add (w,d+wt(v,w),c+cost(v,w)) to Q
</pre>
Following the completion of this code, the <code>dist</code> array holds, for each vertex, the minimum path length subject to the cost constraint (or <math>\infty</math> if no path of total cost at most <math>C</math> exists), and the <code>mincost</code> array holds the cost of a path achieving that minimum length.
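
As with the previous implementations, here is a hedged C++ sketch of the singly constrained variant. The <code>Edge</code> type, the output parameter for <code>mincost</code>, and all names are assumptions made for illustration.
<pre>
// A minimal C++ sketch of the singly constrained variant: labels (c,d,v) are
// popped in order of increasing cost, and a label is expanded only if it
// improves the best known length for its vertex. Names are illustrative.
#include <functional>
#include <limits>
#include <queue>
#include <tuple>
#include <vector>

struct Edge { int to; long long len, cost; };
const long long INF = std::numeric_limits<long long>::max() / 4;

// adj[v] lists the outgoing edges of v; C is the total-cost budget.
// On return, dist[v] is the minimum length achievable within the budget and
// mincost[v] is the cost of a path achieving it (both INF if none exists).
std::vector<long long> constrained_dijkstra(const std::vector<std::vector<Edge> >& adj,
                                            int s, long long C,
                                            std::vector<long long>& mincost) {
    int V = adj.size();
    std::vector<long long> dist(V, INF);
    mincost.assign(V, INF);
    typedef std::tuple<long long, long long, int> Label;   // (c, d, v)
    std::priority_queue<Label, std::vector<Label>, std::greater<Label> > Q;
    Q.push(Label(0, 0, s));
    while (!Q.empty()) {
        long long c = std::get<0>(Q.top());
        long long d = std::get<1>(Q.top());
        int v = std::get<2>(Q.top());
        Q.pop();
        if (d >= dist[v]) continue;               // label does not improve the length
        dist[v] = d;
        mincost[v] = c;
        for (size_t i = 0; i < adj[v].size(); ++i) {
            const Edge& e = adj[v][i];
            if (c + e.cost <= C)                  // respect the cost constraint
                Q.push(Label(c + e.cost, d + e.len, e.to));
        }
    }
    return dist;
}
</pre>
Note that, unlike plain Dijkstra's, a vertex may be expanded several times (each time with a strictly smaller length and a higher cost), so the running time depends on the number of such cost/length trade-off labels rather than simply on <math>V</math> and <math>E</math>.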
 
==References==
* Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001). "Section 24.3: Dijkstra's algorithm". ''Introduction to Algorithms'' (Second ed.). MIT Press and McGraw-Hill. pp. 595–601. ISBN 0-262-03293-7.
 
[[Category:Algorithms]]
[[Category:Graph theory]]
