==All-pairs shortest paths==
We might try to find all-pairs shortest paths by running a single-source shortest paths algorithm using each vertex in turn as the source; a sketch of this approach follows the list below. Hence, we have the following bounds with a [[binary heap]] implementation of the [[priority queue]] ADT:
 
* <math>O(V(E+V))</math> when the graph is unweighted
* <math>O(V(E+V)\log V)</math> when the edge weights are nonnegative and the graph is sparse (<math>O(V(E+V\log V))</math> using a [[Fibonacci heap]])
* <math>O(VE+V^3)</math> when the edge weights are nonnegative and the graph is dense
* <math>O(V^2E)</math> when edge weights are allowed to be negative, but no negative-weight cycles exist.
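
Here is a minimal sketch of this approach for the nonnegative-weight case, using Dijkstra's algorithm with a binary heap. The code and the adjacency-list representation (<code>adj[u]</code> is a list of <code>(v, w)</code> pairs) are illustrative assumptions, not part of the original article.

<syntaxhighlight lang="python">
import heapq

def dijkstra(adj, s):
    """Single-source shortest paths from s using a binary heap.
    adj[u] is a list of (v, w) pairs with w >= 0.
    Returns a dict mapping each reachable vertex to its distance."""
    dist = {s: 0}
    pq = [(0, s)]                              # heap of (distance, vertex) pairs
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                           # stale entry; u already finalized
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w                # relax the edge (u, v)
                heapq.heappush(pq, (d + w, v))
    return dist

def all_pairs_shortest_paths(adj):
    """O(V(E+V) log V): run Dijkstra using each vertex in turn as the source."""
    return {s: dijkstra(adj, s) for s in adj}
</syntaxhighlight>

For the unweighted case, replacing Dijkstra's with BFS drops the logarithmic factor.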

There is also a general-purpose technique called the [[Floyd–Warshall]] algorithm, usually classified as [[Dynamic programming|dynamic programming]], that solves all four cases in <math>O(V^3)</math> time. It considers each vertex in turn and uses it as an intermediate point to try to relax the distance between every pair of nodes in the graph. When the graph is dense, Floyd–Warshall is just as fast as repeated BFS or Dijkstra's, and it always outperforms repeated Bellman–Ford. So we have the bound <math>O(V(E+V)\log V)</math> for sparse graphs with nonnegative edge weights and <math>O(V^3)</math> for dense graphs.
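
The entire algorithm is only a few lines; the following sketch assumes the graph is given as a <math>V\times V</math> matrix of edge weights, which is an illustrative choice.

<syntaxhighlight lang="python">
def floyd_warshall(dist):
    """dist[i][j] is the weight of edge (i, j), float('inf') if the edge is
    absent, and dist[i][i] == 0. Updates dist in place so that dist[i][j]
    becomes the shortest-path distance from i to j. Runs in O(V^3)."""
    n = len(dist)
    for k in range(n):          # allow vertex k as an intermediate point
        for i in range(n):
            for j in range(n):
                # Relax the pair (i, j) through the intermediate vertex k.
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
</syntaxhighlight>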
  
What if the graph is sparse and it has negative edge weights? Can we do better than <math>O(V^3)</math> here? Not by using the Bellman–Ford algorithm <math>V</math> times, certainly. However, it turns out that it is possible to run Bellman–Ford ''once'' and thus transform the graph into a form in which all edge weights are nonnegative, then run Dijkstra's <math>V</math> times. This gives an <math>O(VE + V(E+V)\log V) = O(V(E+V)\log V)</math> method known as [[Johnson's algorithm]], which matches the time bound for graphs with nonnegative edge weights, since the single run of Bellman–Ford is asymptotically dominated by the <math>V</math> invocations of Dijkstra's.
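
A sketch of this reweighting idea follows, reusing the <code>dijkstra</code> function from the earlier sketch. It simulates the usual virtual source vertex by initializing every potential to zero, and it omits negative-cycle detection for brevity.

<syntaxhighlight lang="python">
def johnson(adj):
    """All-pairs shortest paths when edge weights may be negative but no
    negative-weight cycles exist. adj[u] is a list of (v, w) pairs."""
    # Bellman-Ford from a virtual source with zero-weight edges to every
    # vertex: h[v] starts at 0, then V rounds of relaxation.
    h = {v: 0 for v in adj}
    for _ in range(len(adj)):
        for u in adj:
            for v, w in adj[u]:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
    # Reweight every edge: w'(u, v) = w + h[u] - h[v] >= 0, since
    # h[v] <= h[u] + w holds after Bellman-Ford converges.
    radj = {u: [(v, w + h[u] - h[v]) for v, w in adj[u]] for u in adj}
    dist = {}
    for s in adj:                      # V invocations of Dijkstra's
        d = dijkstra(radj, s)
        # Undo the reweighting to recover the true distances.
        dist[s] = {v: d[v] - h[s] + h[v] for v in d}
    return dist
</syntaxhighlight>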
  
 
==Single-pair shortest path==
  
 
===Meet-in-the-middle===
Another example of when we don't have to worry much about the worst case, and hence might be able to get a better search time, is when the graph we are searching is extremely large, yet we know that the shortest path is relatively short. For example, consider the Rubik's cube graph, in which each legal position is a vertex and an edge exists between two vertices if the positions they represent can be interconverted with one face turn. There are approximately <math>4.3\times10^{19}</math> positions of the Rubik's cube, but the shortest path between any pair of vertices always has length 20 or less.<ref>Tomas Rokicki ''et al.'' "God's Number is 20". Retrieved 2011-03-02 from http://cube20.org/</ref> The graph we are searching may even be infinite.
  
 
Meet-in-the-middle search works similarly to a single-source search such as BFS or Dijkstra's algorithm, but it differs in that the algorithm branches outward from both the source and the destination (''backward'' along directed edges, if any). In this way we simultaneously build the beginning of the path (by searching from the source) and the end (by searching from the destination). When a common vertex is reached, these two pieces of the path may be assembled to form the full path. To understand why this is faster, imagine we were to execute BFS on the Rubik's Cube graph in order to try to find a path from a scrambled position to the solved position. Each vertex in the graph has degree 18 (three possible moves for each of the six faces), so, if the distance from the scrambled position to the solved position is, say, 8, we will visit on the order of 18<sup>8</sup> vertices in the search (about 11 billion). If, on the other hand, we found this path by finding a path of length 4 from the source and a path of length 4 from the destination that happened to share an endpoint, we would only visit on the order of 2&times;18<sup>4</sup> vertices, which is about 210,000. So we have cut down the size of the part of the graph we have to explore by a factor of about 50,000. Note, however, that there is no free lunch; we need some way of determining when a vertex has been visited both forward and backward, such as a hash table. A sketch of the technique is given below.
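
Below is a minimal sketch of the technique for an unweighted, undirected graph such as the Rubik's cube graph. The graph is given implicitly by a <code>neighbors</code> function (a hypothetical interface chosen for illustration), and Python dictionaries play the role of the hash tables mentioned above; a directed graph would additionally require following edges backward from the destination.

<syntaxhighlight lang="python">
def meet_in_the_middle(source, dest, neighbors):
    """Length of a shortest path from source to dest, or -1 if none exists.
    Expands the smaller of the two frontiers one full level at a time and
    stops once the two searches share a vertex."""
    if source == dest:
        return 0
    dist = [{source: 0}, {dest: 0}]    # visited vertices and distances, per side
    frontier = [[source], [dest]]
    while frontier[0] and frontier[1]:
        side = 0 if len(frontier[0]) <= len(frontier[1]) else 1
        level = []
        for u in frontier[side]:
            for v in neighbors(u):
                if v not in dist[side]:
                    dist[side][v] = dist[side][u] + 1
                    level.append(v)
        frontier[side] = level
        # After a complete level, check whether the two searches have met;
        # checking only after full levels guarantees a shortest path.
        meets = [v for v in level if v in dist[1 - side]]
        if meets:
            return min(dist[0][v] + dist[1][v] for v in meets)
    return -1
</syntaxhighlight>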
