Ford–Fulkerson method (wcipeg.com wiki; page created by Brian, 2013-09-09)
<div>The [[Ford–Fulkerson method]] is a technique for solving the [[maximum flow problem]]. The input is a flow network, <math>G_0 = (V, s, t, c)</math>. The algorithm proceeds as follows:
# Start with the empty flow, <math>f_0(u, v) = 0</math> for all <math>u, v \in V</math>. The corresponding residual network is then identical to the original flow network, <math>G_0 = (V, s, t, c)</math>.
# Find an [[augmenting path]] in the current residual network <math>G_i</math>. Let <math>c_m</math> be the minimum residual capacity along the augmenting path. Push <math>c_m</math> units of flow along the path, adding to the flow <math>f_i</math> to give an augmented flow <math>f_{i+1}</math>. Corresponding to the augmented flow is a new residual network, <math>G_{i+1}</math>.
# Repeat step 2 with the new flow and new residual network until an augmenting path can no longer be found. The final flow <math>f_n</math> is a maximum flow.
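The steps above can be sketched in Python. This is an illustrative sketch, not code from the article: it assumes integral capacities stored in an adjacency matrix, uses depth-first search to find augmenting paths, and the function name <code>ford_fulkerson</code> is my own.

```python
def ford_fulkerson(cap, s, t):
    """Maximum flow by the Ford-Fulkerson method (illustrative sketch).

    cap: n x n matrix of capacities (cap[u][v] >= 0); it is mutated in
    place into the residual network as flow is pushed. Assumes integral
    capacities, so termination is guaranteed.
    """
    n = len(cap)

    def find_path():
        # Depth-first search for any s-t path with positive residual capacity.
        parent = [-1] * n
        seen = [False] * n
        seen[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            if u == t:
                break
            for v in range(n):
                if not seen[v] and cap[u][v] > 0:
                    seen[v] = True
                    parent[v] = u
                    stack.append(v)
        if not seen[t]:
            return None          # no augmenting path remains
        path = [t]
        while path[-1] != s:
            path.append(parent[path[-1]])
        return path[::-1]

    flow = 0
    path = find_path()
    while path is not None:
        # c_m: minimum residual capacity along the augmenting path.
        c_m = min(cap[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):
            cap[u][v] -= c_m     # consume forward residual capacity
            cap[v][u] += c_m     # add the reverse (cancellation) edge
        flow += c_m
        path = find_path()
    return flow
```

The reverse-edge update is what lets a later augmentation "undo" flow pushed earlier, which is essential for correctness.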

==Correctness==
If the algorithm terminates, then no augmenting path remains. By the augmenting path theorem, the flow <math>f_n</math> obtained is a maximum flow.

If the capacities are finite and integral, then the algorithm is guaranteed to terminate. An invariant of the algorithm is that the residual capacities remain integral after each augmenting step. Therefore <math>c_m \geq 1</math> in each step, and the flow increases by at least one unit per augmentation. The algorithm must then terminate after no more than <math>M</math> steps, where <math>M</math> is the value of the max flow. A similar argument applies when all edge capacities are rational: scaling by a common denominator reduces to the integral case.
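The bound of <math>M</math> augmentations is tight for a poor choice of paths. The classic worst case (an illustration I am adding, not part of the article) is a four-vertex "diamond" with a capacity-1 cross edge: a heuristic that alternates between the two zigzag paths pushes only one unit per step.

```python
# Classic worst case for naive path selection: s->a, s->b, a->t, b->t
# all have capacity C, and the cross edge a->b has capacity 1.
# Alternating between s->a->b->t and s->b->a->t (the latter uses the
# reverse edge created by the former) pushes 1 unit per augmentation,
# so 2*C steps are needed, even though two augmentations would suffice.
C = 1000
s, a, b, t = 0, 1, 2, 3
cap = [[0, C, C, 0],
       [0, 0, 1, C],
       [0, 0, 0, C],
       [0, 0, 0, 0]]

steps = 0
progress = True
while progress:
    progress = False
    for path in ((s, a, b, t), (s, b, a, t)):
        c_m = min(cap[u][v] for u, v in zip(path, path[1:]))
        if c_m > 0:
            for u, v in zip(path, path[1:]):
                cap[u][v] -= c_m
                cap[v][u] += c_m
            steps += 1
            progress = True

print(steps)  # 2000: one augmentation per unit of the max flow 2*C
```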

On the other hand, the algorithm is not guaranteed to terminate when the edge capacities are allowed to be irrational; with an unlucky choice of augmenting paths, the flow value can even converge to a limit strictly below the max flow.

==Complexity==
In step 2, we have a choice in how to find the augmenting path. The overall running time of the algorithm depends on the heuristic used to choose the path. In the most common choice of heuristic, known as the [[Edmonds–Karp algorithm]], we always choose the ''shortest'' augmenting path using [[breadth-first search]]. (Note that the capacities are not lengths; each edge has length 1.) Since each search takes <math>O(E+V)</math> time and each augmentation adds at least one unit of flow, the running time of this approach is <math>O((E+V)M)</math>. A more sophisticated analysis shows that the number of augmentations is <math>O(VE)</math>, giving the bound <math>O(VE^2)</math>, which is independent of the edge capacities.
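A sketch of the Edmonds–Karp variant follows (illustrative only; the adjacency-matrix representation and the name <code>edmonds_karp</code> are my own choices):

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Ford-Fulkerson with shortest (fewest-edge) augmenting paths via BFS.

    cap: n x n residual-capacity matrix, mutated in place.
    """
    n = len(cap)
    flow = 0
    while True:
        # BFS from s: parent[v] records how v was first reached, so the
        # recovered path is a shortest augmenting path.
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:       # no augmenting path left: flow is maximum
            return flow
        # Walk back from t to find the bottleneck c_m, then push along it.
        c_m = float('inf')
        v = t
        while v != s:
            c_m = min(c_m, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            cap[parent[v]][v] -= c_m
            cap[v][parent[v]] += c_m
            v = parent[v]
        flow += c_m
```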

Other heuristics are possible. We could, for example, use [[depth-first search]] to find an augmenting path, which also gives an <math>O((E+V)M)</math> algorithm. Another possibility is to use a modification of [[Dijkstra's algorithm]] to find an augmenting path of maximal value, ''i.e.'', one that maximizes <math>c_m</math>. In this case each search takes <math>O(E + V \log V)</math> time, giving the bound <math>O((E + V \log V)M)</math>.
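The maximal-bottleneck heuristic can be sketched as a max-heap variant of Dijkstra's algorithm. This is an illustrative implementation under my own assumptions (adjacency matrix, binary heap rather than the Fibonacci heap needed for the <math>O(E + V \log V)</math> bound; the name <code>widest_path_flow</code> is hypothetical):

```python
import heapq

def widest_path_flow(cap, s, t):
    """Ford-Fulkerson choosing, at each step, the augmenting path that
    maximizes the bottleneck c_m (widest-path Dijkstra)."""
    n = len(cap)
    flow = 0
    while True:
        # best[v] = largest bottleneck of any s->v path found so far.
        best = [0] * n
        best[s] = float('inf')
        parent = [-1] * n
        heap = [(-best[s], s)]    # negate: heapq is a min-heap
        while heap:
            b, u = heapq.heappop(heap)
            b = -b
            if b < best[u]:
                continue          # stale heap entry
            for v in range(n):
                w = min(b, cap[u][v])   # bottleneck of extending via u->v
                if w > best[v]:
                    best[v] = w
                    parent[v] = u
                    heapq.heappush(heap, (-w, v))
        if best[t] == 0:
            return flow           # no augmenting path remains
        c_m = best[t]             # push the widest path's full bottleneck
        v = t
        while v != s:
            cap[parent[v]][v] -= c_m
            cap[v][parent[v]] += c_m
            v = parent[v]
        flow += c_m
```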

A more sophisticated approach, known as [[Dinic's algorithm]], runs in <math>O(V^2 E)</math> time, which can be sped up to <math>O(V E \log V)</math> using [[link/cut tree]]s.</div>
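For completeness, here is a compact sketch of Dinic's algorithm (without the link/cut-tree speed-up); the edge-list representation and class name are my own choices, not from the article:

```python
from collections import deque

class Dinic:
    """Dinic's algorithm: repeatedly build a BFS level graph, then send
    a blocking flow along it with DFS. O(V^2 * E) overall."""

    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]  # per-vertex indices into self.edges
        self.edges = []                     # each entry: [to, residual capacity]

    def add_edge(self, u, v, c):
        # Paired forward/backward edges; edge i's reverse is edge i ^ 1.
        self.adj[u].append(len(self.edges)); self.edges.append([v, c])
        self.adj[v].append(len(self.edges)); self.edges.append([u, 0])

    def _bfs(self, s, t):
        # Level graph: distance from s using only positive-capacity edges.
        self.level = [-1] * self.n
        self.level[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for i in self.adj[u]:
                v, c = self.edges[i]
                if c > 0 and self.level[v] == -1:
                    self.level[v] = self.level[u] + 1
                    q.append(v)
        return self.level[t] != -1

    def _dfs(self, u, t, pushed):
        # Send flow only along edges that go one level deeper.
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            i = self.adj[u][self.it[u]]
            v, c = self.edges[i]
            if c > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, c))
                if d > 0:
                    self.edges[i][1] -= d
                    self.edges[i ^ 1][1] += d
                    return d
            self.it[u] += 1       # edge exhausted for this phase
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):
            self.it = [0] * self.n  # per-phase "current edge" pointers
            d = self._dfs(s, t, float('inf'))
            while d > 0:
                flow += d
                d = self._dfs(s, t, float('inf'))
        return flow
```

The "current edge" pointers (<code>self.it</code>) ensure each phase scans every edge at most once, which is where the improved bound comes from.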