Maximum subvector sum

The maximum subvector sum problem is that of finding a segment of a vector (array of numbers)[1], possibly empty, with maximum sum. If all elements of the array are nonnegative, then the problem may be trivially solved by taking the entire vector; if all elements are negative, then the problem is again trivially solved by taking the empty segment (whose sum is conventionally defined to be zero). The problem requires more thought, however, when the vector contains a mixture of positive and negative numbers.

This problem originated in the domain of image processing;[2] applications have also been found in data mining.[3] In algorithmic programming competitions, Kadane's linear time algorithm for this problem is often useful as a building block within a more complex algorithm for processing a multidimensional array.

One-dimensional problem

Bentley[2] describes four algorithms for this problem, with running times O(n^3), O(n^2), O(n\log n), and O(n). We discuss only the last of these in this article; it is known as Kadane's algorithm after its discoverer.

Kadane's algorithm is a classic example of dynamic programming. It works by scanning the vector from left to right and computing the maximum-sum subvector ending at each of the vector's entries; denote the best sum ending at entry i by M_i. Also, for convenience, set M_0 = 0 for the empty subvector. The maximum subvector sum for the entire array A is then, of course, \max(M_0, M_1, M_2, \ldots, M_n). M_i for i > 0 may then be computed as follows:

M_i = \max(A_i, A_i + M_{i-1}) = A_i + \max(0, M_{i-1})

This formula is based on the following optimal substructure: The maximum-sum subvector ending at position i consists of either only the element A_i itself, or that element plus one or more elements A_j, A_{j+1}, \ldots, A_{i-1} (that is, ending at the previous position i-1). But the sum obtained in the latter case is simply A_i plus the sum of the subvector ending at A_{i-1}, so we want to make the latter as great as possible, requiring us to choose the maximum-sum subvector ending at A_{i-1}. This accounts for the A_i + M_{i-1} term. Of course, if M_{i-1} turned out to be negative, then there is no point in including any terms before A_i at all. This is why we must take the greater of A_i and A_i + M_{i-1}.
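
To make the recurrence concrete, here is a small worked example (illustrative, not from the original article). For the vector A = (2, -3, 4, -1, 2, -5, 3), the formula gives M_0 = 0, M_1 = 2, M_2 = -1, M_3 = 4, M_4 = 3, M_5 = 5, M_6 = 0, M_7 = 3, so the maximum subvector sum is \max(M_0, \ldots, M_7) = 5, attained by the segment (4, -1, 2) ending at position 5.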

Implementation (C++)

This snippet is a direct translation of the algorithm given in the text:

template<class Vector>
int max_subvector_sum(Vector V)
{
    int* M = new int[V.size() + 1];         // dynamically allocate array M
    M[0] = 0;
    for (int i = 0; i < V.size(); i++)
        M[i+1] = V[i] + max(0, M[i]);       // apply the formula, but with zero-indexed arrays
    int res = *max_element(M, M+V.size()+1); // max_element returns an iterator, so dereference it to get the value
    delete[] M;
    return res;
}

With a bit of thought, this can be simplified. In particular, we don't need to keep the entire array M; we only need to remember the last element computed.

template<class Vector>
int max_subvector_sum(Vector V)
{
    int res = 0, cur = 0;                            // cur holds the best sum of a subvector ending at the current position
    for (int i = 0; i < V.size(); i++)
        res = max(res, cur = V[i] + max(0, cur));    // apply the recurrence and keep the best value seen so far
    return res;
}
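
As an illustrative usage sketch (not part of the original article), the following driver assumes the simplified max_subvector_sum above is in scope:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// ... max_subvector_sum as defined above ...

int main()
{
    vector<int> A = {2, -3, 4, -1, 2, -5, 3};       // the worked example vector from above
    cout << max_subvector_sum(A) << endl;           // prints 5, from the segment 4, -1, 2
    return 0;
}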

Higher dimensions

The obvious generalization of the problem is as follows: given a d-dimensional array (or tensor) of dimensions n_1 \leq n_2 \leq \ldots \leq n_d, find indices 1 \leq a_1 \leq b_1 \leq n_1, 1 \leq a_2 \leq b_2 \leq n_2, \ldots, 1 \leq a_d \leq b_d \leq n_d such that the sum \sum_{i_1=a_1}^{b_1} \sum_{i_2=a_2}^{b_2} \ldots \sum_{i_d=a_d}^{b_d} A_{i_1, i_2, \ldots, i_d} is maximized (or return 0 if all entries are negative).

We describe a simple algorithm for this problem. It works by recursively reducing a d-dimensional problem to O(n_1^2) simpler, (d-1)-dimensional problems, terminating at d=1 which can be solved in O(n_d) time using the one-dimensional algorithm. Evidently, this algorithm takes time O(n_1^2 n_2^2 \ldots n_{d-1}^2 n_d). In the case with all dimensions equal, this reduces to O(n^{2d-1}).

The details are as follows. We try all possible sets of bounds [a_1, b_1] \subseteq [1, n_1] for the first index. For each such interval, we create a (d-1)-dimensional tensor B where B_{i_2, i_3, \ldots, i_d} = \sum_{i_1=a_1}^{b_1} A_{i_1, i_2, \ldots, i_d} and compute the maximum subtensor sum in B. The range of indices that this represents in the original array will be the Cartesian product of the indices in the maximum-sum subtensor of B and the original range [a_1, b_1], so that by trying all possibilities for the latter, we will account for all possible subtensors of the original array A.
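
The following is a minimal sketch of this recursion (not from the original article). It assumes the tensor is represented as nested std::vectors of int; the names max_subtensor_sum and add_into are illustrative, not standard. The base case is the one-dimensional algorithm given earlier; the recursive case fixes a range [a, b] of indices in the first dimension, accumulates the corresponding slices elementwise into a (d-1)-dimensional tensor B, and recurses on B.

#include <algorithm>
#include <vector>
using namespace std;

// Elementwise B += X for one-dimensional vectors ...
void add_into(vector<int>& B, const vector<int>& X)
{
    for (int i = 0; i < (int) B.size(); i++)
        B[i] += X[i];
}

// ... and for nested (higher-dimensional) vectors.
template<class Tensor>
void add_into(vector<Tensor>& B, const vector<Tensor>& X)
{
    for (int i = 0; i < (int) B.size(); i++)
        add_into(B[i], X[i]);
}

// Base case (d = 1): Kadane's algorithm, as earlier in the article.
int max_subtensor_sum(const vector<int>& V)
{
    int res = 0, cur = 0;
    for (int i = 0; i < (int) V.size(); i++)
        res = max(res, cur = V[i] + max(0, cur));
    return res;
}

// Recursive case (d > 1): for each range [a, b] of indices in the first
// dimension, accumulate those slices into the (d-1)-dimensional tensor B
// and solve the smaller problem on B.
template<class Tensor>
int max_subtensor_sum(const vector<Tensor>& A)
{
    int res = 0;                                    // the empty subtensor has sum 0
    for (int a = 0; a < (int) A.size(); a++)
    {
        Tensor B = A[a];                            // B starts as slice a
        res = max(res, max_subtensor_sum(B));
        for (int b = a + 1; b < (int) A.size(); b++)
        {
            add_into(B, A[b]);                      // B is now the elementwise sum of slices a..b
            res = max(res, max_subtensor_sum(B));
        }
    }
    return res;
}

Applied to a vector<vector<int>>, this reproduces the two-dimensional algorithm of the next section; on deeper nestings it carries out the general reduction described above.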

Two-dimensional example

In two dimensions, the time required is O(m^2 n), where m \leq n; for a square n \times n matrix this is O(n^3). If we imagine a two-dimensional array as a matrix, then the problem is to pick some axis-aligned rectangle within the matrix with maximum sum. The algorithm described above can be written as follows:

template<class Matrix>
int max_submatrix_sum(Matrix M)
{
    int m = M.size();
    int n = M[0].size();
    vector<int> B(n);
    int res = 0;
    for (int a = 0; a < m; a++)                      // a is the starting row
    {
        fill(B.begin(), B.end(), 0);
        for (int b = a; b < m; b++)                  // b is the ending row
        {
            for (int i = 0; i < n; i++)
                B[i] += M[b][i];                     // B[i] is the sum of column i over rows a..b
            res = max(res, max_subvector_sum(B));    // solve the one-dimensional problem on the column sums
        }
    }
    }
    return res;
}

Observe that, in the above code, the outer loop fixes a, the starting row, and the middle loop fixes b, the ending row. For every combination of starting and ending row, we compute the sum of the elements in those rows for each column; these sums are stored in the vector B, which is then passed to the one-dimensional subroutine given earlier in the article. That is, B_i = A_{a, i} + A_{a+1, i} + \ldots + A_{b, i}, where A denotes the matrix (called M in the code). However, we cannot afford to actually compute each entry of B in this way from scratch; that would give O(m^3 n) time. Instead, we compute B dynamically, by re-using the values from the previous b, since A_{a, i} + \ldots + A_{b, i} = (A_{a, i} + \ldots + A_{b-1, i}) + A_{b, i}. This accounts for the line B[i] += M[b][i] in the code above.
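
As an illustrative usage sketch (not part of the original article), the following driver assumes the max_subvector_sum and max_submatrix_sum templates above are in scope:

#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

// ... max_subvector_sum and max_submatrix_sum as defined above ...

int main()
{
    vector<vector<int>> M = {{ 1,  2, -1},
                             { 2,  3, -2},
                             {-1, -2,  1}};
    cout << max_submatrix_sum(M) << endl;           // prints 8: the 2-by-2 block with rows (1, 2) and (2, 3)
    return 0;
}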

Faster algorithms

This algorithm is not asymptotically optimal; for example, the d=2 case can be solved in O(n^3 \sqrt{(\log \log n)/\log n}) time by a result due to Takaoka.[3] This implies that for d > 1, we can always achieve O(n^{2d-1} \sqrt{(\log \log n)/\log n}), which is slightly better than the algorithm given above. In practice, however, the gains are not large.

Problems

Notes and references

  1. In computer science, vector is often used to mean array of real numbers; sometimes it is simply synonymous with array. Here, array is used in contradistinction with linked list, which does not support efficient random access. This meaning is related to but different from the meaning of the word in mathematics and physics.
  2. Bentley, Jon (1984), "Programming pearls: algorithm design techniques", Communications of the ACM 27 (9): 865–873, doi:10.1145/358234.381162.
  3. Takaoka, T. (2002), "Efficient algorithms for the maximum subarray problem by distance matrix multiplication", Electronic Notes in Theoretical Computer Science 61.