Prefix sum array and difference array

From PEGWiki
Given an [[array]] of numbers, we can construct a new array by replacing each element by the difference between itself and the previous element, except for the first element, which we simply ignore. This is called the '''difference array''', because it contains the first differences of the original array. We will denote the difference array of array <math>A</math> by <math>D(A)</math>. For example, the difference array of <math>A = [9, 2, 6, 3, 1, 5, 0, 7]</math> is <math>D(A) = [2-9, 6-2, 3-6, 1-3, 5-1, 0-5, 7-0]</math>, or <math>[-7, 4, -3, -2, 4, -5, 7]</math>.
  
We see that the difference array can be computed in [[linear time]] from the original array, and is shorter than the original array by one element. Here are implementations in C and Haskell. (Note that the Haskell implementation actually takes a list, but returns an array.)
<syntaxhighlight lang="c">
// D must have enough space for n-1 ints
void difference_array(int* A, int n, int* D)
{
    for (int i = 0; i < n-1; i++)
        D[i] = A[i+1] - A[i];
}
</syntaxhighlight><br/>
<syntaxhighlight lang="haskell">
import Data.Array

d :: [Int] -> Array Int Int
d a = listArray (0, length a - 2) (zipWith (-) (tail a) a)
</syntaxhighlight><br/>

The '''prefix sum array''' is the opposite of the difference array. Given an array of numbers <math>A</math> and an arbitrary constant <math>c</math>, we first prepend <math>c</math> to the array, and then replace each element with the sum of itself and all the elements preceding it. For example, if we start with <math>A = [9, 2, 6, 3, 1, 5, 0, 7]</math>, and choose to prepend the arbitrary value <math>-8</math>, we obtain <math>P(-8, A) = [-8, -8+9, -8+9+2, -8+9+2+6, ..., -8+9+2+6+3+1+5+0+7]</math>, or <math>[-8, 1, 3, 9, 12, 13, 18, 18, 25]</math>. Computing the prefix sum array can also be done in linear time, and the prefix sum array is longer than the original array by one element:
<syntaxhighlight lang="c">
// P must have enough space for n+1 ints
void prefix_sum_array(int c, int* A, int n, int* P)
{
    P[0] = c;
    for (int i = 0; i < n; i++)
        P[i+1] = P[i] + A[i];
}
</syntaxhighlight><br/>
<syntaxhighlight lang="haskell">
import Data.Array

p :: Int -> [Int] -> Array Int Int
p c a = listArray (0, length a) (scanl (+) c a)
</syntaxhighlight><br/>

Note that every array has an infinite number of possible prefix sum arrays, since we can choose whatever value we want for <math>c</math>; for convenience, we usually choose <math>c = 0</math>. Changing the value of <math>c</math> merely shifts every element of <math>P(c,A)</math> by the same constant. For example, <math>P(15, A) = [15, 24, 26, 32, 35, 36, 41, 41, 48]</math>; each element of <math>P(15, A)</math> is exactly 23 more than the corresponding element of <math>P(-8, A)</math>.

The functions <math>D</math> and <math>P</math> carry out '''reverse processes'''. Given a nonempty zero-indexed array <math>A</math>:
# <math>D(P(c, A)) = A</math> for any <math>c</math>. For example, taking the difference array of <math>P(-8, A) = [-8, 1, 3, 9, 12, 13, 18, 18, 25]</math> gives <math>[9, 2, 6, 3, 1, 5, 0, 7]</math>, that is, it restores the original array <math>A</math>.
# <math>P(A_0, D(A)) = A</math>. Thus, taking <math>D(A) = [-7, 4, -3, -2, 4, -5, 7]</math> and <math>A_0 = 9</math> (the initial element of <math>A</math>), we have <math>P(A_0, D(A)) = [9, 2, 6, 3, 1, 5, 0, 7]</math>, again restoring the original array <math>A</math>.
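These two identities can be checked mechanically. Below is a small C++ sketch (the vector-returning helper names are ours, not part of the article's C interface) that mirrors the C functions above:

```cpp
#include <cassert>
#include <vector>

// P(c, A): prepend c, then accumulate (vector-returning variant of the
// article's prefix_sum_array).
std::vector<int> prefix_sum_array(int c, const std::vector<int>& A) {
    std::vector<int> P(A.size() + 1);
    P[0] = c;
    for (std::size_t i = 0; i < A.size(); i++)
        P[i + 1] = P[i] + A[i];
    return P;
}

// D(A): first differences (vector-returning variant of the article's
// difference_array).
std::vector<int> difference_array(const std::vector<int>& A) {
    std::vector<int> D(A.size() - 1);
    for (std::size_t i = 0; i + 1 < A.size(); i++)
        D[i] = A[i + 1] - A[i];
    return D;
}
```

For <math>A = [9, 2, 6, 3, 1, 5, 0, 7]</math>, `difference_array(prefix_sum_array(-8, A))` recovers <math>A</math>, as does `prefix_sum_array(9, difference_array(A))`.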
  
 
==Analogy with calculus==
These two processes&mdash;computing the difference array, and computing a prefix sum array&mdash;are the discrete equivalents of differentiation and integration in calculus, which operate on continuous domains. An entry in an array is like the value of a function at a particular point.
* ''Reverse processes'':
:* <math>D(P(c, A)) = A</math> for any <math>c</math>. Likewise <math>\frac{d}{dx} \int_c^x f(t)\, dt = f(x)</math> for any <math>c</math>.
:* <math>P(A_0, D(A)) = A</math>. Likewise <math>f(a) + \int_a^x \frac{df}{dt}\, dt = f(x)</math>.
* ''Uniqueness'':
:* A differentiable function <math>f(x)</math> can only have one derivative, <math>\frac{df}{dx}</math>. An array <math>A</math> can only have one difference array, <math>D(A)</math>.
:* A continuous function <math>f(x)</math> has an infinite number of antiderivatives, <math>F_c(x) = \int_c^x f(t)\, dt</math>, where <math>c</math> can be any number in its domain, but they differ only by a constant (their graphs are vertical translations of each other). An array <math>A</math> has an infinite number of prefix sum arrays <math>P(c,A)</math>, but they differ only by a constant (at each entry).
:* Given some function <math>f:[a,b]\to\mathbb{R}</math>, and the fact that <math>F</math>, an antiderivative of <math>f</math>, satisfies <math>F(a) = y_0</math>, we can uniquely reconstruct <math>F</math>. That is, even though <math>f</math> has an infinite number of antiderivatives, we can pin it down to one once we are given the value the antiderivative is supposed to attain on the left edge of <math>f</math>'s domain. Likewise, given some array <math>A</math> and the fact that <math>P</math>, a prefix sum array of <math>A</math>, satisfies <math>P_0 = c</math>, we can uniquely reconstruct <math>P</math>.
* ''Effect on length'':
:* <math>D(A)</math> is shorter than <math>A</math> by one element. Differentiating <math>f:[a,b] \to \mathbb{R}</math> gives a function <math>f':(a,b) \to \mathbb{R}</math> (shortens the closed interval to an open interval).
:* <math>P(c,A)</math> is longer than <math>A</math> by one element. Integrating <math>f:(a,b) \to \mathbb{R}</math> gives a function <math>F:[a,b] \to \mathbb{R}</math> (lengthens the open interval to a closed interval).

Because of these similarities, we will speak simply of ''differentiating'' and ''integrating'' arrays. An array can be differentiated multiple times, but eventually it will shrink to length 0. An array can be integrated any number of times.
  
 
==Use of prefix sum array==
The Fundamental Theorem of Calculus also has an analogue, which is why the prefix sum array is so useful. To compute an integral <math>\int_a^b f(t)\, dt</math>, which is like a continuous kind of sum of an infinite number of function values <math>f(a), f(a+\epsilon), f(a+2\epsilon), ..., f(b)</math>, we take any antiderivative <math>F</math> and compute <math>F(b) - F(a)</math>. Likewise, to compute the sum of values <math>A_i, A_{i+1}, A_{i+2}, ..., A_{j-1}</math>, we take any prefix sum array <math>P(c,A)</math> and compute <math>P_j - P_i</math>. Notice that just as we can use any antiderivative <math>F</math> because the constant cancels out, we can use any prefix sum array because the initial value cancels out. (Note our use of the [[left half-open interval]].)

''Proof'': <math>P_j = c + \sum_{k=0}^{j-1} A_k</math> and <math>P_i = c + \sum_{k=0}^{i-1} A_k</math>. Subtracting gives <math>P_j - P_i = \sum_{k=0}^{j-1} A_k - \sum_{k=0}^{i-1} A_k = \sum_{k=i}^{j-1} A_k</math>, as desired. <math>_{\blacksquare}</math>

This is best illustrated ''via'' example. Let <math>A = [9,2,6,3,1,5,0,7]</math> as before. Take <math>P(0,A) = [0, 9, 11, 17, 20, 21, 26, 26, 33]</math>. Then, suppose we want <math>A_2 + A_3 + A_4 + A_5 = 6 + 3 + 1 + 5 = 15</math>. We can compute this by taking <math>P_6 - P_2 = 26 - 11 = 15</math>. This is because <math>P_6 - P_2 = (0 + A_0 + A_1 + A_2 + A_3 + A_4 + A_5) - (0 + A_0 + A_1) = A_2 + A_3 + A_4 + A_5</math>.

When we use the prefix sum array in this way, we generally use <math>c=0</math> for convenience (although we are theoretically free to use any value we wish), and speak of the prefix sum array obtained this way as simply ''the'' prefix sum array.
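The precompute-then-query pattern can be sketched as follows in C++ (function names are ours): build <math>P(0,A)</math> once in linear time, then answer each range sum in constant time as the difference of two prefix sums.

```cpp
#include <cassert>
#include <vector>

// Build P(0, A), the prefix sum array with initial element 0.
std::vector<long long> build_prefix_sums(const std::vector<int>& A) {
    std::vector<long long> P(A.size() + 1, 0);
    for (std::size_t i = 0; i < A.size(); i++)
        P[i + 1] = P[i] + A[i];
    return P;
}

// Sum of A_i + A_{i+1} + ... + A_{j-1}, i.e. the half-open range [i, j).
long long range_sum(const std::vector<long long>& P, int i, int j) {
    return P[j] - P[i];
}
```

With <math>A = [9,2,6,3,1,5,0,7]</math>, `range_sum(P, 2, 6)` returns 15, matching the worked example above.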
  
 
===Example: Counting Subsequences (SPOJ)===
We will consider the problem {{SPOJ|SUBSEQ|Counting Subsequences}} from IPSC 2006. Here we are given an array of integers <math>S</math> and asked to find the number of contiguous subsequences of the array that sum to 47.

To solve this, we will first transform array <math>S</math> into its prefix sum array <math>P(0,S)</math>. Notice that the sum of each contiguous subsequence <math>S_i + S_{i+1} + S_{i+2} + ... + S_{j-1}</math> corresponds to the difference of two elements of <math>P</math>, that is, <math>P_j - P_i</math>. So what we want to find is the number of pairs <math>(i,j)</math> with <math>P_j - P_i = 47</math> and <math>i < j</math>. (Note that if <math>i > j</math>, we will instead get a subsequence with sum -47.)
However, this is quite easy to do. We sweep through <math>P</math> from left to right, keeping a [[map]] of all elements of <math>P</math> we've seen so far, along with their frequencies; and for each element <math>P_j</math> we count the number of times <math>P_j - 47</math> has appeared so far, by looking up that value in our map; this tells us how many contiguous subsequences ending at <math>S_{j-1}</math> have sum 47. And finally, adding the number of contiguous subsequences with sum 47 ending at each entry of <math>S</math> gives the total number of such subsequences in the array. Total time taken is <math>O(N)</math>, if we use a [[hash table]] implementation of the map.
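This sweep can be sketched in C++ as follows (names are ours; the problem's target sum of 47 is left as a parameter, and the prefix sums are accumulated on the fly rather than stored):

```cpp
#include <cassert>
#include <unordered_map>
#include <vector>

// Count pairs (i, j), i < j, with P_j - P_i = target, by sweeping the
// prefix sums left to right with a hash map of frequencies seen so far.
long long count_subsequences(const std::vector<int>& S, long long target) {
    std::unordered_map<long long, long long> freq;
    long long prefix = 0, count = 0;
    freq[0] = 1;  // P_0 = 0, using c = 0 as the initial element
    for (int x : S) {
        prefix += x;                           // this is P_j
        auto it = freq.find(prefix - target);  // how many P_i = P_j - target?
        if (it != freq.end())
            count += it->second;
        freq[prefix]++;
    }
    return count;
}
```

For instance, `count_subsequences({40, 7, 40, 0}, 47)` returns 3, counting [40,7], [7,40], and [7,40,0].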

==Use of difference array==
The difference array is used to keep track of an array when ranges of said array can be updated all at once. If we have array <math>A</math> and add an increment <math>k</math> to elements <math>A_i, A_{i+1}, ..., A_{j-1}</math>, then notice that <math>D_0, D_1, ..., D_{i-2}</math> are not affected; that <math>D_{i-1} = A_i - A_{i-1}</math> is increased by <math>k</math>; that <math>D_i, D_{i+1}, ..., D_{j-2}</math> are not affected; that <math>D_{j-1} = A_j - A_{j-1}</math> is decreased by <math>k</math>; and that <math>D_j, D_{j+1}, ...</math> are unaffected. Thus, if we are required to update many ranges of an array in this manner, we should keep track of <math>D</math> rather than <math>A</math> itself, and then integrate at the end to reconstruct <math>A</math>.
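A one-dimensional sketch of this technique (names are ours): we store only the pending increments, in differentiated form, in an auxiliary array one element longer than <math>A</math>, so that both endpoints of every update, including ones starting at index 0, always exist; integrating once at the end applies all updates in linear time.

```cpp
#include <cassert>
#include <vector>

// Add k to A_i, A_{i+1}, ..., A_{j-1} by touching two entries of the
// increment difference array d (of size n + 1).
void range_add(std::vector<long long>& d, int i, int j, long long k) {
    d[i] += k;  // every element from index i onward gains k ...
    d[j] -= k;  // ... and the gain stops at index j
}

// Integrate d and add it to A, recovering the fully updated array.
std::vector<long long> apply_updates(const std::vector<int>& A,
                                     const std::vector<long long>& d) {
    std::vector<long long> result(A.size());
    long long running = 0;
    for (std::size_t i = 0; i < A.size(); i++) {
        running += d[i];  // running = total increment applied to A_i
        result[i] = A[i] + running;
    }
    return result;
}
```

Each range update costs <math>O(1)</math>; only the single final integration costs <math>O(n)</math>.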

===Example: Wireless (CCC)===
This is the basis of the model solution to {{Problem|ccc09s5|Wireless}} from CCC 2009. We are given a grid of lattice points with up to 30000 rows and up to 1000 columns, and up to 1000 circles, each of which is centered at a lattice point. Each circle has a particular weight associated with it. We want to stand at a lattice point such that the sum of the weights of all the circles covering it is maximized, and to determine how many such lattice points there are.
The most straightforward way to do this is by actually computing the sum of weights of all covering circles at each individual lattice point. However, doing this naively would take up to <math>30000\cdot 1000\cdot 1000 = 3\cdot 10^{10}</math> operations, as we would have to consider each lattice point and each circle and decide whether that circle covers that lattice point.
To solve this, we will treat each column of the lattice as an array <math>A</math>, where entry <math>A_i</math> denotes the sum of weights of all the circles covering the point in row <math>i</math> of that column. Now, consider any of the given circles. Either this circle does not cover any points in this column at all, or the points it covers in this column will all be ''consecutive''. (This is equivalent to saying that the intersection of a circle with a line is a line segment.) For example, a circle centered at <math>(2,3)</math> with radius <math>5</math> will cover no lattice points with x-coordinate <math>-10</math>, and as for lattice points with x-coordinate <math>1</math>, it will cover <math>(1,-1), (1,0), (1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (1,7)</math>. Thus, the circle covers some first point <math>i</math> and some last point <math>j</math>, and adds some value <math>B</math> to each point that it covers; that is, it adds <math>B</math> to <math>A_i, A_{i+1}, ..., A_j</math>. Thus, we will simply maintain the difference array <math>D</math>, noting that the circle adds <math>B</math> to <math>D_{i-1}</math> and takes <math>B</math> away from <math>D_j</math>. We maintain such a difference array for each column; and we perform at most two updates for each circle-column pair; so the total number of updates is bounded by <math>2\cdot 1000\cdot 1000 = 2\cdot 10^6</math>. (The detail of how to keep track of the initial element of <math>A</math> is left as an exercise to the reader.) After this, we perform integration on each column and find the maximum values, which requires at most a constant times the total number of lattice points, which is up to <math>3\cdot 10^7</math>.

==Multiple dimensions==
The prefix sum array and difference array can be extended to multiple dimensions.

===Prefix sum array===
We start by considering the two-dimensional case. Given an <math>m\times n</math> array <math>A</math>, we will define the prefix sum array <math>P(A)</math> as follows: <math>P_{i,j} = \sum_{k=0}^{i-1} \sum_{l=0}^{j-1} A_{k,l}</math>. In other words, the first row of <math>P</math> is all zeroes, the first column of <math>P</math> is all zeroes, and all other elements of <math>P</math> are obtained by adding up some upper-left rectangle of values of <math>A</math>. For example, <math>P_{2,3} = A_{0,0} + A_{0,1} + A_{0,2} + A_{1,0} + A_{1,1} + A_{1,2}</math>. Or, more easily expressed, <math>P_{i,j}</math> is the sum of all entries of <math>A</math> with indices in <math>[0,i) \times [0,j)</math> (Cartesian product).
In general, if we are given an <math>n</math>-dimensional array <math>A</math> with dimensions <math>m_1 \times m_2 \times ... \times m_n</math>, then the prefix sum array has dimensions <math>(m_1+1) \times (m_2+1) \times ... \times (m_n+1)</math>, and is defined by <math>P_{i_1,i_2,...,i_n} = \sum_{k_1=0}^{i_1-1} \sum_{k_2=0}^{i_2-1} ... \sum_{k_n=0}^{i_n-1} A_{k_1,k_2,...,k_n}</math>; or we can simply say that it is the sum of all entries of <math>A</math> with indices in <math>[0,i_1) \times [0,i_2) \times ... \times [0,i_n)</math>.
To compute the prefix sum array in the two-dimensional case, we will scan first down and then right, as suggested by the following C++ implementation:
<syntaxhighlight lang="cpp">
vector<vector<int> > P(vector<vector<int> > A)
{
    int m = A.size();
    int n = A[0].size();
    // Make an m+1 by n+1 array and initialize it with zeroes.
    vector<vector<int> > p(m+1, vector<int>(n+1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            p[i][j] = p[i-1][j] + A[i-1][j-1];
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            p[i][j] += p[i][j-1];
    return p;
}
</syntaxhighlight><br/>
Here, we first make each column of <math>P</math> the prefix sum array of a column of <math>A</math>, then convert each row of <math>P</math> into its prefix sum array to obtain the final values. That is, after the first pair of for loops, <math>P_{i,j}</math> will be equal to <math>A_{0,j-1} + A_{1,j-1} + ... + A_{i-1,j-1}</math>, and after the second pair, we will then have <math>P_{i,j} := P_{i,1} + P_{i,2} + ... + P_{i,j} = (A_{0,0} + ... + A_{i-1,0}) + (A_{0,1} + ... + A_{i-1,1}) + ... + (A_{0,j-1} + ... + A_{i-1,j-1})</math> (where the <math>P</math> values on the right are those left by the first pair of loops), as desired. The extension of this to more than two dimensions is straightforward.
(Haskell code is not shown, because using functional programming in this case does not make the code nicer.)
After the prefix sum array has been computed, we can use it to add together any rectangle in <math>A</math>, that is, all the elements with their indices in <math>[x_1,x_2), [y_1,y_2)</math>. To do so, we first observe that <math>P_{x_2,y_2} - P_{x_1,y_2}</math> gives the sum of all the elements with indices in the box <math>[x_1, x_2) \times [0, y_2)</math>. Then we similarly observe that <math>P_{x_2,y_1} - P_{x_1,y_1}</math> corresponds to the box <math>[x_1, x_2) \times [0, y_1)</math>. Subtracting gives the box <math>[x_1, x_2) \times [y_1, y_2)</math>. This gives a final formula of <math>P_{x_2,y_2} - P_{x_1,y_2} - P_{x_2,y_1} + P_{x_1,y_1}</math>.
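The four-corner formula can be sketched as follows (names are ours; here the prefix sum array is built with the equivalent single-pass inclusion-exclusion recurrence rather than the two-pass method shown above):

```cpp
#include <cassert>
#include <vector>

// Build the 2D prefix sum array: P[i][j] is the sum of A over the
// index box [0, i) x [0, j).
std::vector<std::vector<long long>> prefix2d(
        const std::vector<std::vector<int>>& A) {
    int m = A.size(), n = A[0].size();
    std::vector<std::vector<long long>> P(
            m + 1, std::vector<long long>(n + 1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            P[i][j] = P[i-1][j] + P[i][j-1] - P[i-1][j-1] + A[i-1][j-1];
    return P;
}

// Sum of A over the box [x1, x2) x [y1, y2), by inclusion-exclusion
// on the four corners.
long long box_sum(const std::vector<std::vector<long long>>& P,
                  int x1, int x2, int y1, int y2) {
    return P[x2][y2] - P[x1][y2] - P[x2][y1] + P[x1][y1];
}
```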
In the <math>n</math>-dimensional case, to sum the elements of <math>A</math> with indices in the box <math>[x_{1,0}, x_{1,1}) \times [x_{2,0}, x_{2,1}) \times ... \times [x_{n,0}, x_{n,1})</math>, we will use the formula <math>\sum_{k_1=0}^1 \sum_{k_2=0}^1 ... \sum_{k_n=0}^1 (-1)^{n - k_1 - k_2 - ... - k_n} P_{x_{1,k_1}, x_{2,k_2}, ..., x_{n,k_n}}</math>. We will not state the proof, instead noting that it is a form of the inclusion&ndash;exclusion principle.

====Example: Diamonds (BOI)====
The problem {{Problem|boi09p6|Diamonds}} from BOI '09 is a straightforward application of the prefix sum array in three dimensions. We read in a three-dimensional array <math>A</math>, and then compute its prefix sum array; after doing this, we will be able to determine the sum of any box of the array with indices in the box <math>[x_1, x_2) \times [y_1, y_2) \times [z_1, z_2)</math> as <math>P_{x_2,y_2,z_2} - P_{x_1,y_2,z_2} - P_{x_2,y_1,z_2} - P_{x_2,y_2,z_1} + P_{x_1,y_1,z_2} + P_{x_1,y_2,z_1} + P_{x_2,y_1,z_1} - P_{x_1,y_1,z_1}</math>.

===Difference array===
To use the difference array properly in two dimensions, it is easiest to use a source array <math>A</math> with the property that the first row and first column consist entirely of zeroes. (In multiple dimensions, we will want <math>A_{i_1, i_2, ..., i_n} = 0</math> whenever any of the <math>i</math>'s are zero.)
We will simply define the difference array <math>D</math> to be the array whose prefix sum array is <math>A</math>. This means in particular that <math>D_{i,j}</math> is the sum of elements of <math>D</math> with indices in the box <math>[i,i+1) \times [j,j+1)</math> (this box contains only the single element <math>D_{i,j}</math>). Using the prefix sum array <math>A</math>, we obtain <math>D_{i,j} = A_{i+1,j+1} - A_{i,j+1} - A_{i+1,j} + A_{i,j}</math>. In general:
:<math>D_{i_1, i_2, ..., i_n} = \sum_{k_1=0}^1 \sum_{k_2=0}^1 ... \sum_{k_n=0}^1 (-1)^{n - k_1 - k_2 - ... - k_n} A_{i_1+k_1,i_2+k_2,...,i_n+k_n}</math>.

Should we actually need to ''compute'' the difference array, on the other hand, the easiest way to do so is by reversing the computation of the prefix sum array:
<syntaxhighlight lang="cpp">
vector<vector<int> > D(vector<vector<int> > A)
{
    int m = A.size();
    int n = A[0].size();
    // Make an m-1 by n-1 array
    vector<vector<int> > d(m-1, vector<int>(n-1));
    for (int i = 0; i < m-1; i++)
        for (int j = 0; j < n-1; j++)
            d[i][j] = A[i+1][j+1] - A[i+1][j];
    for (int i = m-2; i > 0; i--)
        for (int j = 0; j < n-1; j++)
            d[i][j] -= d[i-1][j];
    return d;
}
</syntaxhighlight><br/>
In this code, we first make each row of <math>d</math> the difference array of the corresponding row of <math>A</math>, and then treat each column as now being the prefix sum array of the final result (the column of the difference array), so we scan backward through it to reconstruct the original array (''i.e.'', take the difference array of the column). (We have to do this backward so that we won't overwrite a value we still need later.) The extension to multiple dimensions is straightforward; we simply have to walk backward over every additional dimension as well.
Now we consider what happens when all elements of <math>A</math> with coordinates in the box given by <math>[r_1, r_2) \times [c_1, c_2)</math> are incremented by <math>k</math>. If we take the difference array of each column of <math>A</math>, we see that for each column in <math>[c_1,c_2)</math>, we will have added <math>k</math> to entry <math>r_1-1</math> and subtracted it from entry <math>r_2-1</math> (as in the one-dimensional case). Now if we take the difference array of each row of what we've just obtained, we notice that in row number <math>r_1-1</math>, we've added <math>k</math> to every element in columns in <math>[c_1,c_2)</math> in the previous step, and in row number <math>r_2-1</math>, we've subtracted <math>k</math> from every element in the same column range, so in the end the effect is to add <math>k</math> to elements <math>D_{r_1-1,c_1-1}</math> and <math>D_{r_2-1,c_2-1}</math>, and to subtract <math>k</math> from elements <math>D_{r_2-1,c_1-1}</math> and <math>D_{r_1-1,c_2-1}</math>.
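As a self-contained sketch (names are ours), the following uses the zero-based variant of this idea, padding the difference array with one extra row and column so that the four corner updates always fall in range; a single 2D integration at the end recovers the total increment at each cell.

```cpp
#include <cassert>
#include <vector>

// Accumulate 2D range increments in a difference array d; one final
// integration recovers how much was added to each cell.
struct Grid2D {
    int m, n;
    std::vector<std::vector<long long>> d;
    Grid2D(int m, int n)
        : m(m), n(n), d(m + 1, std::vector<long long>(n + 1, 0)) {}

    // Add k to every cell in [r1, r2) x [c1, c2): four corner updates.
    void add(int r1, int r2, int c1, int c2, long long k) {
        d[r1][c1] += k;
        d[r1][c2] -= k;
        d[r2][c1] -= k;
        d[r2][c2] += k;
    }

    // Integrate: t[i][j] is the total amount added to cell (i, j).
    std::vector<std::vector<long long>> totals() const {
        std::vector<std::vector<long long>> t(
                m, std::vector<long long>(n, 0));
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++) {
                long long up   = i ? t[i-1][j] : 0;
                long long left = j ? t[i][j-1] : 0;
                long long diag = (i && j) ? t[i-1][j-1] : 0;
                t[i][j] = d[i][j] + up + left - diag;
            }
        return t;
    }
};
```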
In the general case, when adding <math>k</math> to all elements of <math>A</math> with indices in the box <math>[x_{1,0}, x_{1,1}) \times [x_{2,0}, x_{2,1}) \times ... \times [x_{n,0}, x_{n,1})</math>, a total of <math>2^n</math> elements of <math>D</math> need to be updated. In particular, element <math>D_{x_{1,j_1}-1, x_{2,j_2}-1, ..., x_{n,j_n}-1}</math> (where each of the <math>j</math>'s can be either 0 or 1, giving <math>2^n</math> possibilities in total) is incremented by <math>(-1)^{j_1+j_2+...+j_n}k</math>. That is, if we consider an <math>n</math>-dimensional array to be an <math>n</math>-dimensional hypercube of numbers, then the elements to be updated lie on the corners of an <math>n</math>-dimensional hypercube; we 2-color the vertices black and white (meaning two adjacent vertices always have opposite colours), with the lowest corner (corresponding to indices <math>[x_{1,0}-1, x_{2,0}-1, ..., x_{n,0}-1]</math>) white; and each white vertex receiving <math>+k</math> and each black vertex <math>-k</math>. One can attempt to visualize the effect this has on the prefix sum array in three dimensions, and become convinced that it makes sense in <math>n</math> dimensions. Each element in the prefix sum array <math>A_{i_1, i_2, ..., i_n}</math> is the sum of all the elements in some box of the difference array with its lowest corner at the origin, <math>[0, i_1) \times [0, i_2) \times ... \times [0, i_n)</math>. If the highest corner actually lies within the hypercube, that is, <math>x_{1,0} \leq i_1 < x_{1,1}, x_{2,0} \leq i_2 < x_{2,1}, ..., x_{n,0} \leq i_n < x_{n,1}</math>, then this box is only going to contain the low corner <math>D_{x_{1,0}-1, x_{2,0}-1, ..., x_{n,0}-1}</math>, which has increased by <math>k</math>; thus, this entry in <math>A</math> has increased by <math>k</math> as well, and this is true of all elements that lie within the hypercube, <math>[x_{1,0},x_{1,1}) \times [x_{2,0},x_{2,1}) \times ... \times [x_{n,0},x_{n,1})</math>. 
If any of the <math>i</math>'s are less than the lower bound of the corresponding <math>x</math>, then our box doesn't hit any of the vertices of the hypercube at all, so all these elements are unaffected; and if instead any one of them goes over the upper bound, then our box passes in through one hyperface and out through another, which means that corresponding vertices on the low and high face will either both be hit or both not be hit, and each pair cancels itself out, giving again no change outside the hypercube.
====Example: The Cake is a Dessert====
{{Problem|cake|The Cake is a Dessert}} from the Waterloo&ndash;Woburn 2011 Mock CCC is a straightforward application of the difference array in two dimensions.
==A word on the dynamic case==
The dynamic version of the problem that the prefix sum array is intended to solve requires us to be able to carry out an intermixed sequence of operations, where each operation either changes an element of <math>A</math> or asks us to determine the sum <math>A_0 + A_1 + ... + A_{i-1}</math> for some <math>i</math>. That is, we need to be able to compute entries of the prefix sum array of an array that is changing (dynamic). If we can do this, then we can do range sum queries easily on a dynamic array, simply by taking the difference of prefix sums as in the static case. This is a bit trickier to do, but is most easily and efficiently accomplished using the [[binary indexed tree]] data structure, which is specifically designed to solve this problem and can be updated in <math>O(\log(m_1) \log(m_2) ... \log(m_n))</math> time, where <math>n</math> is the number of dimensions and each <math>m</math> is a dimension. Each prefix sum query also takes that amount of time, but we need to perform <math>2^n</math> of these to find a box sum using the formula given previously, so the running time of the query is asymptotically <math>2^n</math> times greater than that of the update. In the most commonly encountered one-dimensional case, both query and update are simply <math>O(\log m)</math>.
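A minimal sketch of the one-dimensional case (names are ours): a binary indexed tree supporting point updates and prefix sum queries, each in <math>O(\log m)</math>; range sums then follow by taking the difference of two prefix queries, exactly as in the static case.

```cpp
#include <cassert>
#include <vector>

// Binary indexed tree (Fenwick tree) over an array of m elements,
// initially all zero. Internally 1-indexed.
struct BIT {
    std::vector<long long> t;
    explicit BIT(int m) : t(m + 1, 0) {}

    void update(int i, long long delta) {  // A_i += delta
        for (int x = i + 1; x < (int)t.size(); x += x & -x)
            t[x] += delta;
    }

    long long prefix(int i) const {        // A_0 + A_1 + ... + A_{i-1}
        long long s = 0;
        for (int x = i; x > 0; x -= x & -x)
            s += t[x];
        return s;
    }
};
```

A range sum over <math>[i, j)</math> is then `prefix(j) - prefix(i)`, and both it and `update` remain fast as the underlying array changes.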
The dynamic version of the problem that the difference array is intended to solve is analogous: we want to be able to increment entire ranges of an array, but at any point we also want to be able to determine any element of the array. Here, we simply use a binary indexed tree for the difference array, and compute prefix sums whenever we want to access an element. Here, it is the update that is slower by a factor of <math>2^n</math> than the query (because we have to update <math>2^n</math> entries at a time, but only query one.)
 +
 
 +
On the other hand, if we combine the two, that is, we wish to be able to increment ranges of some array <math>A</math> as well as query prefix sums of <math>A</math> in an intermixed sequence, matters become more complicated. Notice that in the static case we would just track the difference array of <math>A</math> and then integrate twice at the end. In the dynamic case, however, we need to use a [[segment tree]] with lazy propagation. This will give us <math>O(2^n \log(m_1) \log(m_2) ... \log(m_n))</math> time on both the query and the update, but is somewhat more complex and slower than the binary indexed tree on the [[invisible constant factor]].
  
However, this is quite easy to do. We sweep through <math>P</math> from left to right, keeping a [[map]] of all elements of <math>P</math> we've seen so far, along with their frequencies; and for each element <math>P_j</math> we count the number of times <math>P_j - 48</math> has appeared so far, by looking up that value in our map; this tells us how many contiguous subsequences ending at <math>S_{j-1}</math> have sum 47. And finally, adding the number of contiguous subsequences with sum 47 ending at each entry of <math>S</math> gives the total number of such subsequences in the array. Total time taken is <math>O(N)</math>, if we use a [[hash table]] implementation of the map.
+
[[Category:Pages needing diagrams]]

Latest revision as of 23:35, 19 December 2023

Given an array of numbers, we can construct a new array by replacing each element by the difference between itself and the previous element, except for the first element, which we simply ignore. This is called the difference array, because it contains the first differences of the original array. We will denote the difference array of array A by D(A). For example, the difference array of A = [9, 2, 6, 3, 1, 5, 0, 7] is D(A) = [2-9, 6-2, 3-6, 1-3, 5-1, 0-5, 7-0], or [-7, 4, -3, -2, 4, -5, 7].

We see that the difference array can be computed in linear time from the original array, and is shorter than the original array by one element. Here are implementations in C and Haskell. (Note that the Haskell implementation actually takes a list, but returns an array.)

// D must have enough space for n-1 ints
void difference_array(int* A, int n, int* D)
{
    for (int i = 0; i < n-1; i++)
        D[i] = A[i+1] - A[i];
}

import Data.Array

d :: [Int] -> Array Int Int
d a = listArray (0, length a - 2) (zipWith (-) (tail a) a)

The prefix sum array is the opposite of the difference array. Given an array of numbers A and an arbitrary constant c, we first append c onto the front of the array, and then replace each element with the sum of itself and all the elements preceding it. For example, if we start with A = [9, 2, 6, 3, 1, 5, 0, 7], and choose to append the arbitrary value -8 to the front, we obtain P(-8, A) = [-8, -8+9, -8+9+2, -8+9+2+6, ..., -8+9+2+6+3+1+5+0+7], or [-8, 1, 3, 9, 12, 13, 18, 18, 25]. Computing the prefix sum array can be done in linear time as well, and the prefix sum array is longer than the original array by one element:

// P must have enough space for n+1 ints
void prefix_sum_array(int c, int* A, int n, int* P)
{
    P[0] = c;
    for (int i = 0; i < n; i++)
        P[i+1] = P[i] + A[i];
}

import Data.Array

p :: Int -> [Int] -> Array Int Int
p c a = listArray (0, length a) (scanl (+) c a)

Note that every array has an infinite number of possible prefix sum arrays, since we can choose whatever value we want for c. For convenience, we usually choose c = 0. Changing the value of c merely shifts every element of P(c,A) by the same constant. For example, P(15, A) = [15, 24, 26, 32, 35, 36, 41, 41, 48]; each element of P(15, A) is exactly 23 more than the corresponding element of P(-8, A).

The functions D and P carry out reverse processes. Given a nonempty zero-indexed array A:

  1. D(P(c, A)) = A for any c. For example, taking the difference array of P(-8, A) = [-8, 1, 3, 9, 12, 13, 18, 18, 25] gives [9, 2, 6, 3, 1, 5, 0, 7], that is, it restores the original array A.
  2. P(A_0, D(A)) = A. Thus, taking D(A) = [-7, 4, -3, -2, 4, -5, 7] and A_0 = 9 (initial element of A), we have P(A_0, D(A)) = [9, 2, 6, 3, 1, 5, 0, 7], again restoring the original array A.

Analogy with calculus

These two processes—computing the difference array, and computing a prefix sum array—are the discrete equivalents of differentiation and integration in calculus, which operate on continuous domains. An entry in an array is like the value of a function at a particular point.

  • Reverse processes:
      • D(P(c, A)) = A for any c. Likewise \frac{d}{dx} \int_c^x f(t)\, dt = f(x) for any c.
      • P(A_0, D(A)) = A. Likewise f(a) + \int_a^x \frac{df}{dt}\, dt = f(x).
  • Uniqueness:
      • A differentiable function f(x) can only have one derivative, \frac{df}{dx}. An array A can only have one difference array, D(A).
      • A continuous function f(x) has an infinite number of antiderivatives, F_c(x) = \int_c^x f(t)\, dt, where c can be any number in its domain, but they differ only by a constant (their graphs are vertical translations of each other). An array A has an infinite number of prefix sum arrays P(c,A), but they differ only by a constant (at each entry).
      • Given some function f:[a,b]\to\mathbb{R}, and the fact that F, an antiderivative of f, satisfies F(a) = y_0, we can uniquely reconstruct F. That is, even though f has an infinite number of antiderivatives, we can pin it down to one once we are given the value the antiderivative is supposed to attain on the left edge of f's domain. Likewise, given some array A and the fact that P, a prefix sum array of A, satisfies P_0 = c, we can uniquely reconstruct P.
  • Effect on length:
      • D(A) is shorter than A by one element. Differentiating f:[a,b] \to \mathbb{R} gives a function f':(a,b) \to \mathbb{R} (shortens the closed interval to an open interval).
      • P(c,A) is longer than A by one element. Integrating f:(a,b) \to \mathbb{R} gives a function F:[a,b] \to \mathbb{R} (lengthens the open interval to a closed interval).

Because of these similarities, we will speak simply of differentiating and integrating arrays. An array can be differentiated multiple times, but eventually it will shrink to length 0. An array can be integrated any number of times.

Use of prefix sum array

The Fundamental Theorem of Calculus also has an analogue, which is why the prefix sum array is so useful. To compute an integral \int_a^b f(t)\, dt, which is like a continuous kind of sum of an infinite number of function values f(a), f(a+\epsilon), f(a+2\epsilon), ..., f(b), we take any antiderivative F and compute F(b) - F(a). Likewise, to compute the sum of values A_i, A_{i+1}, A_{i+2}, ..., A_{j-1}, we take any prefix sum array P(c,A) and compute P_j - P_i. Notice that just as we can use any antiderivative F because the constant cancels out, we can use any prefix sum array because the initial value cancels out. (Note our use of the left half-open interval.)

Proof: P_j = c + \sum_{k=0}^{j-1} A_k and P_i = c + \sum_{k=0}^{i-1} A_k. Subtracting gives P_j - P_i = \sum_{k=0}^{j-1} A_k - \sum_{k=0}^{i-1} A_k = \sum_{k=i}^{j-1} A_k as desired. _{\blacksquare}

This is best illustrated via example. Let A = [9,2,6,3,1,5,0,7] as before. Take P(0,A) = [0, 9, 11, 17, 20, 21, 26, 26, 33]. Then, suppose we want A_2 + A_3 + A_4 + A_5 = 6 + 3 + 1 + 5 = 15. We can compute this by taking P_6 - P_2 = 26 - 11 = 15. This is because P_6 - P_2 = (0 + A_0 + A_1 + A_2 + A_3 + A_4 + A_5) - (0 + A_0 + A_1) = A_2 + A_3 + A_4 + A_5.
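
A minimal C++ sketch of this idea (the names prefix and range_sum are ours): build P(0, A) once, then answer any range sum A_i + ... + A_{j-1} as P_j - P_i.

```cpp
#include <vector>

// Build the prefix sum array P(0, A); P has one more element than A.
std::vector<long long> prefix(const std::vector<int>& A) {
    std::vector<long long> P(1, 0);
    for (int x : A)
        P.push_back(P.back() + x);
    return P;
}

// Sum of A_i + A_{i+1} + ... + A_{j-1}, in O(1) per query.
long long range_sum(const std::vector<long long>& P, int i, int j) {
    return P[j] - P[i];
}
```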

When we use the prefix sum array in this case, we generally use c=0 for convenience (although we are theoretically free to use any value we wish), and speak of the prefix sum array obtained this way as simply the prefix sum array.

Example: Counting Subsequences (SPOJ)

Computing the prefix sum array is rarely the most difficult part of a problem. Instead, the prefix sum array is kept on hand because the algorithm to solve the problem makes frequent reference to range sums.

We will consider the problem Counting Subsequences from IPSC 2006. Here we are given an array of integers S and asked to find the number of contiguous subsequences of the array that sum to 47.

To solve this, we will first transform array S into its prefix sum array P(0,S). Notice that the sum of each contiguous subsequence S_i + S_{i+1} + S_{i+2} + ... + S_{j-1} corresponds to the difference of two elements of P, that is, P_j - P_i. So what we want to find is the number of pairs (i,j) with P_j - P_i = 47 and i < j. (Note that if i > j, we will instead get a subsequence with sum -47.)

However, this is quite easy to do. We sweep through P from left to right, keeping a map of all elements of P we've seen so far, along with their frequencies; and for each element P_j we count the number of times P_j - 47 has appeared so far, by looking up that value in our map; this tells us how many contiguous subsequences ending at S_{j-1} have sum 47. And finally, adding the number of contiguous subsequences with sum 47 ending at each entry of S gives the total number of such subsequences in the array. Total time taken is O(N), if we use a hash table implementation of the map.
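
The sweep can be sketched in C++ as follows (the name count_with_sum and the generalization to an arbitrary target sum are ours; with target = 47 this is the problem above):

```cpp
#include <unordered_map>
#include <vector>

// Count contiguous subsequences of S summing to `target`, by sweeping the
// prefix sums and, at each P_j, counting earlier occurrences of P_j - target.
long long count_with_sum(const std::vector<int>& S, int target) {
    std::unordered_map<long long, long long> seen;
    long long prefix = 0, total = 0;
    seen[0] = 1;                       // P_0 = 0 precedes the whole array
    for (int x : S) {
        prefix += x;                   // this is P_j after j elements
        total += seen[prefix - target];
        seen[prefix]++;                // record P_j for later positions
    }
    return total;
}
```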

Use of difference array

The difference array is used to keep track of an array when ranges of said array can be updated all at once. If we have array A and add an increment k to elements A_i, A_{i+1}, ..., A_{j-1}, then notice that D_0, D_1, ..., D_{i-2} are not affected; that D_{i-1} = A_i - A_{i-1} is increased by k; that D_i, D_{i+1}, ..., D_{j-2} are not affected; that D_{j-1} = A_j - A_{j-1} is decreased by k; and that D_j, D_{j+1}, ... are unaffected. Thus, if we are required to update many ranges of an array in this manner, we should keep track of D rather than A itself, and then integrate at the end to reconstruct A.

Example: Wireless (CCC)

This is the basis of the model solution to Wireless from CCC 2009. We are given a grid of lattice points with up to 30000 rows and up to 1000 columns, and up to 1000 circles, each of which is centered at a lattice point. Each circle has a particular weight associated with it. We want to stand at a lattice point such that the sum of the weights of all the circles covering it is maximized, and to determine how many such lattice points there are.

The most straightforward way to do this is by actually computing the sum of weights of all covering circles at each individual lattice point. However, doing this naively would take up to 30000\cdot 1000\cdot 1000 = 3\cdot 10^{10} operations, as we would have to consider each lattice point and each circle and decide whether that circle covers that lattice point.

To solve this, we will treat each column of the lattice as an array A, where entry A_i denotes the sum of weights of all the circles covering the point in row i of that column. Now, consider any of the given circles. Either this circle does not cover any points in this column at all, or the points it covers in this column will all be consecutive. (This is equivalent to saying that the intersection of a circle with a line is a line segment.) For example, a circle centered at (2,3) with radius 5 will cover no lattice points with x-coordinate -10, and as for lattice points with x-coordinate 1, it will cover (1,-1), (1,0), (1,1), (1,2), (1,3), (1,4), (1,5), (1,6), (1,7). Thus, the circle covers some first point i and some last point j, and adds some value B to each point that it covers; that is, it adds B to A_i, A_{i+1}, ..., A_j. Thus, we will simply maintain the difference array D, noting that the circle adds B to D_{i-1} and takes B away from D_j. We maintain such a difference array for each column; and we perform at most two updates for each circle-column pair; so the total number of updates is bounded by 2\cdot 1000\cdot 1000 = 2\cdot 10^6. (The detail of how to keep track of the initial element of A is left as an exercise to the reader.) After this, we perform integration on each column and find the maximum values, which requires at most a constant times the total number of lattice points, which is up to 3\cdot 10^7.

Multiple dimensions

The prefix sum array and difference array can be extended to multiple dimensions.

Prefix sum array

We start by considering the two-dimensional case. Given an m\times n array A, we will define the prefix sum array P(A) as follows: P_{i,j} = \sum_{k=0}^{i-1} \sum_{l=0}^{j-1} A_{k,l}. In other words, the first row of P is all zeroes, the first column of P is all zeroes, and all other elements of P are obtained by adding up some upper-left rectangle of values of A. For example, P_{2,3} = A_{0,0} + A_{0,1} + A_{0,2} + A_{1,0} + A_{1,1} + A_{1,2}. Or, more easily expressed, P_{i,j} is the sum of all entries of A with indices in [0,i) \times [0,j) (Cartesian product).

In general, if we are given an n-dimensional array A with dimensions m_1 \times m_2 \times ... \times m_n, then the prefix sum array has dimensions (m_1+1) \times (m_2+1) \times ... \times (m_n+1), and is defined by P_{i_1,i_2,...,i_n} = \sum_{k_1=0}^{i_1-1} \sum_{k_2=0}^{i_2-1} ... \sum_{k_n=0}^{i_n-1} A_{k_1,k_2,...,k_n}; or we can simply say that it is the sum of all entries of A with indices in [0,i_1) \times [0,i_2) \times ... \times [0,i_n).

To compute the prefix sum array in the two-dimensional case, we will scan first down and then right, as suggested by the following C++ implementation:

vector<vector<int> > P(vector<vector<int> > A)
{
    int m = A.size();
    int n = A[0].size();
    // Make an m+1 by n+1 array and initialize it with zeroes.
    vector<vector<int> > p(m+1, vector<int>(n+1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            p[i][j] = p[i-1][j] + A[i-1][j-1];
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            p[i][j] += p[i][j-1];
    return p;
}

Here, we first make each column of P the prefix sum array of a column of A, then convert each row of P into its prefix sum array to obtain the final values. That is, after the first pair of for loops, P_{i,j} will be equal to A_{0,j-1} + A_{1,j-1} + ... + A_{i-1,j-1}, and after the second pair, each of these column sums has been accumulated along its row, giving P_{i,j} = (A_{0,0} + ... + A_{i-1,0}) + (A_{0,1} + ... + A_{i-1,1}) + ... + (A_{0,j-1} + ... + A_{i-1,j-1}), as desired. The extension of this to more than two dimensions is straightforward.

(Haskell code is not shown, because using functional programming in this case does not make the code nicer.)

After the prefix sum array has been computed, we can use it to add together any rectangle in A, that is, all the elements with their indices in [x_1,x_2) \times [y_1,y_2). To do so, we first observe that P_{x_2,y_2} - P_{x_1,y_2} gives the sum of all the elements with indices in the box [x_1, x_2) \times [0, y_2). Then we similarly observe that P_{x_2,y_1} - P_{x_1,y_1} corresponds to the box [x_1, x_2) \times [0, y_1). Subtracting gives the box [x_1, x_2) \times [y_1, y_2). This gives a final formula of P_{x_2,y_2} - P_{x_1,y_2} - P_{x_2,y_1} + P_{x_1,y_1}.
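
As a C++ sketch (names are ours; here the prefix sum array is built with the equivalent one-pass recurrence P_{i,j} = P_{i-1,j} + P_{i,j-1} - P_{i-1,j-1} + A_{i-1,j-1} rather than the two-pass method shown earlier — both produce the same array):

```cpp
#include <vector>

// Build the 2-D prefix sum array; P is (m+1) x (n+1) with zero first row/column.
std::vector<std::vector<long long>>
prefix2d(const std::vector<std::vector<long long>>& A) {
    int m = A.size(), n = A[0].size();
    std::vector<std::vector<long long>> P(m + 1, std::vector<long long>(n + 1, 0));
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            P[i][j] = P[i-1][j] + P[i][j-1] - P[i-1][j-1] + A[i-1][j-1];
    return P;
}

// Sum of A over [x1, x2) x [y1, y2), by inclusion-exclusion on four corners.
long long box_sum(const std::vector<std::vector<long long>>& P,
                  int x1, int x2, int y1, int y2) {
    return P[x2][y2] - P[x1][y2] - P[x2][y1] + P[x1][y1];
}
```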

In the n-dimensional case, to sum the elements of A with indices in the box [x_{1,0}, x_{1,1}) \times [x_{2,0}, x_{2,1}) \times ... \times [x_{n,0}, x_{n,1}), we will use the formula \sum_{k_1=0}^1 \sum_{k_2=0}^1 ... \sum_{k_n=0}^1 (-1)^{n - k_1 - k_2 - ... - k_n} P_{x_{1,k_1}, x_{2,k_2}, ..., x_{n,k_n}}. We will not state the proof, instead noting that it is a form of the inclusion–exclusion principle.

Example: Diamonds (BOI)

The problem Diamonds from BOI '09 is a straightforward application of the prefix sum array in three dimensions. We read in a three-dimensional array A, and then compute its prefix sum array; after doing this, we will be able to determine the sum of any box of the array with indices in the box [x_1, x_2) \times [y_1, y_2) \times [z_1, z_2) as P_{x_2,y_2,z_2} - P_{x_1,y_2,z_2} - P_{x_2,y_1,z_2} - P_{x_2,y_2,z_1} + P_{x_1,y_1,z_2} + P_{x_1,y_2,z_1} + P_{x_2,y_1,z_1} - P_{x_1,y_1,z_1}.

Difference array

To use the difference array properly in two dimensions, it is easiest to use a source array A with the property that the first row and first column consist entirely of zeroes. (In multiple dimensions, we will want A_{i_1, i_2, ..., i_n} = 0 whenever any of the i's are zero.)

We will simply define the difference array D to be the array whose prefix sum array is A. This means in particular that D_{i,j} is the sum of elements of D with indices in the box [i,i+1) \times [j,j+1) (this box contains only the single element D_{i,j}). Using the prefix sum array A, we obtain D_{i,j} = A_{i+1,j+1} - A_{i,j+1} - A_{i+1,j} + A_{i,j}. In general:

D_{i_1, i_2, ..., i_n} = \sum_{k_1=0}^1 \sum_{k_2=0}^1 ... \sum_{k_n=0}^1 (-1)^{n - k_1 - k_2 - ... - k_n} A_{i_1+k_1,i_2+k_2,...,i_n+k_n}.

Should we actually need to compute the difference array, on the other hand, the easiest way to do so is by reversing the computation of the prefix sum array:

vector<vector<int> > D(vector<vector<int> > A)
{
    int m = A.size();
    int n = A[0].size();
    // Make an m-1 by n-1 array
    vector<vector<int> > d(m-1, vector<int>(n-1));
    for (int i = 0; i < m-1; i++)
        for (int j = 0; j < n-1; j++)
            d[i][j] = A[i+1][j+1] - A[i+1][j];
    for (int i = m-2; i > 0; i--)
        for (int j = 0; j < n-1; j++)
            d[i][j] -= d[i-1][j];
    return d;
}

In this code, we first make each row of d the difference array of the corresponding row of A (row i of d comes from row i+1 of A; row 0 of A, being all zeroes, contributes nothing). After this step, each column of d holds the running sums of the corresponding column of the final result, so we scan backward through each column, replacing every element with the difference between itself and the one above it, to finish the computation. (We have to scan backward so that we won't overwrite a value we still need later.) The extension to multiple dimensions is straightforward; we simply have to walk backward over every additional dimension as well.

Now we consider what happens when all elements of A with coordinates in the box given by [r_1, r_2) \times [c_1, c_2) are incremented by k. If we first take differences down each column of A, we see that in each affected column we have to add k to entry r_1-1 and subtract it from entry r_2-1 (as in the one-dimensional case). Now if we take the difference array of each row of what we've just obtained, we notice that in row number r_1-1 we added k to every element with column index in [c_1,c_2) in the previous step, and in row number r_2-1 we subtracted k from every element in the same column range, so in the end the effect is to add k to elements D_{r_1-1,c_1-1} and D_{r_2-1,c_2-1}, and to subtract k from elements D_{r_2-1,c_1-1} and D_{r_1-1,c_2-1}.
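
A zero-based C++ sketch of this (the RangeAdder2D name is ours, and indices are shifted by one relative to the D_{r-1,c-1} convention above, because we pad D with an extra row and column instead of padding A):

```cpp
#include <vector>

// 2-D range increments on an (initially zero) m x n array via a 2-D
// difference array; build() integrates twice to recover the array.
struct RangeAdder2D {
    int m, n;
    std::vector<std::vector<long long>> d;
    RangeAdder2D(int m, int n)
        : m(m), n(n), d(m + 1, std::vector<long long>(n + 1, 0)) {}
    // add k to all A[i][j] with r1 <= i < r2 and c1 <= j < c2: four corners
    void add(int r1, int r2, int c1, int c2, long long k) {
        d[r1][c1] += k; d[r2][c2] += k;
        d[r1][c2] -= k; d[r2][c1] -= k;
    }
    // A[i][j] is the 2-D prefix sum of d over [0,i] x [0,j]
    std::vector<std::vector<long long>> build() const {
        std::vector<std::vector<long long>> A(m, std::vector<long long>(n, 0));
        for (int i = 0; i < m; i++)
            for (int j = 0; j < n; j++) {
                A[i][j] = d[i][j];
                if (i > 0) A[i][j] += A[i-1][j];
                if (j > 0) A[i][j] += A[i][j-1];
                if (i > 0 && j > 0) A[i][j] -= A[i-1][j-1];
            }
        return A;
    }
};
```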

In the general case, when adding k to all elements of A with indices in the box [x_{1,0}, x_{1,1}) \times [x_{2,0}, x_{2,1}) \times ... \times [x_{n,0}, x_{n,1}), a total of 2^n elements of D need to be updated. In particular, element D_{x_{1,j_1}-1, x_{2,j_2}-1, ..., x_{n,j_n}-1} (where each of the j's can be either 0 or 1, giving 2^n possibilities in total) is incremented by (-1)^{j_1+j_2+...+j_n}k. That is, if we consider an n-dimensional array to be an n-dimensional hypercube of numbers, then the elements to be updated lie on the corners of an n-dimensional hypercube; we 2-color the vertices black and white (so that adjacent vertices always have opposite colors), with the lowest corner (corresponding to indices [x_{1,0}-1, x_{2,0}-1, ..., x_{n,0}-1]) white; each white vertex receives +k and each black vertex -k.

One can visualize the effect this has on the prefix sum array in three dimensions, and become convinced that it generalizes to n dimensions. Each element A_{i_1, i_2, ..., i_n} of the prefix sum array is the sum of all the elements in some box of the difference array with its lowest corner at the origin, [0, i_1) \times [0, i_2) \times ... \times [0, i_n). If this box's high corner lies within the hypercube, that is, x_{1,0} \leq i_1 < x_{1,1}, x_{2,0} \leq i_2 < x_{2,1}, ..., x_{n,0} \leq i_n < x_{n,1}, then the box contains only the hypercube's low corner D_{x_{1,0}-1, x_{2,0}-1, ..., x_{n,0}-1}, which has increased by k; thus this entry of A has increased by k as well, and the same is true of every element inside [x_{1,0},x_{1,1}) \times [x_{2,0},x_{2,1}) \times ... \times [x_{n,0},x_{n,1}). If instead some i_t is less than x_{t,0}, the box misses every vertex of the hypercube, so the element is unaffected. And if some i_t is at least x_{t,1}, the box passes in through one hyperface and out through the opposite one, so corresponding vertices on the low and high faces are either both included or both excluded; each such pair cancels out, again giving no change outside the hypercube.

Example: The Cake is a Dessert

The Cake is a Dessert from the Waterloo–Woburn 2011 Mock CCC is a straightforward application of the difference array in two dimensions.

A word on the dynamic case

The dynamic version of the problem that the prefix sum array is intended to solve requires us to be able to carry out an intermixed sequence of operations, where each operation either changes an element of A or asks us to determine the sum A_0 + A_1 + ... + A_{i-1} for some i. That is, we need to be able to compute entries of the prefix sum array of an array that is changing (dynamic). If we can do this, then we can do range sum queries easily on a dynamic array, simply by taking the difference of prefix sums as in the static case. This is a bit trickier to do, but is most easily and efficiently accomplished using the binary indexed tree data structure, which is specifically designed to solve this problem and can be updated in O(\log(m_1) \log(m_2) ... \log(m_n)) time, where n is the number of dimensions and each m is a dimension. Each prefix sum query also takes that amount of time, but we need to perform 2^n of these to find a box sum using the formula given previously, so the running time of the query is asymptotically 2^n times greater than that of the update. In the most commonly encountered one-dimensional case, both query and update are simply O(\log m).
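
The one-dimensional structure can be sketched as follows (this is the standard formulation of the binary indexed tree, covered in detail on its own page; the struct and method names are ours):

```cpp
#include <vector>

// Binary indexed (Fenwick) tree: point update and prefix sum query,
// both in O(log m) for an array of length m.
struct BIT {
    std::vector<long long> t;
    BIT(int n) : t(n + 1, 0) {}
    void update(int i, long long k) {      // A_i += k (0-based i)
        for (i++; i < (int)t.size(); i += i & -i) t[i] += k;
    }
    long long prefix(int i) const {        // A_0 + A_1 + ... + A_{i-1}
        long long s = 0;
        for (; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};
```

A range sum is then prefix(j) - prefix(i), exactly as in the static case.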

The dynamic version of the problem that the difference array is intended to solve is analogous: we want to be able to increment entire ranges of an array, but at any point we also want to be able to determine any element of the array. Here, we simply use a binary indexed tree for the difference array, and compute prefix sums whenever we want to access an element. Here, it is the update that is slower by a factor of 2^n than the query (because we have to update 2^n entries at a time, but only query one).
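
In one dimension this looks like the following sketch (names are ours): the binary indexed tree stores the difference array, so a range update touches two entries and an element read is a prefix sum.

```cpp
#include <vector>

// Range update, point query: a binary indexed tree over the difference array.
struct RangeUpdatePointQuery {
    std::vector<long long> t;
    RangeUpdatePointQuery(int n) : t(n + 1, 0) {}
    void bitAdd(int i, long long k) {       // D_i += k inside the tree
        for (i++; i < (int)t.size(); i += i & -i) t[i] += k;
    }
    void rangeAdd(int i, int j, long long k) {  // A_i, ..., A_{j-1} += k
        bitAdd(i, k);
        bitAdd(j, -k);                      // no-op when j == n
    }
    long long get(int i) const {            // A_i = D_0 + D_1 + ... + D_i
        long long s = 0;
        for (i++; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};
```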

On the other hand, if we combine the two, that is, we wish to be able to increment ranges of some array A as well as query prefix sums of A in an intermixed sequence, matters become more complicated. Notice that in the static case we would just track the difference array of A and then integrate twice at the end. In the dynamic case, however, we need to use a segment tree with lazy propagation. This will give us O(2^n \log(m_1) \log(m_2) ... \log(m_n)) time on both the query and the update, but is somewhat more complex and slower than the binary indexed tree on the invisible constant factor.
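
A one-dimensional sketch of the combined structure (names are ours; this is one standard formulation of a lazy segment tree, not the only one): each node stores the sum of its range plus a pending increment that is pushed to its children only when needed.

```cpp
#include <vector>

// Segment tree with lazy propagation: range increment and range sum query,
// both in O(log m) for an array of length m (initially all zeroes).
struct LazySegTree {
    int n;
    std::vector<long long> sum, lazy;
    LazySegTree(int n) : n(n), sum(4 * n, 0), lazy(4 * n, 0) {}
    void apply(int node, int len, long long k) {
        sum[node] += k * len;   // every element under this node grows by k
        lazy[node] += k;        // remember the pending increment for children
    }
    void push(int node, int lo, int mid, int hi) {
        if (lazy[node]) {
            apply(2 * node, mid - lo, lazy[node]);
            apply(2 * node + 1, hi - mid, lazy[node]);
            lazy[node] = 0;
        }
    }
    void add(int i, int j, long long k) { add(1, 0, n, i, j, k); }  // A_i..A_{j-1} += k
    void add(int node, int lo, int hi, int i, int j, long long k) {
        if (j <= lo || hi <= i) return;
        if (i <= lo && hi <= j) { apply(node, hi - lo, k); return; }
        int mid = (lo + hi) / 2;
        push(node, lo, mid, hi);
        add(2 * node, lo, mid, i, j, k);
        add(2 * node + 1, mid, hi, i, j, k);
        sum[node] = sum[2 * node] + sum[2 * node + 1];
    }
    long long query(int i, int j) { return query(1, 0, n, i, j); }  // sum of [i, j)
    long long query(int node, int lo, int hi, int i, int j) {
        if (j <= lo || hi <= i) return 0;
        if (i <= lo && hi <= j) return sum[node];
        int mid = (lo + hi) / 2;
        push(node, lo, mid, hi);
        return query(2 * node, lo, mid, i, j) + query(2 * node + 1, mid, hi, i, j);
    }
};
```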