==Discussion of complexity==
The corresponding decision problem, which simply asks us to determine whether or not making change is ''possible'' with the given denominations (which it might not be, if we are missing the denomination 1), is known to be [[NP-complete]].<ref>G. S. Lueker. (1975). ''Two NP-complete problems in nonnegative integer programming.'' Technical Report 178, Computer Science Laboratory, Princeton University. (The authors were not able to obtain a copy of this paper, but in the literature it is invariably cited to back up the claim that change is NP-complete.)</ref> It follows that the optimization and counting problems are both [[NP-hard]] (''e.g.'', because the result of 0 for the counting problem answers the decision problem in the negative, and any nonzero value answers it in the affirmative).
 
However, as we shall see, a simple <math>O(nT)</math> solution exists for both versions of the problem. Why then are these problems not in P? The answer is that the ''size'' of the input required to represent the number <math>T</math> is actually the ''length'' of the number <math>T</math>, which is <math>\Theta(\log T)</math> when <math>T</math> is expressed in binary (or decimal, or whatever). Thus, the time and space required by the algorithm is actually <math>O(n 2^{\lg T})</math>, that is, exponential in the size of the input. (This simplified analysis does not take into account the sizes of the denominations, but captures the essence of the argument.) This algorithm is then said to be ''pseudo-polynomial''. No true polynomial-time algorithm is known (and, indeed, none will be found unless it turns out that P = NP).
 
Many real-world currency systems admit a [[greedy solution]] to the optimization version of the change problem. This algorithm is as follows: repeatedly choose the largest denomination that is less than or equal to the target amount, and ''use it'', that is, subtract it from the target amount, and then repeat this procedure on the reduced value, until the target amount decreases to zero. For example, with Canadian currency, we can greedily make change for $0.63 as follows: the largest denomination that fits into $0.63 is $0.25, so we subtract that (and thus resolve to use a $0.25 coin), leaving $0.38; the largest denomination that fits is again $0.25, so we subtract it to obtain $0.13 (two $0.25 coins used so far); now the largest denomination that fits is $0.10, so we subtract that out, leaving us with $0.03; finally, we subtract three $0.01 coins, leaving $0.00, at which point the algorithm terminates. In total, we have used six coins (two $0.25 coins, one $0.10 coin, and three $0.01 coins).
 
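As a concrete illustration, here is a minimal sketch of the greedy procedure in Python; the function name <code>greedy_change</code> and the use of integer cent values (rather than dollar amounts) are our own illustrative choices, not part of the problem statement.

<pre>
def greedy_change(target, denominations):
    """Greedily make change for target. Returns the list of coins used,
    or None if the procedure gets stuck before reaching zero."""
    coins = []
    # Repeatedly take the largest denomination that still fits.
    for d in sorted(denominations, reverse=True):
        while target >= d:
            target -= d
            coins.append(d)
    return coins if target == 0 else None

# The example from the text, with Canadian-style denominations in cents:
print(greedy_change(63, [1, 5, 10, 25]))
# [25, 25, 10, 1, 1, 1] -- six coins, as computed above
</pre>
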
It turns out that the greedy algorithm ''always'' gives the correct result for both Canadian and United States currencies (the proof is left as an exercise for the reader). There are various other real-world currency systems for which this is also true. However, there are simple examples of sets of denominations for which the greedy algorithm does ''not'' give a correct solution. For example, with the set of denominations <math>\{1, 3, 4\}</math>, the greedy algorithm will change 6 as 4+1+1, using three coins, whereas the correct minimal solution is obviously 3+3. There are also cases in which the greedy algorithm will fail to make change at all (consider what happens if we try to change 6 using the denominations <math>\{3, 4\}</math>). This usually does not occur in real-world systems because they tend to have denominations that are quite a bit more "spaced out".
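
These small counterexamples are easy to confirm by brute force. The following Python sketch (the helper names <code>greedy_count</code> and <code>best_count</code> are illustrative) compares the number of coins the greedy algorithm uses against the true minimum found by exhaustive search:

<pre>
from functools import lru_cache

def greedy_count(target, denominations):
    """Number of coins the greedy algorithm uses, or None if it gets stuck."""
    count = 0
    for d in sorted(denominations, reverse=True):
        while target >= d:
            target -= d
            count += 1
    return count if target == 0 else None

def best_count(target, denominations):
    """Minimum number of coins over all ways of making change, or None if
    no way exists; found by exhaustive recursion with memoization."""
    @lru_cache(maxsize=None)
    def f(x):
        if x == 0:
            return 0
        options = [f(x - d) for d in denominations if d <= x]
        options = [c for c in options if c is not None]
        return 1 + min(options) if options else None
    return f(target)

print(greedy_count(6, [1, 3, 4]), best_count(6, [1, 3, 4]))  # 3 2 -- greedy is suboptimal
print(greedy_count(6, [3, 4]), best_count(6, [3, 4]))        # None 2 -- greedy gets stuck, yet 3+3 works
</pre>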
 
Obviously, there is no greedy solution to the counting problem.
 
The optimization problem exhibits optimal substructure, in the sense that if we remove any coin of value <math>D_i</math> from an optimal means of changing <math>T</math>, then the set of coins remaining is an optimal means of changing <math>T - D_i</math>. For suppose this were ''not'' so, that is, suppose there existed a means of changing <math>T - D_i</math> that used fewer coins than the set we obtained by removing the coin <math>D_i</math> from our supposed optimal change for <math>T</math>; then we could just add the coin <math>D_i</math> back in and get change for the original amount <math>T</math> in fewer coins, a contradiction. Therefore, if we let <math>f(x)</math> denote the minimal number of coins required to change amount <math>x</math>, then we can write <math>f(x) = 1 + \min_i f(x - D_i)</math>; we consider all possible minimal solutions to <math>x</math> minus one coin, take the best one, and add that coin back in to get minimal change for <math>x</math>. The base case is <math>f(0) = 0</math>; obviously, 0 coins are required to make change for 0. See [[Dynamic_programming#Optimization_example:_Change_problem|the DP article]] for details and an implementation.
 
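The linked article gives the full treatment; as a quick illustration, one possible bottom-up tabulation of this recurrence in Python might look as follows (the name <code>min_coins</code> and the sentinel <code>INF</code> standing for "cannot be changed" are illustrative choices):

<pre>
def min_coins(T, D):
    """f[x] = minimal number of coins needed to change x,
    following f(0) = 0 and f(x) = 1 + min_i f(x - D_i)."""
    INF = float('inf')              # sentinel: this amount cannot be changed
    f = [0] + [INF] * T
    for x in range(1, T + 1):
        for d in D:
            if d <= x and f[x - d] + 1 < f[x]:
                f[x] = f[x - d] + 1
    return f[T]

print(min_coins(63, [1, 5, 10, 25]))   # 6, matching the greedy answer above
print(min_coins(6, [1, 3, 4]))         # 2, that is, 3+3
</pre>
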
The counting problem is more subtle. We cannot approach it in quite the same way as we approach the optimization problem, because the counting problem does not exhibit disjoint substructure when it is "sliced" this way. For example, if the denominations are 2 and 3, and the target amount is 5, then we might try to conclude that the number of ways of changing 5 is the number of ways of changing 2 plus the number of ways of changing 3, because we can either add a coin of value 2 to any way of changing 3 or add a coin of value 3 to any way of changing 2. Alas, this gives the incorrect answer that there are 2 ways of changing 5, whereas in actual fact we double-counted the solution 2+3 and there is, in fact, only one way to change 5.
 
The solution in this case is to compute the function <math>f(x, n)</math>, the number of ways to make change for <math>x</math> ''using only the first <math>n</math> denominations'' (and not necessarily all of them). The base cases are <math>f(0, 0) = 1</math> and <math>f(x, 0) = 0</math> for all <math>x > 0</math>: there is exactly one way to make change for the amount 0 using no denominations at all, namely, to take no coins whatsoever, whereas we obviously cannot change any nonzero amount if we are not allowed to use any denominations.
 
Now here comes the disjoint and exhaustive substructure. To make change for <math>x</math> using only the first <math>n</math> denominations, we have two disjoint and exhaustive options: either we use at least one coin of denomination <math>D_n</math>, or we use none at all. The number of ways of making change for <math>x</math> using no coins of denomination <math>D_n</math> is <math>f(x, n-1)</math>. As for the ways that use at least one coin of denomination <math>D_n</math>, they can be put in one-to-one correspondence with the ways of making change for <math>x - D_n</math> using only the first <math>n</math> denominations (simply by addition or removal of one coin of denomination <math>D_n</math>). We conclude that <math>f(x, n) = f(x, n-1) + f(x - D_n, n)</math>.
 
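Before applying the space optimization described next, it may help to see the recurrence tabulated directly as a two-dimensional table. The following Python sketch (the name <code>count_ways</code> is an illustrative choice) computes <math>f(x, j)</math> for all <math>x</math> and <math>j</math>, with <code>D[j-1]</code> playing the role of <math>D_j</math>:

<pre>
def count_ways(T, D):
    """f[x][j] = number of ways to change x using only the first j denominations,
    following f(x, j) = f(x, j-1) + f(x - D_j, j)."""
    n = len(D)
    f = [[0] * (n + 1) for _ in range(T + 1)]
    f[0][0] = 1                            # one way to change 0: take no coins
    for j in range(1, n + 1):
        for x in range(T + 1):
            f[x][j] = f[x][j - 1]          # ways that avoid denomination D_j
            if x >= D[j - 1]:
                f[x][j] += f[x - D[j - 1]][j]   # ways that use it at least once
    return f[T][n]

print(count_ways(5, [2, 3]))   # 1 -- the single way 2+3 discussed above
</pre>
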
We can implement this algorithm in <math>O(T)</math> space by filling in the table one column at a time; that is, the value <math>f(x, n)</math> does not depend on any <math>f(x', n')</math> with <math>n' < n - 1</math>, so we only need to keep the last two columns at any given point. In fact, we do not even need two columns; the only value from the previous column that the computation of <math>f(x, n)</math> requires is <math>f(x, n-1)</math>, so we can overwrite it in place with the new value (the old value will never be needed for the rest of this column). Here is pseudocode:
<pre>
input T, n, array D
dp[0] &larr; 1
for i &isin; [1..T]
    dp[i] &larr; 0
for j &isin; [1..n]
    for i &isin; [D[j]..T]
        dp[i] &larr; dp[i] + dp[i-D[j]]
print dp[T]
</pre>
Should we actually wish to print out all ways of making change, the corresponding recursive descent algorithm would be a sound choice.<sup>[elaborate?]</sup>
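
One possible recursive sketch of such an enumeration in Python follows (the name <code>print_all_change</code> is illustrative). To avoid the double counting discussed earlier, it mirrors the <math>f(x, n)</math> recurrence: at each step it either skips the current denomination for good or commits to one more coin of it.

<pre>
def print_all_change(T, D):
    """Print every way of making change for T, one multiset of coins per line."""
    def descend(x, j, chosen):
        if x == 0:
            print(chosen)                  # a complete way of making change
            return
        if j == 0:
            return                         # nonzero amount, no denominations left
        descend(x, j - 1, chosen)          # use no (further) coins of D[j-1]
        if x >= D[j - 1]:
            descend(x - D[j - 1], j, chosen + [D[j - 1]])   # use one more coin of D[j-1]
    descend(T, len(D), [])

print_all_change(10, [2, 3, 5])
# [2, 2, 2, 2, 2]
# [3, 3, 2, 2]
# [5, 3, 2]
# [5, 5]
</pre>
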
 
==Notes and References==
 
<references/>
 