https://wcipeg.com/wiki/index.php?title=Hidden_constant_factor&feed=atom&action=historyHidden constant factor - Revision history2024-03-28T22:21:18ZRevision history for this page on the wikiMediaWiki 1.25.2https://wcipeg.com/wiki/index.php?title=Hidden_constant_factor&diff=1604&oldid=prevBrian: invisible -> hidden (post-move)2012-02-18T08:49:54Z<p>invisible -> hidden (post-move)</p>
<table class='diff diff-contentalign-left'>
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 08:49, 18 February 2012</td>
</tr><tr><td colspan="2" class="diff-lineno" id="L1" >Line 1:</td>
<td colspan="2" class="diff-lineno">Line 1:</td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>When the time or space required for an [[algorithm]] is expressed in terms of the input size using big O notation, constant factors are destroyed. For example, if one algorithm requires <math>n^2</math> nanoseconds on a given machine, and another requires <math>2n^2</math> nanoseconds on that machine, then both algorithms are <math>O(n^2)</math>, and from that information alone we cannot determine that the former is faster than the latter. That is, in big O notation, the constant factor is '''<del class="diffchange diffchange-inline">invisible</del>'''.</div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>When the time or space required for an [[algorithm]] is expressed in terms of the input size using big O notation, constant factors are destroyed. For example, if one algorithm requires <math>n^2</math> nanoseconds on a given machine, and another requires <math>2n^2</math> nanoseconds on that machine, then both algorithms are <math>O(n^2)</math>, and from that information alone we cannot determine that the former is faster than the latter. That is, in big O notation, the constant factor is '''<ins class="diffchange diffchange-inline">hidden</ins>'''.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>From a theoretical point of view, this is advantageous, since we could always design a faster machine, which would make our algorithms take less time to run, but that wouldn't reflect the efficiency of the algorithm itself; and so we always want to discard the constant factor. In practice, however, the <del class="diffchange diffchange-inline">invisible </del>constant factor is very important. If one algorithm requires <math>n^2</math> nanoseconds and another requires <math>n</math> milliseconds, then the latter appears to be more efficient as it is <math>O(n)</math> rather than <math>O(n^2)</math>, but in practice it is only faster when <math>n > 10^6</math>.</div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>From a theoretical point of view, this is advantageous, since we could always design a faster machine, which would make our algorithms take less time to run, but that wouldn't reflect the efficiency of the algorithm itself; and so we always want to discard the constant factor. In practice, however, the <ins class="diffchange diffchange-inline">hidden </ins>constant factor is very important. If one algorithm requires <math>n^2</math> nanoseconds and another requires <math>n</math> milliseconds, then the latter appears to be more efficient as it is <math>O(n)</math> rather than <math>O(n^2)</math>, but in practice it is only faster when <math>n > 10^6</math>.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>The discrepancy in <del class="diffchange diffchange-inline">invisible </del>constant factor between two algorithms with the same asymptotic running time (big O) is a consequence of three main factors:</div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>The discrepancy in <ins class="diffchange diffchange-inline">hidden </ins>constant factor between two algorithms with the same asymptotic running time (big O) is a consequence of three main factors:</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms, by nature, simply require more operations than others. [[Bubble sort]], for example, tends to use more operations than [[insertion sort]]. Bubble sort can only reorder elements by swapping two adjacent elements at a time, and swapping two elements requires three copy operations (as an intermediate variable has to be used), and swapping two elements eliminates an inversion from the sequence. On the other hand, insertion sort moves elements longer distances at once. When an element is moved <math>m</math> positions, it eliminates <math>m</math> inversions, and requires <math>m+2</math> copy operations; and thus insertion sort will generally average a bit more than one copy operation per inversion.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms, by nature, simply require more operations than others. [[Bubble sort]], for example, tends to use more operations than [[insertion sort]]. Bubble sort can only reorder elements by swapping two adjacent elements at a time, and swapping two elements requires three copy operations (as an intermediate variable has to be used), and swapping two elements eliminates an inversion from the sequence. On the other hand, insertion sort moves elements longer distances at once. When an element is moved <math>m</math> positions, it eliminates <math>m</math> inversions, and requires <math>m+2</math> copy operations; and thus insertion sort will generally average a bit more than one copy operation per inversion.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some operations are slower than others. For example, multiplication and division of floating point numbers tend to be slower than addition and subtraction. Thus, for example, if a primitive in [[computational geometry]] can be implemented using either six additions and two multiplications or four additions and three multiplications, both implementations take constant time, but the former is probably faster.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some operations are slower than others. For example, multiplication and division of floating point numbers tend to be slower than addition and subtraction. Thus, for example, if a primitive in [[computational geometry]] can be implemented using either six additions and two multiplications or four additions and three multiplications, both implementations take constant time, but the former is probably faster.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms exhibit better locality of reference than others. For example, two nested for loops that iterate over a two-dimensional [[array]] should always be written so that they access the elements of the array in sequence in RAM, rather than in the other order. For example, in C this means that they should access elements in the order <code>A[0][0], A[0][1], A[0][2], ..., A[1][0], ...</code> rather than in the order <code>A[0][0], A[1][0], A[2][0], ..., A[0][1], ...</code>. The former hits the cache on almost every access; the latter almost always misses it.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms exhibit better locality of reference than others. For example, two nested for loops that iterate over a two-dimensional [[array]] should always be written so that they access the elements of the array in sequence in RAM, rather than in the other order. For example, in C this means that they should access elements in the order <code>A[0][0], A[0][1], A[0][2], ..., A[1][0], ...</code> rather than in the order <code>A[0][0], A[1][0], A[2][0], ..., A[0][1], ...</code>. The former hits the cache on almost every access; the latter almost always misses it.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>Here are some general useful conclusions that can be drawn about the <del class="diffchange diffchange-inline">invisible </del>constant factor:</div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>Here are some general useful conclusions that can be drawn about the <ins class="diffchange diffchange-inline">hidden </ins>constant factor:</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* [[Quicksort]] is generally faster than [[heapsort]] and [[mergesort]], though each has average-case performance <math>O(n \log n)</math>. Furthermore, most programming language standard libraries include highly optimized sorting routines. The <math>O(n \log n)</math> time required to sort generally has a lower constant factor than that of almost any other <math>O(n \log n)</math> algorithm you might want to implement for the same input.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* [[Quicksort]] is generally faster than [[heapsort]] and [[mergesort]], though each has average-case performance <math>O(n \log n)</math>. Furthermore, most programming language standard libraries include highly optimized sorting routines. The <math>O(n \log n)</math> time required to sort generally has a lower constant factor than that of almost any other <math>O(n \log n)</math> algorithm you might want to implement for the same input.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* The <math>O(\log n)</math> time associated with a [[binary heap]] operation generally has lower constant factor (is faster) than the <math>O(\log n)</math> time associated with a [[balanced binary search tree]] operation. Thus, BBSTs implement a superset of the functionality of heaps, but at the cost of slower running time.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* The <math>O(\log n)</math> time associated with a [[binary heap]] operation generally has lower constant factor (is faster) than the <math>O(\log n)</math> time associated with a [[balanced binary search tree]] operation. Thus, BBSTs implement a superset of the functionality of heaps, but at the cost of slower running time.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* [[Segment tree]]s require about twice as much memory as [[binary indexed tree]]s (and incur an additional factor of 2 for each additional dimension), and an <math>O(\log n)</math> segment tree operation is generally slower than an <math>O(\log n)</math> BIT operation, too.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* [[Segment tree]]s require about twice as much memory as [[binary indexed tree]]s (and incur an additional factor of 2 for each additional dimension), and an <math>O(\log n)</math> segment tree operation is generally slower than an <math>O(\log n)</math> BIT operation, too.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* [[Suffix tree]]s use more memory and take more time to construct than [[suffix array]]s, though both are linear.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* [[Suffix tree]]s use more memory and take more time to construct than [[suffix array]]s, though both are linear.</div></td></tr>
</table>Brianhttps://wcipeg.com/wiki/index.php?title=Hidden_constant_factor&diff=1602&oldid=prevBrian: moved Invisible constant factor to Hidden constant factor: more mainstream name2012-02-18T08:48:55Z<p>moved <a href="/wiki/Invisible_constant_factor" class="mw-redirect" title="Invisible constant factor">Invisible constant factor</a> to <a href="/wiki/Hidden_constant_factor" title="Hidden constant factor">Hidden constant factor</a>: more mainstream name</p>
<table class='diff diff-contentalign-left'>
<tr style='vertical-align: top;'>
<td colspan='1' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='1' style="background-color: white; color:black; text-align: center;">Revision as of 08:48, 18 February 2012</td>
</tr><tr><td colspan='2' style='text-align: center;'><div class="mw-diff-empty">(No difference)</div>
</td></tr></table>Brianhttps://wcipeg.com/wiki/index.php?title=Hidden_constant_factor&diff=1510&oldid=prevBrian: whoops2011-12-19T11:56:39Z<p>whoops</p>
<table class='diff diff-contentalign-left'>
<col class='diff-marker' />
<col class='diff-content' />
<col class='diff-marker' />
<col class='diff-content' />
<tr style='vertical-align: top;'>
<td colspan='2' style="background-color: white; color:black; text-align: center;">← Older revision</td>
<td colspan='2' style="background-color: white; color:black; text-align: center;">Revision as of 11:56, 19 December 2011</td>
</tr><tr><td colspan="2" class="diff-lineno" id="L5" >Line 5:</td>
<td colspan="2" class="diff-lineno">Line 5:</td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>The discrepancy in invisible constant factor between two algorithms with the same asymptotic running time (big O) is a consequence of three main factors:</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>The discrepancy in invisible constant factor between two algorithms with the same asymptotic running time (big O) is a consequence of three main factors:</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms, by nature, simply require more operations than others. [[Bubble sort]], for example, tends to use more operations than [[insertion sort]]. Bubble sort can only reorder elements by swapping two adjacent elements at a time, and swapping two elements requires three copy operations (as an intermediate variable has to be used), and swapping two elements eliminates an inversion from the sequence. On the other hand, insertion sort moves elements longer distances at once. When an element is moved <math>m</math> positions, it eliminates <math>m</math> inversions, and requires <math>m+2</math> copy operations; and thus insertion sort will generally average a bit more than one copy operation per inversion.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms, by nature, simply require more operations than others. [[Bubble sort]], for example, tends to use more operations than [[insertion sort]]. Bubble sort can only reorder elements by swapping two adjacent elements at a time, and swapping two elements requires three copy operations (as an intermediate variable has to be used), and swapping two elements eliminates an inversion from the sequence. On the other hand, insertion sort moves elements longer distances at once. When an element is moved <math>m</math> positions, it eliminates <math>m</math> inversions, and requires <math>m+2</math> copy operations; and thus insertion sort will generally average a bit more than one copy operation per inversion.</div></td></tr>
<tr><td class='diff-marker'>−</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;"><div>* Some operations are slower than others. For example, multiplication and division of floating point numbers tend to be slower than addition and subtraction. Thus, for example, if a primitive in [[computational geometry]] can be implemented using either six additions and two multiplications or four additions and three multiplications, both implementations take constant time, but the <del class="diffchange diffchange-inline">latter </del>is probably faster.</div></td><td class='diff-marker'>+</td><td style="color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;"><div>* Some operations are slower than others. For example, multiplication and division of floating point numbers tend to be slower than addition and subtraction. Thus, for example, if a primitive in [[computational geometry]] can be implemented using either six additions and two multiplications or four additions and three multiplications, both implementations take constant time, but the <ins class="diffchange diffchange-inline">former </ins>is probably faster.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms exhibit better locality of reference than others. For example, two nested for loops that iterate over a two-dimensional [[array]] should always be written so that they access the elements of the array in sequence in RAM, rather than in the other order. For example, in C this means that they should access elements in the order <code>A[0][0], A[0][1], A[0][2], ..., A[1][0], ...</code> rather than in the order <code>A[0][0], A[1][0], A[2][0], ..., A[0][1], ...</code>. The former hits the cache on almost every access; the latter almost always misses it.</div></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"><div>* Some algorithms exhibit better locality of reference than others. For example, two nested for loops that iterate over a two-dimensional [[array]] should always be written so that they access the elements of the array in sequence in RAM, rather than in the other order. For example, in C this means that they should access elements in the order <code>A[0][0], A[0][1], A[0][2], ..., A[1][0], ...</code> rather than in the order <code>A[0][0], A[1][0], A[2][0], ..., A[0][1], ...</code>. The former hits the cache on almost every access; the latter almost always misses it.</div></td></tr>
<tr><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td><td class='diff-marker'> </td><td style="background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;"></td></tr>
</table>Brianhttps://wcipeg.com/wiki/index.php?title=Hidden_constant_factor&diff=1509&oldid=prevBrian: Created page with "When the time or space required for an algorithm is expressed in terms of the input size using big O notation, constant factors are destroyed. For example, if one algorithm r..."2011-12-19T11:56:02Z<p>Created page with "When the time or space required for an <a href="/wiki/Algorithm" title="Algorithm">algorithm</a> is expressed in terms of the input size using big O notation, constant factors are destroyed. For example, if one algorithm r..."</p>
<p><b>New page</b></p><div>When the time or space required for an [[algorithm]] is expressed in terms of the input size using big O notation, constant factors are destroyed. For example, if one algorithm requires <math>n^2</math> nanoseconds on a given machine, and another requires <math>2n^2</math> nanoseconds on that machine, then both algorithms are <math>O(n^2)</math>, and from that information alone we cannot determine that the former is faster than the latter. That is, in big O notation, the constant factor is '''invisible'''.<br />
<br />
From a theoretical point of view, this is advantageous, since we could always design a faster machine, which would make our algorithms take less time to run, but that wouldn't reflect the efficiency of the algorithm itself; and so we always want to discard the constant factor. In practice, however, the invisible constant factor is very important. If one algorithm requires <math>n^2</math> nanoseconds and another requires <math>n</math> milliseconds, then the latter appears to be more efficient as it is <math>O(n)</math> rather than <math>O(n^2)</math>, but in practice it is only faster when <math>n > 10^6</math>.<br />
<br />
The discrepancy in invisible constant factor between two algorithms with the same asymptotic running time (big O) is a consequence of three main factors:<br />
* Some algorithms, by nature, simply require more operations than others. [[Bubble sort]], for example, tends to use more operations than [[insertion sort]]. Bubble sort can only reorder elements by swapping two adjacent elements at a time, and swapping two elements requires three copy operations (as an intermediate variable has to be used), and swapping two elements eliminates an inversion from the sequence. On the other hand, insertion sort moves elements longer distances at once. When an element is moved <math>m</math> positions, it eliminates <math>m</math> inversions, and requires <math>m+2</math> copy operations; and thus insertion sort will generally average a bit more than one copy operation per inversion.<br />
* Some operations are slower than others. For example, multiplication and division of floating point numbers tend to be slower than addition and subtraction. Thus, for example, if a primitive in [[computational geometry]] can be implemented using either six additions and two multiplications or four additions and three multiplications, both implementations take constant time, but the latter is probably faster.<br />
* Some algorithms exhibit better locality of reference than others. For example, two nested for loops that iterate over a two-dimensional [[array]] should always be written so that they access the elements of the array in sequence in RAM, rather than in the other order. For example, in C this means that they should access elements in the order <code>A[0][0], A[0][1], A[0][2], ..., A[1][0], ...</code> rather than in the order <code>A[0][0], A[1][0], A[2][0], ..., A[0][1], ...</code>. The former hits the cache on almost every access; the latter almost always misses it.<br />
<br />
Here are some general useful conclusions that can be drawn about the invisible constant factor:<br />
* [[Quicksort]] is generally faster than [[heapsort]] and [[mergesort]], though each has average-case performance <math>O(n \log n)</math>. Furthermore, most programming language standard libraries include highly optimized sorting routines. The <math>O(n \log n)</math> time required to sort generally has a lower constant factor than that of almost any other <math>O(n \log n)</math> algorithm you might want to implement for the same input.<br />
* The <math>O(\log n)</math> time associated with a [[binary heap]] operation generally has lower constant factor (is faster) than the <math>O(\log n)</math> time associated with a [[balanced binary search tree]] operation. Thus, BBSTs implement a superset of the functionality of heaps, but at the cost of slower running time.<br />
* [[Segment tree]]s require about twice as much memory as [[binary indexed tree]]s (and incur an additional factor of 2 for each additional dimension), and an <math>O(\log n)</math> segment tree operation is generally slower than an <math>O(\log n)</math> BIT operation, too.<br />
* [[Suffix tree]]s use more memory and take more time to construct than [[suffix array]]s, though both are linear.</div>Brian