<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wcipeg.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Asymptotic_analysis</id>
		<title>Asymptotic analysis - Revision history</title>
		<link rel="self" type="application/atom+xml" href="https://wcipeg.com/wiki/index.php?action=history&amp;feed=atom&amp;title=Asymptotic_analysis"/>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;action=history"/>
		<updated>2026-05-04T18:33:23Z</updated>
		<subtitle>Revision history for this page on the wiki</subtitle>
		<generator>MediaWiki 1.25.2</generator>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1689&amp;oldid=prev</id>
		<title>Dhruvbird: Add a line explaining the non-equivalence of the limit based and Oh-based notation.</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1689&amp;oldid=prev"/>
		<updated>2012-10-10T02:12:25Z</updated>
		<summary type="html">&lt;p&gt;Add a line explaining the non-equivalence of the limit based and Oh-based notation.&lt;/p&gt;
&lt;table class='diff diff-contentalign-left'&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 02:12, 10 October 2012&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L101&quot; &gt;Line 101:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 101:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Little O notation===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Little O notation===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Whereas the big O notation &amp;lt;math&amp;gt;f(n) \in O(g(n))&amp;lt;/math&amp;gt; expresses that &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; grows ''at least as quickly'' as &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; in an asymptotic, constant-factor-oblivious sense, little O notation, as in &amp;lt;math&amp;gt;f(n) \in o(g(n))&amp;lt;/math&amp;gt;, expresses that &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; grows ''strictly more quickly'' than &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; in the limit of infinite &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. It means that &amp;lt;math&amp;gt;\lim_{n\to\infty} \frac{f(n)}{g(n)} = 0&amp;lt;/math&amp;gt;. This notation is not used very often in computer science (simply because it is rarely useful).&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Whereas the big O notation &amp;lt;math&amp;gt;f(n) \in O(g(n))&amp;lt;/math&amp;gt; expresses that &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; grows ''at least as quickly'' as &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; in an asymptotic, constant-factor-oblivious sense, little O notation, as in &amp;lt;math&amp;gt;f(n) \in o(g(n))&amp;lt;/math&amp;gt;, expresses that &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; grows ''strictly more quickly'' than &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; in the limit of infinite &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. It means that &amp;lt;math&amp;gt;\lim_{n\to\infty} \frac{f(n)}{g(n)} = 0&amp;lt;/math&amp;gt;. 
This notation is not used very often in computer science (simply because it is rarely useful)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;. Additionally, the limit based notation is not entirely equivalent to the big/little-Oh notation since the limit of the quotient might not exist in case of non-continuous functions such as the step function, etc..&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Little omega notation===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Little omega notation===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Dhruvbird</name></author>	</entry>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1557&amp;oldid=prev</id>
		<title>Brian at 08:48, 1 January 2012</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1557&amp;oldid=prev"/>
		<updated>2012-01-01T08:48:28Z</updated>
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table class='diff diff-contentalign-left'&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 08:48, 1 January 2012&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L312&quot; &gt;Line 312:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 312:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;references/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&amp;lt;references/&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Pages needing diagrams]]&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Pages needing diagrams]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[Category:Incomplete]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Brian</name></author>	</entry>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1553&amp;oldid=prev</id>
		<title>Brian: /* How to analyze an algorithm */</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1553&amp;oldid=prev"/>
		<updated>2012-01-01T06:28:38Z</updated>
		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;How to analyze an algorithm&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class='diff diff-contentalign-left'&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 06:28, 1 January 2012&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L289&quot; &gt;Line 289:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 289:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;We would like to have some way to tackle the general case, without so much messy mathematical reasoning in each individual case. By ''general case'', we mean that we assume that to solve an instance of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; using some algorithm, we divide it into chunks of size about &amp;lt;math&amp;gt;\frac{n}{b}&amp;lt;/math&amp;gt;, and solve &amp;lt;math&amp;gt;a&amp;lt;/math&amp;gt; subproblems using those chunks, along with extra work in the amount of &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt;, unless &amp;lt;math&amp;gt;n &amp;lt; c&amp;lt;/math&amp;gt; for some constant &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt;, which represents a base case that can be solved in &amp;lt;math&amp;gt;O(1)&amp;lt;/math&amp;gt; time. For example, in the first of the three examples above, we have &amp;lt;math&amp;gt;a = 2, b = 2, f(n) = n&amp;lt;/math&amp;gt;; in the second, &amp;lt;math&amp;gt;a = 3, b = 2, f(n) = n&amp;lt;/math&amp;gt;, and in the third, &amp;lt;math&amp;gt;a = 2, b = 2, f(n) = n \log n&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;We would like to have some way to tackle the general case, without so much messy mathematical reasoning in each individual case. 
By ''general case'', we mean that we assume that to solve an instance of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; using some algorithm, we divide it into chunks of size about &amp;lt;math&amp;gt;\frac{n}{b}&amp;lt;/math&amp;gt;, and solve &amp;lt;math&amp;gt;a&amp;lt;/math&amp;gt; subproblems using those chunks, along with extra work in the amount of &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt;, unless &amp;lt;math&amp;gt;n &amp;lt; c&amp;lt;/math&amp;gt; for some constant &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt;, which represents a base case that can be solved in &amp;lt;math&amp;gt;O(1)&amp;lt;/math&amp;gt; time. For example, in the first of the three examples above, we have &amp;lt;math&amp;gt;a = 2, b = 2, f(n) = n&amp;lt;/math&amp;gt;; in the second, &amp;lt;math&amp;gt;a = 3, b = 2, f(n) = n&amp;lt;/math&amp;gt;, and in the third, &amp;lt;math&amp;gt;a = 2, b = 2, f(n) = n \log n&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===The master theorem===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;=&lt;/ins&gt;===The master theorem&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;=&lt;/ins&gt;===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The authoritative statement of the '''master theorem''' is from CLRS&amp;lt;ref name=&amp;quot;CLRS&amp;quot;&amp;gt;Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.&amp;lt;/ref&amp;gt;. Suppose that &amp;lt;math&amp;gt;a \geq 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b &amp;gt; 1&amp;lt;/math&amp;gt; above, and &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is positive. Then, there are three cases:&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The authoritative statement of the '''master theorem''' is from CLRS&amp;lt;ref name=&amp;quot;CLRS&amp;quot;&amp;gt;Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.&amp;lt;/ref&amp;gt;. Suppose that &amp;lt;math&amp;gt;a \geq 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b &amp;gt; 1&amp;lt;/math&amp;gt; above, and &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is positive. Then, there are three cases:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a})&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a})&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L303&quot; &gt;Line 303:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 303:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;An actual proof of the master theorem is given in CLRS.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;An actual proof of the master theorem is given in CLRS.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Akra&amp;amp;ndash;Bazzi theorem===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;=&lt;/ins&gt;===Akra&amp;amp;ndash;Bazzi theorem&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;=&lt;/ins&gt;===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;TODO&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;TODO&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Brian</name></author>	</entry>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1552&amp;oldid=prev</id>
		<title>Brian: /* The master theorem */ : log -&gt; \log</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1552&amp;oldid=prev"/>
		<updated>2012-01-01T06:27:49Z</updated>
		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;The master theorem: &lt;/span&gt; : log -&amp;gt; \log&lt;/span&gt;&lt;/p&gt;
&lt;table class='diff diff-contentalign-left'&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 06:27, 1 January 2012&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L292&quot; &gt;Line 292:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 292:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The authoritative statement of the '''master theorem''' is from CLRS&amp;lt;ref name=&amp;quot;CLRS&amp;quot;&amp;gt;Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.&amp;lt;/ref&amp;gt;. Suppose that &amp;lt;math&amp;gt;a \geq 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b &amp;gt; 1&amp;lt;/math&amp;gt; above, and &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is positive. Then, there are three cases:&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The authoritative statement of the '''master theorem''' is from CLRS&amp;lt;ref name=&amp;quot;CLRS&amp;quot;&amp;gt;Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.&amp;lt;/ref&amp;gt;. Suppose that &amp;lt;math&amp;gt;a \geq 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b &amp;gt; 1&amp;lt;/math&amp;gt; above, and &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is positive. Then, there are three cases:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a})&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a})&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in \Theta(n^{\log_b a} log^k n)&amp;lt;/math&amp;gt;, for some &amp;lt;math&amp;gt;k \geq 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a} log^{k+1} n)&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in \Theta(n^{\log_b a} &lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;\&lt;/ins&gt;log^k n)&amp;lt;/math&amp;gt;, for some &amp;lt;math&amp;gt;k \geq 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a} log^{k+1} n)&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in \Omega(n^{\log_b a + \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, and, furthermore, &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;c &amp;lt; 1&amp;lt;/math&amp;gt; and all sufficiently large &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(f(n))&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;# If &amp;lt;math&amp;gt;f(n) \in \Omega(n^{\log_b a + \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, and, furthermore, &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;c &amp;lt; 1&amp;lt;/math&amp;gt; and all sufficiently large &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(f(n))&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For example, Karatsuba multiplication falls into the first case, since &amp;lt;math&amp;gt;f(n) = n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\log_2 3 \approx 1.58&amp;lt;/math&amp;gt;. The theorem then tells us that the overall running time is &amp;lt;math&amp;gt;O(n^{\log_2 3})&amp;lt;/math&amp;gt;. What is happening here is that as we proceed from one level to the next, the amount of work done on the current level increases exponentially or faster. In this particular case, each level entails 1.5 times as much work as the previous level. Intuitively, this means most of the work is done on the lowest level, on which we have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; subproblems, each with size 1. The earlier analysis then applies. The reason why we need &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; is precisely to ensure this exponential growth. If &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is exactly &amp;lt;math&amp;gt;n^{\log_b a}&amp;lt;/math&amp;gt;, then we will have &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) = a\frac{n^{\log_b a}}{b^{\log_b a}} = n^{\log_b a} = f(n)&amp;lt;/math&amp;gt;, that is, the amount of work done on each level will be the same. But if &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; grows more slowly than this, then &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; will be less than &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt;. 
If it grows polynomially more slowly (that is, by a factor of &amp;lt;math&amp;gt;n^\epsilon&amp;lt;/math&amp;gt; or more), then the desired exponential growth will be attained.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For example, Karatsuba multiplication falls into the first case, since &amp;lt;math&amp;gt;f(n) = n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\log_2 3 \approx 1.58&amp;lt;/math&amp;gt;. The theorem then tells us that the overall running time is &amp;lt;math&amp;gt;O(n^{\log_2 3})&amp;lt;/math&amp;gt;. What is happening here is that as we proceed from one level to the next, the amount of work done on the current level increases exponentially or faster. In this particular case, each level entails 1.5 times as much work as the previous level. Intuitively, this means most of the work is done on the lowest level, on which we have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; subproblems, each with size 1. The earlier analysis then applies. The reason why we need &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; is precisely to ensure this exponential growth. If &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is exactly &amp;lt;math&amp;gt;n^{\log_b a}&amp;lt;/math&amp;gt;, then we will have &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) = a\frac{n^{\log_b a}}{b^{\log_b a}} = n^{\log_b a} = f(n)&amp;lt;/math&amp;gt;, that is, the amount of work done on each level will be the same. But if &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; grows more slowly than this, then &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; will be less than &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt;. 
If it grows polynomially more slowly (that is, by a factor of &amp;lt;math&amp;gt;n^\epsilon&amp;lt;/math&amp;gt; or more), then the desired exponential growth will be attained.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Merge sort and the closest pair of points algorithm are covered by case 2. For merge sort, we have &amp;lt;math&amp;gt;k = 0&amp;lt;/math&amp;gt; (there is no log factor at all). Here the time spent on each level is the same, since &amp;lt;math&amp;gt;f(n) = af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;f(n) = n^{log_b a}&amp;lt;/math&amp;gt;, so we simply multiply &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; by the number of levels, that is, &amp;lt;math&amp;gt;\log_b n \in \Theta(\log n)&amp;lt;/math&amp;gt;. For the closest pair of points algorithm, we have &amp;lt;math&amp;gt;k = 1&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;k &amp;gt; 0&amp;lt;/math&amp;gt;, then there is a decrease in the amount of work done as we go further down, but it is slower than an exponential decay. In each case the average amount of work done per level is on the order of the amount of work done on the topmost level, so in each case we obtain &amp;lt;math&amp;gt;\Theta(f(n) \log n)&amp;lt;/math&amp;gt; time.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Merge sort and the closest pair of points algorithm are covered by case 2. For merge sort, we have &amp;lt;math&amp;gt;k = 0&amp;lt;/math&amp;gt; (there is no log factor at all). 
Here the time spent on each level is the same, since &amp;lt;math&amp;gt;f(n) = af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;f(n) = n^{&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;\&lt;/ins&gt;log_b a}&amp;lt;/math&amp;gt;, so we simply multiply &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; by the number of levels, that is, &amp;lt;math&amp;gt;\log_b n \in \Theta(\log n)&amp;lt;/math&amp;gt;. For the closest pair of points algorithm, we have &amp;lt;math&amp;gt;k = 1&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;k &amp;gt; 0&amp;lt;/math&amp;gt;, then there is a decrease in the amount of work done as we go further down, but it is slower than an exponential decay. In each case the average amount of work done per level is on the order of the amount of work done on the topmost level, so in each case we obtain &amp;lt;math&amp;gt;\Theta(f(n) \log n)&amp;lt;/math&amp;gt; time.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Case 3 is not often encountered in practice. When &amp;lt;math&amp;gt;f(n) \geq \frac{1}{c}af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt; (with &amp;lt;math&amp;gt;\frac{1}{c} &amp;gt; 1&amp;lt;/math&amp;gt;), there is an exponential (or faster) decay as we proceed from the top to the lower levels, so most of the work is done at the top, and hence the &amp;lt;math&amp;gt;\Theta(f(n))&amp;lt;/math&amp;gt; behaviour is obtained. The condition that &amp;lt;math&amp;gt;f(n) \in \Omega(n^{\log_b a + \epsilon})&amp;lt;/math&amp;gt; is not actually needed in the statement of Case 3, since it is implied by the condition that &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt;. The reason why the theorem is stated as it is above is that if &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is actually of the form &amp;lt;math&amp;gt;cn^{\log_b a + \epsilon}\log^k n&amp;lt;/math&amp;gt;, which covers most of the cases, then the other condition is implied, and we don't need to explicitly check it, but can instead just look at the form of &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt;. 
However, there are some possibilities for &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; that are not &amp;quot;well-behaved&amp;quot;, and will not satisfy the condition &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Case 3 is not often encountered in practice. When &amp;lt;math&amp;gt;f(n) \geq \frac{1}{c}af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt; (with &amp;lt;math&amp;gt;\frac{1}{c} &amp;gt; 1&amp;lt;/math&amp;gt;), there is an exponential (or faster) decay as we proceed from the top to the lower levels, so most of the work is done at the top, and hence the &amp;lt;math&amp;gt;\Theta(f(n))&amp;lt;/math&amp;gt; behaviour is obtained. The condition that &amp;lt;math&amp;gt;f(n) \in \Omega(n^{\log_b a + \epsilon})&amp;lt;/math&amp;gt; is not actually needed in the statement of Case 3, since it is implied by the condition that &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt;. The reason why the theorem is stated as it is above is that if &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is actually of the form &amp;lt;math&amp;gt;cn^{\log_b a + \epsilon}\log^k n&amp;lt;/math&amp;gt;, which covers most of the cases, then the other condition is implied, and we don't need to explicitly check it, but can instead just look at the form of &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt;. However, there are some possibilities for &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; that are not &amp;quot;well-behaved&amp;quot;, and will not satisfy the condition &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Brian</name></author>	</entry>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1551&amp;oldid=prev</id>
		<title>Brian: /* Recursive functions */</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1551&amp;oldid=prev"/>
				<updated>2012-01-01T06:19:54Z</updated>
		
		<summary type="html">&lt;p&gt;‎&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Recursive functions&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;table class='diff diff-contentalign-left'&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 06:19, 1 January 2012&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L276&quot; &gt;Line 276:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 276:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Recursive functions===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Recursive functions===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Many algorithms work by breaking the input down into smaller pieces, [[recursively]] solving each smaller piece as though it were an independent problem, and then combining the results to obtain a solution to the original instance. This algorithmic paradigm is known as ''divide and conquer'', and its running time is usually analyzed slightly differently from that of the simpler algorithms from the previous section.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Examples:&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [[Merge sort]], which sorts a list by dividing it into two sub-lists, sorting each sub-list, and then merging the two sorted sub-lists to obtain the original list, sorted. The merge step takes &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt; time if the original list had &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; elements. The overall running time is &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* [[Karatsuba multiplication]], which multiplies two &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-digit numbers (see [[Big numbers]]) by dividing each number into two halves and recursively multiplying three pairs of halves, along with some additions and subtractions which take &amp;lt;math&amp;gt;O(n)&amp;lt;/math&amp;gt; time in total. The overall running time is &amp;lt;math&amp;gt;O(n^{\log_2 3})&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* One solution to the [[closest pair of points]] problem finds the closest pair by dividing the point set with a half plane to obtain two sets of points of approximately half the size, finding the closest pair of points within each set, and then sorting points close to the dividing line and sweeping to find the closest pair across the division, which takes &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt; time (for the sorting). The overall running time is then &amp;lt;math&amp;gt;O(n \log^2 n)&amp;lt;/math&amp;gt;. (NB: This can be improved to &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt; if we concomitantly run merge sort, rather than re-sorting along the dividing line in each step.)&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;How do we analyze the running time of such algorithms? We can reason as follows:&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* In the case of merge sort, we start out with one instance (the original list of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;), which gives rise to two subinstances of size &amp;lt;math&amp;gt;n/2&amp;lt;/math&amp;gt;; these in turn give rise to a total of four subinstances of size &amp;lt;math&amp;gt;n/4&amp;lt;/math&amp;gt;, and so on down to subinstances of size 1 or 2. There are about &amp;lt;math&amp;gt;\log_2 n&amp;lt;/math&amp;gt; &amp;quot;levels&amp;quot; to this scheme, and at each level, we have about &amp;lt;math&amp;gt;2^k&amp;lt;/math&amp;gt; subinstances of size &amp;lt;math&amp;gt;n/2^k&amp;lt;/math&amp;gt;, and a linear amount of work is required in the &amp;quot;conquer&amp;quot; step of each; so that is about &amp;lt;math&amp;gt;cn&amp;lt;/math&amp;gt; steps at each level, or about &amp;lt;math&amp;gt;cn \log n&amp;lt;/math&amp;gt; steps in total, giving &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* In Karatsuba multiplication, there will again be about &amp;lt;math&amp;gt;\log_2 n&amp;lt;/math&amp;gt; levels. On the topmost level, we have one instance with size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. On the next level, we have three instances with size &amp;lt;math&amp;gt;n/2&amp;lt;/math&amp;gt;, and then nine instances with size &amp;lt;math&amp;gt;n/4&amp;lt;/math&amp;gt;, and so on down. A linear amount of additional work is required for the &amp;quot;conquer&amp;quot; step for each instance, so that is about &amp;lt;math&amp;gt;cn&amp;lt;/math&amp;gt; work on the topmost level, about &amp;lt;math&amp;gt;\frac{3}{2}cn&amp;lt;/math&amp;gt; on the next level, and so on down. So the total running time will be about &amp;lt;math&amp;gt;cn \sum_{k=0}^{\log_2 n} \left(\frac{3}{2}\right)^k = cn\left(\frac{\left(\frac{3}{2}\right)^{\log_2 n + 1}-1}{\frac{3}{2}-1}\right) = O\left(n\left(\frac{3}{2}\right)^{\log_2 n}\right) = O(n^{\log_2 3})&amp;lt;/math&amp;gt;. (Several steps were omitted.)&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;* In the third algorithm listed above, there are again about &amp;lt;math&amp;gt;\log_2 n&amp;lt;/math&amp;gt; levels; on the topmost level, the work done is &amp;lt;math&amp;gt;cn \log n&amp;lt;/math&amp;gt;; then &amp;lt;math&amp;gt;2c\left(\frac{n}{2}\log\left(\frac{n}{2}\right)\right)&amp;lt;/math&amp;gt; on the next level, since we will have two subinstances of size &amp;lt;math&amp;gt;\frac{n}{2}&amp;lt;/math&amp;gt;, which will each require &amp;lt;math&amp;gt;O\left(\frac{n}{2}\log\left(\frac{n}{2}\right)\right)&amp;lt;/math&amp;gt; work; then on the next level, &amp;lt;math&amp;gt;4c\left(\frac{n}{4}\log\left(\frac{n}{4}\right)\right)&amp;lt;/math&amp;gt;, and so on down; the total work done is about &amp;lt;math&amp;gt;O\left(\sum_{k=0}^{\log_2 n} 2^k \frac{n}{2^k}\log\left(\frac{n}{2^k}\right)\right) = n\cdot O\left(\sum_{k=0}^{\log_2 n}\log\left(\frac{n}{2^k}\right)\right)&amp;lt;/math&amp;gt;. This gives us an arithmetic series, with first term &amp;lt;math&amp;gt;\log n&amp;lt;/math&amp;gt; (here &amp;lt;math&amp;gt;k = 0&amp;lt;/math&amp;gt;) and last term about 0 (for when &amp;lt;math&amp;gt;\frac{n}{2^k} \approx \frac{n}{n} = 1&amp;lt;/math&amp;gt; since &amp;lt;math&amp;gt;k \approx \log_2 n&amp;lt;/math&amp;gt;) and a total of about &amp;lt;math&amp;gt;\log_2 n&amp;lt;/math&amp;gt; terms; its sum is then about &amp;lt;math&amp;gt;\log n \log_2 n&amp;lt;/math&amp;gt; giving &amp;lt;math&amp;gt;O(n \log^2 n)&amp;lt;/math&amp;gt; overall.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;We would like to have some way to tackle the general case, without so much messy mathematical reasoning in each individual case. By ''general case'', we mean that we assume that to solve an instance of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; using some algorithm, we divide it into chunks of size about &amp;lt;math&amp;gt;\frac{n}{b}&amp;lt;/math&amp;gt;, and solve &amp;lt;math&amp;gt;a&amp;lt;/math&amp;gt; subproblems using those chunks, along with extra work in the amount of &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt;, unless &amp;lt;math&amp;gt;n &amp;lt; c&amp;lt;/math&amp;gt; for some constant &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt;, which represents a base case that can be solved in &amp;lt;math&amp;gt;O(1)&amp;lt;/math&amp;gt; time. For example, in the first of the three examples above, we have &amp;lt;math&amp;gt;a = 2, b = 2, f(n) = n&amp;lt;/math&amp;gt;; in the second, &amp;lt;math&amp;gt;a = 3, b = 2, f(n) = n&amp;lt;/math&amp;gt;, and in the third, &amp;lt;math&amp;gt;a = 2, b = 2, f(n) = n \log n&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===The master theorem===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;The authoritative statement of the '''master theorem''' is from CLRS&amp;lt;ref name=&amp;quot;CLRS&amp;quot;&amp;gt;Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Sections 4.3 (The master method) and 4.4 (Proof of the master theorem), pp. 73–90.&amp;lt;/ref&amp;gt;. Suppose that &amp;lt;math&amp;gt;a \geq 1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b &amp;gt; 1&amp;lt;/math&amp;gt; above, and &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is positive. Then, there are three cases:&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;# If &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a})&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;# If &amp;lt;math&amp;gt;f(n) \in \Theta(n^{\log_b a} \log^k n)&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;k \geq 0&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(n^{\log_b a} \log^{k+1} n)&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;# If &amp;lt;math&amp;gt;f(n) \in \Omega(n^{\log_b a + \epsilon})&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;\epsilon &amp;gt; 0&amp;lt;/math&amp;gt;, and, furthermore, &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;c &amp;lt; 1&amp;lt;/math&amp;gt; and all sufficiently large &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, then the overall running time is &amp;lt;math&amp;gt;\Theta(f(n))&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;For example, Karatsuba multiplication falls into the first case, since &amp;lt;math&amp;gt;f(n) = n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\log_2 3 \approx 1.58&amp;lt;/math&amp;gt;. The theorem then tells us that the overall running time is &amp;lt;math&amp;gt;O(n^{\log_2 3})&amp;lt;/math&amp;gt;. What is happening here is that as we proceed from one level to the next, the amount of work done on the current level increases exponentially or faster. In this particular case, each level entails 1.5 times as much work as the previous level. Intuitively, this means most of the work is done on the lowest level, on which we have &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; subproblems, each with size 1. The earlier analysis then applies. The reason why we need &amp;lt;math&amp;gt;f(n) \in O(n^{\log_b a - \epsilon})&amp;lt;/math&amp;gt; is precisely to ensure this exponential growth. If &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is exactly &amp;lt;math&amp;gt;n^{\log_b a}&amp;lt;/math&amp;gt;, then we will have &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) = a\frac{n^{\log_b a}}{b^{\log_b a}} = n^{\log_b a} = f(n)&amp;lt;/math&amp;gt;, that is, the amount of work done on each level will be the same. But if &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; grows more slowly than this, then &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; will be less than &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt;. 
If it grows polynomially more slowly (that is, by a factor of &amp;lt;math&amp;gt;n^\epsilon&amp;lt;/math&amp;gt; or more), then the desired exponential growth will be attained.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Merge sort and the closest pair of points algorithm are covered by case 2. For merge sort, we have &amp;lt;math&amp;gt;k = 0&amp;lt;/math&amp;gt; (there is no log factor at all). Here the time spent on each level is the same, since &amp;lt;math&amp;gt;f(n) = af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;f(n) = n^{log_b a}&amp;lt;/math&amp;gt;, so we simply multiply &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; by the number of levels, that is, &amp;lt;math&amp;gt;\log_b n \in \Theta(\log n)&amp;lt;/math&amp;gt;. For the closest pair of points algorithm, we have &amp;lt;math&amp;gt;k = 1&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;k &amp;gt; 0&amp;lt;/math&amp;gt;, then there is a decrease in the amount of work done as we go further down, but it is slower than an exponential decay. In each case the average amount of work done per level is on the order of the amount of work done on the topmost level, so in each case we obtain &amp;lt;math&amp;gt;\Theta(f(n) \log n)&amp;lt;/math&amp;gt; time.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Case 3 is not often encountered in practice. When &amp;lt;math&amp;gt;f(n) \geq \frac{1}{c}af\left(\frac{n}{b}\right)&amp;lt;/math&amp;gt; (with &amp;lt;math&amp;gt;\frac{1}{c} &amp;gt; 1&amp;lt;/math&amp;gt;), there is an exponential (or faster) decay as we proceed from the top to the lower levels, so most of the work is done at the top, and hence the &amp;lt;math&amp;gt;\Theta(f(n))&amp;lt;/math&amp;gt; behaviour is obtained. The condition that &amp;lt;math&amp;gt;f(n) \in \Omega(n^{\log_b a + \epsilon})&amp;lt;/math&amp;gt; is not actually needed in the statement of Case 3, since it is implied by the condition that &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt;. The reason why the theorem is stated as it is above is that if &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is actually of the form &amp;lt;math&amp;gt;cn^{\log_b a + \epsilon}\log^k n&amp;lt;/math&amp;gt;, which covers most of the cases, then the other condition is implied, and we don't need to explicitly check it, but can instead just look at the form of &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt;. However, there are some possibilities for &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; that are not &amp;quot;well-behaved&amp;quot;, and will not satisfy the condition &amp;lt;math&amp;gt;af\left(\frac{n}{b}\right) \leq cf(n)&amp;lt;/math&amp;gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;An actual proof of the master theorem is given in CLRS.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;===Akra&amp;amp;ndash;Bazzi theorem===&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;TODO&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;TODO&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Brian</name></author>	</entry>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1548&amp;oldid=prev</id>
		<title>Brian: added citation</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1548&amp;oldid=prev"/>
				<updated>2011-12-29T09:30:11Z</updated>
		
		<summary type="html">&lt;p&gt;added citation&lt;/p&gt;
&lt;table class='diff diff-contentalign-left'&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;col class='diff-marker' /&gt;
				&lt;col class='diff-content' /&gt;
				&lt;tr style='vertical-align: top;'&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan='2' style=&quot;background-color: white; color:black; text-align: center;&quot;&gt;Revision as of 09:30, 29 December 2011&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L227&quot; &gt;Line 227:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 227:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* A cubic-time algorithm will suffice for &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; up to about 300.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* A cubic-time algorithm will suffice for &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; up to about 300.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* A quadratic-time algorithm will suffice for &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; up to about 5000.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* A quadratic-time algorithm will suffice for &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; up to about 5000.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;−&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* An &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt; time algorithm will suffice for &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; up to about 200000. The number 100000 is a very common bound on input size in olympiad problems, and almost always suggests that a contestant should aim to invent an algorithm with time complexity of around &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt;. (This aspect of olympiad contests has been criticized.) &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; is almost always too slow in this case.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* An &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt; time algorithm will suffice for &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; up to about 200000. The number 100000 is a very common bound on input size in olympiad problems, and almost always suggests that a contestant should aim to invent an algorithm with time complexity of around &amp;lt;math&amp;gt;O(n \log n)&amp;lt;/math&amp;gt;. (This aspect of olympiad contests has been criticized.)&lt;ins class=&quot;diffchange diffchange-inline&quot;&gt;&amp;lt;ref&amp;gt;Ribeiro, P. and Guerreiro, P. ''Improving the Automatic Evaluation of Problem Solutions in Programming Contests''. Olympiads in Informatics, 2009, Vol. 3, 132–143. Retrieved from http://www.mii.lt/olympiads_in_informatics/pdf/INFOL048.pdf&amp;lt;/ref&amp;gt; &lt;/ins&gt;&amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; is almost always too slow in this case.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* A linear-time algorithm is typically required if the input can be even larger than this, usually 300000 or more lines.&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;* A linear-time algorithm is typically required if the input can be even larger than this, usually 300000 or more lines.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;L280&quot; &gt;Line 280:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 280:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Amortized analysis===&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;===Amortized analysis===&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;TODO&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;TODO&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;==References==&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot;&gt;&amp;#160;&lt;/td&gt;&lt;td class='diff-marker'&gt;+&lt;/td&gt;&lt;td style=&quot;color:black; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;&amp;lt;references/&amp;gt;&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Pages needing diagrams]]&lt;/div&gt;&lt;/td&gt;&lt;td class='diff-marker'&gt;&amp;#160;&lt;/td&gt;&lt;td style=&quot;background-color: #f9f9f9; color: #333333; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #e6e6e6; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[Category:Pages needing diagrams]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>Brian</name></author>	</entry>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1547&amp;oldid=prev</id>
		<title>Brian at 08:55, 28 December 2011</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1547&amp;oldid=prev"/>
				<updated>2011-12-28T08:55:52Z</updated>
		
		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;a href=&quot;https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;amp;diff=1547&amp;amp;oldid=1543&quot;&gt;Show changes&lt;/a&gt;</summary>
		<author><name>Brian</name></author>	</entry>

	<entry>
		<id>https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1543&amp;oldid=prev</id>
		<title>Brian: Created page with &quot;In theoretical computer science, '''asymptotic analysis''' is the most frequently used technique to quantify the performance of an algorithm. Its name refers to the fact that...&quot;</title>
		<link rel="alternate" type="text/html" href="https://wcipeg.com/wiki/index.php?title=Asymptotic_analysis&amp;diff=1543&amp;oldid=prev"/>
				<updated>2011-12-25T07:13:57Z</updated>
		
		<summary type="html">&lt;p&gt;Created page with &amp;quot;In theoretical computer science, &amp;#039;&amp;#039;&amp;#039;asymptotic analysis&amp;#039;&amp;#039;&amp;#039; is the most frequently used technique to quantify the performance of an &lt;a href=&quot;/wiki/Algorithm&quot; title=&quot;Algorithm&quot;&gt;algorithm&lt;/a&gt;. Its name refers to the fact that...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;In theoretical computer science, '''asymptotic analysis''' is the most frequently used technique to quantify the performance of an [[algorithm]]. Its name refers to the fact that this form of analysis neglects the exact amount of time or memory that the algorithm uses on specific cases, but is concerned only with the algorithm's asymptotic behaviour&amp;amp;mdash;that is, how the algorithm performs in the limit of infinite problem size.&lt;br /&gt;
&lt;br /&gt;
==Irrelevance of constant factor==&lt;br /&gt;
Asymptotic analysis achieves this by neglecting the so-called [[invisible constant factor]]. The reasoning is as follows. Suppose Alice and Bob each write a program to solve the same task. Whenever Alice's program and Bob's program are executed with the same input on a given machine (which we'll call ALOPEX), Alice's program is always about 25% faster than Bob's program. The table below shows the number of lines of input in the first column, the execution time of Alice's program in the second, and the execution time of Bob's program in the third:&lt;br /&gt;
{|&lt;br /&gt;
! N&lt;br /&gt;
! Alice&lt;br /&gt;
! Bob&lt;br /&gt;
|-&lt;br /&gt;
| 10&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 0.026&lt;br /&gt;
| 0.032&lt;br /&gt;
|-&lt;br /&gt;
| 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 0.36&lt;br /&gt;
| 0.45&lt;br /&gt;
|-&lt;br /&gt;
| 10&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.8&lt;br /&gt;
| 4.7&lt;br /&gt;
|}&lt;br /&gt;
Practically speaking, Alice's program is faster. In theory, however, the two programs have equally efficient algorithms. This is because an algorithm is an abstraction independent of the machine on which it runs, and we could make Bob's program just as fast as Alice's by running it on a computer that is 25% faster than the original one.&lt;br /&gt;
&lt;br /&gt;
If this argument sounds unconvincing, consider the following. Programs, as they run on real machines, are built up from a series of very simple instructions, each of which is executed in a period of time on the order of a nanosecond on a typical microcomputer. However, some instructions happen to be faster than others.&lt;br /&gt;
&lt;br /&gt;
Now, suppose we have the following information:&lt;br /&gt;
* On ALOPEX, the &amp;lt;code&amp;gt;MULTIPLY&amp;lt;/code&amp;gt; instruction takes 1 nanosecond to run whereas the &amp;lt;code&amp;gt;DIVIDE&amp;lt;/code&amp;gt; instruction takes 2 nanoseconds to run.&lt;br /&gt;
* We have another machine, which we'll call KITSUNE, on which these two instructions both take 1.5 nanoseconds to run.&lt;br /&gt;
* When Bob's program is given 150000 lines of input, it uses 1.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt; multiplications and 3.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt; divisions, whereas Alice's program uses 4.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt; multiplications and 0.8&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt; divisions on the same input.&lt;br /&gt;
Then we see that:&lt;br /&gt;
# On ALOPEX, Bob's program will require (1&amp;amp;nbsp;ns)&amp;amp;times;(1.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) + (2&amp;amp;nbsp;ns)&amp;amp;times;(3.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) = 0.70&amp;amp;nbsp;s to complete all these instructions, whereas Alice's program will require (1&amp;amp;nbsp;ns)&amp;amp;times;(4.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) + (2&amp;amp;nbsp;ns)&amp;amp;times;(0.8&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) = 0.56&amp;amp;nbsp;s; Alice's program is faster by 25%.&lt;br /&gt;
# On KITSUNE, Bob's program will require (1.5&amp;amp;nbsp;ns)&amp;amp;times;(1.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) + (1.5&amp;amp;nbsp;ns)&amp;amp;times;(3.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) = 0.60&amp;amp;nbsp;s, whereas Alice's will require (1.5&amp;amp;nbsp;ns)&amp;amp;times;(4.0&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) + (1.5&amp;amp;nbsp;ns)&amp;amp;times;(0.8&amp;amp;times;10&amp;lt;sup&amp;gt;8&amp;lt;/sup&amp;gt;) = 0.72&amp;amp;nbsp;s. Now Bob's program is 20% faster than Alice's.&lt;br /&gt;
Thus, we see that a program that is faster on one machine is not necessarily faster on another machine, so that comparing exact running times in order to decide which algorithm is faster is not meaningful.&lt;br /&gt;
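The arithmetic above can be reproduced with a short sketch. The instruction counts and per-instruction costs are the hypothetical ALOPEX/KITSUNE figures from the example; the helper `runtime` exists only for this illustration:

```python
# Hypothetical instruction mixes from the example above.
muls_bob, divs_bob = 1.0e8, 3.0e8
muls_alice, divs_alice = 4.0e8, 0.8e8

def runtime(muls, divs, mul_ns, div_ns):
    """Total running time in seconds, given per-instruction costs in nanoseconds."""
    return (muls * mul_ns + divs * div_ns) * 1e-9

print(runtime(muls_bob, divs_bob, 1, 2))          # ALOPEX, Bob:   ~0.70 s
print(runtime(muls_alice, divs_alice, 1, 2))      # ALOPEX, Alice: ~0.56 s
print(runtime(muls_bob, divs_bob, 1.5, 1.5))      # KITSUNE, Bob:   ~0.60 s
print(runtime(muls_alice, divs_alice, 1.5, 1.5))  # KITSUNE, Alice: ~0.72 s
```

Which program is "faster" flips between the two machines, even though neither program changed.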
&lt;br /&gt;
The same can be said about memory usage. On a 32-bit machine, an &amp;lt;code&amp;gt;int&amp;lt;/code&amp;gt; (integer data type in the C programming language) and an &amp;lt;code&amp;gt;int*&amp;lt;/code&amp;gt; (a [[pointer]] to an &amp;lt;code&amp;gt;int&amp;lt;/code&amp;gt;) each occupy 4 bytes of memory, whereas on a 64-bit machine, an &amp;lt;code&amp;gt;int&amp;lt;/code&amp;gt; uses 4 bytes and an &amp;lt;code&amp;gt;int*&amp;lt;/code&amp;gt; uses 8 bytes, so an algorithm that uses more pointers than integers may use more memory on a 64-bit machine and less on a 32-bit machine when compared to a competing algorithm that uses more integers than pointers.&lt;br /&gt;
&lt;br /&gt;
==Why the machine is now irrelevant==&lt;br /&gt;
The preceding section should convince you that if two programs' running times differ by at most a constant factor, then their algorithms should be considered equally efficient, as we can make one or the other faster in practice just by designing our machine accordingly. Let's now consider a case in which one algorithm can actually be faster than another.&lt;br /&gt;
&lt;br /&gt;
Suppose that Alice and Bob are now given a new problem to solve and their algorithms perform as follows on ALOPEX:&lt;br /&gt;
{|&lt;br /&gt;
! N&lt;br /&gt;
! Alice&lt;br /&gt;
! Bob&lt;br /&gt;
|-&lt;br /&gt;
| 10&amp;lt;sup&amp;gt;3&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 0.040&lt;br /&gt;
| 0.0013&lt;br /&gt;
|-&lt;br /&gt;
| 10&amp;lt;sup&amp;gt;4&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 0.38&lt;br /&gt;
| 0.13&lt;br /&gt;
|-&lt;br /&gt;
| 10&amp;lt;sup&amp;gt;5&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 3.7&lt;br /&gt;
| 12&lt;br /&gt;
|-&lt;br /&gt;
| 10&amp;lt;sup&amp;gt;6&amp;lt;/sup&amp;gt;&lt;br /&gt;
| 37&lt;br /&gt;
| 1500&lt;br /&gt;
|}&lt;br /&gt;
Observe that when we multiply the problem size by 10, Alice's program takes about 10 times longer, but Bob's program takes about 100 times longer. Thus, even though Bob's program is faster than Alice's program for small inputs (10000 lines or fewer), '''in the limit of infinite problem size''', Alice's program will ''always'' be faster. Even if we run Bob's program on a supercomputer and Alice's program on a 386, for sufficiently large input, Alice's program will always finish first, since no matter how much of a disadvantage Alice's program has, it catches up more and more as we increase the problem size; it ''scales better'' than Bob's algorithm.&lt;br /&gt;
&lt;br /&gt;
Also observe that Alice's algorithm has running time approximately proportional to the size of its input, whereas Bob's algorithm has running time approximately proportional to the square of the size of its input. We write that Alice's algorithm has running time &amp;lt;math&amp;gt;\Theta(N)&amp;lt;/math&amp;gt; whereas Bob's has running time &amp;lt;math&amp;gt;\Theta(N^2)&amp;lt;/math&amp;gt;. Notice that we don't bother to write the constant factor, since this varies from machine to machine; we write only the part that tells us how the algorithm scales. Since &amp;lt;math&amp;gt;N^2&amp;lt;/math&amp;gt; grows much more quickly than &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; as &amp;lt;math&amp;gt;N&amp;lt;/math&amp;gt; increases, we see that Bob's algorithm is slower than Alice's.&lt;br /&gt;
&lt;br /&gt;
==Notation==&lt;br /&gt;
We are now ready to give an explanation of the &amp;lt;math&amp;gt;\Theta&amp;lt;/math&amp;gt;-notation used in the previous section. (See section [[#Big theta notation|Big theta notation]] below.) We suppose that functions &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; map the positive integers to the positive real numbers. The interpretation is that &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; represents the amount of time (in some units) that it takes a certain algorithm to process an input of size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; when run on a certain machine.&lt;br /&gt;
&lt;br /&gt;
===Big O notation===&lt;br /&gt;
We say that &amp;lt;math&amp;gt;f(n) \in O(g(n))&amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;g(n)&amp;lt;/math&amp;gt; is asymptotically ''at least as large'' as &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt;. This is read ''f is big-O of g''. Mathematically, this is defined to mean that there are some constants &amp;lt;math&amp;gt;n_0, c&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;c\cdot g(n) \geq f(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n \geq n_0&amp;lt;/math&amp;gt;; to read it idiomatically, ''for sufficiently large &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is bounded by a constant times &amp;lt;math&amp;gt;g(n)&amp;lt;/math&amp;gt;.''&lt;br /&gt;
&lt;br /&gt;
This means that if we plotted &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; on the same set of axes, with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; along the positive x-axis, then we can scale the curve &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; such that it always lies above the curve &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For example, if we plot &amp;lt;math&amp;gt;f(n) = 3n+1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g(n) = 2n+1&amp;lt;/math&amp;gt; on the same set of axes, then we see that by scaling &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; by a factor of &amp;lt;math&amp;gt;\frac{3}{2}&amp;lt;/math&amp;gt;, we can make the curve of &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; lie above the curve of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. Therefore, &amp;lt;math&amp;gt;3n+1 \in O(2n+1)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Similarly, if we plot &amp;lt;math&amp;gt;f(n) = 5n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g(n) = n^2&amp;lt;/math&amp;gt; on the same set of axes, and we scale &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; by a factor of 5, we will make the curve of &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; lie above the curve of &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. So &amp;lt;math&amp;gt;5n \in O(n^2)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
On the other hand, if &amp;lt;math&amp;gt;f(n) = n^2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g(n) = 5n&amp;lt;/math&amp;gt;, no matter how much we scale &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt;, we cannot make it lie above &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt;. We could for example try scaling &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; by a factor of 100. Then, we will have &amp;lt;math&amp;gt;f(n) \leq g(n)&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; up to 500, but for &amp;lt;math&amp;gt;n &amp;gt; 500&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; will be larger. If we try scaling &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; by a factor of 1000, then &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; will still catch up at &amp;lt;math&amp;gt;n &amp;gt; 5000&amp;lt;/math&amp;gt;, and so on. So &amp;lt;math&amp;gt;n^2 \notin O(5n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
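The examples above can be spot-checked numerically. The helper `bounded_above` below is hypothetical, and a finite check like this can refute a big O claim but never prove one; it merely tests candidate witness constants:

```python
def bounded_above(f, g, c, n0, n_max=10**6, step=997):
    """Spot-check that c*g(n) >= f(n) on a grid of n in [n0, n_max]."""
    return all(c * g(n) >= f(n) for n in range(n0, n_max + 1, step))

# 5n is in O(n^2): the constants c = 5, n0 = 1 witness the definition.
print(bounded_above(lambda n: 5 * n, lambda n: n * n, c=5, n0=1))    # True

# n^2 is not in O(5n): c = 100 fails as soon as n exceeds 500.
print(bounded_above(lambda n: n * n, lambda n: 5 * n, c=100, n0=1))  # False
```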
Big O notation is the most frequently used notation for describing an algorithm's running time, since it provides an ''upper bound''. If, for example, we write that an algorithm's running time is &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt;, then this is a guarantee that, as &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; grows larger, the relative increase in the algorithm's running time is at most that of &amp;lt;math&amp;gt;n^2&amp;lt;/math&amp;gt;, so that when we double &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, our algorithm's running time will be at most quadrupled, and so on. This characterization is '''machine-independent'''; if an algorithm is &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; on one machine then it will be &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; on any other machine as well. (This holds true under the assumption that they are both classical von Neumann machines, which is the case for all computing machines on Earth right now.)&lt;br /&gt;
&lt;br /&gt;
An algorithm that is &amp;lt;math&amp;gt;O(n^k)&amp;lt;/math&amp;gt; for some constant &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; is said to be '''polynomial-time'''; its running time is bounded above by a polynomial in its input size. Polynomial-time algorithms are considered &amp;quot;efficient&amp;quot;, because they provide the guarantee that as we double the problem size, the running time increases by a factor of at most &amp;lt;math&amp;gt;2^k&amp;lt;/math&amp;gt;. If an algorithm is not polynomial-time, it could be that doubling the problem size from 1000 to 2000 increases the running time by a factor of 10, but doubling the problem size from 10000 to 20000 increases the running time by a factor of 100, and the factor becomes worse and worse as the problem size increases; these algorithms are considered inefficient. The first polynomial-time algorithm discovered that solves a particular problem is usually considered a breakthrough.&lt;br /&gt;
&lt;br /&gt;
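The factor-of-&lt;math&gt;2^k&lt;/math&gt; doubling guarantee for polynomial time can be seen with an idealized operation count (the exponent k = 3 is an arbitrary choice for this sketch, and the constant factor is deliberately dropped):

```python
def poly_ops(n, k=3):
    """Idealized operation count of an O(n^k) algorithm, constant factor dropped."""
    return n ** k

# Doubling the problem size multiplies the count by 2**k = 8.
print(poly_ops(2000) / poly_ops(1000))  # 8.0
```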
===Big omega notation===&lt;br /&gt;
Big omega notation is the opposite of big O notation. The statement that &amp;lt;math&amp;gt;f(n) \in O(g(n))&amp;lt;/math&amp;gt; is equivalent to the statement that &amp;lt;math&amp;gt;g(n) \in \Omega(f(n))&amp;lt;/math&amp;gt;. Conceptually, &amp;lt;math&amp;gt;f(n) \in \Omega(g(n))&amp;lt;/math&amp;gt; means that &amp;lt;math&amp;gt;f(n)&amp;lt;/math&amp;gt; is asymptotically at least as large as &amp;lt;math&amp;gt;g(n)&amp;lt;/math&amp;gt;. Therefore, big omega notation provides a lower bound, instead of an upper bound. For example, if an algorithm's running time is &amp;lt;math&amp;gt;\Omega(k^n)&amp;lt;/math&amp;gt; for some &amp;lt;math&amp;gt;k &amp;gt; 1&amp;lt;/math&amp;gt;, then when we increase the problem size from &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;n+c&amp;lt;/math&amp;gt;, the running time of our algorithm will increase by a factor of ''at least'' &amp;lt;math&amp;gt;k^c&amp;lt;/math&amp;gt;. Approximately doubling the input size will then approximately ''square'' the running time. This scaling behaviour, called '''exponential time''', is considered extremely inefficient.&lt;br /&gt;
&lt;br /&gt;
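The exponential-time scaling just described can be checked with an idealized count (the base K = 2 and the sizes are arbitrary choices for this sketch):

```python
K = 2

def expo_ops(n):
    """Idealized operation count of a Theta(K^n) algorithm."""
    return K ** n

# Increasing n by c multiplies the count by K**c; doubling n squares it.
assert expo_ops(25) == expo_ops(20) * K ** 5
assert expo_ops(40) == expo_ops(20) ** 2
print(expo_ops(40) // expo_ops(20))  # 2**20 = 1048576
```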
===Big theta notation===&lt;br /&gt;
If &amp;lt;math&amp;gt;f(n) \in O(g(n))&amp;lt;/math&amp;gt; ''and'' &amp;lt;math&amp;gt;f(n) \in \Omega(g(n))&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; differ by only a constant factor from each other, like the running times of Alice's and Bob's algorithms from the first section. In this case, we write &amp;lt;math&amp;gt;f(n) \in \Theta(g(n))&amp;lt;/math&amp;gt; or equivalently &amp;lt;math&amp;gt;g(n) \in \Theta(f(n))&amp;lt;/math&amp;gt;. Whereas big O notation gives us an ''upper bound'' on a function, such as an algorithm's running time as a function of its input size, and big omega notation gives us a ''lower bound'', big theta notation gives us both simultaneously&amp;amp;mdash;it gives us an ''expectation'' of that function. This is often expressed as ''f is on the order of g''.&lt;br /&gt;
&lt;br /&gt;
Thus, if an algorithm's running time is &amp;lt;math&amp;gt;\Theta(n^2)&amp;lt;/math&amp;gt;, and we double the input size &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;, then we have an expectation that the running time of the algorithm will be nearly exactly four times what it was before, whereas big O notation would only tell us that it is not more than four times greater, and big omega notation that it is at least four times greater.&lt;br /&gt;
&lt;br /&gt;
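The quadrupling expectation can be illustrated by counting iterations of a toy &lt;math&gt;\Theta(n^2)&lt;/math&gt; double loop; counting operations rather than wall-clock time keeps the illustration machine-independent:

```python
def quadratic_ops(n):
    """Count iterations of a toy Theta(n^2) double loop."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1
    return ops

# Doubling n multiplies the count by exactly 4 for this toy loop;
# a real Theta(n^2) algorithm approaches the factor 4 asymptotically.
print(quadratic_ops(2000) / quadratic_ops(1000))  # 4.0
```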
===Little O notation===&lt;br /&gt;
Whereas the big O notation &amp;lt;math&amp;gt;f(n) \in O(g(n))&amp;lt;/math&amp;gt; expresses that &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; grows ''at least as quickly'' as &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; in an asymptotic, constant-factor-oblivious sense, little O notation, as in &amp;lt;math&amp;gt;f(n) \in o(g(n))&amp;lt;/math&amp;gt;, expresses that &amp;lt;math&amp;gt;g&amp;lt;/math&amp;gt; grows ''strictly more quickly'' than &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; in the limit of infinite &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;. It means that &amp;lt;math&amp;gt;\lim_{n\to\infty} \frac{f(n)}{g(n)} = 0&amp;lt;/math&amp;gt;. This notation is not used very often in computer science (simply because it is rarely useful).&lt;br /&gt;
&lt;br /&gt;
===Little omega notation===&lt;br /&gt;
Little omega notation is the opposite of little O notation; the statement that &amp;lt;math&amp;gt;f(n) \in o(g(n))&amp;lt;/math&amp;gt; is equivalent to the statement that &amp;lt;math&amp;gt;g(n) \in \omega(f(n))&amp;lt;/math&amp;gt;. Little omega notation, like little O notation, is rarely used in computer science.&lt;br /&gt;
&lt;br /&gt;
===Tilde/swung dash===&lt;br /&gt;
The notation &amp;lt;math&amp;gt;f(n) \sim g(n)&amp;lt;/math&amp;gt; means that &amp;lt;math&amp;gt;\lim_{n\to\infty} \frac{f(n)}{g(n)} = 1&amp;lt;/math&amp;gt;. This is usually not used when discussing running times of algorithms, because it does ''not'' ignore the constant factor the way that the other notations do; hence &amp;lt;math&amp;gt;n \in \Theta(2n)&amp;lt;/math&amp;gt; but &amp;lt;math&amp;gt;n \nsim 2n&amp;lt;/math&amp;gt;. On the other hand, &amp;lt;math&amp;gt;n \sim n+1&amp;lt;/math&amp;gt;. However, this notation may be useful when we care about the constant factor because we are quantifying something that depends on only the algorithm itself instead of the machine it runs on. For example, the amount of time an algorithm uses to sort its input depends on what machine it runs on, but the number of comparisons that the algorithm uses for a given input does not depend on the machine. In this case we might say that one algorithm requires &amp;lt;math&amp;gt;\sim N \log_2 N&amp;lt;/math&amp;gt; comparisons, whereas another requires &amp;lt;math&amp;gt;\sim 2 N \log_2 N&amp;lt;/math&amp;gt; comparisons. Both algorithms are &amp;lt;math&amp;gt;\Theta(N \log N)&amp;lt;/math&amp;gt;, but the tilde notation distinguishes their performance.&lt;br /&gt;
&lt;br /&gt;
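Comparison counting of the kind just described can be sketched as follows. Merge sort is chosen purely for illustration; the exact count depends on the input, so the ratio printed is indicative only, sitting somewhat below 1 for random input:

```python
import math
import random

def merge_sort(a, counter):
    """Sort a, incrementing counter[0] once per element comparison."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid], counter), merge_sort(a[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1                      # one element comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

random.seed(1)
n = 1 << 14
counter = [0]
merge_sort([random.random() for _ in range(n)], counter)
ratio = counter[0] / (n * math.log2(n))
print(ratio)  # close to (and below) 1, i.e. ~ n log2 n comparisons
```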
[[Category:Pages needing diagrams]]&lt;/div&gt;</summary>
		<author><name>Brian</name></author>	</entry>

	</feed>