When analyzing algorithms we want to reason (quickly) about their running time. We use the RAM model, in which each of the following is considered to cost 1 CPU operation:
Addition or subtraction of two numbers
Multiplication of two numbers
Comparison of two numbers
A function call
An if statement
Any memory access
Other attributes:
Memory is unlimited
Recursive calls, for loops, and while loops all require analysis
The goal is to allow you to count operations and CPU cost without needing to know the specific architecture you are running on.
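As a rough sketch of what "counting operations" means in practice (the bookkeeping below is my own illustrative convention, not part of the model itself), here is how a tally might look for a simple summation loop:

```python
# Hypothetical bookkeeping: tally RAM-model operations while summing a list.
# Exactly what counts as "one operation" is an illustrative assumption here;
# the point is that the total depends only on n, not on the hardware.

def sum_list(values):
    ops = 0              # operation counter (not part of the algorithm itself)
    total = 0
    ops += 1             # 1 memory write to initialize total
    for v in values:
        ops += 2         # per iteration: 1 loop comparison + 1 memory access
        total += v
        ops += 2         # 1 addition + 1 memory write
    return total, ops

total, ops = sum_list(list(range(10)))
print(total, ops)        # ops comes out to roughly 4n + 1; here, 41
```

Whatever convention you pick, the count comes out as some function of the input size \(n\), and that function is all that \(\mathcal{O}\) below will care about.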
Big-oh, \( \mathcal{O}( \cdot ) \), has nothing to do with CPU cost per se; it is just a way to fuzzily compare functions. We say that \( f(n) = \mathcal{O}(g(n)) \) if and only if there is some cutoff \( N\) and positive real \(c\) such that for all \(n \geq N\) we have \(|f(n)| \leq c \cdot |g(n)| \).
The formal definition is there to fall back on when you feel lost, but it looks more complicated than it really is: it is just a way to compare how quickly functions grow.
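For example (an illustration of the definition in action, not one of the problems below): to see that \(5n + 3 = \mathcal{O}(n)\), take \(N = 3\) and \(c = 6\); then for every \(n \geq N\) we have \(|5n + 3| \leq 5n + n = 6n = c \cdot |n|\).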
Intro Problems
Prove, using the definition, that \(3n^2 = \mathcal{O}(n^2)\).
Do the same to show that \( \frac{1}{5} n = \mathcal{O}(n^3)\).
Show that \( 2^n \neq \mathcal{O}(n^4) \).
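A quick numerical sanity check for the last one (an illustration, not a proof): the ratio \(2^n / n^4\) grows without bound, so no single constant \(c\) can work.

```python
# Sanity check (not a proof): 2**n / n**4 keeps growing without bound,
# so no constant c can make 2**n <= c * n**4 hold for all large n.
for n in [10, 20, 40, 80]:
    print(n, 2 ** n / n ** 4)
```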
Properties of \(\mathcal{O}\)
For functions \(f, g, f_1, g_1\) from \(\mathbb{N} \to \mathbb{R}\), we know that:
If \(f = \mathcal{O}(g)\) then \( f + g = \mathcal{O}(g) \)
If \(f = \mathcal{O}(f_1)\) and \(g = \mathcal{O}(g_1)\) then \(f \cdot g = \mathcal{O}(f_1 \cdot g_1)\)
\( c \cdot f = \mathcal{O}(f)\) for any constant real \(c\)
The upshot is that you can pretty much ignore lower-order terms and constant factors.
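To see why the first property holds (a one-line sketch straight from the definition): if \(|f(n)| \leq c \cdot |g(n)|\) for all \(n \geq N\), then for those same \(n\) we get \(|f(n) + g(n)| \leq |f(n)| + |g(n)| \leq (c + 1) \cdot |g(n)|\), so \(f + g = \mathcal{O}(g)\) with constant \(c + 1\) and the same cutoff \(N\). Concretely, \(n^2 + 100n = \mathcal{O}(n^2)\) because \(100n = \mathcal{O}(n^2)\).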
Next batch
Prove that \(a_n x^n + \cdots + a_0 = \mathcal{O}(x^n)\), where the \(a_i\) are constants.
For \(b \geq a\) prove that \(n^a = \mathcal{O}(n^b)\).