When analyzing algorithms we want to reason (quickly) about their running time. We use the RAM model, in which each of the following is considered to cost 1 CPU operation:

- simple arithmetic on word-sized numbers (\(+\), \(-\), \(\times\), \(/\)),
- comparisons and logical operations,
- reading or writing a single memory cell (an assignment, an array access, following a pointer),
- control flow: a branch, a subroutine call, a return.
Other attributes of the model:

- memory is a flat array of word-sized cells, any of which can be accessed in one step,
- instructions execute one at a time (no parallelism),
- hardware details such as caches and pipelines are ignored.
The goal is to let you count operations, and hence estimate running cost, without needing to know the specific architecture you are running on.
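For concreteness, here is one way to count operations for a tiny program under this model (the exact charging scheme below is a somewhat arbitrary choice, but any reasonable one gives the same growth rate):

```python
def sum_list(a):
    """Sum a list while tallying RAM-model operations (one reasonable charging scheme)."""
    ops = 0
    total = 0              # 1 assignment
    ops += 1
    i = 0                  # 1 assignment
    ops += 1
    while i < len(a):      # 1 comparison per loop test
        ops += 1
        total = total + a[i]   # 1 memory read + 1 addition + 1 assignment
        ops += 3
        i = i + 1              # 1 addition + 1 assignment
        ops += 2
    ops += 1               # the final comparison that exits the loop
    return total, ops

total, ops = sum_list([3, 1, 4, 1, 5])
print(total, ops)  # ops grows linearly with the length of the list
```

Notice that the count depends only on the pseudocode, not on the machine it runs on.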
Some notes on algorithm analysis.
Big-oh, \( \mathcal{O}( \cdot ) \), has nothing to do with CPU cost per se; it is just a way to fuzzily compare functions. We say that \( f(n) = \mathcal{O}(g(n)) \) if and only if there is some cutoff \( N \) and positive real \( c \) such that for all \( n \geq N \) we have \( |f(n)| \leq c \cdot |g(n)| \).
The official definition is there to get you out of trouble when you feel lost, but it looks way more complicated than it really is: in practice you just use it to compare how fast functions grow.
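For example, to see the definition in action, check that \( 3n^2 + 5n + 2 = \mathcal{O}(n^2) \): take \( c = 10 \) and \( N = 1 \). For all \( n \geq 1 \) we have \( n \leq n^2 \) and \( 1 \leq n^2 \), so
\[ |3n^2 + 5n + 2| \;\leq\; 3n^2 + 5n^2 + 2n^2 \;=\; 10 \cdot |n^2|, \]
which is exactly the condition in the definition.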
For \(f, g, f_1, g_1\) functions from \(\mathbb{N} \to \mathbb{R}\) we know that:

- if \( f = \mathcal{O}(g) \) and \( f_1 = \mathcal{O}(g_1) \), then \( f + f_1 = \mathcal{O}(g + g_1) = \mathcal{O}(\max(|g|, |g_1|)) \),
- if \( f = \mathcal{O}(g) \) and \( f_1 = \mathcal{O}(g_1) \), then \( f \cdot f_1 = \mathcal{O}(g \cdot g_1) \),
- for any constant \( a > 0 \), if \( f = \mathcal{O}(g) \) then \( a \cdot f = \mathcal{O}(g) \).
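For example, if \( f(n) = \mathcal{O}(n) \) and \( f_1(n) = \mathcal{O}(n^2) \), the sum rule gives \( f + f_1 = \mathcal{O}(n^2) \) and the product rule gives \( f \cdot f_1 = \mathcal{O}(n^3) \).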
So to use big-oh on algorithms we ballpark the cost of each piece and combine: consecutive steps add, nested loops multiply, and we ignore constant factors and other specific details.
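As a sketch (an illustrative example, not a prescribed exercise), consider counting how many pairs in a list are out of order. The inner loop runs at most \( n \) times for each of the \( n \) outer iterations, and everything inside costs \( \mathcal{O}(1) \) in the RAM model, so the whole thing is \( \mathcal{O}(n) \cdot \mathcal{O}(n) = \mathcal{O}(n^2) \):

```python
def count_inversions_naive(a):
    """Count pairs (i, j) with i < j but a[i] > a[j]."""
    n = len(a)
    count = 0
    for i in range(n):              # O(n) iterations
        for j in range(i + 1, n):   # at most O(n) iterations each
            if a[i] > a[j]:         # O(1) comparison
                count += 1          # O(1) arithmetic + assignment
    return count

print(count_inversions_naive([3, 1, 4, 1, 5]))  # 3
```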
Sample algorithms to practice on: sorting!
This is insertion sort done via Folk Dance!
or a gif:
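If you prefer code to choreography, here is a minimal insertion sort in Python (a standard textbook version, not necessarily the exact variant shown in the video):

```python
def insertion_sort(a):
    """Sort the list a in place using insertion sort."""
    for i in range(1, len(a)):
        key = a[i]          # the next element to insert
        j = i - 1
        # Shift larger elements one slot to the right
        # until the correct position for key is found.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

In the worst case the inner loop shifts up to \( n \) elements for each of the \( n \) outer iterations, so insertion sort is \( \mathcal{O}(n^2) \); on an already sorted list the inner loop never runs and the cost is \( \mathcal{O}(n) \).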