Intuition for Time Complexity of Algorithms

I'm currently taking an algorithms class to fill out my spring semester. Since I'm already reasonably familiar with the contents of the course, I pass the time during lectures trying to figure out some deeper, more fundamental properties of algorithms. It's fun! And probably very useful to know. Here are some general notes on complexity, and one completely unrelated note on dynamic programming.

This is not a complete list of anything. It's just a loose assortment of insights I've had, which may or may not be helpful to anyone else. This article might get updated with time as I discover additional interesting things.

Complexity Categories

O(n!)

This means "brute-forcing through all permutations of the input". It's easy to come up with algorithms in this category. For example, you could sort an array by generating all permutations of the array and then returning the one that is sorted. An exhaustive depth-first search over permutations also belongs to this category, such as a naive solution to the travelling salesman problem, or finding the longest simple path in a graph by checking all permutations of nodes that also form valid paths.
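
To make that concrete, here is a rough Python sketch of the permutation-based sort. It's a toy, but it shows the shape of an O(n!) algorithm.

```python
from itertools import permutations

def permutation_sort(xs):
    """Sort by brute force: try every permutation until one is sorted.

    There are n! permutations of n elements, so this is O(n! * n)
    (the extra factor of n comes from checking whether a permutation
    is in order).
    """
    for candidate in permutations(xs):
        if all(a <= b for a, b in zip(candidate, candidate[1:])):
            return list(candidate)

# permutation_sort([3, 1, 2]) == [1, 2, 3]
```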

O(2ⁿ)

Any time you're trying all possible ways to split the input into two groups, you're likely dealing with an O(2ⁿ) algorithm. Intuition: assign 0 or 1 to each element of the input, indicating whether it belongs to the first or second group. Each assignment then reads as an n-digit binary number, so there are 2ⁿ possible assignments (the numbers 0 through 2ⁿ − 1).
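
A minimal sketch of that intuition in Python, where each bitmask encodes one way of assigning elements to the two groups:

```python
def all_splits(items):
    """Yield every way to split items into two groups.

    Each integer mask in [0, 2^n) is read as n binary digits; bit i says
    whether items[i] goes into the first or the second group, so there
    are exactly 2^n splits.
    """
    n = len(items)
    for mask in range(2 ** n):
        first = [x for i, x in enumerate(items) if mask & (1 << i)]
        second = [x for i, x in enumerate(items) if not mask & (1 << i)]
        yield first, second
```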

For example, a "cut" is any legal way to split a graph into two smaller graphs. The "value" of the cut is the sum of the weights of all edges that disappear. Finding the minimal cut of a graph means finding the split whose removed edges have the smallest total weight. Doing this by brute force is an O(2ⁿ) job.
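
A brute-force sketch of that, assuming the graph is given as a list of nodes and weighted edges (u, v, w), and has at least two nodes:

```python
def min_cut_brute_force(nodes, weighted_edges):
    """Try all 2^n ways to split the nodes into two non-empty sides.

    The value of a split is the total weight of edges whose endpoints
    land on different sides; return the cheapest split found.
    """
    index = {node: i for i, node in enumerate(nodes)}
    best_value, best_mask = float("inf"), None
    # Skip mask 0 and mask 2^n - 1, which would leave one side empty.
    for mask in range(1, 2 ** len(nodes) - 1):
        value = sum(
            w for u, v, w in weighted_edges
            if bool(mask & (1 << index[u])) != bool(mask & (1 << index[v]))
        )
        if value < best_value:
            best_value, best_mask = value, mask
    side_a = [node for node in nodes if best_mask & (1 << index[node])]
    side_b = [node for node in nodes if not best_mask & (1 << index[node])]
    return best_value, side_a, side_b
```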

O(n²)

Things start getting really interesting here. The most obvious interpretation of O(n²) is "a nested loop over the input". A more fundamental way of formulating that is "pairwise comparisons between elements". When each element of the input is compared with every other element, you have an O(n²) algorithm.

When does this happen? Well, one obvious example is when you have a bunch of points in 2D space and you want to figure out which two points are closest to each other. The naive way to do that is to compare each point to every other point, keeping track of the shortest distance seen so far.
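
A quick sketch of that naive closest-pair search:

```python
from math import dist  # Euclidean distance, Python 3.8+

def closest_pair(points):
    """Compare every pair of points and keep the closest one: O(n^2) comparisons."""
    best, best_distance = None, float("inf")
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = dist(points[i], points[j])
            if d < best_distance:
                best, best_distance = (points[i], points[j]), d
    return best, best_distance
```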

This is also what the simplest sensible sorting algorithms (like insertion sort) do. They take each element that is not yet sorted, and compare it with the already-sorted elements to figure out where in the sorted sequence it goes. The "pairwise" interpretation of this is that if you compare each element with every other element, each element collects a unique number of "true" comparisons (assuming the elements are distinct). The number of "true" comparisons an element collects is its index in the sorted list.
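
Here is a sketch of that "pairwise" reading, assuming distinct elements: each element's sorted index is exactly the number of comparisons it wins.

```python
def rank_sort(xs):
    """Sort distinct elements by counting pairwise comparisons: O(n^2).

    For each element, count how many other elements are smaller than it;
    that count of "true" comparisons is its index in the sorted output.
    """
    result = [None] * len(xs)
    for x in xs:
        index = sum(1 for y in xs if y < x)
        result[index] = x
    return result
```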

O(n log n)

I've always been surprised at how some comparison-based sorting algorithms – which intuitively should be O(n²) (see above) – can be O(n log n). It's not at all obvious to me. It appears that by using divide-and-conquer strategies, you can take an O(n²) problem and turn it into an O(n log n) problem. The way this works is that you

  1. Find a way to split the input into halves and then join them together again. These two operations must be O(n) in total. Merge sort does it by splitting in O(1) and joining in O(n); quick sort does it by splitting in O(n) and joining in O(1).
  2. At the leaves of the binary tree that forms from the above operation, you perform O(1) work. This is often the case, since at the leaves you only have to do whatever real work you need to do on a single element. (In the case of sorting algorithms, sorting a list with only one element is easy: do nothing at all!)

The consequence of the above is that you will have a binary tree where the root node performs O(n) work (splitting and joining into halves). The two child nodes of the root each do O(n/2) work (I'm being a bit handwavy with the notation here): each node only needs to split and join half the original input. The four child nodes below those each do O(n/4) work. You see that at each "level of splitting", only O(n) work is performed in total: if a level has k nodes, each performs O(n/k) work, so the level as a whole does O(n) work.

Now, how many "levels" are there, i.e. how deep is this binary tree? Only O(log n). So there are O(log n) levels and at each level you are doing O(n) work. Multiply and that makes O(n log n).
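
To tie this back to sorting, here is a minimal merge sort sketch illustrating the recipe: a cheap split, an O(n) merge per level, and no work at all at the one-element leaves.

```python
def merge_sort(xs):
    """O(n log n): O(log n) levels of splitting, with O(n) merge work per level."""
    if len(xs) <= 1:
        return list(xs)   # leaf: a one-element (or empty) list is already sorted
    mid = len(xs) // 2    # split down the middle (the slices copy, still O(n) at this level)
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    # join: merge the two sorted halves in O(n)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```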

I suppose, at least from the perspective of sorting algorithms, that there should be some sort of intuitive connection between pairwise comparison and whatever this is – somehow comparing each element only to log n other elements? Something something binary search?

Dynamic Programming

Completely unrelated to all of the above:

When you read the term "dynamic programming", do not read the word "programming" as "writing code", because that's not what it means. Rather, read it as "making a schedule" or "filling a table". The word "program" is used in its broader "table of things" meaning.
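
For example, here is a tiny table-filling sketch for Fibonacci numbers; the "program" is literally the table being filled in, entry by entry, from earlier entries.

```python
def fib_table(n):
    """Dynamic programming as table-filling: each entry is computed once
    from earlier entries, instead of re-solving the same subproblems."""
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

# fib_table(10) == 55
```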

I believe knowing that would have made it much easier for me to grok what dynamic programming actually is.