Beginning With Big O Notation

What is it? Big O Notation describes the limiting behavior of a function as its argument tends toward a particular value or toward infinity. In practice, it is used to describe the complexity of an algorithm: how the execution time or the space an algorithm needs grows as the size of its input grows.

O(1) This describes an algorithm that executes in the same time/space regardless of the size of the input.
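For instance, fetching an element from a Python list by its index is a constant-time operation; it takes the same amount of time whether the list has ten items or ten million. A minimal sketch (the function name is just for illustration):

```python
def get_first(items):
    # Indexing into a list takes the same time regardless of the list's length.
    return items[0]
```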

O(N) This describes an algorithm where the performance is directly dependent on the size of the input data. The time/space will grow linearly in proportion to the data set’s size.
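A classic example is a linear search, which in the worst case has to look at every element once. A rough Python sketch (the names are illustrative):

```python
def contains(items, target):
    # Worst case: we examine every element once, so the work grows
    # linearly with the number of items.
    for item in items:
        if item == target:
            return True
    return False
```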

O(N^2) This notation is similar to the previous one, where the performance depends on the size of the input data, except in this case the performance is proportional to the square of the input size.
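Nested loops over the same data set are the typical culprit. For example, a naive duplicate check that compares every element against every other element performs roughly N * N comparisons. A quick, illustrative Python sketch:

```python
def has_duplicate(items):
    # Two nested loops over N items means on the order of N * N comparisons.
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```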

O(2^N) This describes an algorithm whose performance doubles with every additional item in the input data. The execution time (or space) of a function described by this notation blows up very quickly.
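Generating every subset of a collection is a textbook example: each new element doubles the number of subsets, so the work grows as 2^N. A small recursive Python sketch (names are just for illustration):

```python
def subsets(items):
    # A list of N elements has 2^N subsets, since each element is either
    # included or excluded. Adding one element doubles the result.
    if not items:
        return [[]]
    rest = subsets(items[1:])
    return rest + [[items[0]] + subset for subset in rest]
```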

O(log N) This describes an algorithm whose performance grows slowly as the size of the input increases. Graphically, a logarithmic curve rises quickly at first and then flattens out, so doubling the input adds only a constant amount of extra work. Algorithms described by this notation can handle large data sets without an astronomical increase in execution time or space.
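Binary search is the usual example: each comparison discards half of the remaining elements, so even a very large sorted list takes only a handful of steps. A minimal Python sketch:

```python
def binary_search(sorted_items, target):
    # Each pass halves the search range, so the number of iterations
    # grows with log2(N) rather than N.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1
```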

Additional Resources

Written on September 20, 2010