Big O notation and algorithm complexity analysis

Coccagerman
3 min read · Aug 7, 2021

Big O, also called asymptotic notation, is a system that lets us analyze and compare the performance of an algorithm as its input grows. In programming there are always many ways to do the same thing, and when comparing them and deciding which one to take, we normally look at how each way (algorithm/function) performs (how long it takes to run) and what resources it needs to achieve its goal.

So when comparing runtime, one option could be to measure the exact time the function takes to run, right? The problem is that this time will vary between different computers, and even on the same computer each measurement might be different.

Big O notation is the solution to this problem. Big O is a standardized method to analyze and compare the complexity (in terms of runtime and space) of different algorithms. The big O complexity of an algorithm will always be the same no matter what computer you “calculate” it on, because the complexity is based on how the number of operations of the algorithm varies as the input varies, and that relationship stays the same no matter the environment.

The most common complexity classes are listed below (a short code sketch of each follows the list):

  • Constant — O(1): When the number of operations/space required is always the same, regardless of the input. Take for example a function that takes a number as input and returns that number minus 10. No matter if you give it 100 or 1000000 as input, that function will always perform a single operation (subtract 10), so the complexity is constant O(1).
  • Logarithmic — O(log n): When the number of operations/space required grows at an increasingly slower rate compared to the growth of the input. This type of complexity is often found in algorithms that take a divide and conquer approach or in search algorithms. The classic example is binary search, in which the dataset you have to go through continually halves until you reach the final result.
  • Linear — O(n): When the number of operations/space required grows at the same rate as the input. Take for example a loop that prints every single value found in an array. The number of operations will grow together with the length of the array, so the complexity is linear O(n).
  • Quadratic — O(n²): When the number of operations/space required grows as the square of the input size. Nested loops are the classic example for this one. Imagine we have a loop that iterates through an array of numbers, and within that loop we have another one that iterates through the whole array again. For every value in the array we iterate over the entire array once more, so the number of operations is n × n and the complexity is quadratic O(n²).
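
To make these classes concrete, here is a minimal sketch of one function per class (written in TypeScript; the function names are just illustrative):

```typescript
// O(1): a single operation, regardless of the input's size
function minusTen(n: number): number {
  return n - 10;
}

// O(log n): binary search halves the remaining dataset on every step
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // target not found
}

// O(n): one operation per element in the array
function printAll(arr: number[]): void {
  for (const value of arr) console.log(value);
}

// O(n²): for every element, we iterate over the whole array again
function printAllPairs(arr: number[]): void {
  for (const a of arr) {
    for (const b of arr) console.log(a, b);
  }
}
```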

Note that the same notation is used when talking about both time and space complexity. Say for example we have a function that always creates an array with a single value no matter the input it receives: its space complexity will be constant O(1), and the same logic extends to the other complexity classes.
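
As a quick illustration of that point (just a sketch, with an illustrative name): the function below runs in O(n) time, but its space complexity is constant O(1), because the only thing it ever allocates is a single-value array.

```typescript
// O(n) time, O(1) space: we touch every element, but we only
// ever allocate a single-value array, no matter how big the input is
function wrapMax(arr: number[]): number[] {
  let max = -Infinity;
  for (const value of arr) {
    if (value > max) max = value;
  }
  return [max];
}
```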

Time complexity rules of thumb:

  • Arithmetic operations and variable assignments are always constant.
  • Accessing elements in arrays (by index) or in objects (by key) is always constant.
  • In a loop, the complexity is the length of the loop times the complexity of whatever happens inside the loop (see the sketch after this list).
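
Putting those rules together (again just a sketch, with illustrative names): a loop doing constant work on each of n elements is O(n), and an O(n) loop nested inside another O(n) loop is O(n²).

```typescript
// O(n): the loop runs n times, and each iteration does O(1) work
function sum(arr: number[]): number {
  let total = 0;                // assignment: O(1)
  for (const value of arr) {
    total += value;             // arithmetic: O(1), repeated n times
  }
  return total;
}

// O(n²): an O(n) inner loop inside an O(n) outer loop
function hasDuplicate(arr: number[]): boolean {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      if (i !== j && arr[i] === arr[j]) return true;
    }
  }
  return false;
}
```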

Space complexity rules of thumb:

  • Most primitive data types (booleans, numbers, null, undefined) are constant space.
  • Strings require O(n) space, where n is the number of characters.
  • Reference types (arrays and objects) require O(n) space, where n is the length of the array or the number of keys in the object (see the sketch below).
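
And applying the space rules (one more illustrative sketch): the first function below only keeps a few primitive variables around, so it uses constant space, while the second one builds a result array that grows with the input.

```typescript
// O(1) space: just a couple of primitive variables,
// regardless of the input's size
function average(arr: number[]): number {
  let total = 0;
  for (const value of arr) total += value;
  return total / arr.length;
}

// O(n) space: the result array grows together with the input
function doubled(arr: number[]): number[] {
  const result: number[] = [];
  for (const value of arr) result.push(value * 2);
  return result;
}
```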
