The Big O Notation

In computing, one second can make a big difference when we work with huge datasets. Under these conditions, the algorithms we use to sort, iterate over, and manipulate our data must be optimal to prevent lags or hangs in our software. The goal is twofold: find the algorithm with the best runtime, and find an easy way to tell which algorithm that is.
HOW CAN WE DO THIS?
Well, we could do this manually by timing our code from the moment it starts running until it ends, repeating this for every algorithm we come up with. This is not ideal: it wastes time (what if the code takes 4 hours or more to run?) and it does not give reliable results, because different machines have different capabilities and some will run a piece of code faster than others. So this solution is not the best.
IS THERE A BETTER WAY?
Well, of course there is. We need something that remains constant no matter which machine the code runs on. So instead of counting the number of seconds the code takes to run, which varies too much to be reliable, we count the number of simple operations the computer must perform while running our code. That count stays the same in every situation.
Example
Let us write a small function addUpTo(n) in JavaScript that adds up all the integers from 1 to a particular number n and returns the sum. That is, if n is 2 the function adds 1 + 2, so the answer is 3.
The first solution that may pop into your mind is using a for loop. How many simple operations does this solution have? Let us have a look together.
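The original post showed the code as an image; a loop-based addUpTo might look like this sketch:

```javascript
// Adds all integers from 1 up to n using a for loop.
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

addUpTo(2); // 3
```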

Starting from the top:

  • We have one assignment (let total = 0): an operation the computer carries out once.
  • In the for loop, we have one assignment (let i = 1): also an operation carried out once.
  • We have i <= n: this is n comparisons, depending on the value of n passed in.
  • We have i++: this is n additions and n assignments, also depending on n.
  • We have total += i: this is n additions and n assignments, also depending on n.

So in this algorithm, as n grows the number of operations grows roughly in proportion to n: about 5n + 2 operations in total (so 27 when n is 5). Because the number of simple operations depends on n, the operations grow as n grows.
Let us run this function and display the time it took using n as 1 billion.
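A simple timing sketch (performance.now is available in modern browsers and in Node.js):

```javascript
// Loop-based version of addUpTo, timed with n = 1 billion.
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

const t1 = performance.now();
addUpTo(1000000000); // one billion
const t2 = performance.now();
console.log(`Time elapsed: ${(t2 - t1) / 1000} seconds`);
```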

The result is about 1.26 seconds; of course, this may vary on your machine and may also vary across different runs.
Here is a better algorithm
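This version (also shown as an image in the original) uses the arithmetic-series formula instead of a loop; a sketch:

```javascript
// 1 + 2 + ... + n equals n * (n + 1) / 2,
// so the sum can be computed in three operations.
function addUpTo(n) {
  return n * (n + 1) / 2;
}

addUpTo(2); // 3
```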

Starting from the top: only three operations are carried out, a multiplication (*), an addition (+), and a division (/). It does not matter what n is; the number of operations remains the same. In this algorithm the number of simple operations does not depend on n: it stays constant. Counting operations like this is still hard, though. It is better than using seconds because it is more consistent, but it can be time-consuming, and we cannot keep counting the operations for every piece of code we write. We need a more consistent, easier way. Let us run this function and display the time it took, using n as 1 billion.
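To compare, here is a sketch timing both versions side by side (addUpToLoop and addUpToFormula are my own names for the two algorithms above):

```javascript
// The loop version: operations grow with n.
function addUpToLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

// The formula version: three operations regardless of n.
function addUpToFormula(n) {
  return n * (n + 1) / 2;
}

// Returns elapsed seconds for one call of fn(n).
function timeIt(fn, n) {
  const start = performance.now();
  fn(n);
  return (performance.now() - start) / 1000;
}

console.log(`Loop:    ${timeIt(addUpToLoop, 1000000000)} s`);
console.log(`Formula: ${timeIt(addUpToFormula, 1000000000)} s`);
```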
 
As we can see from the results of the two algorithms that do the same thing, the second function, with its smaller number of operations, runs far faster than the first.
AN EVEN BETTER WAY
Yes, the Big O. What is Big O anyway? In simple terms, Big O is a way of comparing the performance of one piece of code to another. Big O is used to measure the time complexity (the runtime of a particular algorithm) and the space complexity (the space an algorithm takes up while running).
Who cares about Big O?
  1. It is important to have a precise vocabulary for talking about how our code performs.
  2. It is useful for discussing trade-offs between different approaches.
  3. When code slows down or crashes, identifying the inefficient parts helps us find the pain points in our applications.
What does better mean?

  • Faster
  • Less memory-intensive

Big O counts the number of simple operations the computer has to perform. It allows us to talk formally about how the runtime of an algorithm grows as the input grows, and it lets us assign names (sort of) to the performance of code.
Big O official Definition
We say that an algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n), as n increases.
Here f(n) is simply a function of the input n, and there are a bunch of options for what it can be.
WHAT DOES THE DEFINITION MEAN?

  • The function f(n) could be linear (f(n) = n): as n grows, the runtime grows proportionally. Written as O(n).
  • The function f(n) could be quadratic (f(n) = n^2): as n grows, the runtime grows with the square of n. Written as O(n^2).
  • The function f(n) could be constant (f(n) = 1): as n grows, the runtime stays the same, which we simplify to 1. Written as O(1).
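To make the three cases concrete, here is a sketch with one illustrative function per class (the function names are my own, not from the original):

```javascript
// O(n): the loop body runs n times, so work grows linearly with n.
function printUpTo(n) {
  for (let i = 1; i <= n; i++) console.log(i);
}

// O(n^2): a loop of length n nested inside another loop of length n.
function printAllPairs(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) console.log(i, j);
  }
}

// O(1): the same three operations no matter what n is.
function addUpTo(n) {
  return n * (n + 1) / 2;
}
```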

LISTING OF BIG O NOTATION ACCORDING TO SPEED OR TIME COMPLEXITY

  • O(1) Excellent
  • O(log n) Excellent
  • O(n) Fair
  • O(n log n) Bad
  • O(n^2) Horrible
  • O(2^n) Horrible
  • O(n!) Horrible

LOGARITHMS
What are logs?
A logarithm answers the question: to what power must the log base be raised to give n?
For example, log base 2 of 8 = 3, i.e. 2^3 gives 8, so the answer is 3.
In Big O notation, log is shorthand for log base 2.
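In JavaScript this can be checked directly with Math.log2:

```javascript
// 2 ** 3 === 8, so the base-2 logarithm of 8 is 3.
console.log(Math.log2(8));    // 3
console.log(Math.log2(1024)); // 10
```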
REPRESENTING ABOVE FUNCTIONS IN O NOTATION
The faster addUpTo(n) is O(1): its runtime stays roughly constant as n grows because it performs a fixed number of operations.

The slower addUpTo(n) is O(n): its runtime depends on n, and it increases as n increases because of the simple operations it must carry out in the for loop.


 
SPACE COMPLEXITY
This is the amount of memory an algorithm needs in order to run. Here we focus on auxiliary space complexity: the space required by the algorithm itself, not counting the space taken up by the inputs.
SPACE COMPLEXITY RULE OF THUMB

  • Most primitives (booleans, numbers, undefined, null) are constant space
  • Strings require O(n) space (where n is the string length)
  • Reference types are generally O(n), where n is the length(for arrays) or the number of keys (for objects)

Here is an example
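The example was shown as an image in the original; based on the description below, it is likely along these lines (the function name is my own):

```javascript
// Only two extra variables (total and i) are created,
// no matter how long arr is: O(1) auxiliary space.
function sum(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}
```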

Here we declare two variables, total and i, and simply add the numbers from the array into total, so the Big O of this function's space is O(1).
Here is another example 
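This example was also an image in the original; based on the description below, a sketch (the function name and the multiplier are my assumptions):

```javascript
// newArr grows in direct proportion to arr: O(n) auxiliary space.
function double(arr) {
  const newArr = [];
  for (let i = 0; i < arr.length; i++) {
    newArr.push(2 * arr[i]);
  }
  return newArr;
}
```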

Here we declare a variable called newArr, which is an array, and in the for loop we push a multiple of each number in arr, then return newArr. The space complexity of this function is O(n), since the space taken depends on the length of the input arr.
BIG O SHORTHANDS

  • Arithmetic operations are constant
  • Variable assignments are constant
  • Accessing elements in an array (by index) or object (by key) is constant
  • In a loop, the complexity is the length of the loop times the complexity of whatever happens inside the loop
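These shorthands can be seen in a short sketch (the variable and function names are illustrative):

```javascript
const arr = [10, 20, 30, 40];
const person = { name: "Ada", age: 36 };

// Constant-time operations:
const x = arr[2];      // access by index: O(1)
const y = person.name; // access by key: O(1)
const z = x + 5;       // arithmetic: O(1)

// A loop multiplies the cost of its body by its length:
// constant work inside a length-n loop is O(n) overall.
function sumPlusOne(nums) {
  let total = 0;            // O(1) assignment
  for (const num of nums) { // runs nums.length times
    total += num + 1;       // O(1) work per iteration
  }
  return total;             // O(n) overall
}
```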

Conclusion

  • To analyze the performance of an algorithm, we use BIG O NOTATION.
  • BIG O NOTATION can give us a high-level understanding of the time or space complexity of an algorithm.
  • BIG O NOTATION does not care about precision, only general trends (linear, quadratic, constant).
  • The time or space complexity(as measured by Big O) depends only on the algorithm, not the hardware used to run the algorithm. 

 
 
written by Ebube Okocha.
