Time Complexity of Algorithms Explained with Examples


There are usually multiple ways of solving any problem we might face, and the question comes down to which one of them is the best choice. For algorithms, the best way to pick the right solution for a specific problem is to compare the performance of each available solution. This is where the time complexity of algorithms plays a crucial role, along with space complexity, but let’s keep space complexity for some other time.

In this blog, we will see what time complexity is, how to calculate it, and what the most common types of time complexity are.

Let’s begin…

What is the Time Complexity of an Algorithm?

Time complexity is the amount of time taken by an algorithm to run, as a function of the length of the input. Here, the length of the input determines the number of operations to be performed by the algorithm.

The actual running time depends on lots of things like hardware, operating system, processor, etc., and not just on the length of the input. However, we don’t consider any of these factors while analyzing the algorithm; we only consider the number of operations the algorithm performs as a function of the input size.

From this, we can conclude that for an algorithm with a fixed size of input, the number of operations it performs, and thus the time taken, will always remain the same.

Let’s take an example to understand this better – 

[Image: Constant Time Complexity]
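A minimal sketch of what such code looks like – a single statement and no loop (the exact statement in the image may differ):

console.log("Hello World!"); // executed exactly once, whatever the input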

The above statement is printed only once – nothing in it depends on the size of any input (the number of times it should run) – so the time taken by the algorithm is constant.

[Image: Linear Time Complexity]
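A minimal sketch of code along those lines – a single loop that runs once per element of the input (the exact code in the image may differ):

var n = 5; // size of the input
for (var i = 0; i < n; i++) {
  console.log("Hello World!"); // runs n times – once per element
}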

But for the above code, the time taken will not be constant: it contains a ‘for’ loop that executes its statement once for every element of the input. In the code above, the size of the input is taken as 5, so the statement is executed 5 times.

From this, we can conclude that if a statement is executed only once, the time taken remains constant, but if the statement is inside a for loop, the time taken to execute it grows as the size of the input increases.

But if the algorithm contains nested loops, or a combination of the two – single statements and loop statements – the time taken by the algorithm increases according to the number of times each statement is executed, as shown below.

[Image: Quadratic Time Complexity]
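A minimal sketch of such a combination – nested loops plus a single statement (the exact code in the image may differ):

var n = 5; // size of the input
for (var i = 0; i < n; i++) {   // outer loop runs n times
  for (var j = 0; j < n; j++) { // inner loop runs n times per outer iteration
    console.log(i, j);          // executed n * n = 25 times in total
  }
}
console.log("Done");            // single statement – executed once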

From this, we can draw a graph between the size of the input and the number of operations performed by the algorithm. Such a graph gives us a clear picture of the different types of algorithms and the time they take – the number of operations they perform – for a given input size.

[Image: graphs between the number of operations (N) and the size of the input (n)]

From the above graph, we can see that there is a relationship between the size of the input and the number of operations performed by an algorithm. This relationship is known as the order of growth and is denoted with Big Oh (O) notation, which is an asymptotic notation.

Big O Notation

It is used to express the upper limit of an algorithm’s running time, or we can also say that it tells us the maximum time an algorithm will take to execute completely. It is also used to determine the worst-case scenario of an algorithm.

Asymptotic notation is used to describe the running time of an algorithm as the input grows. There are mainly three types of asymptotic notations –

Big Oh (O) – describes an upper bound: the maximum time an algorithm can take to execute completely.

Big Theta (Θ) – describes a tight bound: the algorithm’s running time grows at exactly this rate, neither faster nor slower.

Big Omega (Ω) – describes a lower bound: the minimum time an algorithm can take to execute completely.

For example, a linear search takes at least one comparison – Ω(1) – and at most n comparisons – O(n) – depending on where the target sits.

How to Calculate Time Complexity

We have understood what time complexity is, and also that the time complexity of an algorithm is not just about raw time but about the order, or rate, at which it increases as we increase the size of the input.

Many other factors might affect the actual running time of an algorithm, like computer hardware, the programmer’s experience, etc., but we do not consider them while calculating the time complexity.

This brings us to how time complexity is actually calculated. We have seen above that there are many types of time complexities (I have listed only the most common ones, but there are many more), and simply scanning the algorithm for hints like sorting, searching, or looping might not help every time.

Most of the time, we have to work through the code, plugging in sample values, to check its time complexity. Those shortcuts will sometimes help us determine the time complexity quickly, but some algorithms, despite having those hints, are not what they seem.

Let’s take an example to understand this process – 

var i, sum = 0, n = 4;    // C1 – runs once
for (i = 0; i < n; i++) { // C2 – condition checked n+1 times (+1 for the final false check)
  sum = sum + i;          // C3 – runs n times
  console.log(sum);       // C4 – runs n times
}

Let’s denote the value of the time complexity by Tn, where each Ci is the constant cost of executing its line once. Then –

Tn = C1 + C2(n+1) + nC3 + nC4

     = C1 + C2 + n(C2 + C3 + C4)

Tn = n(C2 + C3 + C4) + (C1 + C2)

The above equation is very similar to one we are all familiar with – y = mx + c, the equation of a linear function – and here it represents linear time complexity.

But do we have to do all this just to say whether an algorithm has linear, quadratic, or any other time complexity? The answer is no. There are tricks that let us find the time complexity just by looking at an algorithm once.

Drop the Constants

Let’s face it: if we change the value of a constant term (say C1 or C2 above), will it have any effect on the time complexity? The answer is no – changing a constant’s value does not affect the time complexity.

This is because big-O notation only describes the long-term growth rate of functions (as n approaches infinity), rather than their absolute magnitudes. Multiplying a function by a constant only scales it by a constant amount, so linear functions still grow linearly.

So it is better to drop all constants, no matter their value, while calculating the time complexity, or Big O, of an algorithm.

Remove all the non-dominant terms from the equation

The reasoning is the same as above: alongside the dominant term (the highest power of n), the non-dominant terms have no effect on the nature of the graph or on the time complexity once the input gets large.
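For example, take a made-up running time:

Tn = 4n^2 + 3n + 5

Dropping the constants (4 and 5) and the non-dominant term (3n) leaves n^2, so the time complexity is O(n^2).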

Examples

We have taken only two questions, just to give you an idea of how quickly the time complexity of an algorithm can be calculated with the two tricks mentioned above.

Q1. What is the time complexity of the following code?

var a = 0, b = 0, i, j, k, N = 8; // C1 – runs once (N = 8 is just a sample input size)
for (i = 0; i < N; i++) {         // C2 – outer condition checked n+1 times
    for (j = 0; j < N; j++) {     // C2 – inner condition checked n+1 times per outer iteration
        a = a + j;                // C1 – runs n * n times
    }
}
for (k = 0; k < N; k++) {         // C3 – condition checked n+1 times
    b = b + k;                    // C1 – runs n times
}

Tn = C1 + C2(n+1) + nC2(n+1) + C1n^2 + C3(n+1) + nC1

     = (C1 + C2)n^2 + (C1 + 2C2 + C3)n + (C1 + C2 + C3)

Removing all the constant and non-dominant terms, we are left with n^2, and because time complexity is denoted with Big O notation, the time complexity of the above algorithm is O(n^2).

Here the n term also counts as non-dominant, since n^2 has a far greater impact on the running time than n for very large inputs.

Q2. Find the time complexity for the following function –

function rand() { return Math.random(); } // stand-in function with constant time complexity

var a = 0, b = 0, i, j, N = 8, M = 4; // sample input sizes so the code runs
for (i = 0; i < N; i++) {
    a = a + rand(); // runs N times
}
for (j = 0; j < M; j++) {
    b = b + rand(); // runs M times
}

Consider rand() to have a constant time complexity. Here the time complexity is O(N + M); you can verify it, if you want, with the method shown above.

Notice that in both questions there were two for loops, but in the first we multiplied their contributions, while in the second we added them.

If the loops are nested (it doesn’t matter how many there are), the dominant terms always get multiplied with each other.

But if they are not nested and run one after the other, as in the second question, we add the dominant terms.

Types of Time Complexity

Constant time – O(1)

An algorithm is said to have constant time complexity when the time it takes remains the same and does not depend on the size of the input.

[Image: Constant Time Complexity]

As in the earlier single-statement example, the statement is executed only once, and no matter how large the input gets, the time will not change. This is an example of O(1).

Other examples include getting a value from an array using its index, or the push() and pop() methods of an array, as sketched below.
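For instance (a minimal sketch):

var arr = [2, 4, 6, 8];
var first = arr[0]; // index access – one operation, however long the array is
arr.push(10);       // adding to the end of the array – constant time
arr.pop();          // removing from the end – constant time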

Linear Time – O(n)

The time complexity of an algorithm is said to be linear when the number of operations performed is directly proportional to the size of the input.

[Image: Linear Time Complexity]

The loop example from earlier is a perfect illustration of linear time complexity: the number of operations performed by the algorithm is determined by the size of the input, which was five in that code.

The best and easiest way to spot linear time complexity is to look for a single, non-nested loop.

Quadratic Time – O(n^2)

An algorithm has quadratic time complexity if the time to execute it is proportional to the square of the input size.

[Image: Quadratic Time Complexity]

The nested-loop code from earlier is quadratic because there are two loops and each runs n times, so the inner statement executes n * n = n^2 times.

Other examples of quadratic time complexity include bubble sort, selection sort, and insertion sort.
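As an illustration, here is a minimal bubble sort – the nested loops compare adjacent pairs over and over, giving O(n^2):

function bubbleSort(arr) {
  for (var i = 0; i < arr.length - 1; i++) {       // up to n - 1 passes
    for (var j = 0; j < arr.length - 1 - i; j++) { // inner loop shrinks each pass
      if (arr[j] > arr[j + 1]) {
        var temp = arr[j];  // swap adjacent elements that are out of order
        arr[j] = arr[j + 1];
        arr[j + 1] = temp;
      }
    }
  }
  return arr;
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // [1, 2, 4, 5, 8]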

Polynomial Time Complexity – O(n^k) where k > 2

We know that an algorithm has quadratic time complexity when the number of operations it performs grows with the square of the size of the input.

From this, we can conclude that an algorithm is said to have polynomial time complexity when the number of operations it performs grows with the size of the input raised to the power k, where k > 2.

Having a nested loop in our code doesn’t always mean that we have a Polynomial Time Complexity. Sometimes it can even be Linear or Logarithmic Time Complexity.

It all depends on how many iterations we are talking about once we unroll the loops, for example with a loop table. Let’s take an example to understand –

[Image: Extra Case – nested loop example]
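Consider nested loops of the following shape (a sketch, assuming a halving outer loop – the exact code in the image may differ):

var n = 10, count = 0;
for (var i = n; i >= 1; i = Math.floor(i / 2)) { // i takes the values 10, 5, 2, 1
  for (var j = 1; j <= i; j++) {                 // runs i times per outer iteration
    count++;
  }
}
console.log(count); // 10 + 5 + 2 + 1 = 18 iterations for n = 10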

Running this, it is clear that for n = 10 we get 18 iterations. If we change the value of n to 20 we get 38 iterations, and for n = 4 we get seven. Thus we come to the conclusion that for the given nested loops, the number of iterations for n = N is roughly 2N − C, where C is a small constant. We can also work this out with the help of a table.

[Image: Loop Table – Calculation Part]
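In other words, the total number of inner iterations forms a halving series:

n + n/2 + n/4 + … + 1 ≈ 2n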

And by removing both the lower order terms and constants, we get O(n) or Linear Time Complexity.

Logarithmic Time – O(log n)

An algorithm is said to have logarithmic time complexity when it reduces the size of the input data by half in each step. This is because the algorithm divides its working area in half with each iteration.

This can be better explained with the help of an example.

function binarySearch(arr, val) {
  // arr must be sorted in ascending order for binary search to work
  let start = 0;
  let end = arr.length - 1;

  while (start <= end) {
    let middle = Math.floor((start + end) / 2);
    if (arr[middle] == val) {
      return middle;        // found – return the index
    } else if (arr[middle] < val) {
      start = middle + 1;   // discard the left half
    } else {
      end = middle - 1;     // discard the right half
    }
  }
  return -1;                // not found
}
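For example (hypothetical calls, assuming the array is already sorted):

console.log(binarySearch([1, 2, 3, 4, 6, 7], 7)); // 5 – found at index 5
console.log(binarySearch([1, 2, 3, 4, 6, 7], 5)); // -1 – not in the array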

An easy way to recognize this type of time complexity is to look for divide-and-conquer strategies, where each step directly cuts down the remaining work.

The above code proceeds in this manner –

  1. Binary search finds the middle of the given (sorted) array
  2. It then compares it with the desired value – in this case, 7
  3. If the value matches, the function terminates and returns the index
  4. If it doesn’t, the search moves to the next step
  5. It eliminates the half of the list in which the value cannot lie
  6. And this process continues until the value is found or the list is exhausted

The most common examples of O(log n) are binary search and lookups in binary search trees.

Linearithmic Time Complexity – O(n log n)

As the notation itself suggests, it is a combination of linear and logarithmic time complexities. The time taken grows slightly faster than linear time complexity, but it is still far better than quadratic time complexity.

Merge Sort, Heap Sort, and Quick Sort (on average) are some examples of linearithmic time complexity.
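One quick way to picture it (a sketch reusing the binarySearch function from above): doing an O(log n) amount of work once for each of n elements gives O(n log n).

var sorted = [1, 2, 3, 4, 6, 7];
for (var i = 0; i < sorted.length; i++) { // n iterations
  binarySearch(sorted, sorted[i]);        // each call costs O(log n)
}
// total work: n * log n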

Exponential Time Complexity – O(2^n)

When we see exponential growth in the number of operations performed as the size of the input increases, we say the algorithm has exponential time complexity.

The best-known example of this kind of time complexity is the naive recursive computation of the Fibonacci series.

[Image: Exponential Time Complexity]
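A minimal sketch of the kind of code the image showed – naive recursive Fibonacci (the exact details may differ):

function fib(n) {
  if (n <= 1) return n;           // base cases: fib(0) = 0, fib(1) = 1
  return fib(n - 1) + fib(n - 2); // each call spawns two more calls
}

console.log(fib(15)); // 610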

In the above code, we take the number as 15, and to reach the 15th Fibonacci number the recursion makes on the order of 2^15 = 32768 operations (2^n is an upper bound on the number of calls). Thus, an exponential time complexity.

And because of this exponential growth, it is considered among the worst time complexities, one that an algorithm should avoid.

Let’s see all of the time complexities discussed above in one graph between the number of operations and the input size (graph by Medium).

[Image: different time complexities compared]

There are many more types of time complexities out there; you can read about them in this Wikipedia article.

Conclusion

Phew, that was a lot of theory, but we did learn something new as well. So let’s recap everything we discussed in this blog.

We understood what time complexity is and why it matters for an algorithm. We have also seen how to calculate it and the types of time complexities out there – there are more than we have discussed here, but we covered the most common ones.

And in the end, we just want to say that understanding time complexity, and how to calculate it, is the need of the hour: knowing it helps you plan resources and processes and deliver results efficiently and effectively.

Thus, knowing the time complexity of your algorithm can help you do that, and it also makes you a more effective programmer.

Frequently Asked Questions

How is the Time Complexity of an algorithm calculated?

It is easier to explain this with the help of an example. Consider an algorithm with a single loop. In this situation, we find out the number of times the loop will execute, which in turn depends on the size of the input. For an algorithm with only one loop, the time complexity is O(n), where O denotes the Big O notation used for expressing time complexities.

What is the time complexity of an algorithm?

Time complexity is the amount of time taken by an algorithm to run, as a function of the length of the input. Here, the length of the input determines the number of operations to be performed by the algorithm.

How many types of time complexities are there?

  • O(1) – constant time complexity
  • O(n) – linear time complexity
  • O(log n) – logarithmic time complexity
  • O(n log n) – linearithmic time complexity
  • O(n^2) – quadratic time complexity
  • O(2^n) – exponential time complexity

What is Big O notation?

It is used to express the upper limit of an algorithm’s running time, or we can also say that it tells us the maximum time an algorithm will take to execute completely. It is also used to determine the worst-case scenario of an algorithm.

