The worst-case complexity of an algorithm is represented using Big-O notation, but what exactly does Big-O notation mean? Let's find out in this article.
In very simple terms, Big-O notation describes the complexity of an algorithm using algebraic terms.
Big-O notation Characteristics
The Big-O notation has two important characteristics:
1. It is expressed in terms of the input
2. It focuses on the bigger picture without getting caught up in the minute details
Let's understand these two points in the context of time complexity; we can then extend that knowledge to space complexity.
Time Complexity
All right, let's calculate the worst-case time complexity of our first program. I call it a program because the algorithm has already been implemented, with JavaScript as the language. The algorithm finds the sum of the first n natural numbers: we have an input n, and the function returns the sum of all the natural numbers from 1 to n.
For example, summation called with n equal to 4 will return 10.
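The original listing isn't reproduced in this article, so here is a sketch of what the summation program likely looks like, laid out so that the line numbers discussed below (line two, line four and line six) match up:

```javascript
function summation(n) {
  let sum = 0;                 // line 2: executes once
  for (let i = 1; i <= n; i++) {
    sum += i;                  // line 4: executes n times
  }
  return sum;                  // line 6: executes once
}

console.log(summation(4)); // 10
```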
We already know that we cannot calculate the absolute running time of an algorithm, and hence that cannot be the time complexity. What we do instead is count the number of times each statement executes based on the input size.
Let's do the same for our program. The program has three main statements to execute: line two, line four and line six; the for loop is just a repetition of line four.
Now, given n is equal to 4, let's count the number of times each statement is executed: line two executes only once; line four, however, executes four times, from i equal to 1 to i equal to 4; and line six again is executed just once.
So the total count is 4 plus 2.
I say 4 plus 2 because we can generalise it to n plus 2, where n is the input to the function.
If n is equal to 4, the total count is 4 plus 2.
If n is equal to 10, the total count is 10 plus 2.
If n is equal to 1 million, the total count is 1 million plus 2.
Our time complexity is dependent on the input size, and this is the first characteristic mentioned above: Big-O notation is expressed in terms of the input.
The second characteristic is that Big-O focuses on the bigger picture without getting caught up in the minor details.
Let's plug in a few values for n to understand this point.
If n is equal to 100, then n plus 2 is 100 plus 2. Similarly we have 1,000 plus 2, then 10,000 plus 2, and as the input grows to 100 million it is 100 million plus 2.
At that point the plus 2 is so insignificant that we can drop it: n plus 2 can be approximated to just n, since n contributes the most to the total value, not the additional 1 or 5 or 50 extra steps (in our case, two extra steps). You don't have to get caught up in the smaller steps that don't affect the performance as much as the others.
So the worst-case time complexity, which is also referred to as just the time complexity, of our summation algorithm is O(n), which is called linear time complexity. This means that as the size of the input increases, the running time increases linearly as well.
Hopefully, you now have a good idea of what Big-O notation is.
Now you might ask: do I have to calculate the count line by line to determine the time complexity? Well, you could, but you can also make some safe approximations. Any time you see a loop in your algorithm, most of the time you can safely say the time complexity is at least linear.
Of course there are exceptions, and we will look at them throughout this course, but a loop's worst case is when it iterates over the entire input, and hence the time complexity is linear. If n is equal to 10, the statement inside the loop will be executed 10 times in the worst case.
Here is another algorithm for the same problem, the summation of the first n natural numbers.
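The second listing also isn't shown here; a likely sketch uses the closed-form formula n × (n + 1) / 2, with the single statement on line two:

```javascript
function summation(n) {
  return (n * (n + 1)) / 2; // line 2: executes once, whatever the value of n
}

console.log(summation(4)); // 10
```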
The time complexity of this algorithm is O(1), which is called constant time complexity: irrespective of the value of n, line two is executed only once. Once you get the hang of it, it becomes easier to analyse the time complexity.
For example, if there are two nested loops, the time complexity is quadratic. Once again, the Big-O notation drops the less dominant terms: if you calculate the complexity to be three n squared plus five n plus one, we narrow that down to just n squared and call it quadratic.
One hundred million squared is so large compared to five times 100 million that we can drop the latter term. Similarly, if there are three nested loops, it is cubic, and if the input size reduces by half every iteration, it is logarithmic.
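As a hypothetical illustration (these functions are not from the article), the innermost statement of two nested loops runs n × n times, while a loop that halves its input each iteration runs roughly log base 2 of n times:

```javascript
// Two nested loops over the same input: the inner statement runs n * n times → O(n^2)
function nestedCount(n) {
  let count = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      count++;
    }
  }
  return count;
}

// The input halves every iteration: roughly log2(n) steps → O(log n)
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

console.log(nestedCount(10));   // 100
console.log(halvingSteps(1024)); // 10
```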
We will be determining the time complexity of the various algorithms we look at throughout the series, so you will slowly but surely get the hang of it.
Space Complexity
Let's quickly talk about space complexity; the idea remains the same. If the algorithm does not need extra memory, or the memory needed does not depend on the input size, the space complexity is constant. An example would be sorting algorithms that sort within the array without utilising additional arrays.
You can also have algorithms with linear space complexity, where the extra space needed grows as the input size grows, and algorithms with logarithmic space complexity, in which case the extra space needed grows, but not at the same rate as the input size.
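To make the difference concrete, here is a small sketch (the function names are hypothetical, not from the article): reversing an array in place needs only a couple of extra index variables, so the extra space is O(1), while building a reversed copy needs a new array as large as the input, so the extra space is O(n):

```javascript
// O(1) extra space: reverse the array in place using two index variables
function reverseInPlace(arr) {
  let left = 0;
  let right = arr.length - 1;
  while (left < right) {
    [arr[left], arr[right]] = [arr[right], arr[left]]; // swap the ends
    left++;
    right--;
  }
  return arr;
}

// O(n) extra space: build a new reversed array the same size as the input
function reverseCopy(arr) {
  const result = [];
  for (let i = arr.length - 1; i >= 0; i--) {
    result.push(arr[i]);
  }
  return result;
}

console.log(reverseInPlace([1, 2, 3])); // [3, 2, 1]
console.log(reverseCopy([1, 2, 3]));    // [3, 2, 1]
```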
Typically you will find algorithms with these three space complexities. Although it is common to solve problems with quadratic time complexity, quadratic space complexity is something you should try to avoid.
Here is a graph plotting the number of operations against the number of elements; it shows how the input size affects performance.
You can see that O(log n) and O(1) are very good, whereas O(2^n) and O(n!) are really bad time complexities that should be avoided whenever possible.
With that, we have now covered the theoretical part of this course. Time and space complexity and Big-O notation are fundamental to learning algorithms, and hopefully you now have a good understanding of them.
A few points to note
If you have understood everything so far, let me go over a few more points that you should keep in mind before continuing:
1. Multiple algorithms exist for the same problem, and there is no one right solution. Different algorithms work well under different constraints.
2. The same algorithm with the same programming language can be implemented in different ways.
3. When writing programs at work, don’t lose sight of the big picture. Rather than writing clever code, write code that is simple to read and maintain.
So let's take a look at object and array operations in the next article.