Back to basics #1 – Big O notation
I am going to start the “Back to basics” series with something really basic – Big O notation. I know, I know, quite simple, even trivial, but it’s good to go back to the real basics once in a while. Without further ado – let’s go!
What’s the Big O?
In short, Big O notation is a way of describing the performance of algorithms. As with (almost) every description of this concept, let’s start with binary search. In the base scenario, we have two people. One of them chooses a number in the 1 to 100 range, and the other one must guess what the number is. The person guessing asks whether a specific number is the right one and gets one of three responses:
- it is the right number
- the right number is bigger
- the right number is smaller
In the simple search, we could just go step by step, starting from number one and making our way up to 100. In the worst-case scenario, we would need 100 attempts. However, there is an alternative. Based on the fact that the numbers are ordered (that’s crucial here), we can devise a more sophisticated algorithm (please, don’t laugh 😉 ). Instead of going step by step, we can choose the number that lies exactly in the middle of the searched range, removing half of the remaining numbers with every attempt.
So the process could look like this for the number 83 (a short code sketch follows the walkthrough):
- (1-100) Is it 50? No, the number is bigger
- (51-100) Is it 75? No, the number is bigger
- (76-100) Is it 87? No, the number is smaller
- (76-86) Is it 81? No, the number is bigger
- (82-86) Is it 84? No, the number is smaller
- (82-83) Is it 82? No, the number is bigger
- It is 83 then! Yes, you’re right!
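To make the walkthrough more concrete, here is a minimal sketch of binary search. It is written in Python purely for illustration – the language choice and the helper name binary_search are mine, not something taken from the sources listed below.

```python
# A minimal sketch of binary search over a sorted list of numbers.
def binary_search(items, target):
    """Return the index of target in the sorted list items, or None if absent."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2      # pick the middle of the current range
        if items[mid] == target:
            return mid               # found it
        elif items[mid] < target:
            low = mid + 1            # the target is bigger: drop the lower half
        else:
            high = mid - 1           # the target is smaller: drop the upper half
    return None                      # the target is not in the list

# Guessing 83 in the 1-100 range from the walkthrough above:
numbers = list(range(1, 101))
print(binary_search(numbers, 83))    # prints 82, the zero-based index of 83
```

Every pass through the loop halves the searched range, which is exactly why the number of attempts stays so small.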
We needed only 7 attempts here; compared with the 83 attempts of the step-by-step process, the difference in time spent is huge! With this trivial example of binary search, we can now see two types of Big O notation in practice. These are:
- O(n) – which represents linear execution time – it grows in proportion to the number of elements (a short sketch of this step-by-step search follows this list)
- O(log n) – which represents logarithmic execution time (as seen in the binary search example above)
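For contrast, the step-by-step search mentioned in the first bullet could look like this – again a rough Python sketch of my own, not code from the sources:

```python
# A minimal O(n) sketch: check every element one by one.
def linear_search(items, target):
    """Return the index of target in items, or None if absent."""
    for index, value in enumerate(items):
        if value == target:
            return index             # found after index + 1 comparisons
    return None                      # all n elements checked without success

numbers = list(range(1, 101))
print(linear_search(numbers, 83))    # prints 82, but only after 83 comparisons
```

Doubling the size of the list roughly doubles the work here, while for the binary search above it adds just one more attempt.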
Before we go further
Obviously, the two types mentioned above do not exhaust all the possibilities. However, before we go further, three things should be mentioned. The first relates to execution time. I mentioned above that Big O notation is mostly used for describing execution time, but that is not exactly what it shows. A better description (the Wikipedia article on Big O notation covers it) is that the notation indicates the amount of time/resources/attempts needed depending on the input size. So the notation is more generic and applies not only to time itself.
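To show what “depending on the input size” means in practice, here is a rough illustration of my own (based only on the “round up the logarithm” reasoning used later in this post) of how the worst-case number of binary search attempts grows with the size of the searched range:

```python
import math

# The worst-case number of binary search attempts grows only
# logarithmically with the size n of the searched range.
for n in (100, 1_000, 1_000_000, 1_000_000_000):
    attempts = math.ceil(math.log2(n))   # round the logarithm up, as with 1-100
    print(f"n = {n:>13,} -> at most {attempts} attempts")
```

A billion numbers still need at most 30 attempts – that is the whole point of describing growth rather than measuring a single run.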
The second thing, which is more of a trivia, is the origin of the letter O used in the name. It comes from the German word “Ordnung” (meaning ‘order’, as in ‘order of approximation’).
The last thing worth mentioning is another way we can categorize the performance of a specific operation or algorithm. In short:
- best case
- worst case
- expected case
In our example from before (the 1-100 number range with binary search), we could get lucky. If the number chosen to be guessed was 50, we would get the winning result on the first attempt. That would be the best case. Of course, it could also be the opposite side of the spectrum – and we actually reached it with 7 attempts (log2 of 100 is circa 6.64, but we round up). To be honest, for most algorithms the worst case and the expected case are the same, and that’s what Big O notation describes.
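A small experiment makes the three cases tangible. The sketch below is my own illustration, mirroring the binary search from earlier but counting guesses instead of returning an index, for every possible secret number between 1 and 100:

```python
# Count the guesses binary search needs for every possible secret number,
# then look at the best, worst and expected (average) case.
def count_guesses(numbers, target):
    low, high = 0, len(numbers) - 1
    guesses = 0
    while low <= high:
        guesses += 1
        mid = (low + high) // 2
        if numbers[mid] == target:
            return guesses
        elif numbers[mid] < target:
            low = mid + 1
        else:
            high = mid - 1

numbers = list(range(1, 101))
counts = [count_guesses(numbers, secret) for secret in numbers]
print(min(counts), max(counts), sum(counts) / len(counts))
# best case: 1 guess, worst case: 7 guesses, expected case: 5.8 guesses
```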
Gimme more Os?
Let’s go back to our notation – what other Os are there in computer science? A couple, actually, and I’m just providing the list here with examples – that will do as a simple reminder.
- O(1) – constant time: no matter the number of elements, the operation performs the same. An example is reading an array element at a given index, or adding an element at the end of a (dynamic) array.
- O(n!) – factorial execution time; the most famous example is the brute-force solution to the travelling salesman problem.
- O(n log n) – log-linear execution time, found in fast sorting algorithms such as heapsort.
- O(n²) – quadratic execution time; bubble sort is the classic example (a short sketch follows this list).
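To close the list with something concrete, here is a minimal bubble sort sketch (my own illustration, not code from the sources); the nested loops over n elements are what make it O(n²):

```python
# A minimal O(n^2) sketch: bubble sort compares neighbouring pairs,
# so the nested loops perform on the order of n * n comparisons.
def bubble_sort(items):
    items = list(items)              # sort a copy, leave the input alone
    n = len(items)
    for i in range(n - 1):           # each pass bubbles the largest remaining element to the end
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]  # swap neighbours
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```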
Summary
To sum up – that was just a quick recap. If I decide to cover algorithms & data structures in this post series (and I assume I will), this article may serve as a humble starting point.
Sources used:
- “Cracking the Coding Interview” by Gayle Laakmann McDowell
- “Grokking Algorithms” by Aditya Bhargava