# Big O Notation Simplified to the Max

Big O Notation? Big O Time Complexity?  Big O Space Complexity?

Do these terms send a Big Oh My Goodness signal to your brain? Pun intended by the way.

After you read through this article, hopefully those thoughts will all be a thing of the past!

One day, while I was lost in thought, I began to ask myself:

How would I explain the Big O Notation to a seven year old child?

What is going on inside of the mind of a seven year old?

Probably food, drinks, their favorite TV show or toy right?

The beauty of being a child is that you often absorb knowledge like a sponge and accept it for what it is without overcomplicating things. Sometimes for the good, sometimes for the bad.

Have you ever seen a seven year old scrutinize and critically assess whether Santa Claus exists?

In this post, we will be examining the time complexity of an algorithm. Depending on demand, I may cover space complexity in a future post.

In this post, I will be

• providing explanations as though I were explaining to a seven year old child
• providing a contextual “in a nutshell” summary
• writing code that fits the Big O Notation specified.

## What is the Big O Notation?

### Explanation to the Seven Year Old

Big O notation addresses the crappiest situation. You know your friend George right? You know how he sucks at League of Legends right? Big O measures the worst case of George’s feeding tendencies. We don’t take into account any of his good games okay?

In other words, we don’t take into account games where he gets fed first blood on a silver platter and the enemy intentionally feeds him kills. What matters in Big O notation is where everything goes wrong: where he gets ganked 100 times and feeds 20-plus kills.

Got that?

### In a nutshell

In Big O notation, we are only concerned with the worst case of an algorithm’s runtime. For example, let’s take a look at the following code.

```js
var items = ["hat", "pants", "tshirt", "shorts"];

function itemExists(itemWeAreLookingFor, items) {
  for (var i = 0; i < items.length; i++) {
    var item = items[i];
    if (item === itemWeAreLookingFor) {
      return true;
    }
  }
  return false;
}

var hatExists = itemExists("hat", items); // true
```

In the code above, the worst case is when we are looking for “shorts”, or for an item that does not exist at all. In that case, the loop executes four times, once for every item in the list. This is why the code above has a time complexity of O(n). We don’t factor in the best case, which is when we are looking for “hat”; in that case, the loop only executes once.

## What is Big O Time Complexity?

Image credit: Time complexity graph made by Yaacov Apelbaum, apelbaum.wordpress.com.

### Explanation to the Seven Year Old

Let’s say I am thinking of 10 different numbers. If you want to find the largest number out of the 10 numbers, you will have to look at all ten numbers right?

Time complexity simply measures how much work you have to do, when the number of items that you have to work with increases.

### In a nutshell

Time complexity simply describes the rate of growth of an algorithm’s runtime as the input grows. The simplest example is constant time, O(1). Even as the input size grows, an operation that takes constant time always takes the same amount of time.

Now, with a small amount of input, this won’t matter nearly as much (as you can see in the graph above). However, with larger data sets, the time complexity of the algorithm starts to have a much greater impact.

Think of Facebook, for example. Facebook has more than 1.9 billion users. Imagine if their user search algorithm ran in exponential time. No matter how powerful their servers are, searching for a user at that time complexity would most likely bring them down.

You see just how important fast and efficient algorithms are, right?
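To make those growth rates concrete, here is a small sketch (my own illustration, not part of the original article) that counts roughly how many steps each complexity class needs for an input of size n:

```javascript
// Rough step counts for each complexity class at input size n.
// These are illustrative orders of growth, not exact runtimes.
function steps(n) {
  return {
    constant: 1,                          // O(1)
    logarithmic: Math.ceil(Math.log2(n)), // O(log n)
    linear: n,                            // O(n)
    quadratic: n * n,                     // O(n^2)
    exponential: Math.pow(2, n)           // O(2^n)
  };
}

console.log(steps(10)); // exponential already needs 1024 steps
console.log(steps(30)); // exponential needs over a billion steps
```

At n = 10 the differences already show; by n = 30, the exponential column needs over a billion steps while the linear column needs only thirty.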

## Constant Time O(1)

### Explanation to the Seven Year Old

Hey, can you grab a soft drink for me from the fridge? Any drink would do. Just give me the drink now, okay?

### In a nutshell

If an operation takes the same amount of time no matter how much data we are dealing with, the operation is said to take constant time, O(1).

### Code

```js
var drinks = ["coke", "sprite", "fanta", "Dr. Pepper"];

// Getting an item by its index takes O(1) constant time
var fanta = drinks[2];
```

## Linear Time O(N)

### Explanation to the Seven Year Old

I placed 10 cans of soft drinks inside of the fridge. Only one of them is your favorite Coca-Cola.

I lined the drinks up in a single file. Take one out and check the drink. If it is not coke, you will have to keep searching that line of drinks.

If you have bad luck, you might have to go through 9 cans of soft drinks before you find the Coca-Cola.

### In a nutshell

Linear search takes linear time because you start at the front of the line and check each item, one by one, until you find the item you are looking for.

If an operation is performed n times, the Big O of that algorithm is O(n), AKA linear time. The most common and simplest example is your classic for loop, in which a certain operation is repeated n times.

### Code

```js
var drinks = ["coke", "sprite", "fanta", "Dr. Pepper"];

// Return true if the drink is in the list. Otherwise, return false.
function drinkExists(drinkWeAreLookingFor, drinkList) {
  for (var i = 0; i < drinkList.length; i++) {
    var drink = drinkList[i];
    if (drink === drinkWeAreLookingFor) {
      return true;
    }
  }
  return false;
}

var drinkIsInFridge = drinkExists("sprite", drinks); // true
```

## Logarithmic Time O(log N)

### Explanation to the Seven Year Old

Let’s play a game. I am thinking of a letter in the alphabet.

After you take a guess, I will let you know if your guess is right or wrong. If it is wrong, I will tell you whether the letter I am thinking of comes before or after your guess in the alphabet.

Keep going until you guess the letter that I am thinking of. If you can get it in under 4 guesses, I will give you some chocolate. Deal?

### In a nutshell

With each iteration over the data set we are working with, we use a divide and conquer strategy. Perfect examples of algorithms that use this approach are binary search and merge sort.

For example, in the code snippet below, we guess a letter of the alphabet.

If the letter we are looking for comes later in the alphabet than our guess, we discard all of the letters at or before our guess. Hence, with each guess we discard half of our data, and we repeat this operation until we find the answer.

### Code

```js
var abcArray = "abcdefghijklmnopqrstuvwxyz".split("");
var answer = "q";

// Each guess discards the half of the remaining letters that cannot
// contain the answer, so the search takes O(log n) time.
function guessLetter(alphabet, guess, answer) {
  var guessIndex = alphabet.indexOf(guess);
  if (guess === answer) {
    return [guess];
  } else if (answer < guess) {
    // The answer comes earlier in the alphabet:
    // discard the guess and everything after it.
    return alphabet.slice(0, guessIndex);
  } else {
    // The answer comes later in the alphabet:
    // discard the guess and everything before it.
    return alphabet.slice(guessIndex + 1);
  }
}

// ["a", "b", ..., "z"]
var guessOne = guessLetter(abcArray, "g", answer);
// ["h", "i", ..., "z"]
var guessTwo = guessLetter(guessOne, "s", answer);
// ["h", "i", ..., "r"]
var guessThree = guessLetter(guessTwo, "p", answer);

console.log(guessThree); // ["q", "r"]
```
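The guessing game above is really binary search in disguise. Here is a minimal sketch of the standard algorithm over a sorted array (my own illustration; the names `binarySearch`, `low`, `high`, and `mid` are not from the article):

```javascript
// Binary search over a sorted array: each comparison discards half of
// the remaining candidates, so at most about log2(n) comparisons are needed.
function binarySearch(sortedItems, target) {
  var low = 0;
  var high = sortedItems.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (sortedItems[mid] === target) {
      return mid; // found: return the index
    } else if (sortedItems[mid] < target) {
      low = mid + 1; // target is in the upper half
    } else {
      high = mid - 1; // target is in the lower half
    }
  }
  return -1; // not found
}

var alphabet = "abcdefghijklmnopqrstuvwxyz".split("");
console.log(binarySearch(alphabet, "q")); // 16
```

Note that binary search only works on data that is already sorted; that is what lets each comparison safely throw away half of the candidates.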

## Quadratic Time O(n^2)

### Explanation to the Seven Year Old

Okay son, let’s play a game. Since you now know your ABCs, let’s see whether or not you can put these letters in order. I have laid all of the ABC letters out on the mat.

I want you to put them in the right (alphabetical) order okay?

Once you find A, I want you to place A at the front. What comes after A? B right? Find B and place it right after A.

What comes after B? Okay good, you are right, it is C. Find C and place it right after B. Now, repeat that until you have all the characters in the order of that ABC song that you heard at school.

### In a nutshell

I only have three words: nested for loops. Remember that in the linear time example, in the worst case, the code executes n times, where n is the amount of data processed. Well, in the O(n^2) case, the worst case executes n * n times. In order to execute a program n * n times, we nest one for loop inside another, like in the example below.

Examples of algorithms that take quadratic time include the simple sorting algorithms, such as selection sort, insertion sort, and bubble sort.

### Code

Credit to Anukur Agarwal at codingmiles for the selection sort implementation.

```js
function selectionSort(items) {
  var length = items.length;
  for (var i = 0; i < length - 1; i++) {
    // Number of passes
    var min = i; // min holds the position of the current minimum for this pass
    for (var j = i + 1; j < length; j++) {
      // Note that j = i + 1, as we only need to go through the unsorted part
      if (items[j] < items[min]) {
        // Compare the numbers
        min = j; // Update the current minimum's position if a smaller number is found
      }
    }
    if (min != i) {
      // After each pass, if the minimum's position changed, swap the numbers
      var tmp = items[i];
      items[i] = items[min];
      items[min] = tmp;
    }
  }
}
```

## Hey, what about Exponential or Factorial Time Complexity?

Once you understand the gist of the aforementioned cases, it is simply a matter of applying the same concepts to the other time complexities.

For example, what is quadratic time, O(n^2)? It is simply a single nested for loop.
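As a quick taste of exponential time, here is a sketch (my own example, not covered above) of the classic naive recursive Fibonacci. Each call spawns two more calls, so the number of calls grows roughly like 2^n:

```javascript
// Naive recursive Fibonacci: fib(n) calls itself twice, so the call
// count roughly doubles with each increase in n — exponential time.
function fib(n) {
  if (n < 2) {
    return n;
  }
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55 — still fast at this size
// fib(50) would take ages, because the call count blows up like 2^n
```

The same applies to factorial time, O(n!): an algorithm that tries every possible ordering of n items, like brute-forcing every permutation, grows even faster than exponential.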

If this article helped, please share with other people. If it helps them out, they will thank you for it :).

Eager for more, huh? Hopefully after reading this article, the big O seems more palatable now.

Here are some recommended online readings to further bolster your knowledge.

Reinforcing what you learned in this post is also a good idea if you want to truly master the concept of Big O.