The binary search tree is a fundamental data structure in computer science. Like all data structures and algorithms, it is built on specific rules and assumptions. These assumptions enable us to address the shortcomings of earlier data structures such as the linked list and improve search time. In this article, we will explore the binary search tree data structure and some of the searching algorithms used with it.

Please take note that this is part one of a series of posts on the binary search tree. The implementation details, along with a detailed guide on how to implement the binary search tree, will be provided in part two.

It is because of these assumptions that we are able to get such high performance from certain data structures: assumptions let us write code for a specific situation. For example, if we only had forks to eat with, it would definitely be inconvenient to eat rice; the right tool matters. As the name suggests, the binary search tree enables us to **search** for items much more quickly. Without it, we might have to fall back to a brute-force search, which has a time complexity of O(n). With a binary search tree, it is possible to reduce the search time to O(log n). In the worst case, however, the search time degrades to the same O(n) as a brute-force search.

Now that we understand the need for assumptions, we will examine the key features.

- Each piece of data that is part of the tree is referred to as a **node**.
- The **root node** is the first element inserted, or the one that sits at the top of the tree.
- Each node has at most two children.
- Nodes at the bottom of the tree without any children are called **leaf nodes**.
- The left child has a key (or value) that is less than its parent node's.
- The right child has a key (or value) that is greater than its parent node's.
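These features can be sketched in code. Below is a minimal, hypothetical node structure; the names are illustrative only and are not the implementation that will appear in part two.

```javascript
// A minimal sketch of a binary search tree node. Names are illustrative
// and not the implementation from part two of this series.
class Node {
  constructor(data) {
    this.data = data;   // the key (or value) stored in this node
    this.left = null;   // left child: holds a key less than this.data
    this.right = null;  // right child: holds a key greater than this.data
  }
}

// The first node inserted becomes the root; a node with no children
// (like the two below) is a leaf node.
const root = new Node(3);
root.left = new Node(2);   // 2 < 3, so it sits on the left
root.right = new Node(4);  // 4 > 3, so it sits on the right
```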

Our aim is to develop a strong understanding of how a binary search tree works, along with its pros and cons. Before proceeding, readers should have a solid understanding of the linked list data structure; I recommend reading through my article on linked lists first. Please note that, due to requests, the binary search tree in this post will be written in JavaScript. However, on demand, I am more than happy to write implementation details in Java or C++ as well.

First of all, both the binary search tree and the linked list contain pointers to subsequent nodes. The difference between the two is that the tree branches out in two separate directions. This is what enables us to achieve the fast O(log n) search.

The binary search tree compares the item we are looking for with the current node. Based on the result of the comparison, it chooses which direction to traverse. This way, it is able to discard a large portion of the data that (thanks to the assumptions) cannot be what we are looking for.
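That comparison-driven walk can be sketched as a simple lookup. This is a hedged sketch assuming a plain node shape of `{ data, left, right }`, not the final implementation from part two:

```javascript
// Sketch of a binary search tree lookup. At each node we compare the
// target with the current key and discard the sub-tree that cannot
// contain it. The node shape { data, left, right } is an assumption.
function search(root, target) {
  let current = root;
  while (current !== null) {
    if (target === current.data) {
      return true;                 // found the target
    }
    // Smaller targets can only live on the left, larger on the right.
    current = target < current.data ? current.left : current.right;
  }
  return false;                    // fell off the tree: not present
}

// Tiny hand-built tree:   3
//                        / \
//                       2   4
const tree = {
  data: 3,
  left: { data: 2, left: null, right: null },
  right: { data: 4, left: null, right: null },
};
```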

```
binarySearchTree
.add(3)
.add(2)
.add(4)
.add(5)
.add(1)
.add(7)
.add(100);
```

In the code snippet above, we have added seven elements to the binary search tree. The first number (three) is added as the root node without the need for a comparison. When we add two, however, we compare the data to insert (2) with the root node.

Since the data to insert is less than three, we will traverse to the left and insert 2 as the left child node of the root node. Likewise, when adding four, we will add it as the right child of three since 4 > 3.

When adding five, since it is greater than the root node three, we will go to the right. Since it is also greater than four and the current node has no children, we will add five as the right child. And so on and so on …
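The insertion walk described above can be sketched as a recursive helper. The names `makeNode` and `insert` are my own for illustration; the actual `add` method from the snippet arrives in part two.

```javascript
// Hypothetical sketch of the insertion walk described above; not the
// `add` implementation from part two.
function makeNode(data) {
  return { data, left: null, right: null };
}

function insert(root, data) {
  if (root === null) return makeNode(data); // empty spot: insert here
  if (data < root.data) {
    root.left = insert(root.left, data);    // smaller keys go left
  } else if (data > root.data) {
    root.right = insert(root.right, data);  // larger keys go right
  }
  return root; // duplicates are ignored in this sketch
}

// Reproduces the snippet above: 3 becomes the root, 2 goes left,
// 4 goes right, 5 becomes the right child of 4, and so on.
let root = null;
for (const value of [3, 2, 4, 5, 1, 7, 100]) {
  root = insert(root, value);
}
```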

The image above is what the binary search tree created in the code snippet would look like. By examining the image, you might already get a general idea of what the limitation of the binary search tree is.

The binary search tree can be a very powerful tool when used in the appropriate setting. Below are some of the benefits of a binary search tree.

- **Memory efficient**: binary search trees do not use up more memory than needed.
- **Efficient range searches** (e.g. searching for all values between 1000 and 3000): on each traversal step, a binary search discards an entire sub-tree's worth of values, which enables quick and efficient searches. For example, using the tree above, if we are looking for the number 1, we first ask: is 1 less than the root node 3? The answer is yes, so we can discard the whole right sub-tree (which contains the values greater than 3, i.e. 4, 5, 7 and 100).
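To illustrate the range search, here is a hedged sketch. The `rangeSearch` helper (and the `insert` scaffolding it relies on) is hypothetical: it collects every key within `[low, high]` and prunes any sub-tree that cannot contain a match.

```javascript
// Hypothetical sketch of a BST range search with sub-tree pruning.
function makeNode(data) {
  return { data, left: null, right: null };
}

function insert(root, data) {
  if (root === null) return makeNode(data);
  if (data < root.data) root.left = insert(root.left, data);
  else if (data > root.data) root.right = insert(root.right, data);
  return root;
}

function rangeSearch(node, low, high, result = []) {
  if (node === null) return result;
  // Only descend left if a left-side key could still be >= low.
  if (node.data > low) rangeSearch(node.left, low, high, result);
  if (node.data >= low && node.data <= high) result.push(node.data);
  // Only descend right if a right-side key could still be <= high.
  if (node.data < high) rangeSearch(node.right, low, high, result);
  return result;
}

// Build the example tree; searching [2, 7] prunes 100's sub-tree.
let root = null;
[3, 2, 4, 5, 1, 7, 100].forEach((v) => { root = insert(root, v); });
```

Because the left sub-tree is always visited before the node and the right sub-tree, the matches come out already sorted.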

Like most data structures, the binary search tree's use cases are situational; in data structures and algorithms, there is no such thing as one size fits all. In this section, we will discuss some of the common pitfalls of the binary search tree. This will also serve as a brief introduction to how other data structures came to be.

You may have already gotten the gist from the first image. However, for the sake of clarity, I am just going to say it: when the tree is unbalanced, the search time becomes linear. Before I explain why, let's take a look at a balanced search tree.

First of all, let's start off by asking the following questions: what is a balanced tree, and how can we determine whether a tree is balanced?

One of the most common methods is to check whether the heights of the left and right sub-trees differ by more than one. The tree in the example above has a height of three (from the root). Let us go back to the unbalanced tree (the first image) and take a look at node **four**. Its left sub-tree has a height of **zero** (since it has no left child). On the other hand, its right sub-tree has a height of three (the chain 5, 7 and 100). Since the height difference of three exceeds the maximum of one, the tree is unbalanced. This leads us to a somewhat interesting fact.
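That height check can be sketched as follows. The helpers are hypothetical, and height here counts nodes on the longest downward path, matching the convention used in this section (an empty sub-tree has height zero):

```javascript
// Hypothetical sketch of the balance check described above.
function makeNode(data) {
  return { data, left: null, right: null };
}

function insert(root, data) {
  if (root === null) return makeNode(data);
  if (data < root.data) root.left = insert(root.left, data);
  else if (data > root.data) root.right = insert(root.right, data);
  return root;
}

// Height = number of nodes on the longest downward path.
function height(node) {
  if (node === null) return 0;
  return 1 + Math.max(height(node.left), height(node.right));
}

// Balanced if, at every node, sub-tree heights differ by at most one.
function isBalanced(node) {
  if (node === null) return true;
  return Math.abs(height(node.left) - height(node.right)) <= 1
    && isBalanced(node.left)
    && isBalanced(node.right);
}

// The tree from the first code snippet is unbalanced: at node 4, the
// left height is 0 while the right height (5 -> 7 -> 100) is 3.
let root = null;
[3, 2, 4, 5, 1, 7, 100].forEach((v) => { root = insert(root, v); });
```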

A picture here is worth a thousand words. Let me demonstrate this fact with a visual illustration.

Readers might be familiar with tree traversal algorithms. By definition, a tree traversal visits each node in a tree **exactly once**, in a specific order. There are two common strategies: **depth-first** and **breadth-first** traversal. Note that these strategies are used to traverse graphs in general; for now, just know that trees are a type of graph. I will be covering graphs in a future post. To keep this article from getting jam-packed, I will be covering depth-first search only; breadth-first search techniques will come in the future. However, if readers strongly insist, I will update this post to include breadth-first search, along with its implementation in our final binary search tree, which will be available in the second part of this series.

In this post, we will first be covering **depth-first search**. Using depth-first search, we attempt to go as deep into the tree as possible before moving horizontally. The way I learnt depth-first search quickly was by simplifying the terms. Below are the key terms commonly mentioned in depth-first search.

- **Left**: refers to the left sub-tree. For example, in the balanced-tree example, the left sub-tree contains the values 5, 2 and 8. The sub-tree rooted at 5 in turn has a left sub-tree rooted at 2, which, as we see in the example, is a leaf node with zero children.
- **Root**: refers to the root of the current sub-tree. For example, in the balanced-tree example, the root of the first right sub-tree is 23; its left and right children are 13 and 25 respectively.
- **Right**: refers to the right sub-tree. Examples are analogous to those given for **left**.

In the depth-first search algorithm, there are three main traversal methods: in-order, pre-order and post-order. In the upcoming sections, we will discuss how each of these methods is implemented and explore its inner mechanics. In each example, we will refer to the same set of sample data.

The in-order traversal is aptly named, because it traverses the tree in order from least to greatest. In other words, the results come out sorted in ascending order, e.g. 1, 2, 3, 4, 7, 9 and so on. Below is a description of how the in-order traversal works. Until all nodes are traversed:

- Recursively traverse the **left** sub-tree.
- Visit the **root** node.
- Recursively traverse the **right** sub-tree.

**Tip from Jay**: in-order traversal is simply **Left, Root, Right**. Just remember this and you are good. Why? The left is the lowest element, the root is the middle and the right is the greatest, so logically we are going from lowest to greatest. I hope that makes sense. For more clarification, please check out the implementation details below. Take note that a recursive approach was chosen to improve the readability of the code.

```
/**
 * In-order traversal implementation details. For the sake of clarity,
 * I have decided to go with a recursive approach.
 * @param node The current node we are at.
 * @param fn Accepts a function. First argument is the data, second is the node object.
 */
function inOrderTraversalImpl(node, fn) {
  // Visit the left subtree first
  if (node.getLeftChild() != null) {
    inOrderTraversalImpl(node.getLeftChild(), fn);
  }
  // Apply the function to the current node
  fn(node.data, node);
  // Then visit the right subtree
  if (node.getRightChild() != null) {
    inOrderTraversalImpl(node.getRightChild(), fn);
  }
}
```
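As a quick sanity check, here is the traversal above run against the tree from the earlier snippet. The `makeNode` and `insert` helpers are hypothetical scaffolding that supply the `getLeftChild`/`getRightChild` accessors the traversal expects; the visited values come out in ascending order.

```javascript
// Hypothetical scaffolding: a node with the accessors the in-order
// traversal expects (not the part-2 implementation).
function makeNode(data) {
  return {
    data,
    left: null,
    right: null,
    getLeftChild() { return this.left; },
    getRightChild() { return this.right; },
  };
}

function insert(root, data) {
  if (root === null) return makeNode(data);
  if (data < root.data) root.left = insert(root.left, data);
  else if (data > root.data) root.right = insert(root.right, data);
  return root;
}

// Same in-order traversal as above: Left, Root, Right.
function inOrderTraversalImpl(node, fn) {
  if (node.getLeftChild() != null) inOrderTraversalImpl(node.getLeftChild(), fn);
  fn(node.data, node);
  if (node.getRightChild() != null) inOrderTraversalImpl(node.getRightChild(), fn);
}

let root = null;
[3, 2, 4, 5, 1, 7, 100].forEach((v) => { root = insert(root, v); });

const visited = [];
inOrderTraversalImpl(root, (data) => visited.push(data));
// visited is now sorted ascending: [1, 2, 3, 4, 5, 7, 100]
```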

Unfortunately, for pre-order traversal, I don’t have a nifty trick to help you memorize its implementation. In pre-order, we visit the root first, then recursively traverse the left sub-tree, and then the right. For the sample data diagram, the result would be as follows.

6 –> 3 –> 1 –> 5 –> 10 –> 7 –> 15 –> 100

The steps listed below are in logical sequence. For these steps, we are assuming that the user decided to `console.log()` the results in the callback function.

- Start at root node 6. Log it.
- Go left to the sub-tree with node 3 as the root. Log it.
- Go left again to the sub-tree with node 1 as the root. Log it.
- Attempt to go left: no left child, so attempt to go right. No right child either, so the last recursive call returns and is popped off the call stack.
- We are back at the node with data equal to 3. This time, go right.
- We are at the sub-tree with node 5 as the root. Log it, then repeat the process at step 4, since 5 is also a leaf node.
- We are now back at the root of the binary search tree, node 6. This time, go right.
- At the sub-tree with node 10 as the root. Log it.
- Go left. It is a leaf node with a value of 7. Log it.
- We are back at node 10. Go right.
- Arrive at 15. Log it. No left child, so go right.
- Arrive at 100. Log it. It is a leaf node, so the traversal is complete.

The steps above may confuse you at first. My advice: run the code in the Chrome developer tools, and be sure to watch **node.data** to keep track of the actual values.
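The article shows implementations for in-order and post-order traversal; for completeness, here is a sketch of the pre-order variant following the same pattern (Root, Left, Right). The `makeNode`/`insert` helpers are hypothetical scaffolding providing the same accessors as before.

```javascript
// Hypothetical scaffolding for a node with getLeftChild/getRightChild.
function makeNode(data) {
  return {
    data,
    left: null,
    right: null,
    getLeftChild() { return this.left; },
    getRightChild() { return this.right; },
  };
}

function insert(root, data) {
  if (root === null) return makeNode(data);
  if (data < root.data) root.left = insert(root.left, data);
  else if (data > root.data) root.right = insert(root.right, data);
  return root;
}

// Sketch of a pre-order traversal: Root, Left, Right.
function preOrderTraversalImpl(node, fn) {
  // Visit the root first ...
  fn(node.data, node);
  // ... then the left subtree ...
  if (node.getLeftChild() != null) {
    preOrderTraversalImpl(node.getLeftChild(), fn);
  }
  // ... and finally the right subtree.
  if (node.getRightChild() != null) {
    preOrderTraversalImpl(node.getRightChild(), fn);
  }
}

// Build the sample data tree and collect the pre-order sequence.
let root = null;
[6, 3, 1, 5, 10, 7, 15, 100].forEach((v) => { root = insert(root, v); });
const visited = [];
preOrderTraversalImpl(root, (data) => visited.push(data));
// visited: 6 -> 3 -> 1 -> 5 -> 10 -> 7 -> 15 -> 100
```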

You probably know the drill by now: this time, we recursively traverse the left sub-tree, then recursively traverse the right sub-tree, and finally visit the root node. Once again, I will print out the results below and write out the steps taken.

1 –> 5 –> 3 –> 7 –> 100 –> 15 –> 10 –> 6

- Recursively traverse the left sub-tree. We cannot go left any further than 1, and cannot go right from there either, so log 1.
- Return to 3 and recursively visit its right sub-tree. 5 is a leaf node, so log it.
- Having now recursively visited both sub-trees of 3, log 3.
- Return to 6 and recursively visit its right sub-tree.
- Visit 10 and recursively visit its left sub-tree. 7 is a leaf node, so log it.
- Return to 10 and recursively visit its right sub-tree. Arrive at 15. It has no left child, so recursively visit its right sub-tree.
- Arrive at 100. It is a leaf node, so log it.
- Return to 15. Since we have visited both of its sub-trees, log it.
- Return to 10. Likewise, both of its sub-trees have been visited, so log it.
- Finally, return to root node 6 and log it for the same reason.

Hopefully, by now, these implementation details and patterns are starting to become more familiar to you.

```
function postOrderTraversalImpl(node, fn) {
  // Visit the left subtree first
  if (node.getLeftChild() != null) {
    postOrderTraversalImpl(node.getLeftChild(), fn);
  }
  // Then visit the right subtree
  if (node.getRightChild() != null) {
    postOrderTraversalImpl(node.getRightChild(), fn);
  }
  // Apply the function to the node
  fn(node.data, node);
}
```

Once again, the source is available on GitHub.

This was a relatively long post, so thank you for taking the time to read through it. Once again, please leave a comment if this article helped you.

After reading this post, you should be ready to start implementing your own binary search tree. Grab a nice cup of coffee as a reward, and feel free to jump right into it.

I am a programmer currently living in Seoul, South Korea. I created this blog as an outlet to express what I know and have been learning in text form, both to retain knowledge and hopefully to help the wider community. I am passionate about data structures and algorithms. The back-end and databases are where my heart is.
