Master the AVL Tree Implementation in Java – Detailed Guide

Before proceeding, be warned: implementing an AVL tree in Java is fairly challenging. It requires a strong working knowledge of the Java programming language. Furthermore, I also recommend that readers have an understanding of the binary search tree.

For the sake of technical accuracy, we are now going to refer to the data node values as keys, or refer to them simply by their numeric value, e.g. 7. In past tutorials, I called them node values, but from now on, we will use the proper term: keys.

In this post, rather than using JavaScript, we will stick to the class-based object-oriented approach and implement the AVL tree in Java. The AVL tree is named after the surnames of its inventors, Adelson-Velsky and Landis.

We will also be writing out pseudo code for each of the operations. Pseudo code is a high-level description of an algorithm or a computer program. Pseudo code is generally written to be read by other human beings. Therefore, pseudo code enables us to abstract out the low-level operation, and focus purely on the logic.

As a software engineer, writing pseudo code can help you organize your thoughts and extract it into written form. Thus, it opens up a pathway to share and communicate your thought process and ideas to your team.

1. Why AVL Tree?

The AVL tree is a self-balancing binary search tree. In a traditional binary search tree (BST), the worst-case search time degrades to O(n), identical to that of a linked list, when the tree becomes skewed. However, because the AVL tree balances itself by making rotations whenever it becomes unbalanced, O(log n) search time is guaranteed, thus making the AVL tree very consistent in terms of performance.

Readers will get a good workout in solving problems and breaking a big problem down into smaller tasks.

The AVL Tree is a balanced data structure, ensuring O (log n) time complexity for basic operations such as insert, remove and search.

Self-balancing trees such as the AVL Tree and Red-black trees are used heavily in operating systems. It is a highly optimized data structure. Therefore, knowing how it works, and being able to implement one will help readers solve other complex problems that they come across.

2. Performance Ratings

Because it is self-balancing, the performance of the AVL tree is generally more consistent than that of a plain BST. As the table below shows, the worst-case performance for the major operations is O (log n). Therefore, the AVL tree is a very consistent data structure, which performs especially well when used to look up data. However, because it enforces a stricter balance condition than the red-black tree, it performs balance checks and rotations more frequently, so its performance suffers when data is inserted or removed on a regular basis. For your information, the red-black tree is another self-balancing binary search tree.

Operation          Average       Worst Case
Space              O (n)         O (n)
Search/Traverse    O (log n)     O (log n)
Insert             O (log n)     O (log n)
Remove             O (log n)     O (log n)

3. What is the AVL Tree Useful for?

The next question readers might be pondering is: when should I use AVL trees? AVL trees are ideal in cases where searches are performed frequently and insertion/deletion operations rarely. This is because AVL trees require more rotations than other self-balancing tree data structures such as the red-black tree.

Sorting can also be implemented with the AVL tree. However, despite having an O (n log n) time complexity, it is generally outperformed by other sorting algorithms such as quick sort or merge sort. This is because the AVL tree has more memory overhead, which is why we prefer the aforementioned sorting algorithms over an AVL sort.

Lookup tables can also be implemented with AVL trees. Because AVL trees keep their keys in sorted order, they support operations such as ordered traversal, providing more options than the hash table data structure.

Bottom line: AVL trees are useful for lookup, or searching for data in an existing collection.

4. Determining Whether the Tree is Balanced

A tree is balanced if the height difference between the left and right sub-tree is not greater than one.

The formula for calculating the height is as follows.

height = max (leftSubTreeHeight, rightSubTreeHeight) + 1

Very simple, right? The height of a tree is simply the height of its left or right sub-tree (the greater of the two) plus one. On each data-manipulating operation (such as insertion and deletion), the AVL tree checks itself to see if it is balanced. To the math nerds, the balance condition will look something like the following.

| leftSubTreeHeight - rightSubTreeHeight | <= 1

We are able to calculate the height of a sub tree recursively. Below, I have attached an example of both a balanced and unbalanced tree.

For example, the node with key value 6 in the example below has a height parameter of three. If the example does not make sense, don’t worry: it will slowly begin to make sense as we go through examples.

AVL tree balanced example

On detecting an imbalance, the AVL tree will perform a balancing operation. Why is this balancing operation necessary? The balancing operation is what makes the AVL tree special. By preserving the balance of the tree, we are able to achieve a consistent O (log n) time complexity on all the major operations, such as the insert, traverse and delete methods.

AVL tree unbalanced example

The -1 marked above is the height parameter value. Consider -1 the height of a null pointer: wherever a left or right child is missing, we treat that missing sub-tree as having a height of -1. A node with a height parameter of 0 is a leaf; both of its children are null. The best way to understand it is that, in AVL tree height calculation, we start counting from minus one.
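To make the height and balance rules concrete, here is a minimal, self-contained sketch in Java. The class and field names (Node, left, right) are illustrative only; note how a null sub-tree contributes a height of -1, so a leaf ends up with height 0.

```java
// A bare-bones node; the names here are an assumption for illustration,
// not the Node class used later in this guide.
class Node {
    Node left, right;
    int key;
    Node(int key) { this.key = key; }
}

public class BalanceCheck {
    // A null sub-tree has a height of -1, so a leaf gets max(-1, -1) + 1 = 0.
    static int height(Node node) {
        if (node == null) return -1;
        return Math.max(height(node.left), height(node.right)) + 1;
    }

    // Balanced means |leftSubTreeHeight - rightSubTreeHeight| <= 1 at every node.
    static boolean isBalanced(Node node) {
        if (node == null) return true;
        return Math.abs(height(node.left) - height(node.right)) <= 1
                && isBalanced(node.left) && isBalanced(node.right);
    }

    public static void main(String[] args) {
        // Build the unbalanced chain 6 -> 4 -> 2 from the diagrams.
        Node root = new Node(6);
        root.left = new Node(4);
        root.left.left = new Node(2);
        System.out.println(height(root));     // prints 2
        System.out.println(isBalanced(root)); // prints false
    }
}
```

Running this on the chain 6 → 4 → 2 reports a height of 2 and an imbalance, exactly the situation the rotations in the upcoming sections will repair.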

5. Differences Between AVL Tree and BST

The only difference between the AVL tree and the binary search tree is that the AVL tree balances itself after each operation that manipulates data.

As explained in previous sections, the re-balancing process is good because it guarantees logarithmic time complexity for all the major operations performed. Now that we have established this fact, we will be going into the actual implementation details.

6. Tree Balancing Algorithm

When balancing a tree, we also have to be mindful of the fact that a rotation in one sub-tree could cause an ancestor to become unbalanced. Therefore, after each insertion or deletion, we need to check every node on the path back up to the root and rebalance where necessary. The implementation details for the tree balancing algorithm are fairly challenging. There are four possible rotation cases.

  1. Left rotation
  2. Right rotation
  3. Left right rotation
  4. Right left rotation

The four possible rotation cases for the AVL Tree

If we only need to make one rotation, it can be done in O(1) or constant time. However, there are situations where we will be making multiple rotations. This will take logarithmic time complexity O(log n). Let us examine each of the cases in detail. We need to understand the theory before jumping into writing the code.

Important: We will assign a height parameter of negative one if the left/right sub-tree does not exist. To illustrate this concept, I have added an example below.

7. Exploring Left Rotation

First of all, let us look at a visual diagram that illustrates how the left rotation works. The reason why we call it a left rotation is that we make the current root the left sub-tree of its right child. The left rotation is made when the right sub-tree has a greater height than the left sub-tree.

The first time around, the operations might not make sense. However, do press on. Utilize the diagrams to wrap your brain around how these operations work. Once you figure it out, you will realize that these operations are not as complicated as you thought.

For the sake of clarity, we will be writing pseudo code. Writing pseudo code is another nifty technique to write out and express your idea as a sequence of logical operations.

Example of AVL tree implementation java left rotation

7.1 Pseudo Code and Logic Review

In the example below, think of currentNode as the node with key 6, the current root node. In the first line, we are setting newRootNode to 8. Afterwards, we are getting the left child of 8, which is null. After the rotation, newRootNode (8) will become the new root.

Finally, we will update the references. First, we will set 6 as the left child of 8. The new root node, 8, already has its right child pointer pointing at 9, so we don't have to make any changes there. The last rotational adjustment before updating the heights is to set the right child of the former root (6) to the former left child of the new root (which in this example is null).

What would happen if 8 had a left child of 7? After updating the references, 8's left child pointer would point at the former root, 6. If we did not reattach 7 somewhere, no reference would point at it anymore, so 7 would be garbage collected. That would result in an unintended removal of a value from the tree, which could spell disaster. That is why we have the following line in the pseudo code below.

currentNode.setRightChild(leftChildOfRight);        

// The root node with the value of 6 (in the example above) will be passed into rotateLeft
Begin rotateLeft(Node currentNode)
    // Store node 8 (the right child) as the new root.
    Node newRootNode = currentNode.getRightChild();
    // Store the left child of the new root (a null pointer in this example) in a temporary variable.
    Node leftChildOfRight = newRootNode.getLeftChild();

    // Set the node with key value of 6 as the left child of 8.
    newRootNode.setLeftChild(currentNode);
    // In this example it is a null pointer, but this step is necessary to avoid losing a sub-tree.
    currentNode.setRightChild(leftChildOfRight);

    // Update the height of the sub-trees that were modified, lowest node first.
    currentNode.updateHeight();
    newRootNode.updateHeight();
End rotateLeft

8. Examining Right Rotation

The right rotation works very similarly to the left rotation. If the left sub-tree is taller than the right sub-tree, we need to rotate to the right.

The right rotation of an AVL tree visualized

In the example above, the node with key 6 has a left child of 4, which in turn has another left child with key value 2. After the right rotation, the left child 4 will become the new root. Subsequently, the minimum value in this sub-tree (2) will become the left child, and the previous root will become the right child. Because the balancing operation is performed on every data-manipulating method, the height deviation will never exceed 2.

Please do take note that we are not done after just a single operation. The check must be performed sequentially until we reach the root node of the tree. This is to perform additional balancing operations in case our current operation threw off the balance of other sub-trees.

To sum it up, when we have to do a right rotation, we perform the following operation.

  1. Set the left child as the new root.
  2. Set the former right child of the new root as the left child of the previous root. I should receive an award for creating an awesome tongue twister.
  3. Finally, set the previous root as the right child of the new root.

Hopefully by now, handling the right rotation should be fairly clear.

8.1 Pseudo Code and Logic Review

For the sake of this tutorial, we will again be writing pseudo code. Writing pseudo code is another nifty technique to write out and express your idea as a sequence of logical operations. Since the right rotation is very similar to the left rotation, the pseudo code will not contain unnecessary comments.

For the same reason as in the left rotation case, we will be updating the left child of 6 to point to null (the former right child of 4). What if 4 had a right child with a value of 5? Please review the previous section on left rotation if the reason for this step is not clear.

Begin rotateRight(Node currentNode)
    Node newRootNode = currentNode.getLeftChild();
    Node rightChildOfLeft = newRootNode.getRightChild();
    newRootNode.setRightChild(currentNode);
    currentNode.setLeftChild(rightChildOfLeft);
    currentNode.updateHeight();
    newRootNode.updateHeight();
End rotateRight

Finally, we finish by updating the heights of the previous root and the new root (the former left child).

9. The Left-Right Rotation

By now, you get the drill, right? The left-right situation arises when the root node has only a left child, and that left child has only a right child. Naturally, this results in the following imbalance. The balanced sub-tree will look like the tree on the right in the image displayed below.

Please do note however, we do not modify the pointers for the left sub-child of the root node, 4. For example, 4 can have a left child, but we will ignore it. We will explore the reason why very shortly by reviewing an example.

left right rotation before after image

The steps for making the adjustment are as follows. Firstly, make a left rotation on the node with key value 4. You know the drill now, right? 5 will now become the left child of 6, and 4 will become the left child of 5. The result will be as follows.

After making a left rotation on the left-right situation, we are left with the double-left situation

Looks awfully similar to the right rotation case, right? All we need to do here is simply make a right rotation. The result will look like the "after left-right rotation" image, with 5 as the new root, and 4 and 6 as the left and right child respectively.

Remember when I said that we did not need to update the left child reference for four? Via the visual diagram above, can you see why? Yes, because right rotation will not force the left child reference of four to point to a different element. The pointer will continue to point at the same element.

Hopefully, it is clear why this rotation is called the left-right rotation: we first make a left rotation, followed by a right rotation. No rocket science whatsoever.

9.1 Logic Review and Summary

Now that we understand how the left-right rotation works, let's dive straight into the logic. There is nothing much to say, except that we first do the left rotation, followed by the right rotation. For more information on how each of these rotations works, please re-read the previous sections. It is very important that the reader has a firm understanding of how the left and right rotations work before proceeding.

In Summary: Execute the following steps in order

1. Do a left rotation.

2. Afterwards, perform a right rotation.

And we are done. For more information on left and right rotations, review the information in the previous section.
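The two steps above can be sketched in the same pseudo code style used earlier. Note that, unlike the earlier rotation pseudo code, this sketch assumes rotateLeft and rotateRight return the new root of the rotated sub-tree (as the Java implementation later in this guide does).

Begin rotateLeftRight(Node currentNode)
    // Step 1. Left rotation on the left child.
    currentNode.setLeftChild(rotateLeft(currentNode.getLeftChild()))
    // Step 2. Right rotation on the current root.
    return rotateRight(currentNode)
End rotateLeftRight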

10. The Right-Left Rotation

Readers might be sick of this by now. If you are yelling in the back of your mind “Okay, I understand it all … get to the code already!”, feel free to skip this section. It would be quite obvious to state that the right-left rotation is very similar to the left-right rotation. But for the sake of completeness, I just stated the obvious.

Once again, the nodes may have right children, but for the same reasons as the left-right rotation, we do not update the references. The reason is simple: the right rotation does not require us to update the right child of node with key value 8.

Before and after image of the right-left rotation

Yes, once you get the hang of it, this becomes quite repetitive. If it seems so, give yourself a pat on the back. You are well on your way to understanding AVL trees. Firstly, make a right rotation. After the rotation, the tree should look like this.

The tree after the first (right) rotation in the right-left rotation

Afterwards, the only thing left now is to make a left rotation. After each rotation, be sure to update the height parameter of each of the manipulated sub-trees.

Finally, we have gone through all four possible rotation cases in the AVL tree. Get yourself a glass of water. In the upcoming section, we will dive into the AVL tree implementation in Java.

11. AVL Tree Implementation Overview

Despite being similar to the binary search tree, the balancing process adds a layer of complexity to the AVL tree implementation. We will walk through the two data-manipulating operations:

  1. Insertion.
  2. Removal.

I will be assuming that readers are comfortable with binary search trees. For more information, refer to my article on binary search trees. If you have fully understood all the theory discussed in the previous sections, the implementation details should not be beyond you. However, the implementation will require strong knowledge of recursion and a basic understanding of object-oriented programming.

The source code is available on GitHub, so please download it before proceeding. On GitHub, I have uploaded two versions. The first folder contains the completed source code. The second is a bare-bones template. For readers who want a challenge, download the second template and, while reading along, try writing out the implementation details yourself without looking at the code snippets.

Be forewarned, the implementation details may be quite challenging for those that are doing it for the first time. Do not beat yourself up for failing. The problem is meant to be hard and you aren’t the only one in that boat. Take time to read through the theory and also read up on the supplementary reference materials. The more you read and try, the easier it will become.

11.1 Inserting an element into the AVL Tree

We will, at long last, be writing code. When writing the algorithm for calculating the height, it is important to account for null pointers. Remember how we wrote -1 in the diagrams above? In the upcoming section, we will explore exactly why. Let us review the formula once again to refresh our memory.

height = max (leftSubTreeHeight, rightSubTreeHeight) + 1

For example, let us assume that we are attempting to insert the element 7 into the following tree.

Insertion operation on an AVL tree

If you have been following this tutorial carefully, it is clear that this will require a right rotation. Why? Because the sub-tree with root node 9 is unbalanced. After the right rotation, 5 will have a right child of 8. The node with key value of 8 will have left and right child values of 7 and 9 respectively.

Afterwards, we would need to check whether the entire tree is balanced. By just examining the diagram, we can tell that it will be. Nevertheless, we need to implement the logic that carries out these calculations. Firstly, we will start by implementing the calculateTreeHeight() method.

11.2 Implementing the calculateTreeHeight() method

Okay, here we go! Once again, because each sub-tree is a tree in and of itself, we can easily express the calculateTreeHeight() algorithm recursively.

When we do balance checks, we start with the node that we inserted. This is in contrast to all the other operations, which start at the root node. In the example above, first of all, we check whether 7 is balanced. Because it has no children, it is considered balanced. So we go to its parent, 8. The height of 8's right child (null) is -1, and its left child's height is 0. Therefore, 8 is balanced as well. When we reach 9, however, its left and right child heights are 1 and -1 respectively. The absolute height difference is 2, which is greater than 1. Therefore, we need to perform the right rotation.

After the right rotation, 5's right child will now be 8. Its left and right child will be 7 and 9 respectively. We have gone through rotations a gazillion times now, so this should be fairly familiar.

To sum it up, in order to insert an element, we must take the following steps.

  1. Traverse the tree and find the location to place the new node.
  2. Balance the tree by implementing the appropriate rotation.
  3. Update the height of all sub-trees affected by the rotations.

Readers might be asking: “Wait, we haven’t gone through a code example on how to calculate the height”. Don’t worry, it is very simple. We just need to apply the formula mentioned in the previous section.

11.3 calculateTreeHeight() code review

As you can see, the implementation detail is very self-explanatory. We are simply getting the greater height value between the left and right sub-tree and adding one to the result.

/**
 * Get the height of the current node by taking the greater value between the
 * heights of the left and right sub-trees and adding one to it.
 * A null child is treated as having a height of -1, so a leaf node ends up
 * with a height of zero, because -1 + 1 = 0.
 * */
private int calculateTreeHeight(Node<T> currentNode) {
    return Math.max(height(currentNode.getLeftChild()), height(currentNode.getRightChild())) + 1;
}

/**
 * Null-safe height accessor used above: null pointers have a height of -1.
 * */
private int height(Node<T> node) {
    return node == null ? -1 : node.getHeight();
}

11.4 leftRotation() Code Analysis

Since all the theory has been explained in the previous sections, let's jump straight into the code. A simple left rotation involves the actions annotated in the code below. Note that we return the new root node, because the implementation is recursive.

private Node<T> leftRotation(Node<T> currentNode) {

    System.out.println("Beginning left rotation ... on node: " + currentNode.getData());

    Node<T> newRootNode = currentNode.getRightChild();
    Node<T> leftChildOfRight = newRootNode.getLeftChild();

    // Step 1. Set the previous root as the left child of the new root.
    newRootNode.setLeftChild(currentNode);

    // Step 2. Set the former left child of the new root as the right child of the previous root.
    currentNode.setRightChild(leftChildOfRight);

    // Step 3. Update the heights, starting with the previous root, since it is now the lower node.
    currentNode.setHeight(calculateTreeHeight(currentNode));
    newRootNode.setHeight(calculateTreeHeight(newRootNode));

    return newRootNode;
}

11.5 rightRotation() Code Analysis

Essentially the same as the left rotation, so I will just post the code snippet below.

private Node<T> rightRotation(Node<T> currentNode) {
     System.out.println("Beginning right rotation ... on node: "
                         + currentNode.getData());

     Node<T> newRootNode = currentNode.getLeftChild();
     Node<T> rightChildOfLeft = newRootNode.getRightChild();

     // Step 1. Set the previous root as the right child of the new root.
     newRootNode.setRightChild(currentNode);

     // Step 2. Set the former right child of the new root as the left child of the previous root. Quite a tongue twister right?
     currentNode.setLeftChild(rightChildOfLeft);

     // Step 3. Update the heights, starting with the previous root, since it is now the lower node.
     currentNode.setHeight(calculateTreeHeight(currentNode));
     newRootNode.setHeight(calculateTreeHeight(newRootNode));

     return newRootNode;
}

11.6 Detecting the four rotation cases

Now that we have examined the left and right rotation, all we need to do is check for all four possible cases. To review the four possible cases, they are

  1. Left rotation
  2. Right rotation
  3. Left-right rotation
  4. Right-left rotation

In the upcoming section, we will be utilizing the following code. Please familiarize yourself with it before proceeding.

Be warned that the AVL tree implementation relies heavily on recursion. I am going to assume that readers are comfortable with recursive method calls. If not, please read my post on understanding recursion and get comfortable with recursion before proceeding.

/**
 * Check if tree is left heavy based on balance value.
 * Left heavy trees have more items on the left sub-tree than the right.
 * */
private boolean isLeftHeavy(int balanceValue) {
    return balanceValue > 1;
}

/**
 * Check if tree is right heavy based on balance value.
 * Right heavy trees have more items on the right sub-tree than the left.
 * */
private boolean isRightHeavy(int balanceValue) {
    return balanceValue < -1;
}

11.6.1 Left Rotation Case Check – Right Heavy Situation

Example of AVL tree implementation java left rotation

To verify whether we just need to implement a left rotation, we need to check for the following conditions. Firstly, is the current sub-tree right heavy? Secondly, is the data to insert greater than the right child of the current root? If both conditions pass, we have a case where we can balance the tree with a simple left rotation. The code will look like this.

// Right heavy situation - left rotation
if (isRightHeavy(balanceValue)
        && dataToInsert.compareTo(currentNode.getRightChild().getData()) > 0) {
    return leftRotation(currentNode);
}

11.6.2 Right Rotation Case Check – Left Heavy Situation

The right rotation of an AVL tree visualized

Afterwards, if it doesn’t fit the left rotation case, we need to check if the following case can be handled with a right rotation. Firstly, is the current sub-tree left heavy? Secondly, is the data to insert less than the left child of the current root? If both conditions pass, we can handle the current situation with a simple right rotation.

// Left heavy situation - Right rotation
if (isLeftHeavy(balanceValue)
        && dataToInsert.compareTo(currentNode.getLeftChild().getData()) < 0) {
    return rightRotation(currentNode);
}

11.6.3 Left – Right Rotation Case Check

left right rotation before after image

For the left-right rotation case, we need to check the following conditions. Firstly, is the current sub-tree left heavy? Secondly, is the data to insert greater than the left child of the current root? In the example above, if we were to insert the number 5 into a tree with elements 6 and 4, this would trigger the left-right rotation case. The code will look like the following.

// Left-right situation - left rotation on the left child, then a right rotation
if (isLeftHeavy(balanceValue) &&
        dataToInsert.compareTo(currentNode.getLeftChild().getData()) > 0) {
    currentNode.setLeftChild(leftRotation(currentNode.getLeftChild()));
    return rightRotation(currentNode);
}

11.6.4 Right – left Rotation Case Check

The right left rotation of AVL tree.

Finally, on to the last case. By now, I am certain that you are sick of tree balancing rotations. For that reason, in the removal section, I will jump straight into the code without spending so much time explaining the different rotation cases.

If none of the previous conditions are met, this can only mean one of two things. Either the right-left rotation needs to be performed to balance the tree, or the tree is already balanced. In the latter case, we do not need to do anything but simply return the current node.

The right-left rotation will be performed provided that two conditions are met. Firstly, is the current sub-tree right heavy? Secondly, is the data to insert less than the right child of the current root node? Using the example above, if 7 were to be inserted into a tree with elements 6 and 8, this would trigger the right-left rotation case.

// Right-left situation - right rotation on the right child, then a left rotation
if (isRightHeavy(balanceValue) &&
        dataToInsert.compareTo(currentNode.getRightChild().getData()) < 0) {
    currentNode.setRightChild(rightRotation(currentNode.getRightChild()));
    return leftRotation(currentNode);
}

For more information on the insertion method, please refer to the source code posted on GitHub. If the explanation is not adequate or somewhat lacking, please contact me. I will be more than happy to update the information, to make this guide more comprehensive.

11.7 Bringing it all together

Finally, we can take a look at what the insert code looks like. I highly recommend that readers download the source code from GitHub and review the whole implementation before proceeding, because we will be applying similar logic and concepts when implementing the remove() method.

/**
 * Implement recursively to avoid storing a node pointer to the parent.
 * Remember that in the JavaScript implementation of the Binary Search Tree,
 * we required the pointer to the parent.
 *
 * This method returns the root of the (possibly rebalanced) sub-tree.
 *
 * @param currentNode
 * @param dataToInsert
 * */
private Node<T> insertNode(Node<T> currentNode, T dataToInsert) {

    // The current root node is empty. Create a new node here
    if (currentNode == null) {
        return new Node<T>(dataToInsert);
    }

    // Is data to insert smaller than the current key value.
    // Go to the left.
    if (dataToInsert.compareTo(currentNode.getData()) < 0) {
        currentNode.setLeftChild(insertNode(currentNode.getLeftChild(), dataToInsert));
    } else {
        currentNode.setRightChild(insertNode(currentNode.getRightChild(), dataToInsert));
    }

    currentNode = balanceTree(currentNode, dataToInsert);

    // Finally, update the height of the current node.
    currentNode.setHeight(calculateTreeHeight(currentNode));

    return currentNode;
}
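Since the full generic implementation spans several files, here is a condensed, self-contained sketch of the same insert logic with plain int keys. The class and method names are simplified for illustration and are not the ones used in the GitHub repository. It also demonstrates the payoff: inserting the keys 1 through 15 in sorted order would turn a plain BST into a height-14 "linked list", while the AVL rotations keep the height at 3.

```java
// A condensed AVL insert sketch with int keys (illustrative names throughout).
public class AvlSketch {
    static class Node {
        int key, height;          // a leaf has height 0
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static int height(Node n) { return n == null ? -1 : n.height; }

    static void updateHeight(Node n) {
        n.height = Math.max(height(n.left), height(n.right)) + 1;
    }

    // Positive => left heavy, negative => right heavy.
    static int balance(Node n) { return height(n.left) - height(n.right); }

    static Node rotateLeft(Node n) {
        Node newRoot = n.right;
        n.right = newRoot.left;   // reattach the left child of the new root
        newRoot.left = n;
        updateHeight(n);          // n is now the lower node, update it first
        updateHeight(newRoot);
        return newRoot;
    }

    static Node rotateRight(Node n) {
        Node newRoot = n.left;
        n.left = newRoot.right;   // reattach the right child of the new root
        newRoot.right = n;
        updateHeight(n);
        updateHeight(newRoot);
        return newRoot;
    }

    static Node insert(Node n, int key) {
        if (n == null) return new Node(key);
        if (key < n.key) n.left = insert(n.left, key);
        else             n.right = insert(n.right, key);
        updateHeight(n);
        int b = balance(n);
        if (b > 1 && key < n.left.key)   return rotateRight(n);               // left-left
        if (b < -1 && key > n.right.key) return rotateLeft(n);                // right-right
        if (b > 1)  { n.left = rotateLeft(n.left);    return rotateRight(n); } // left-right
        if (b < -1) { n.right = rotateRight(n.right); return rotateLeft(n); }  // right-left
        return n;
    }

    public static void main(String[] args) {
        Node root = null;
        for (int key = 1; key <= 15; key++) root = insert(root, key);
        System.out.println(root.key);    // prints 8
        System.out.println(root.height); // prints 3
    }
}
```

The four if-checks in insert mirror the case checks from sections 11.6.1 through 11.6.4; the sorted insertion of 15 keys ends with a perfectly balanced tree rooted at 8 with height 3.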

11.8 Implementing the remove() method

Implementing the remove() method is, in essence, very similar to the insert operation. We first need to find the node to delete, then remove it. However, unlike with the binary search tree, we need to ensure that the tree remains balanced after the node has been removed.

When removing the node, we come across three possible cases. If you have gone through my binary search tree tutorial, this will sound quite familiar. Number one, the node to delete has no children. The other two cases occur when the node to delete has one or two children respectively. For more information on this process, please refer to the post on the binary search tree implementation. For those that were expecting a Java implementation, unfortunately, due to high demand, the binary search tree tutorial was written in JavaScript. However, the fundamental concepts remain the same.

For the sake of completeness, I will write briefly about removing a node from a tree with one or two children. For those that are not interested, please feel free to skip ahead.

11.8.1 Removing a node with one or two children

The time complexity for the remove() method is O (log n).

When removing a node with a single child, all we need to do is update the references. In the example above in the right-left rotation case, if we were to remove 8, all we need to do is ensure that 6 no longer points at 8. Instead, have it point to 7. Afterwards, we just need to remove all the references from the node with key value of 8.

Using another example, if we were to delete 15 from the example below, all we need to do is update the reference: have the right child pointer of 10 point directly to 12. Dereference 15 so that the node gets garbage collected. Simple, right?

Deleting a node with a single child

In the case where we have two children, after finding the node to remove, we have two options: we look either for the largest item in the left sub-tree or for the smallest item in the right sub-tree. We then swap the value of the node we want to remove with one of those two values. After the swap, all we need to do is remove the node we took that value from. In the example below, to delete 10 we would swap it with 8, since 8 is the largest value in the left sub-tree, and then remove the node that originally held 8, linking 7 back in its place. Essentially, we converted the two-children case into a case where the node to delete has at most a single child.

How do we delete a node with two children in a binary search tree?
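The removal code in the next section relies on a getMaxNode helper to find that largest value in the left sub-tree. In any BST, the largest key of a sub-tree is simply its rightmost node, so a sketch of the helper (with plain int keys and illustrative names) is just a walk to the right:

```java
// Illustrative sketch of the getMaxNode helper used by removeNode.
public class MaxNodeSketch {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // The largest key in a BST sub-tree is always the rightmost node.
    static Node getMaxNode(Node node) {
        Node current = node;
        while (current.right != null) {
            current = current.right;   // keep walking right
        }
        return current;
    }

    public static void main(String[] args) {
        // The deletion example: 10 with a left sub-tree of 7 -> 8.
        Node root = new Node(10);
        root.left = new Node(7);
        root.left.right = new Node(8);
        System.out.println(getMaxNode(root.left).key); // prints 8
    }
}
```

Running this on the example tree returns 8, the value that gets swapped into the node holding 10 during removal.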

11.8.2 remove() Code overview

Enough with the theory. The remove() implementation is very similar to that of the binary search tree, except for adjusting the height parameter and the balancing job that we need to do at the end. Let's take a look at the code.

/**
 * Recursive implementation of removing data from a tree
 * Three cases:
 * 1. No child nodes.
 * 2. Single child.
 * 3. Two children.
 * @return the new root of the sub-tree after removal and rebalancing; null if the sub-tree becomes empty.
 * */
private Node<T> removeNode(Node<T> currentNode, T dataToRemove) {

    // Base case
    if (currentNode == null) {
        return null;
    }

    Node<T> leftChild = currentNode.getLeftChild();
    Node<T> rightChild = currentNode.getRightChild();
    T currentData = currentNode.getData();

    if (dataToRemove.compareTo(currentData) == 0) {

        System.out.println("Found the data that we want to remove: " + currentData);

        if (leftChild == null && rightChild == null) {
            System.out.println("Removing a leaf node");
            return null;
        } else if (leftChild == null) {
            System.out.println("Removing a node with a right child");
            // Bypass the removed node: its right child takes its place
            return rightChild;
        } else if (rightChild == null) {
            System.out.println("Removing a node with a left child");
            // Bypass the removed node: its left child takes its place
            return leftChild;
        } else {
            System.out.println("Removing a node with two children");
            // Find the largest node on the left sub-tree
            Node<T> largestInLeftSubtree = getMaxNode(leftChild);

            // Swap the root node with the largest in left sub-tree
            currentNode.setData(largestInLeftSubtree.getData());
            // Recursively remove the duplicate of the largest value from the left sub-tree
            currentNode.setLeftChild(removeNode(currentNode.getLeftChild(), largestInLeftSubtree.getData()));

        }
    } else if (dataToRemove.compareTo(currentData) < 0) {
        System.out.println("Traversing to the left ---");
        currentNode.setLeftChild(removeNode(leftChild, dataToRemove));
    } else {
        System.out.println("Traversing to the right ---");
        currentNode.setRightChild(removeNode(rightChild, dataToRemove));
    }

    // Update the height parameter
    currentNode.setHeight(calculateTreeHeight(currentNode));

    // Check on every delete operation whether tree has become unbalanced
    return balanceTreeAfterDeletion(currentNode);
}

Okay, everything is identical to the binary search tree, except for the last two lines. Updating the height remains the same as before: we simply grab the greater height value between the left and right sub-trees and add one to the result. The balancing part is where things get tricky. However, the balancing is nothing foreign.
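The code above leans on helpers such as getMaxNode, calculateTreeHeight and getBalanceValue that are not shown in this excerpt. Below is a plausible sketch of what they might look like, written against a simplified int-keyed node rather than the article's generic Node&lt;T&gt;; the names mirror the ones used above, but treat this as an assumption, not the actual source:

```java
public class AvlHelpers {

    // Simplified node, for illustration only.
    static class Node {
        int key, height;           // a leaf has height 0
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Height of a subtree; an empty subtree counts as -1 so a leaf is 0.
    static int height(Node node) {
        return node == null ? -1 : node.height;
    }

    // Greater of the two child heights, plus one.
    static int calculateTreeHeight(Node node) {
        return 1 + Math.max(height(node.left), height(node.right));
    }

    // In a BST, the largest key sits in the right-most node.
    static Node getMaxNode(Node node) {
        while (node.right != null) {
            node = node.right;
        }
        return node;
    }

    // Balance value: left height minus right height.
    // > 1 means left-heavy, < -1 means right-heavy.
    static int getBalanceValue(Node node) {
        return node == null ? 0 : height(node.left) - height(node.right);
    }
}
```

The -1 convention for an empty subtree is what makes a leaf come out at height 0, which matches the height arithmetic used in removeNode().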

11.8.3 Balancing the tree after Deletion – Code Review

We are almost at the finish line. Just a little more and you can give your brain a rest and reward yourself with a nice cold cup of iced coffee.

/**
 * Check whether the tree is unbalanced after a delete operation
 * @return Node The (possibly new) root of the subtree after rebalancing.
 * */
private Node<T> balanceTreeAfterDeletion(Node<T> currentNode) {
    int balanceValue = getBalanceValue(currentNode);
    // Left heavy situation. Can be left-left or left-right
    if (balanceValue > 1) {
        // Left-right case. Left rotation on the left child of the root node first.
        if (getBalanceValue(currentNode.getLeftChild()) < 0) {
            currentNode.setLeftChild(leftRotation(currentNode.getLeftChild()));
        }
        return rightRotation(currentNode);
    }
    // Right heavy situation. Can be right-right or right-left
    if (balanceValue < -1) {
        // Right-left case. Right rotation on the right child of the root node first.
        if (getBalanceValue(currentNode.getRightChild()) > 0) {
            currentNode.setRightChild(rightRotation(currentNode.getRightChild()));
        }
        // left rotation on the root node
        return leftRotation(currentNode);
    }
    return currentNode;
}

Okay, let's take a look at the code above. In the left-heavy situation, we first need to check whether the left child of the root node itself has a left or a right child. If its balance value is less than zero, it has a right child. In the left-right rotation below, the following line

currentNode.setLeftChild(leftRotation(currentNode.getLeftChild()));

will be run. If, however, the 4 had a left child of 3 instead of the 5, the line above would not be run, and only a single right rotation would be made. The same logic applies to the right-heavy situation. Congratulations! You now know how to insert and remove items from the AVL tree.

Left-right rotation: before and after
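Since leftRotation and rightRotation are referenced but not shown in this excerpt, here is a self-contained sketch of the two single rotations on a simplified int-keyed node. Height bookkeeping is omitted for brevity, and the subtree values below are a hypothetical left-right case, not taken from the article's figure:

```java
public class Rotations {

    // Simplified node, for illustration only; height updates omitted.
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    // Right rotation: the left child becomes the new subtree root,
    // and its former right subtree is re-attached as the old root's left child.
    static Node rightRotation(Node root) {
        Node newRoot = root.left;
        root.left = newRoot.right;
        newRoot.right = root;
        return newRoot;
    }

    // Left rotation: mirror image of the right rotation.
    static Node leftRotation(Node root) {
        Node newRoot = root.right;
        root.right = newRoot.left;
        newRoot.left = root;
        return newRoot;
    }

    public static void main(String[] args) {
        // Left-right case: 6 with left child 4, which has right child 5.
        // First a left rotation on 4, then a right rotation on 6,
        // leaving 5 as the new subtree root with children 4 and 6.
        Node root = new Node(6);
        root.left = new Node(4);
        root.left.right = new Node(5);

        root.left = leftRotation(root.left);   // 5 takes 4's place
        root = rightRotation(root);            // 5 becomes the root

        System.out.println(root.key + " " + root.left.key + " " + root.right.key);
    }
}
```

Note how the double rotation is just two single rotations composed, exactly as balanceTreeAfterDeletion() does it.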

12. Supplementary reference materials

When learning data structures, I highly recommend reading as much as possible. Below is a list of references that have been of great use to me. Along with the references, I will throw in my own two cents on why I find them useful.

  1. tutorialspoint.com remains one of my favorite websites for learning programming. The content is very brief, but I believe that is its ultimate selling point: it is succinct yet rich. I highly recommend their section on AVL trees as supplementary reading material.
  2. visualalgo.net, as the name suggests, visualizes algorithms for viewers. Since I am a visual learner, this neat application/website was of tremendous help. Being able to see the insertion and removal methods, as well as the rotations, helped me understand how these algorithms work. It is great for all the other algorithms too, including the sorts. Therefore, I highly recommend everybody going through this tutorial to test the AVL tree by inserting sample data via this site. Trust me, it will make all the explanations that much clearer.
  3. Scribd.com has a good guide on the rotation process of the AVL tree. The reason most of the tutorials mentioned focus on the rotation process is that rotation is the foundation of the AVL tree data structure. Understand the rotation, and you have the AVL tree down, provided that you also understand how binary search trees work. In my opinion, the only thing worth looking at in this post is the rotation diagram, but that is exactly why I attached the link. The diagram on this page is well drawn and should help readers understand how the rotation works.

13. Source Code

Once again, the source code can be found on GitHub. For additional questions, please contact me via email. Alternatively, feel free to leave a comment down below. Once again, thank you so much for being patient and reading all the way through. This is the longest post to date, and the number of hours I put into creating this … was quite a few! Anyway, I hope that by now, every reader fully understands the AVL tree: what it is all about and how to implement one from scratch.

About the Author Jay

I am a programmer currently living in Seoul, South Korea. I created this blog as an outlet to express what I know / have been learning in text form, for retaining knowledge and hopefully helping the wider community. I am passionate about data structures and algorithms. The back-end and databases are where my heart is at.
