Before proceeding, be warned: the AVL tree implementation in Java is fairly challenging. It requires a strong working knowledge of the Java programming language, and I also recommend a solid understanding of the binary search tree.
A note on terminology: from here on, we will refer to the values stored in nodes as keys, or simply by their numeric value (e.g. 7). In past tutorials, I called them node values, but from now on we will use the proper term: keys.
In this post, rather than using JavaScript, we will take a class-based, object-oriented approach and implement the AVL tree in Java. The AVL tree is named after the surnames of its inventors: Adelson-Velsky and Landis.
We will also write out pseudo code for each of the operations. Pseudo code is a high-level description of an algorithm or a computer program, generally written to be read by other human beings. It lets us abstract away the low-level operations and focus purely on the logic.
As a software engineer, writing pseudo code helps you organize your thoughts and extract them into written form. It opens up a pathway to share and communicate your thought process and ideas with your team.
The AVL tree is a self-balancing binary search tree. In a plain binary search tree (BST), the worst-case search time degrades to that of a linked list, O(n), when the tree becomes skewed. Because the AVL tree rebalances itself with rotations whenever it becomes unbalanced, O(log n) search time is guaranteed, making the AVL tree very consistent in terms of performance.
Readers will get a good workout solving problems and breaking a big problem down into smaller tasks.
The AVL tree is a balanced data structure, ensuring O(log n) time complexity for basic operations such as insert, remove and search.
Self-balancing trees such as the AVL tree and the red-black tree are used heavily in operating systems. The AVL tree is a highly optimized data structure, so knowing how it works, and being able to implement one, will help readers solve other complex problems they come across.
Because it is self-balancing, the performance of the AVL tree is generally more consistent than that of a plain BST. As the table below shows, the worst-case performance for the major operations is O(log n). This makes the AVL tree a very consistent data structure that performs especially well for lookups. However, because balance checks and rotations are performed frequently, its performance suffers relative to the red-black tree when data is inserted or removed on a regular basis. (The red-black tree is another self-balancing binary search tree.)
Operation        Average     Worst Case
Space            O(n)        O(n)
Search/Traverse  O(log n)    O(log n)
Insert           O(log n)    O(log n)
Remove           O(log n)    O(log n)
The next question readers might be pondering is: when should I use AVL trees? AVL trees are ideal when searches are performed frequently and insertions/deletions rarely. This is because AVL trees perform somewhat more rotations than other balanced trees such as the red-black tree.
Sorting can also be implemented with the AVL tree: insert every item, then traverse the tree in order, for O(n log n) overall. However, other sorting algorithms such as quicksort or merge sort generally outperform it, because the AVL tree carries more memory overhead. That is why we prefer the aforementioned sorting algorithms over the AVL sort.
Lookup tables can also be implemented with AVL trees. Because an AVL tree keeps its keys in sorted order and supports ordered operations such as in-order traversal, it can provide more options than the hash table data structure.
Bottom line: AVL trees are useful for lookup, or searching for data in an existing collection.
A tree is balanced if the height difference between the left and right subtree is no greater than one.
The formula for calculating the height is as follows.
height = max (leftSubTreeHeight, rightSubTreeHeight) + 1
Very simple, right? The height of a tree is simply the height of the left or right subtree (whichever is greater) plus one. On each data-manipulating operation (such as insertion and deletion), the AVL tree checks itself to see if it is still balanced. For the math nerds, the condition looks like the following.

| leftSubTreeHeight - rightSubTreeHeight | <= 1
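As a quick worked check of the condition (the subtree heights here are made-up values for illustration):

```
| 2 - 0 | = 2   greater than 1  ->  unbalanced
| 1 - 0 | = 1   within 1        ->  balanced
```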
We can calculate the height of a subtree recursively. Below, I have attached an example of both a balanced and an unbalanced tree.
For example, the node with key value 6 in the example below has a height parameter of three. If the example does not click yet, don’t worry: it will as we work through more examples.
On detecting an imbalance, the AVL tree performs a balancing operation. Why is this balancing operation necessary? It is what makes the AVL tree special. By preserving the balance of the tree, we achieve a consistent O(log n) time complexity on all the major operations, such as the insert, traverse and delete methods.
The -1 marked above is the height parameter value: think of -1 as the height of a null pointer. A node with a height parameter of 0 is a leaf; both of its (nonexistent) children count as height -1. The best way to understand it is that, in AVL tree height calculation, we start counting from minus one.
The only difference between the AVL tree and the binary search tree is that the AVL tree rebalances itself after each operation that manipulates data.
As explained in previous sections, the rebalancing process is good because it guarantees logarithmic time complexity for all the major operations performed. Now that we have established this fact, we will be going into the actual implementation details.
When balancing a tree, we also have to be mindful that rebalancing one subtree can cause an ancestor to become unbalanced. Therefore, we need to walk back up and check the nodes along the path to the root to find the right places to rotate. The implementation details of the tree balancing algorithm are fairly challenging. There are four possible rotation cases.
A single rotation can be done in O(1), constant time. However, there are situations where we make multiple rotations on the way back up to the root, which takes logarithmic time, O(log n). Let us examine each of the cases in detail; we need to understand the theory before jumping into writing the code.
Important: we will assign a height parameter of negative one (-1) if the left/right subtree does not exist. To illustrate this concept, I have added an example below.
First of all, let us look at a visual diagram that illustrates how the left rotation works. We call it a left rotation because we make the current root the left subtree of its right child. A left rotation is made when the right subtree has a greater height than the left subtree.
The first time around, the operations might not make sense. However, do press on. Utilize the diagrams to wrap your brain around how these operations work. Once you figure it out, you will realize that these operations are not as complicated as you thought.
For the sake of clarity, we will be writing pseudo code. Writing pseudo code is a nifty technique for expressing an idea as a sequence of logical operations.
In the example below, think of node as 6, the current root node. In the first line, we store 8 in newRootNode. Next, we get the leftChild of 8, which is null. After the rotation, newRootNode (8) will become the new root.

Finally, we update the references. First, we set 6 as the left child of 8. The new root node, 8, already has its right child pointer pointing at 9, so we don’t have to make any changes there. The last rotational adjustment before updating the heights is to set the right child of the former root (6) to the former left child of the new root (which holds a value of null).
What would happen if 8 had a left child of 7? After we make 6 the left child of 8, nothing would reference 7 any more. In that situation, 7 would be garbage collected, since there is no reference pointing at it. That would result in an unintended removal of a value from the tree, which could spell disaster. That is why we have the following line in the pseudo code below.
currentNode.setRightChild(leftChildOfRight);
// The root node with the value of 6 (in the example above) will be passed into rotateLeft
Begin rotateLeft(Node currentNode)
// Store node 8, which will become the new root.
Node newRootNode = currentNode.getRightChild();
// Store the null pointer in temporary variable
Node leftChildOfRight = newRootNode.getLeftChild();
// Set node with key value of 6 as the left child of 8.
newRootNode.setLeftChild(currentNode);
// In this example it is a null pointer, but this step is necessary to avoid orphaning a left child.
currentNode.setRightChild(leftChildOfRight);
// Update the height of the subtrees that were updated.
currentNode.updateHeight();
newRootNode.updateHeight();
End rotateLeft
The right rotation works very similarly to the left rotation. If the left subtree is taller than the right subtree, we need to rotate to the right.
In the example above, the node with key 6 has a left child of 4, which in turn has a left child with key 2. After the right rotation, the left child 4 becomes the new root. The minimum value in this subtree (2) becomes the left child, and the previous root (6) becomes the right child. Because the balancing operation runs on every data-manipulating method, the height deviation will never exceed 2.
Please note that we are not done after a single rotation. The check must be performed on each ancestor until we reach the root node of the tree, in case our current operation threw off the balance of another subtree.
To sum it up, when we have to do a right rotation, we perform the following operation.
Hopefully by now, handling the right rotation should be fairly clear.
Since the right rotation is very similar to the left rotation, the pseudo code below does not repeat the comments.
For the same reason as in the left rotation case, we update the left child of 6 to point to the former right child of the new root, which here is null. What if 4 had a right child with a value of 5? If the reason is not clear, please review the previous section on the left rotation.
Begin rotateRight(Node currentNode)
Node newRootNode = currentNode.getLeftChild();
Node rightChildOfLeft = newRootNode.getRightChild();
newRootNode.setRightChild(currentNode);
currentNode.setLeftChild(rightChildOfLeft);
currentNode.updateHeight();
newRootNode.updateHeight();
End rotateRight
Finally, we finish by updating the heights of the former root and the new root.
By now you get the drill, right? The left-right situation occurs when the root node has only a left child, and that left child has only a right child. Naturally, this results in the following imbalance. The balanced subtree will look like the tree on the right in the image displayed below.
Please note, however, that we do not modify the pointers of the left subchild of the root node, 4. For example, 4 can have a left child, but we will ignore it. We will explore the reason why very shortly by reviewing an example.
The steps for making the adjustment are as follows. First, make a left rotation on the node with key 4. You know the drill by now, right? 5 becomes the left child of 6, and 4 becomes the left child of 5. The result will be as follows.
Looks awfully similar to the right rotation case, right? All we need to do now is make a right rotation. The result will look like the “after left-right rotation” image, with 5 as the new root and 4 and 6 as its left and right children respectively.
Remember when I said that we did not need to update the left child reference of 4? From the visual diagram above, can you see why? It is because the right rotation does not force the left child reference of 4 to point to a different element; the pointer continues to point at the same node.
Hopefully it is clear why this rotation is called the left-right rotation: we first make a left rotation, followed by a right rotation. No rocket science whatsoever.
Now that we understand how the left-right rotation works, let’s dive straight into the logic. There is not much to say except that we first do the left rotation, followed by the right rotation. For more information on how each of these rotations works, please reread the previous sections. It is very important to have a firm understanding of the left and right rotations before proceeding.
In summary, execute the following steps in order:
1. Do a left rotation.
2. Afterwards, perform a right rotation.
And we are done. For more information on left and right rotations, review the information in the previous section.
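The two steps above can be sketched in the same pseudo code style as the single rotations, assuming rotateLeft and rotateRight return the new root of the subtree they rotated (as the Java implementation later does):

```
Begin rotateLeftRight(Node currentNode)
// First, a left rotation on the left child ...
currentNode.setLeftChild(rotateLeft(currentNode.getLeftChild()));
// ... then a right rotation on the current root itself.
return rotateRight(currentNode);
End rotateLeftRight
```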
Readers might be sick of this by now. If you are yelling in the back of your mind “Okay, I understand it all … get to the code already!”, feel free to skip this section. It is fairly obvious that the right-left rotation is very similar to the left-right rotation, but I state it for the sake of completeness.
Once again, the nodes may have right children but, for the same reasons as in the left-right rotation, we do not update those references. The reason is simple: the right rotation does not require us to update the right child of the node with key value 8.
Yes, once you get the hang of it, this becomes quite repetitive. If it seems so, give yourself a pat on the back: you are well on your way to understanding AVL trees. First, make a right rotation. After the rotation, the tree should look like this.
Afterwards, the only thing left is to make a left rotation. After each rotation, be sure to update the height parameter of each manipulated subtree.
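Mirroring the left-right case, the right-left rotation can be sketched as (again assuming each rotation returns the new root of its subtree):

```
Begin rotateRightLeft(Node currentNode)
// First, a right rotation on the right child ...
currentNode.setRightChild(rotateRight(currentNode.getRightChild()));
// ... then a left rotation on the current root itself.
return rotateLeft(currentNode);
End rotateRightLeft
```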
Finally, we have gone through all four possible rotation cases in the AVL tree. Get yourself a glass of water. In the upcoming section, we will dive into the AVL tree implementation in Java.
Despite being similar to the binary search tree, the balancing process adds a layer of complexity to the AVL Tree implementation.
I will assume that readers are comfortable with binary search trees; for more information, refer to my article on binary search trees. If you have fully understood the theory discussed in the previous sections, the implementation details should be within reach. The implementation does, however, require strong knowledge of recursion and a basic understanding of object-oriented programming.
The source code is available on GitHub, so please download it before proceeding. I have uploaded two templates: the first folder contains the completed source code, and the second is a bare-bones template. For readers who want a challenge, download the second template and, while reading along, try writing out the implementation yourself without looking at the code snippets.
Be forewarned, the implementation details may be quite challenging for those that are doing it for the first time. Do not beat yourself up for failing. The problem is meant to be hard and you aren’t the only one in that boat. Take time to read through the theory and also read up on the supplementary reference materials. The more you read and try, the easier it will become.
We will, at long last, be writing code. When writing the algorithm for calculating the height, it is important to consider null pointers. Remember how we wrote -1 in the diagrams above? In the upcoming section, we will explore exactly why that was recommended. Let us review the formula once again to refresh our memory.
height = max (leftSubTreeHeight, rightSubTreeHeight) + 1
For example, let us assume that we are attempting to insert the element 7 into the following tree.
If you have been following this tutorial carefully, it is clear that this will require a right rotation. Why? Because the subtree with root node 9 is unbalanced. After the right rotation, 5 will have a right child of 8. The node with key value of 8 will have left and right child values of 7 and 9 respectively.
Afterwards, we would need to check whether the entire tree is balanced. By examining the diagram, we can tell that it will be. Nevertheless, we need to implement the logic that carries out these calculations. First, we will implement the calculateTreeHeight() method.
Okay, here we go! Once again, because each subtree is a tree in and of itself, we can easily express the calculateTreeHeight() algorithm recursively.
When we do balance checks, we start with the node that we inserted. This is in contrast to the other operations, which all start at the root node. In the example above, we first check whether 7 is balanced. Because it has no children, it is considered balanced. So we go to its parent, 8. The left child of 8 has a height of 0 and its right child is null (height -1), so the difference is 1 and 8 is balanced. When we reach 9, however, its left and right child heights are 1 and -1 respectively. The height difference (2) is greater than 1, so we need to perform the right rotation.
After the right rotation, 5’s right child will be 8, whose left and right children will be 7 and 9 respectively. We have gone through rotations a gazillion times now, so this should be fairly familiar.
To sum it up, in order to insert an element, we insert it as in a normal BST, then walk back up from the inserted node, updating heights and rebalancing wherever needed.
Readers might be asking: “Wait, we haven’t gone through a code example on how to calculate the height”. Don’t worry, it is very simple. We just need to apply the formula mentioned in the previous section.
As you can see, the implementation is self-explanatory: we simply take the greater height between the left and right subtree and add one to the result.
/**
 * Calculate the height of a node by taking the greater value between the
 * heights of the left and right subtrees and adding one to that value.
 * A null subtree counts as height -1, so a leaf node gets -1 + 1 = 0.
 * */
private int calculateTreeHeight(Node<T> currentNode) {
return Math.max(height(currentNode.getLeftChild()), height(currentNode.getRightChild())) + 1;
}
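The calculateTreeHeight method above leans on a height helper that the snippet does not show; in the GitHub source it presumably guards against null children. Here is a minimal self-contained sketch of that convention, where the Node class is a simplified stand-in for the article's generic Node&lt;T&gt;:

```java
class Node {
    int height;            // cached height of the subtree rooted here; a new leaf is 0
    Node left, right;
}

class TreeHeight {
    // A null subtree reports height -1, so a leaf works out to max(-1, -1) + 1 = 0.
    static int height(Node node) {
        return node == null ? -1 : node.height;
    }

    // Mirrors calculateTreeHeight in the article: greater child height plus one.
    static int calculateTreeHeight(Node node) {
        return Math.max(height(node.left), height(node.right)) + 1;
    }
}
```

With this convention, a node whose only child is a leaf gets height max(0, -1) + 1 = 1.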
Since all the theory has been explained in the previous sections, let’s jump straight into the code. A simple left rotation involves the following actions. Note that we return the new root node, because the implementation is recursive.
private Node<T> leftRotation(Node<T> currentNode) {
System.out.println("Beginning left rotation ... on node: " + currentNode.getData());
Node<T> newRootNode = currentNode.getRightChild();
Node<T> leftChildOfRight = newRootNode.getLeftChild();
// Step 1. set the left child of the new root node
newRootNode.setLeftChild(currentNode);
// Step 2. set the right child of the new left child
currentNode.setRightChild(leftChildOfRight);
// Step 3. Update the heights, the demoted node first, since the new root's height depends on it.
currentNode.setHeight(calculateTreeHeight(currentNode));
newRootNode.setHeight(calculateTreeHeight(newRootNode));
return newRootNode;
}
The right rotation is essentially the mirror image of the left rotation, so I will just post the code snippet below.
private Node<T> rightRotation(Node<T> currentNode) {
System.out.println("Beginning right rotation ... on node: "
+ currentNode.getData());
Node<T> newRootNode = currentNode.getLeftChild();
Node<T> rightChildOfLeft = newRootNode.getRightChild();
// Step 1. Set the former root as the right child of the new root node.
newRootNode.setRightChild(currentNode);
// Step 2. Set the former right child of the new root as the left child of the former root. Quite a tongue twister right?
currentNode.setLeftChild(rightChildOfLeft);
// Step 3. Update the heights, the demoted node first, since the new root's height depends on it.
currentNode.setHeight(calculateTreeHeight(currentNode));
newRootNode.setHeight(calculateTreeHeight(newRootNode));
return newRootNode;
}
Now that we have examined the left and right rotations, all we need to do is check for the four possible cases: left-left, left-right, right-right and right-left.
In the upcoming section, we will be utilizing the following helper methods. Please familiarize yourself with them before proceeding.
Be warned that the AVL tree implementation relies heavily on recursion. I am going to assume that readers are comfortable with recursive method calls. If not, please read my post on understanding recursion and get comfortable with recursion before proceeding.
/**
 * Check if the tree is left heavy based on the balance value.
 * Left heavy trees have a taller left subtree than right subtree.
 * */
private boolean isLeftHeavy(int balanceValue) {
return balanceValue > 1;
}
/**
 * Check if the tree is right heavy based on the balance value.
 * Right heavy trees have a taller right subtree than left subtree.
 * */
private boolean isRightHeavy(int balanceValue) {
return balanceValue < -1;
}
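These checks consume a balanceValue that the snippets never define. A common definition, and the one consistent with the thresholds above and the -1-for-null height convention, is the left subtree height minus the right subtree height. A minimal sketch with a simplified Node (treat it as an assumption about the GitHub source, not a quote from it):

```java
class Node {
    int height;          // cached subtree height; a leaf has height 0
    Node left, right;
}

class BalanceCheck {
    static int height(Node node) {
        return node == null ? -1 : node.height;
    }

    // Positive values mean left heavy, negative values mean right heavy, 0 is perfectly balanced.
    static int getBalanceValue(Node node) {
        return node == null ? 0 : height(node.left) - height(node.right);
    }

    static boolean isLeftHeavy(int balanceValue)  { return balanceValue > 1; }
    static boolean isRightHeavy(int balanceValue) { return balanceValue < -1; }
}
```

For example, a node whose left child is a leaf and whose right child is null gets a balance value of 0 - (-1) = 1: lopsided, but still within the AVL bound.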
To verify whether a simple left rotation suffices, we check two conditions. First, is the current subtree right heavy? Second, is the data to insert greater than the right child of the current root? If both conditions hold, we have a case where we can balance the tree with a simple left rotation. The code will look like this.
// Right heavy situation - left rotation
if (isRightHeavy(balanceValue)
&& dataToInsert.compareTo(currentNode.getRightChild().getData()) > 0) {
return leftRotation(currentNode);
}
If it doesn’t fit the left rotation case, we check whether a right rotation handles it. First, is the current subtree left heavy? Second, is the data to insert less than the left child of the current root? If both conditions hold, a simple right rotation balances the subtree.
// Left heavy situation - right rotation
if (isLeftHeavy(balanceValue)
&& dataToInsert.compareTo(currentNode.getLeftChild().getData()) < 0) {
return rightRotation(currentNode);
}
For the left-right rotation case, we check the following conditions. First, is the current subtree left heavy? Second, is the data to insert greater than the left child of the current root? In the example above, inserting the number 5 into a tree with elements 6 and 4 would trigger the left-right rotation case. The code will look like below.
// Left-right situation: left rotation on the left child, then right rotation on the root
if (isLeftHeavy(balanceValue) &&
dataToInsert.compareTo(currentNode.getLeftChild().getData()) > 0) {
currentNode.setLeftChild(leftRotation(currentNode.getLeftChild()));
return rightRotation(currentNode);
}
Finally, on to the last case. By now, I am certain that you are sick of tree balancing rotations. In the removal section, therefore, I will jump straight into the code without spending as much time explaining the rotation cases.
If none of the previous conditions are met, this can only mean one of two things: either the right-left rotation is needed to balance the tree, or the tree is already balanced, in which case we do nothing but return the current node.
The right-left rotation is performed when two conditions are met. First, is the current subtree right heavy? Second, is the data to insert less than the right child of the current root node? Using the example above, inserting 7 into a tree with elements 6 and 8 would trigger the right-left rotation case.
// Right-left situation: right rotation on the right child, then left rotation on the root
if (isRightHeavy(balanceValue) &&
dataToInsert.compareTo(currentNode.getRightChild().getData()) < 0) {
currentNode.setRightChild(rightRotation(currentNode.getRightChild()));
return leftRotation(currentNode);
}
For more information on the insertion method, please refer to the source code posted on GitHub. If the explanation is not adequate or somewhat lacking, please contact me. I will be more than happy to update the information, to make this guide more comprehensive.
Finally, we can take a look at what the insert code looks like. I highly recommend that readers download the source code from GitHub and review the whole implementation before proceeding, because we will apply similar logic and concepts when implementing the remove() method.
/**
* Implement recursively to avoid storing a node pointer to the parent.
* Remember that in the JavaScript implementation of the Binary Search Tree,
* we required the pointer to the parent.
*
* This method always returns the root of the subtree it was called on.
*
* @param currentNode
* @param dataToInsert
* */
private Node<T> insertNode(Node<T> currentNode, T dataToInsert) {
// The current root node is empty. Create a new node here
if (currentNode == null) {
return new Node<T>(dataToInsert);
}
// Is data to insert smaller than the current key value.
// Go to the left.
if (dataToInsert.compareTo(currentNode.getData()) < 0) {
currentNode.setLeftChild(insertNode(currentNode.getLeftChild(), dataToInsert));
} else {
currentNode.setRightChild(insertNode(currentNode.getRightChild(), dataToInsert));
}
currentNode = balanceTree(currentNode, dataToInsert);
// Finally, update the height of the current node.
currentNode.setHeight(calculateTreeHeight(currentNode));
return currentNode;
}
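To see all the pieces working together, here is a compact, self-contained sketch of the insertion path using plain int keys. The method names mirror the article's, but the generics, logging, and the balanceTree helper split are simplified, so treat this as an illustration rather than the GitHub implementation:

```java
class AvlSketch {
    static class Node {
        int key, height;         // a leaf has height 0
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static int height(Node n) { return n == null ? -1 : n.height; }

    static void updateHeight(Node n) {
        n.height = Math.max(height(n.left), height(n.right)) + 1;
    }

    static int balanceValue(Node n) { return height(n.left) - height(n.right); }

    static Node leftRotation(Node current) {
        Node newRoot = current.right;
        current.right = newRoot.left;   // adopt the subtree that would otherwise be orphaned
        newRoot.left = current;
        updateHeight(current);          // demoted node first; the new root depends on it
        updateHeight(newRoot);
        return newRoot;
    }

    static Node rightRotation(Node current) {
        Node newRoot = current.left;
        current.left = newRoot.right;   // adopt the subtree that would otherwise be orphaned
        newRoot.right = current;
        updateHeight(current);
        updateHeight(newRoot);
        return newRoot;
    }

    static Node insertNode(Node current, int key) {
        if (current == null) return new Node(key);
        if (key < current.key) current.left = insertNode(current.left, key);
        else current.right = insertNode(current.right, key);
        updateHeight(current);
        int balance = balanceValue(current);
        if (balance > 1 && key < current.left.key) return rightRotation(current);   // left-left
        if (balance > 1) {                                                          // left-right
            current.left = leftRotation(current.left);
            return rightRotation(current);
        }
        if (balance < -1 && key > current.right.key) return leftRotation(current);  // right-right
        if (balance < -1) {                                                         // right-left
            current.right = rightRotation(current.right);
            return leftRotation(current);
        }
        return current;                  // already balanced
    }
}
```

Inserting 1, 2, 3 in that order triggers the right-right case: after the left rotation, 2 becomes the root with 1 and 3 as its children.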
The remove() method

Implementing the remove() method is, in essence, very similar to the insert operation. We first need to find the node to delete, then remove it. However, unlike in the binary search tree, we need to ensure that the tree remains balanced after the node has been removed.
When removing a node, we come across three possible cases. If you have gone through my binary search tree tutorial, this will sound familiar: the node to delete has no children, one child, or two children. For more information on this process, please refer to the post on the binary search tree implementation. For those expecting a Java implementation, unfortunately, due to high demand, that post was written in JavaScript; the fundamental concepts, however, remain the same.
For the sake of completeness, I will write briefly about removing a node from a tree with one or two children. For those that are not interested, please feel free to skip ahead.
The time complexity of the remove() method is O(log n).
When removing a node with a single child, all we need to do is update the references. In the right-left rotation example above, if we were to remove 8, we would ensure that 6 no longer points at 8 but at 7 instead. Afterwards, we remove all references from the node with key value 8.
Using another example, if we were to delete 15 from the tree below, all we need to do is update the reference: have the right child pointer of 10 point directly to 12, and dereference 15 so that the node gets garbage collected. Simple, right?
In the case where the node to remove has two children, we have two options after finding it: look for the largest item in the left subtree, or the smallest item in the right subtree. Since the item we want to remove is the root node, we swap its value with one of the two values mentioned above. After the swap, all we need to do is remove the node we took the value from. In the example below, we would swap 7 and 8, since 8 is the largest value in that subtree. Then we would delete 10 and link 8 and 7. Essentially, we have converted the two-children case into a case where the node to delete has at most a single child.
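The two-children case relies on finding the largest node in a subtree; in the removal code that follows, this is the getMaxNode helper, which the snippets do not show. A minimal sketch of that helper, using a simplified Node in place of the article's generic Node&lt;T&gt;:

```java
class Node {
    int key;
    Node left, right;
    Node(int key) { this.key = key; }
}

class MaxFinder {
    // In a binary search tree, the largest key lives at the rightmost node,
    // so we simply follow right child pointers until there are none left.
    static Node getMaxNode(Node node) {
        while (node.right != null) {
            node = node.right;
        }
        return node;
    }
}
```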
remove() code overview

Enough with the theory. The remove() implementation is very similar to that of the binary search tree, except for adjusting the height parameter and the balancing work we need to do at the end. Let’s take a look at the code.
/**
* Recursive implementation of removing data from a tree
* Three cases:
* 1. No child nodes.
* 2. Single child.
* 3. Two children.
 * @return the new root of the subtree after removal and rebalancing; null if the subtree becomes empty.
* */
private Node<T> removeNode(Node<T> currentNode, T dataToRemove) {
// Base case
if (currentNode == null) {
return null;
}
Node<T> leftChild = currentNode.getLeftChild();
Node<T> rightChild = currentNode.getRightChild();
T currentData = currentNode.getData();
if (dataToRemove.compareTo(currentData) == 0) {
System.out.println("Found the data that we want to remove: " + currentData);
if (leftChild == null && rightChild == null) {
System.out.println("Removing a leaf node");
return null;
} else if (leftChild == null) {
System.out.println("Removing a node with a right child");
currentNode = null;
return rightChild;
} else if (rightChild == null) {
System.out.println("Removing a node with a left child");
currentNode = null;
return leftChild;
} else {
System.out.println("Removing a node with two children");
// Find the largest node on the left subtree
Node<T> largestInLeftSubtree = getMaxNode(leftChild);
// Swap the root node with the largest in left subtree
currentNode.setData(largestInLeftSubtree.getData());
// Set leftchild recursively. Remove the copy left of the largest left child
currentNode.setLeftChild(removeNode(currentNode.getLeftChild(), largestInLeftSubtree.getData()));
}
} else if (dataToRemove.compareTo(currentData) < 0) {
System.out.println("Traversing to the left ");
currentNode.setLeftChild(removeNode(leftChild, dataToRemove));
} else {
System.out.println("Traversing to the right ");
currentNode.setRightChild(removeNode(rightChild, dataToRemove));
}
// Update the height parameter
currentNode.setHeight(calculateTreeHeight(currentNode));
// Check on every delete operation whether tree has become unbalanced
return balanceTreeAfterDeletion(currentNode);
}
Okay, everything is identical to the binary search tree except for the last two lines. Updating the height works the same as before: grab the greater height between the left and right subtrees and add one to the result. The balancing part is where things get tricky, but it is nothing foreign either.
We are almost at the finish line. Just a little more and you can give your brain a rest and reward yourself with a nice cool cup of iced coffee.
/**
* Check whether the tree is unbalanced after a delete operation
 * @return the (possibly new) root of the rebalanced subtree.
* */
private Node<T> balanceTreeAfterDeletion(Node<T> currentNode) {
int balanceValue = getBalanceValue(currentNode);
// Left heavy situation. Can be left-left or left-right
if (balanceValue > 1) {
// Left-right rotation required: left rotation on the left child of the root node.
if (getBalanceValue(currentNode.getLeftChild()) < 0) {
currentNode.setLeftChild(leftRotation(currentNode.getLeftChild()));
}
return rightRotation(currentNode);
}
// Right heavy situation. Can be right-right or right-left
if (balanceValue < -1) {
// Right-left situation: right rotation on the right child of the root node.
if (getBalanceValue(currentNode.getRightChild()) > 0) {
currentNode.setRightChild(rightRotation(currentNode.getRightChild()));
}
// Left rotation on the root node
return leftRotation(currentNode);
}
return currentNode;
}
Okay, let’s take a look at the code above. In the left heavy situation, we first check whether the left child of the root node has a left or a right child. If its balance value is less than zero, it has a right child, so we are in the left-right case and the following line
currentNode.setLeftChild(leftRotation(currentNode.getLeftChild()));
will be run. If, however, instead of 5, the node 4 had a left child of 3, the line above would not run, so only a single right rotation would be made. The same logic applies to the right heavy situation. Congratulations! You now know how to insert and remove items from the AVL tree.
When learning data structures, I highly recommend reading as much as possible. Below is a list of references that have been of great use to me, along with my own two cents on why I find each of them useful.
Once again, the source code can be found on GitHub. For additional questions, please contact me via email, or feel free to leave a comment down below. Thank you so much for being patient and reading all the way through. This is my longest post to date, and the amount of hours I put into creating it … was quite a few! Anyway, I hope that by now, every reader fully understands the AVL tree, what it is all about, and how to implement one from scratch.
I am a programmer currently living in Seoul, South Korea. I created this blog as an outlet to express what I know and have been learning in text form, both to retain knowledge and hopefully to help the wider community. I am passionate about data structures and algorithms. The backend and databases are where my heart is.