The quick sort algorithm (sometimes known as QuickSort or partition-exchange sort) is a very useful sorting algorithm that employs the divide and conquer approach.

Before proceeding, if you do not understand how the merge sort algorithm works, I recommend reading up on it first.

This will help you conceptualize the quick sort much more quickly.

Other topics that will help you conceptualize the quick sort algorithm include a knowledge and understanding of how recursive data structures, such as the binary search tree, work.

Once again, this post will be **language agnostic**. I will be writing out the logic in pseudo code so that regardless of programming language, you will be able to follow along.

At the end of this post, the source for the quick sort algorithm will be provided in **JavaScript, C++, Java and Python**.

Before going into the differences (on a high level), I would like to list the commonalities between the two sorting algorithms:

- Both employ the divide and conquer approach.
- Both have an average time complexity of O(n log n).

However, the implementation details of the two algorithms differ:

- Quick sort uses **less memory** and has good locality of reference.
- On the downside, quick sort performs terribly when working with **sorted lists**. It has a worst case of O(n^2). This can be avoided by choosing an appropriate pivot, or by generating a random pivot instead of hard coding the pivot as the last element. You will learn more about pivots in this tutorial.

In practice, the memory friendliness, along with optimizations for speed, has made quick sort the more favored algorithm.

However, merge sort is favored over QuickSort when the underlying data structure we are sorting is a linked list. Merge sort is also faster when data won’t fit into memory or when it is expensive to get to an item (source: stackoverflow.com).

This should be fairly obvious, but quick sort scales extremely well with large sets. When dealing with extremely small lists, it may be outperformed by some of the other sorting algorithms such as the insertion sort.

Knowing both of these algorithms enables developers to analyze the situation and make an informed decision to maximize the performance of the server.

I am going to assume that you know the **divide and conquer approach**. If not, please read up on divide and conquer techniques.

Because divide and conquer involves breaking down the problem into smaller, identical parts, a solid understanding of recursion will enable you to grasp how the quick sort algorithm works far more quickly.

Quick sort uses a pivot to divide the list into the **left and right list**. In merge sort, we just divided the list in half until we reached the base case where the list has a length of one, which is technically, a sorted list.

In contrast, the quick sort algorithm uses a **pivot** to divide an existing list into two lists.

All the concepts that are discussed here will be explained very shortly.

Let us look at the steps of the Quick Sort Algorithm.

- Determine a pivot. **Ideally**, the pivot will divide the array into two equal halves.
- Partition (QuickSort) the array. You will have a left list and a right list from the operation.
- Repeat steps 1 and 2 with the left and right lists until we reach the base case.

The pivot is an item in a list that becomes the point of comparison. With the pivot, we begin to partition the current list into two smaller lists, with the pivot acting as the central point of comparison.

Therefore, after running the partitioning algorithm, the following statements must be fulfilled.

- Items to the left of the wall are smaller than the pivot.
- Items to the right of the wall are greater than (or equal to) the pivot.

Imagine there is a **wall** in between the left and right list. For example, like the following:

**Original List:** `[3, 5, 8, 2, 6, 4]`

The last element, **4**, will act as the pivot.

We will be making the following comparisons:

- Is 3 less than 4? Yes: push into `leftList`.
- Is 5 less than 4? No: push into `rightList`.
- Is 8 less than 4? No: push into `rightList`.
- Is 2 less than 4? Yes: push into `leftList`.
- Is 6 less than 4? No: push into `rightList`.

If the comparison returns `true`, the item is placed in the left list. Otherwise, it is placed in the right list.

Sometimes, we will actually create a `leftList` and a `rightList`. However, the more performant approach is to emulate the left and right lists by making swaps.

We will discuss how the latter approach works later. The former approach, albeit less efficient, is a great way to understand and conceptualize how the partitioning operation (using the pivot) works. In the next section, we will be discussing the aforementioned “naive” method of implementing the quick sort algorithm.
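That comparison loop can be sketched in JavaScript as follows (a minimal sketch assuming the pivot is the last element; the function name `partitionNaive` is my own, not from the source code later in this post):

```javascript
// One naive partitioning pass: push items smaller than the pivot into
// leftList, everything else into rightList. Assumes the pivot is the
// last element of the list.
function partitionNaive(list) {
  var pivot = list[list.length - 1];
  var leftList = [];
  var rightList = [];
  for (var i = 0; i < list.length - 1; i++) { // skip the pivot itself
    if (list[i] < pivot) {
      leftList.push(list[i]);
    } else {
      rightList.push(list[i]);
    }
  }
  return { leftList: leftList, pivot: pivot, rightList: rightList };
}
```

Running it on the original list `[3, 5, 8, 2, 6, 4]` yields a left list of `[3, 2]`, the pivot `4`, and a right list of `[5, 8, 6]`, matching the comparisons above.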

Let’s take a look at the result of the partitioning operation.

**After Partitioning Original List**

**Left list:** `[3, 2]`

**Pivot:** 4

**Right list:** `[5, 8, 6]`

**After partitioning:** `[3, 2]` | 4 | `[5, 8, 6]`

If you don’t understand what is going on, don’t worry. I will break down every single step in this operation so that you can see exactly what is happening under the hood.

The official term for the operation above, in case you didn’t get it, is called **partitioning a list or array**. The image below should accurately portray what a pivot is used for in the partitioning operation of the quick sort algorithm.

As you can see, after partitioning, the pivot is placed exactly right of the wall.

In this tutorial, when **tracking the index of the wall**, we will set the wall index to the index of the item on the left side of the wall. In the example above, the wall index will be **1**, not 2.

Please note that the wall is there simply for guiding beginners to visualize the partitioning operation concept and is not necessary. Now that we know what the pivot is used for, let us look at the partition operation.

For example, suppose that we have the following list:

`[4, 8, 7, 5, 6]`

Let’s say that `6` is the pivot in this example. We will be applying the following test if we are sorting in **ascending order**.

When placing an item in the left or right list, we go through the original list **from left to right**.

In order to make swaps during the partition, we need to keep track of the `currentIndex` and `wallIndex`.

All the elements that are less than the pivot will be placed to the left of the “wall”. Conversely, elements greater than the pivot will be placed to the right of the “wall”.

Once we have finished going through the whole list, we will end the partition operation by placing the **pivot** right smack down on the “wall”.

**I.e. compare 4, 8, 7 and lastly, 5.** Naturally, we don’t have to compare the pivot with itself.

If the current item is less than the pivot, place it into the left list. Increment `currentIndex` and `wallIndex`.

Otherwise, place the current item into the right list. Increment `currentIndex`.

Result from pivoting data

Original List: `[4, 8, 7, 5, 6]`

Left list: `[4, 5]`

Right list: `[8, 7]`

Pivot: **6**

In the previous section(s), we examined how the **naive partition algorithm** works step by step. Now, your job is to write out the pseudo code for the **naive partitioning operation**.

I highly recommend that everybody learning the quick sort algorithm for the first time do this exercise before proceeding.

Once you have written down the pseudo code, feel free to click below to read my pseudo code logic.

This is fairly tricky on your first try, so don’t beat yourself up if you didn’t get it the first time. Please also note that your pseudo code does not have to be exactly the same as mine.

Here is the pseudo code for the `QuickSort()` method.

```
begin QuickSort(list):
    if list length is less than or equal to one:
        return list
    end if;
    return PartitionList(list)
end QuickSort;
```

We just return the list when its length is one or less, because technically, that is already a partitioned list. Forgive me for sounding like a broken record, but this point is very important. If you understand recursion, you might be saying “duhhhh”: no base case = exploding stack.

Better safe than sorry right?

Here is the pseudo code for the `PartitionList()` method. Please note that the **quick sort algorithm can be implemented in many different ways**. This is simply one way of implementing it.

The objective of this tutorial is to help you understand how `QuickSort` works so that you can implement it and make different versions of it.

Furthermore, learning algorithms will build up your experience and problem solving skills, making you a more versatile programmer.

A programmer is somebody who solves problems using technology, so regardless of what area of programming you specialize in, you should aspire to improve your problem solving skills.

```
begin PartitionList(list):
    pivot = list[lastIndex]
    declare leftList and rightList
    currentIndex = 0
    wallIndex = 1
    set first item of rightList as pivot
    while i is less than length minus one:
        currentItem = list[i]
        if currentItem is less than pivot:
            place currentItem into leftList at currentIndex
            increment currentIndex
        else:
            place currentItem into rightList at wallIndex
            increment wallIndex
        end if else;
        increment i
    end while;
    return combined list ( QuickSort(leftList) AND QuickSort(rightList) )
end PartitionList;
```

As you can see, the concept of the “wall” is just an **illustration to help people understand** that all items to the left of the wall belong to the left list. Conversely, all items to the right of the wall belong to the right list.

Hopefully, now you understand the role and importance of the pivot in the QuickSort algorithm.

In case you are wondering why the previous approach is not that performant, the reason is the additional space and time required.

Think about it: we are allocating at least double the space required to store a copy of the original list. Some implementations using the naive approach also involve splitting the array, which takes time, as the list has to be reconstructed.

Anyway, the bottom line is: we can do better than the previous implementation that we worked through just now.

Let’s use another set of data to examine how we will perform the partition operation without the need for a `leftList` and `rightList`.

`list = [4, 8, 7, 5, 2, 3, 6]`

Please do take note that for the sake of clarity, we will be setting the **last element (6) as the pivot**. In an upcoming section, we will be examining additional optimization techniques to reduce the possibility of the worst case scenario surfacing.

Here are the values of our indexes. The `wallIndex` is the position of the wall that you saw in the image before. Its value will be identical to the index of the item on its left (**starting at -1 initially, instead of 0**).

The `currentIndex` variable will keep track of the element to swap with the element at the `wallIndex`. If this is not making sense right now, don’t worry. Once you see an example, things will start to click.

When the current item is **less than** (`<`) the pivot, we swap the item at the `currentIndex` with the item at the `wallIndex`.

After each iteration, we always increment the `currentIndex`. We only increment the `wallIndex` when the comparison between the current item and the pivot yields `true`.

At the start of the operation, initialize `currentIndex` to 0 and `wallIndex` to -1.

In the series of images below, **ci** and **wi** stand for `currentIndex` and `wallIndex` respectively.

**Step #1**

Current item (4) is less than pivot (6).

Increment `wallIndex`, then perform `swap(list[currentIndex], list[wallIndex])`: swap 4 and 4.

Increment `currentIndex`.

**Step #2:**

Current item (8) is greater than pivot (6).

Increment `currentIndex`.

**Step #3:**

Current item (7) is greater than pivot (6).

Increment `currentIndex`.

**Step #4:**

Current item (5) is less than pivot (6).

Increment `wallIndex`, then perform `swap(list[currentIndex], list[wallIndex])`: swap 8 and 5.

Increment `currentIndex`.

**Step #5:**

Current item (2) is less than pivot (6).

Increment `wallIndex`, then perform `swap(list[currentIndex], list[wallIndex])`: swap 7 and 2.

Increment `currentIndex`.

**Step #6:**

Current item (3) is less than pivot (6).

Increment `wallIndex`, then perform `swap(list[currentIndex], list[wallIndex])`: swap 8 and 3.

Increment `currentIndex`.

**Step #7:**

Finally, all that is left is to place the pivot immediately to the right of the wall and we are done.

Perform `swap(list[wallIndex + 1], list[highIndex])`: swap 7 and 6.

Like the merge sort algorithm, the list is truly sorted when we reach the base case where the list is either empty or has only a single element.

So what does the partitioned list look like? Let’s take a look. In the example above, we dealt with the following list.

`list = [4, 8, 7, 5, 2, 3, 6]`

After the partitioning, the list should look like this:

`[4, 5, 2, 3, 6, 8, 7]`

If we were to draw on the concept of the left and right list, it will look something like this.

leftList: `[4, 5, 2, 3]`

rightList: `[8, 7]`

Note that the pivot (6) is omitted from both lists, **because it has already been sorted**. By placing it immediately to the right of the wall, we can be certain that it is in its final position.
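Here is a minimal JavaScript sketch of that single in-place pass (Lomuto-style, with the last element as the pivot; the function name is my own) that you can use to verify the result above:

```javascript
// One in-place partition pass over list[low..high], pivot = list[high].
// Returns the pivot's final index (the slot immediately right of the "wall").
function partitionInPlace(list, low, high) {
  var pivot = list[high];
  var wallIndex = low - 1; // wall starts just left of the range
  for (var currentIndex = low; currentIndex < high; currentIndex++) {
    if (list[currentIndex] < pivot) {
      wallIndex++;
      var temp = list[wallIndex]; // move the current item behind the wall
      list[wallIndex] = list[currentIndex];
      list[currentIndex] = temp;
    }
  }
  var t = list[wallIndex + 1]; // drop the pivot immediately right of the wall
  list[wallIndex + 1] = list[high];
  list[high] = t;
  return wallIndex + 1;
}
```

On `[4, 8, 7, 5, 2, 3, 6]` this leaves the list as `[4, 5, 2, 3, 6, 8, 7]` and returns the pivot index `4`.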

We travel recursively down the leftList until we reach the base case. The base case result will be the **first element** in our sorted list.

The **second element** in the sorted list will be the pivot of its immediate parent, and so on.

If you understand tree data structures such as the binary search tree, this way of thinking will seem natural.

If you know depth first search algorithms, think of it as applying the in-order traversal method.

Here is a visual diagram of the sorting process via the quick sort algorithm. The orange tiles represent the pivot of the current list.

If you have completed the coding challenge above, you should have a fairly good understanding of how to implement the optimized `QuickSort`.

The naive approach increases both the memory usage and the time complexity when working with arrays. If we have to reconstruct arrays, that takes O(n) time and O(n) extra memory. Although access by index is constant time, changing the length of an array takes O(n), because arrays are fixed-size data structures. In other words, arrays have to be reconstructed each time their length changes, which is why dynamic data structures such as linked lists were developed.

Okay, now we know the importance of making swaps instead of allocating new arrays when sorting an array, as opposed to a dynamic data structure.

Even if we were working with dynamic data structures, allocating two new lists to hold the `leftList` and `rightList` items is not the most optimized solution.

Let’s take a look at the pseudo code.

```
begin QuickSortRevised(list, lowIndex, highIndex):
    if lowIndex < highIndex:
        declare pivotIndex equals call PartitionList(list, lowIndex, highIndex)
        // Quick sort leftList (side to the left of the wall) recursively
        call QuickSortRevised(list, lowIndex, pivotIndex - 1)
        // Quick sort rightList (side to the right of the wall) recursively
        call QuickSortRevised(list, pivotIndex + 1, highIndex)
    end if;
end QuickSortRevised;
```

In this approach, we are not creating the `leftList` and `rightList` objects, for the sake of memory efficiency.

Instead, we will be using the `lowIndex` and `highIndex` to implement the divide and conquer approach, iterating through the left and right sections using **index ranges**.

The logic will be almost identical to the previous approach, with the difference that we divide our left and right lists using index ranges instead of creating actual lists.

`PartitionList()` returns the final index of the pivot, which is the `wallIndex` of the current list plus one. Because we want to exclude the pivot’s index when calling quick sort recursively, we subtract one in the `leftList` recursive call and add one in the `rightList` recursive call.

Let me demonstrate this statement with an example.

First of all, we have the base case check:

`if (lowIndex < highIndex)`

When `lowIndex` is no longer less than `highIndex`, we have reached the base case and the recursion stops.

The `lowIndex` and `highIndex` are used as range boundaries to emulate the left and right lists. This approach is much more memory efficient than the naive implementation we walked through previously.

However, when starting out, the naive implementation is much easier to conceptualize: inserting into left and right arrays rather than making swaps.

Hence, in the naive pseudo code, we added the left and right lists to place our items in before delving deeper into the call stack.

So, here are the low and high index values for the left list in the example below:

**1st iteration:** low = 0, high = 7

**2nd iteration:** low = 0, high = 3

**3rd iteration:** low = 0, high = 2

**4th iteration:** low = 0, high = 1

We have now reached the base case for the left side.

Let’s take a look at the low and high indexes of the list elements to the right of the first pivot.

**1st iteration:** low = 5, high = 7

**2nd iteration:** low = 5, high = 6

Hopefully this added explanation brings clarity to the purpose of the `lowIndex` and `highIndex`. Before moving on to the partition operation, we still have one more question to answer.

If you were asking this question and had it in mind, good! It seems as though you were paying attention to the following line.

```
// Quick sort leftList recursively
call quickSortRevised(list, lowIndex, pivotIndex - 1)
// Quick sort rightList (side to the right of the wall) recursively
call quickSortRevised(list, pivotIndex + 1, highIndex)
```

Take note that we do not include the `pivotIndex` in the recursive calls to QuickSort. If you already know the reason, great! If not, try thinking about exactly why we don’t include it. Don’t worry if you don’t get it after pondering for a few moments, because we will be covering the reason in the next section.

In the example above, when working with the left half of the current list, the pivot is 7. Therefore, its index value is `4`.

Therefore, the left list we will be working with ranges from `0` to `pivotIndex - 1` (0 to (4 - 1)).

**left list:** `[4, 5, 3, 2]`

As a result, the right list ranges from `pivotIndex + 1` to `highIndex` ((4 + 1) to 6).

**right list:** `[8, 6]`

Here is the pseudo code for the partition operation. It should feel very similar to the naive partitioning, except that instead of pushing into a `leftList` or `rightList` container, we will be making swaps.

```
begin PartitionList(list, lowIndex, highIndex):
    pivot = last element in list
    wallIndex = lowIndex - 1
    currentIndex = lowIndex
    for each element in list minus last element:
        define currentItem
        if currentItem is less than pivot:
            increment wallIndex
            swap item at wallIndex with item at currentIndex
        end if;
        increment currentIndex
    end for loop;
    swap pivot with element immediately to the right of the wall
    (i.e. item at wallIndex + 1)
    // wallIndex + 1 is the final position of the pivot for the current list
    return wallIndex + 1;
end PartitionList;
```

After making the necessary swaps, we swap the position of the pivot with the element to the right of the wall, as shown in the image in the previous section.

If the pseudo code below did not make sense to you in the previous section, hopefully by now, it does.

```
// Quick sort leftList recursively
call quickSortRevised(list, lowIndex, pivotIndex - 1)
// Quick sort rightList (side to the right of the wall) recursively
call quickSortRevised(list, pivotIndex + 1, highIndex)
```

After partitioning the list, **we don’t need to sort the pivot**, because it has already been sorted (by being placed immediately to the right of the wall).

Why? Because logically speaking, **the pivot should be the smallest item on the right side of the wall**.

The aforementioned statement is important. If you don’t understand why the pivot is the smallest item on the right side of the wall, please review what you have read up until now and do not proceed until this makes sense to you.

To be fair, there was a lot of information to process. If you are more of a visual learner, perhaps you may need to solidify the knowledge acquired by watching a video tutorial. The tutorial below is about the **partition operation**. It is very concise, to the point and explains the partitioning operation of the quick sort algorithm very thoroughly.

Once you have the partitioning part down, the rest is just figuring out how to implement this operation recursively. This is why I strongly emphasize that you should have a solid working knowledge of recursion.

I will say it now: If you are serious about becoming a software engineer or a back-end developer, **learn recursion**. It teaches you a new way of thinking when solving problems. The more tools you have under your belt, the more successful you will be as a software engineer.

The worst case happens when you are trying to **sort an already sorted list** and you use the **last element as the pivot**. Let’s say we have the following list.

`[1, 2, 3, 4, 5, 6, 7, 8]`

Let’s take a look at each step. The pivot is 8. After the first partition operation, we have the following result.

Step #1: [1,2,3,4,5,6,7] [8].

Step #2: [1,2,3,4,5,6][7]

Step #3: [1,2,3,4,5][6]

Step #4: [1,2,3,4][5]

Step #5: [1,2,3][4]

Step #6: [1,2][3]

Step #7: [1][2]

As you can see, **the partition never splits the list**; every pass peels off just one element.

Therefore, we make roughly n partition passes of O(n) work each, thus making the worst case scenario O(n^2).
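To see the quadratic blow-up concretely, here is a sketch that instruments the in-place quick sort with a comparison counter (the counter object is my addition, purely for illustration):

```javascript
// Quick sort that counts element-vs-pivot comparisons. On an already
// sorted list with a last-element pivot, every partition peels off just
// one element, so the count grows as n(n - 1) / 2.
function countComparisons(list, low, high, counter) {
  if (low >= high) return;
  var pivot = list[high];
  var wall = low - 1;
  for (var i = low; i < high; i++) {
    counter.count++; // one comparison per loop step
    if (list[i] < pivot) {
      wall++;
      var t = list[wall]; list[wall] = list[i]; list[i] = t;
    }
  }
  var t2 = list[wall + 1]; list[wall + 1] = list[high]; list[high] = t2;
  countComparisons(list, low, wall, counter);      // left of the pivot
  countComparisons(list, wall + 2, high, counter); // right of the pivot
}

var sorted = [1, 2, 3, 4, 5, 6, 7, 8];
var counter = { count: 0 };
countComparisons(sorted, 0, sorted.length - 1, counter);
// For n = 8 sorted elements: 7 + 6 + ... + 1 = 28 comparisons, i.e. n(n - 1) / 2.
```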

If you have been carefully following along, you would have noticed that we always make an unnecessary swap in the algorithm.

This unnecessary swap **occurs on step 1**. If the item is smaller than the pivot, it always makes a swap with itself.

If your first element happens to be greater than the pivot, this swap does not occur, but then you just got lucky.

Fortunately for us, there is a good way to avoid it.

Let’s take a look at the pseudo code for the `PartitionList()` method.

```
begin PartitionList(list, lowIndex, highIndex):
    pivot = last element in list
    wallIndex = lowIndex - 1
    currentIndex = lowIndex
```

Here, if we change the starting `wallIndex` to `lowIndex` and `currentIndex` to `lowIndex + 1`, we will skip the unnecessary swap on the first element. Note, however, that this shortcut is only safe when the first element is smaller than the pivot; otherwise, starting the wall at `lowIndex` incorrectly treats the first element as if it belonged to the left list.

The pseudo code will change to something like this.

```
wallIndex = lowIndex
currentIndex = lowIndex + 1
```

By applying the aforementioned change, we skip the unnecessary and wasteful self-swap on the first iteration of the partitioning method, making your quick sort algorithm slightly more efficient.
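A safer alternative, regardless of what the first element is, keeps the standard starting indexes and simply guards against swapping an element with itself (this guard is my own addition, not part of the original pseudo code):

```javascript
// Lomuto-style partition that skips swaps where both indexes are equal.
// Behaves identically to the unguarded version, minus the self-swaps.
function partitionNoSelfSwap(list, lowIndex, highIndex) {
  var pivot = list[highIndex];
  var wallIndex = lowIndex - 1;
  for (var currentIndex = lowIndex; currentIndex < highIndex; currentIndex++) {
    if (list[currentIndex] < pivot) {
      wallIndex++;
      if (wallIndex !== currentIndex) { // guard: never swap an item with itself
        var temp = list[wallIndex];
        list[wallIndex] = list[currentIndex];
        list[currentIndex] = temp;
      }
    }
  }
  var t = list[wallIndex + 1];
  list[wallIndex + 1] = list[highIndex];
  list[highIndex] = t;
  return wallIndex + 1;
}
```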

Yes, we can do even better than what we have done up until now. Because the location of the pivot affects the performance of the algorithm, we can hedge our bets by choosing the pivot more carefully.

For choosing the right pivot for our quick sort algorithm, we can employ the following techniques.

One easy way is to choose the pivot at random. As you can see, this is just like rolling a die, but in many cases, this approach of choosing a pivot at random has worked surprisingly well.
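One common way to sketch the random-pivot idea (function names are my own): swap a randomly chosen element into the last slot, then run the usual last-element partition.

```javascript
// Randomized pivot: move a random element into the pivot slot, then run
// the usual Lomuto partition with the last element as the pivot.
function randomizedPartition(list, lowIndex, highIndex) {
  var randomIndex = lowIndex +
      Math.floor(Math.random() * (highIndex - lowIndex + 1));
  var t = list[randomIndex]; // move the random pick into the pivot slot
  list[randomIndex] = list[highIndex];
  list[highIndex] = t;
  var pivot = list[highIndex];
  var wallIndex = lowIndex - 1;
  for (var i = lowIndex; i < highIndex; i++) {
    if (list[i] < pivot) {
      wallIndex++;
      var tmp = list[wallIndex]; list[wallIndex] = list[i]; list[i] = tmp;
    }
  }
  var t2 = list[wallIndex + 1];
  list[wallIndex + 1] = list[highIndex];
  list[highIndex] = t2;
  return wallIndex + 1;
}

function randomizedQuickSort(list, lowIndex, highIndex) {
  if (lowIndex < highIndex) {
    var p = randomizedPartition(list, lowIndex, highIndex);
    randomizedQuickSort(list, lowIndex, p - 1);
    randomizedQuickSort(list, p + 1, highIndex);
  }
}
```

With a random pivot, an already sorted input no longer reliably triggers the O(n^2) worst case.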

Note that the list of techniques will continue to be updated. I will be adding onto this tutorial to make the content more comprehensive.

In future posts, if time permits, I will write a separate post on some of the quick sort variations. Knowing these algorithms will help you, as a software developer, form and write algorithms that best suit the specific problem that you are facing.

Using the median of three method, we look at the first, middle and last elements of the list. For example, in the following list

`[3, 8, 6, 10, 7, 13, 25, 7, 5, 1]`

we choose 3, 7 and 1. When choosing the middle element of an even-length list, we can round the middle index either up or down. In other words, depending on how we implement the algorithm, the middle value in this example could be either 7 or 13.

For this example, we will have the middle item be 7. In the **median of three** technique, we then choose the **median of the three sampled values** as the pivot (here, the median of 3, 7 and 1 is 3), under the **assumption** that it is close to the median of the entire list.
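The sampling step can be sketched like this (the helper name is my own; it returns the index holding the median of the first, middle and last values):

```javascript
// Median-of-three: sample the first, middle and last elements and return
// the index whose value is the median of the three samples.
function medianOfThreeIndex(list, lowIndex, highIndex) {
  var midIndex = Math.floor((lowIndex + highIndex) / 2); // rounds down for even lengths
  var a = list[lowIndex], b = list[midIndex], c = list[highIndex];
  if ((a <= b && b <= c) || (c <= b && b <= a)) return midIndex;
  if ((b <= a && a <= c) || (c <= a && a <= b)) return lowIndex;
  return highIndex;
}
```

On `[3, 8, 6, 10, 7, 13, 25, 7, 5, 1]` the samples are 3 (first), 7 (middle, rounding down) and 1 (last); the median of the three is 3, so the index of the first element is returned.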

I will update this post in the near future to contain the source code for implementing this approach.

The source code will be made available in the following languages. Click the tabs below to view the source right away, or the links below to view the source on GitHub.

Here, we will keep the concept of the wall when partitioning the list. This is in line with the visual step-by-step diagram, with **one difference**.

In the diagram, I added the pivot to the right of the wall (or `rightList`) as the final step. Here, I have added it to the right list before starting the partition operation.

Why? Because arrays are NOT dynamic data structures. In JavaScript, we can just use the `unshift(item)` method to add it to the front, but in other languages, we would potentially have to grow the array, which takes O(n) time.

So, as a challenge, I highly recommend that you try implementing the swapping version of the quick sort algorithm that does not use a `leftList` and `rightList`. It might be challenging at the start, but by the end of it, you will never forget how QuickSort works even if you tried to get it out of your head.

We already went through all the theory, so in time, through trial and error, you should arrive at the solution.

Best of luck and I hope to see you again 🙂

Here is the basic, naive implementation.

```
/*
 * The quick sort implementation using an imaginary wall.
 * The implementation details that were shown with the diagram above.
 * Note that the wall is used as a visual guide to help programmers learn
 * how quick sort works.
 *
 * The pivot is the most important concept in the quick sort.
 */
function quickSortWithWall(list) {
  // Base case
  if (list.length <= 1) {
    return list;
  }
  return partitionList(list);
}

/**
 * Partitioning operation here.
 */
function partitionList(list) {
  var len = list.length;
  var leftList = [];
  var rightList = [];
  var pivot = list[len - 1]; // Last element
  // Indexes used for wall partitioning
  var currentIndex = 0,
      wallIndex = 1; // Going to add pivot to the rightList in advance, so start at index 1
  // Add pivot as the first element in the right list pre-emptively
  rightList[0] = pivot;
  for (var i = 0; i < len - 1; i++) {
    var currentItem = list[i];
    // Bug fix: use <= so duplicates of the pivot go left; with a strict <,
    // a list of equal items would recurse forever.
    if (currentItem <= pivot) {
      leftList[currentIndex] = currentItem;
      currentIndex++;
    } else {
      rightList[wallIndex] = currentItem;
      wallIndex++;
    }
  }
  // Recursively call quick sort with partitioned lists. Divide and conquer YEAH!
  return quickSortWithWall(leftList).concat(quickSortWithWall(rightList));
}
```

Later on, I want to show you the quick sort algorithm without using the “wall”, because the wall is just an illustration to demonstrate where the pivot will be placed (either to the right or left of the wall, depending on personal preference).

Step by step, I want you, as the reader, to be able to conceptualize how the quick sort algorithm works so that you can make your own adjustments when required.

```
function quickSort(list) {
  var len = list.length;
  if (len <= 1) {
    return list;
  }
  // Partitioning operation
  var leftList = [];
  var rightList = [];
  var pivot = list[len - 1]; // Last element
  for (var i = 0; i < len - 1; i++) {
    var currentItem = list[i];
    // Add to the left list if the current item is less than (or equal to)
    // the pivot. Using <= keeps lists containing duplicates of the pivot
    // from recursing forever.
    if (currentItem <= pivot) {
      leftList.push(currentItem);
    } else {
      rightList.push(currentItem);
    }
  }
  // Add pivot to front of right array
  rightList.unshift(pivot);
  return quickSort(leftList).concat(quickSort(rightList));
}
```

As you may have already deduced, we are using two additional lists to store the left and right items. Here is a more optimized approach that makes use of swaps to remove the need for a `leftList` and `rightList`.

```
function quickSortOptimized(list, lowIndex, highIndex) {
  if (lowIndex < highIndex) {
    // Get the pivot index after the partition operation (which also makes the swaps needed).
    var currentPartitionIndex = partitionList(list, lowIndex, highIndex);
    // E.g. on the first iteration: low index 0, high index 4 - 1.
    quickSortOptimized(list, lowIndex, currentPartitionIndex - 1);
    // E.g. on the first iteration: low index 4 + 1, high index 7. We are leaving out the index of the pivot.
    quickSortOptimized(list, currentPartitionIndex + 1, highIndex);
  }
}

function partitionList(list, lowIndex, highIndex) {
  // Indexes used for wall partitioning
  var wallIndex = lowIndex - 1;
  var pivot = list[highIndex]; // Last element
  // Optimization hint: starting wallIndex at lowIndex and currentIndex at
  // lowIndex + 1 skips the self-swap at the start, but only when the first
  // element is smaller than the pivot.
  for (var currentIndex = lowIndex; currentIndex < highIndex; currentIndex++) {
    var currentItem = list[currentIndex];
    if (currentItem < pivot) {
      // Increment wall index
      wallIndex++;
      // Swap item at wall index with item at current index
      var temp = list[wallIndex];
      list[wallIndex] = list[currentIndex];
      list[currentIndex] = temp;
    }
  }
  // Swap pivot with item immediately to the right of the wall
  var temp = list[wallIndex + 1];
  list[wallIndex + 1] = list[highIndex];
  list[highIndex] = temp;
  return wallIndex + 1; // Return pivot index
}

function quickSort(list) {
  quickSortOptimized(list, 0, list.length - 1);
}

var item = [3, 5, 9, 83, 123, 18, 8, 10];
quickSort(item);
console.log(item); // [3, 5, 8, 9, 10, 18, 83, 123]
```

Here is the basic quick sort that we have been implementing to get the general gist of what the wall and pivot represent. Needless to say, this implementation of the quick sort leaves much to be desired, especially in terms of performance (both time and memory complexity wise).

However, it is a great way to get acquainted with how QuickSort works (especially understanding the role of the pivot in the quick sort algorithm).

```
static <T extends Comparable<T>> List<T> quickSort(List<T> list) {
    int len = list.size();
    if (len <= 1) {
        return list;
    }
    // Partition logic here. Feel free to make this polymorphic.
    // For this tutorial, we will make array lists.
    List<T> leftList = new ArrayList<>();
    List<T> rightList = new ArrayList<>();
    // No need for currentIndex and wallIndex since we are pushing into the left or right list.
    T pivot = list.get(len - 1);
    // At the start, add the pivot to the right list so that it is to the right of the wall.
    rightList.add(pivot);
    for (int i = 0; i < len - 1; i++) {
        T currentItem = list.get(i);
        // Move to the left side of the wall. Using <= 0 keeps lists containing
        // duplicates of the pivot from recursing forever.
        if (currentItem.compareTo(pivot) <= 0) {
            leftList.add(currentItem);
        } else {
            rightList.add(currentItem);
        }
    }
    // Combine the partitioned left and right list.
    List<T> combinedList = new ArrayList<>(quickSort(leftList));
    combinedList.addAll(quickSort(rightList));
    return combinedList;
}
```

**Sorts.java.**

The version below implements the index swapping approach that we talked about in the section: **QuickSort revised**.

```
public class Sorts {
    static <T extends Comparable<T>> void quickSort(Comparable<T>[] list) {
        quickSortImpl(list, 0, list.length - 1);
    }

    private static <T extends Comparable<T>> void quickSortImpl(Comparable<T>[] list, int lowIndex, int highIndex) {
        if (lowIndex < highIndex) {
            // Get the pivot index after the partition operation (which also makes the swaps needed).
            int currentPartitionIndex = partitionList(list, lowIndex, highIndex);
            // E.g. on the first iteration: low index 0, high index 4 - 1.
            quickSortImpl(list, lowIndex, currentPartitionIndex - 1);
            // E.g. on the first iteration: low index 4 + 1, high index 7. We are leaving out the index of the pivot.
            quickSortImpl(list, currentPartitionIndex + 1, highIndex);
        }
    }

    private static <T extends Comparable<T>> int partitionList(Comparable<T>[] list, int lowIndex, int highIndex) {
        T pivot = (T) list[highIndex];
        int wallIndex = lowIndex - 1; // Index of the smaller element. On the first iteration, it is minus one.
        // As you saw in the image, the first element is swapped with itself:
        // in the first iteration of the loop, wallIndex becomes 0 and currentIndex is 0 as well.
        /*
         * Optimization hint: starting wallIndex at lowIndex and currentIndex at lowIndex + 1
         * skips that self-swap, but only when the first element is smaller than the pivot.
         */
        for (int currentIndex = lowIndex; currentIndex < highIndex; currentIndex++) {
            // If currentItem is less than the pivot
            T currentItem = (T) list[currentIndex];
            if (currentItem.compareTo(pivot) < 0) {
                // Increment wall index
                wallIndex++;
                // Swap
                T temp = (T) list[wallIndex];
                list[wallIndex] = list[currentIndex];
                list[currentIndex] = temp;
            }
        }
        // Swap pivot with the item immediately to the right of the wall
        T temp = (T) list[wallIndex + 1];
        list[wallIndex + 1] = list[highIndex];
        list[highIndex] = temp;
        return wallIndex + 1;
    }
}
```

**App.java**

```
public static void main(String[] args) {
    Integer[] arr = {1, 45, 2, 3, 82, 7, 10, 9};
    Sorts.quickSort(arr);
    System.out.println("----------------------------");
    System.out.println("Iterating through the SORTED list ...");
    for (Integer i : arr) {
        System.out.println(i);
    }
}
```

App.java is pretty self-explanatory if you ask me: just a simple main method that applies our quick sort algorithm to a sample list, then iterates over the list and prints the results.

Next come the C++ versions, which mirror the two Java approaches. First, the simple implementation that partitions into separate left and right vectors:

```
std::vector<int> QuickSortImpl::Sort(std::vector<int> list)
{
    int size = list.size();
    // Base case
    if (size <= 1)
    {
        return list;
    }
    // Partition operation
    std::vector<int> leftList;
    std::vector<int> rightList;
    // Set pivot as the last element
    int pivot = list.back();
    // Insert pivot as the first element of rightList
    rightList.push_back(pivot);
    for (int i = 0; i < size - 1; i++)
    {
        int currentElement = list[i];
        // Use <= so duplicates of the pivot go left; with a strict <,
        // rightList would never shrink and duplicates would recurse forever
        if (currentElement <= pivot)
        {
            leftList.push_back(currentElement);
        }
        else
        {
            rightList.push_back(currentElement);
        }
    }
    // Sort each partition recursively
    leftList = Sort(leftList);
    rightList = Sort(rightList);
    // Combine the leftList and rightList
    leftList.insert(leftList.end(), rightList.begin(), rightList.end());
    // Return the result
    return leftList;
}
```

And the in-place, index-swapping version, which recurses on index ranges instead of copying sublists:

```
/**
 * \brief Recursively run quick sort until we reach the base case where the index ranges overlap
 * \param list the vector being sorted in place
 * \param lowIndex lower bound of the range to sort
 * \param highIndex upper bound of the range to sort
 */
void QuickSortImpl::SortEfficientImpl(std::vector<int> &list, int lowIndex, int highIndex)
{
    if (lowIndex < highIndex)
    {
        int pivotIndex = PartitionList(list, lowIndex, highIndex);
        std::cout << "pivot index: " << pivotIndex << std::endl;
        // Recursively sort the left partition
        SortEfficientImpl(list, lowIndex, pivotIndex - 1);
        // Then the right partition
        SortEfficientImpl(list, pivotIndex + 1, highIndex);
    }
}

/**
 * \brief Partition the list and return the index of the pivot.
 * \param list the vector being partitioned in place
 * \param lowIndex lower bound of the range to partition
 * \param highIndex upper bound of the range; the pivot lives at this index
 * \return the final index of the pivot
 */
int QuickSortImpl::PartitionList(std::vector<int> &list, int lowIndex, int highIndex)
{
    // The pivot is the last element of the current range. Note that
    // list.back() would be wrong here: recursive calls partition
    // sub-ranges that do not end at the last element of the vector.
    int pivot = list[highIndex];
    // wallIndex starts one slot to the left of the range
    int wallIndex = lowIndex - 1;
    for (int currentIndex = lowIndex; currentIndex < highIndex; currentIndex++)
    {
        int currentItem = list[currentIndex];
        if (currentItem < pivot)
        {
            // Increment the wall index
            wallIndex++;
            // Swap the item at the wall index with the item at the current index
            std::iter_swap(list.begin() + currentIndex, list.begin() + wallIndex);
        }
    }
    // Swap the pivot with the item immediately to the right of the wall
    std::iter_swap(list.begin() + (wallIndex + 1), list.begin() + highIndex);
    return wallIndex + 1; // Return the pivot index
}
```

Finally, the Python versions. The simple left/right-list approach first:

```
test_list = [3, 8, 123, 9, 28, 172, 40, 8, 27, 13]

def quicksort(list_of_nums):
    # Base case
    if len(list_of_nums) <= 1:
        return list_of_nums
    pivot = list_of_nums[-1]
    left_list = []
    # Seed the right list with the pivot
    right_list = [pivot]
    for current_item in list_of_nums[:-1]:
        if current_item <= pivot:
            left_list.append(current_item)
        else:
            right_list.append(current_item)
    # Partition the sublists recursively and return the concatenated result
    return quicksort(left_list) + quicksort(right_list)

new_list = quicksort(test_list)
for num in new_list:
    print(num)
```
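A quick way to gain confidence in the function is to check it against Python's built-in `sorted` on random inputs, including duplicates and the empty list. The sketch below repeats the `quicksort` definition from the listing above so it runs standalone; the check itself is my addition, not part of the original article.

```python
import random

# The simple quicksort from the listing above, repeated so this
# snippet is self-contained
def quicksort(list_of_nums):
    if len(list_of_nums) <= 1:
        return list_of_nums
    pivot = list_of_nums[-1]
    left_list = []
    right_list = [pivot]
    for current_item in list_of_nums[:-1]:
        if current_item <= pivot:
            left_list.append(current_item)
        else:
            right_list.append(current_item)
    return quicksort(left_list) + quicksort(right_list)

# Compare against Python's built-in sorted() on random inputs,
# including lists with duplicates and the empty list
for _ in range(100):
    data = [random.randint(0, 50) for _ in range(random.randint(0, 30))]
    assert quicksort(data) == sorted(data)
print("quicksort agrees with sorted() on 100 random lists")
```

Because duplicates of the pivot are sent to the left list (`<=`), lists made up of repeated values terminate correctly rather than recursing forever.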

And the in-place, index-swapping version:

```
# Function that is called to run quick sort
def quicksortefficient(list_of_items):
    quicksortimpl(list_of_items, 0, len(list_of_items) - 1)

# Function that runs the partition logic
def partitionlist(list_of_items, low_index, high_index):
    # wall_index starts one slot to the left of the range
    wall_index = low_index - 1
    pivot = list_of_items[high_index]
    for current_index in range(low_index, high_index):
        current_item = list_of_items[current_index]
        if current_item <= pivot:
            # Increment the wall index
            wall_index += 1
            # Swap the item at the wall index with the item at current_index
            temp = list_of_items[wall_index]
            list_of_items[wall_index] = list_of_items[current_index]
            list_of_items[current_index] = temp
    # Lastly, swap the pivot with the item immediately to the right of the wall
    temp = list_of_items[wall_index + 1]
    list_of_items[wall_index + 1] = list_of_items[high_index]
    list_of_items[high_index] = temp
    return wall_index + 1

# Function that recursively calls itself until the base case is reached
def quicksortimpl(list_of_items, low_index, high_index):
    if low_index < high_index:
        pivot_index = partitionlist(list_of_items, low_index, high_index)
        quicksortimpl(list_of_items, low_index, pivot_index - 1)
        quicksortimpl(list_of_items, pivot_index + 1, high_index)

print("quick sort improved ----------")
quicksortefficient(test_list)
for item in test_list:
    print(item)
```
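Picking the last element as the pivot is what triggers the O(n^2) worst case on already-sorted input. A common mitigation is to swap a randomly chosen element into the pivot slot before partitioning, so no fixed input can reliably produce bad splits. The sketch below is my own illustration of that idea, not the article's code; the names `quicksort_random` and `random_partition` are hypothetical, but the partition body is the same Lomuto scheme used above.

```python
import random

def quicksort_random(items, low_index, high_index):
    # Same recursion shape as quicksortimpl above
    if low_index < high_index:
        pivot_index = random_partition(items, low_index, high_index)
        quicksort_random(items, low_index, pivot_index - 1)
        quicksort_random(items, pivot_index + 1, high_index)

def random_partition(items, low_index, high_index):
    # Swap a randomly chosen element into the pivot slot (the last
    # position of the range), then partition exactly as before
    rand_index = random.randint(low_index, high_index)
    items[rand_index], items[high_index] = items[high_index], items[rand_index]
    pivot = items[high_index]
    wall_index = low_index - 1
    for current_index in range(low_index, high_index):
        if items[current_index] < pivot:
            wall_index += 1
            items[wall_index], items[current_index] = \
                items[current_index], items[wall_index]
    items[wall_index + 1], items[high_index] = \
        items[high_index], items[wall_index + 1]
    return wall_index + 1

# An already-sorted list no longer hits the worst case deterministically
nums = list(range(10))
quicksort_random(nums, 0, len(nums) - 1)
print(nums)  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With a random pivot the expected running time is O(n log n) for every input; the O(n^2) worst case still exists, but no particular input (such as a sorted list) triggers it every time.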
