MAKAUT BCA DSA Previous Year Questions (PYQ) with Solutions
This page provides a comprehensive collection of previous year questions (PYQ) from MAKAUT's BCA Data Structures and Algorithms (DSA) exams. Many questions are frequently repeated, making this resource essential for exam preparation. It includes key theory questions with detailed answers that can be easily memorized or conceptually understood. Whether you're revising for upcoming exams or looking for solved papers, this resource is designed to help you excel in your DSA subject.
Q1. Write a short note on priority queue?
Ans: A priority queue is a special type of data structure where each element has a priority. Elements with higher priority are processed first, even if they were added later. If two elements have the same priority, they are processed in the order they were added.
Key Points:
- Priority-Based Order: Elements are processed based on their priority, not just when they were added. Higher-priority elements get processed first.
- Heap Implementation: A priority queue is usually built using a heap because it efficiently maintains the order.
Main Operations:
- Enqueue (Insertion): Adds an element according to its priority.
- Dequeue (Deletion): Removes the element with the highest priority.
- Peek: Shows the element with the highest priority without removing it.
Applications:
- CPU Scheduling: Important tasks are processed before less important ones.
- Pathfinding Algorithms: Algorithms like Dijkstra’s use a priority queue to find the shortest path in a graph.
- Event Simulation: Events are processed based on their priority, like in a timeline or simulation.
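A minimal sketch of a max-priority queue in C using an array-based binary heap is shown below (the fixed capacity and the function names pq_insert and pq_extract_max are assumptions of this illustration, not part of the question):
#include <stdio.h>
#define CAPACITY 100              /* assumed fixed capacity for this sketch */
int heap[CAPACITY];
int heapSize = 0;
/* Insert a value and bubble it up until the max-heap property holds */
void pq_insert(int value) {
    int i = heapSize++;
    heap[i] = value;
    while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
        int tmp = heap[i];
        heap[i] = heap[(i - 1) / 2];
        heap[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}
/* Remove and return the highest-priority (largest) value */
int pq_extract_max(void) {
    int max = heap[0];
    heap[0] = heap[--heapSize];
    int i = 0;
    while (1) {
        int largest = i, l = 2 * i + 1, r = 2 * i + 2;
        if (l < heapSize && heap[l] > heap[largest]) largest = l;
        if (r < heapSize && heap[r] > heap[largest]) largest = r;
        if (largest == i) break;
        int tmp = heap[i]; heap[i] = heap[largest]; heap[largest] = tmp;
        i = largest;
    }
    return max;
}
int main(void) {
    pq_insert(20); pq_insert(50); pq_insert(10);
    printf("%d\n", pq_extract_max());   /* prints 50 first, despite insertion order */
    return 0;
}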
Q2. Write the structure of a node for linked implementation of a polynomial.
Ans: A node in a linked list representing a polynomial typically contains three components:
- Coefficient: This stores the numerical value associated with the term.
- Exponent: This stores the power of the variable for the term.
- Next: This is a pointer to the next node in the linked list representing the next term of the polynomial.
Structure of a Node in C/C++:
struct Node {
    int coefficient;     // Stores the coefficient
    int exponent;        // Stores the exponent
    struct Node* next;   // Points to the next term in the list
};
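As a brief illustration (the createTerm helper and the sample polynomial 3x^2 + 5 are assumptions of this sketch, with the node structure repeated so the snippet compiles on its own):
#include <stdio.h>
#include <stdlib.h>
struct Node {
    int coefficient;
    int exponent;
    struct Node* next;
};
/* Allocate a single term node (hypothetical helper for this sketch) */
struct Node* createTerm(int coeff, int exp) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->coefficient = coeff;
    node->exponent = exp;
    node->next = NULL;
    return node;
}
int main(void) {
    /* Represent 3x^2 + 5 as a linked list of two terms */
    struct Node* poly = createTerm(3, 2);
    poly->next = createTerm(5, 0);
    for (struct Node* p = poly; p != NULL; p = p->next)
        printf("%dx^%d ", p->coefficient, p->exponent);
    printf("\n");
    return 0;
}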
Q3. What is hashing? How is the collision problem solved in hashing?
Ans: Hashing is a technique used to convert a large set of data (like a string or number) into a fixed-size value, called a hash code or hash value. It is primarily used in data structures like hash tables to quickly locate data. The process involves using a hash function that takes an input (or key) and returns an index, which tells us where to store or retrieve the data.
Example:
Imagine you want to store student records using their roll numbers. Instead of searching through all records one by one, you can use hashing. The roll number is passed through a hash function, which directly tells you where the student’s data is stored.
Collision Problem in Hashing
A collision happens when two different keys generate the same hash value, meaning they would be stored at the same index in the hash table. Since each index is meant to hold only one element, this creates a problem.
Following are the ways to solve the collision problem in hashing:
Chaining: In chaining, each index of the hash table points to a linked list (or a chain) of elements. If a collision occurs, the new element is simply added to the linked list at that index. This way, multiple elements can be stored at the same index.
Example:
If both roll numbers 12 and 22 produce the same hash index (say 2), they will be stored in a linked list at that index.
Open Addressing: In open addressing, when a collision happens, the algorithm searches for the next available index in the hash table according to a certain sequence until it finds an empty slot. Some common strategies include:
Linear Probing: Search the next index (i.e., index + 1) until an empty slot is found.
Quadratic Probing: Search in steps like 1², 2², 3², and so on.
Double Hashing: Apply a second hash function to determine the step size for finding the next index.
Rehashing: If collisions become frequent, the hash table is resized, and all elements are rehashed into the new table. This reduces the number of collisions by distributing elements more evenly.
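A minimal sketch of collision handling by chaining in C (the table size of 10 and the simple key % TABLE_SIZE hash function are assumptions for illustration):
#include <stdio.h>
#include <stdlib.h>
#define TABLE_SIZE 10   /* assumed table size for this sketch */
struct Entry {
    int key;
    struct Entry* next;   /* chain of keys that hash to the same index */
};
struct Entry* table[TABLE_SIZE];
int hashIndex(int key) {
    return key % TABLE_SIZE;
}
/* Insert at the head of the chain for the computed index */
void insert(int key) {
    int index = hashIndex(key);
    struct Entry* e = (struct Entry*)malloc(sizeof(struct Entry));
    e->key = key;
    e->next = table[index];
    table[index] = e;
}
/* Search the chain at the computed index */
int search(int key) {
    for (struct Entry* e = table[hashIndex(key)]; e != NULL; e = e->next)
        if (e->key == key) return 1;
    return 0;
}
int main(void) {
    insert(12);
    insert(22);   /* 12 and 22 both hash to index 2, so they share one chain */
    printf("%d %d\n", search(22), search(7));   /* prints 1 0 */
    return 0;
}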
Q4. What is recursion? How does it differ from iteration?
Ans: Recursion is a programming method where a function calls itself to solve a problem. It breaks a large problem into smaller, similar subproblems until it reaches a simple case that can be solved directly. This simple case is called the base case. Recursion is often used for problems like calculating factorials, finding Fibonacci numbers, or working with tree structures.
Difference Between Recursion and Iteration:
| Feature | Recursion | Iteration |
| --- | --- | --- |
| Definition | A function calls itself repeatedly to solve smaller parts of the problem. | Repeating a set of instructions using loops like for or while. |
| End Condition | Stops when it reaches a base case (a simple solution). | Stops when the loop condition becomes false. |
| Memory Usage | Uses more memory because each function call is stored on the call stack. | Uses less memory since it only uses loop variables. |
| Code Structure | Can be shorter and cleaner for complex problems like tree traversal. | Usually longer but easier to understand for simple tasks like counting or searching. |
| Speed | Slower due to multiple function calls and stack usage. | Faster because it doesn't involve extra function calls. |
| Examples | Calculating factorial, finding Fibonacci numbers, tree and graph traversals. | Iterating over an array, counting, or simple loops. |
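For example, a factorial can be written both ways; the sketch below contrasts the two (function names are illustrative):
#include <stdio.h>
/* Recursion: the function calls itself until the base case n <= 1 */
long factorial_recursive(int n) {
    if (n <= 1) return 1;               /* base case */
    return n * factorial_recursive(n - 1);
}
/* Iteration: a loop accumulates the product using only loop variables */
long factorial_iterative(int n) {
    long result = 1;
    for (int i = 2; i <= n; i++)
        result *= i;
    return result;
}
int main(void) {
    printf("%ld %ld\n", factorial_recursive(5), factorial_iterative(5));  /* 120 120 */
    return 0;
}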
Q5. Write short notes on sparse matrices?
Ans: A sparse matrix is a matrix where most of the elements are zero. These matrices are common in areas like computer science, engineering, and data analysis. Since most elements are zero, storing all the elements (including zeros) wastes memory.
Key Points:
- Efficient Storage: Sparse matrices store only the non-zero elements, saving space.
- Types of Sparse Matrices:
- Diagonal Matrix: Only the diagonal has non-zero elements.
- Triangular Matrix: Non-zero elements are only in the upper or lower triangle of the matrix.
- Storage Methods:
- COO (Coordinate List): Stores non-zero values along with their row and column indices.
- CSR (Compressed Sparse Row): Stores non-zero elements row-wise.
- CSC (Compressed Sparse Column): Stores non-zero elements column-wise.
Applications:
- Graph Algorithms: Representing connections in a graph.
- Image Processing: Handling large image data with mostly blank areas.
- Machine Learning: Managing data with many missing or zero values.
Sparse matrices help save memory and improve efficiency when dealing with large datasets that have mostly zero values. Special storage methods are used to make them more manageable.
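For illustration, a minimal COO (coordinate list) sketch in C might look like this (the sample 4x4 matrix and its three non-zero entries are assumed values):
#include <stdio.h>
/* One non-zero element stored as (row, column, value) */
struct Element {
    int row;
    int col;
    int value;
};
int main(void) {
    /* Assume a 4x4 matrix whose only non-zero entries are
       A[0][2] = 3, A[1][1] = 7, A[3][0] = 5 */
    struct Element coo[] = {
        {0, 2, 3},
        {1, 1, 7},
        {3, 0, 5}
    };
    int nonZeroCount = 3;
    /* Only 3 triplets are stored instead of all 16 matrix cells */
    for (int i = 0; i < nonZeroCount; i++)
        printf("A[%d][%d] = %d\n", coo[i].row, coo[i].col, coo[i].value);
    return 0;
}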
Q6. Write a non-recursive function to traverse a tree using inorder traversal?
Ans: The following function uses an explicit stack in place of recursion. Please read the explanation of the code for better understanding:
Node Structure:
- The struct Node defines each node in the tree, holding the data and pointers to the left and right child nodes.
Stack Structure:
- The struct Stack is used to simulate the call stack of recursion. It stores tree nodes as we traverse the tree.
Stack Operations:
- push: Adds a tree node to the stack.
- pop: Removes and returns the top node from the stack.
- isEmpty: Checks if the stack is empty.
Inorder Traversal Logic:
- Start from the root and move left, pushing each node onto the stack until you reach the leftmost node.
- Pop the top node from the stack, print its data (in order visit), and then move to its right child.
- Repeat this until all nodes are visited.
Helper Functions (Optional):
- createNode is used to quickly build the tree for testing. You can skip this in exams if needed.
#include <stdio.h>
#include <stdlib.h>
struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};
struct Stack {
    struct Node* data;
    struct Stack* next;
};
// Stack operations
void push(struct Stack** top, struct Node* node) {
    struct Stack* newNode = (struct Stack*)malloc(sizeof(struct Stack));
    newNode->data = node;
    newNode->next = *top;
    *top = newNode;
}
struct Node* pop(struct Stack** top) {
    struct Stack* temp = *top;
    struct Node* node = temp->data;
    *top = (*top)->next;
    free(temp);
    return node;
}
int isEmpty(struct Stack* top) {
    return top == NULL;
}
void inorder(struct Node* root) {
    struct Stack* stack = NULL;
    struct Node* curr = root;
    while (curr != NULL || !isEmpty(stack)) {
        // Go as far left as possible, saving each node on the stack
        while (curr != NULL) {
            push(&stack, curr);
            curr = curr->left;
        }
        // Visit the node, then move to its right subtree
        curr = pop(&stack);
        printf("%d ", curr->data);
        curr = curr->right;
    }
}
// Helper function to create nodes (optional in exams)
struct Node* createNode(int data) {
    struct Node* newNode = (struct Node*)malloc(sizeof(struct Node));
    newNode->data = data;
    newNode->left = NULL;
    newNode->right = NULL;
    return newNode;
}
int main() {
    struct Node* root = createNode(1);
    root->left = createNode(2);
    root->right = createNode(3);
    root->left->left = createNode(4);
    root->left->right = createNode(5);
    inorder(root); // Expected Output: 4 2 5 1 3
    return 0;
}
Q7. What are the advantages of linked lists over arrays?
Ans: Advantages of Linked Lists Over Arrays:
- Dynamic Size:
- Linked lists can grow or shrink dynamically. In arrays, the size is fixed and must be known in advance.
- Efficient Insertion and Deletion:
- In linked lists, inserting or deleting nodes is easier and faster. You just need to update the pointers. In arrays, shifting elements is required, which takes more time.
- No Wasted Memory:
- Linked lists do not need a pre-allocated memory block like arrays. Memory is allocated as needed, reducing wastage.
- Flexibility:
- Elements in linked lists are not stored in contiguous memory locations, which provides more flexibility in memory management.
- Ease of Implementation for Complex Data Structures:
- Linked lists are the building blocks for implementing advanced data structures like stacks, queues, and graphs.
Summary:
Linked lists offer better flexibility and memory efficiency for dynamic operations, while arrays are better for indexed access and constant-time lookup.
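For instance, the "efficient insertion and deletion" point above comes down to updating two pointers, as in this small sketch (insertAfter is an assumed helper name, not a standard function):
#include <stdio.h>
#include <stdlib.h>
struct Node {
    int data;
    struct Node* next;
};
/* Insert a new node right after 'prev' by updating two pointers only */
void insertAfter(struct Node* prev, int value) {
    struct Node* node = (struct Node*)malloc(sizeof(struct Node));
    node->data = value;
    node->next = prev->next;   /* new node points to the old successor */
    prev->next = node;         /* previous node now points to the new node */
}
int main(void) {
    struct Node head = {1, NULL};
    insertAfter(&head, 3);     /* list: 1 -> 3 */
    insertAfter(&head, 2);     /* list: 1 -> 2 -> 3, no elements were shifted */
    for (struct Node* p = &head; p != NULL; p = p->next)
        printf("%d ", p->data);
    printf("\n");
    return 0;
}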
Q8. What are the disadvantages of linked lists over arrays?
Ans: Disadvantages of Linked Lists Over Arrays:
- No Direct Access:
- In linked lists, accessing elements is sequential (linear search). You must traverse from the head to reach any element, making access slower compared to arrays, which allow direct indexing.
- Extra Memory Usage:
- Each node in a linked list requires additional memory for storing pointers (next/previous), which increases memory usage compared to arrays.
- Cache Unfriendliness:
- Elements in linked lists are stored at scattered memory locations, making them less cache-friendly. In arrays, elements are stored contiguously, making better use of CPU cache.
- Complexity in Implementation:
- Operations like traversing, reversing, or sorting linked lists are more complex to implement compared to arrays.
- Higher Overhead for Simple Operations:
- For simple operations like searching or random access, linked lists are slower than arrays due to the need for pointer traversal.
Q9. Write a C function to implement PUSH and POP functions in a stack?
Ans: The program below demonstrates stack behaviour using the C++ STL std::stack; a plain C sketch of the PUSH and POP functions follows after it.
#include <iostream>
#include <stack>   // Include the stack library
// Driver code to test stack operations
int main() {
    std::stack<int> stack;   // Create a stack of integers
    // Push elements onto the stack
    stack.push(10);
    stack.push(20);
    stack.push(30);
    // Display the stack by printing and popping each element
    std::cout << "Stack elements:" << std::endl;
    while (!stack.empty()) {
        std::cout << stack.top() << " ";   // Print the top element
        stack.pop();                       // Remove the top element
    }
    std::cout << std::endl;
    // Push elements again for further testing
    stack.push(40);
    stack.push(50);
    // Pop elements from the stack
    if (!stack.empty()) {
        std::cout << "Popped: " << stack.top() << std::endl;
        stack.pop();
    }
    if (!stack.empty()) {
        std::cout << "Popped: " << stack.top() << std::endl;
        stack.pop();
    }
    return 0;
}
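Since the question asks for C functions, here is a minimal array-based sketch of PUSH and POP in C (the fixed size MAX and the global top variable are simplifying assumptions of this illustration):
#include <stdio.h>
#define MAX 100          /* assumed maximum stack size */
int stack[MAX];
int top = -1;            /* -1 means the stack is empty */
/* PUSH: add an element on top of the stack */
void push(int value) {
    if (top == MAX - 1) {
        printf("Stack overflow\n");
        return;
    }
    stack[++top] = value;
}
/* POP: remove and return the top element */
int pop(void) {
    if (top == -1) {
        printf("Stack underflow\n");
        return -1;
    }
    return stack[top--];
}
int main(void) {
    push(10);
    push(20);
    push(30);
    printf("Popped: %d\n", pop());   /* 30 */
    printf("Popped: %d\n", pop());   /* 20 */
    return 0;
}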
Q10. What is a B tree? What is the difference between a B tree and a B+tree?
Ans: A B-tree is a type of balanced tree used to keep data sorted and easily accessible. It’s great for managing large amounts of data, like in databases or files.
Key Features:
- Balanced: All leaf nodes are at the same level, so the tree is evenly balanced.
- Multi-Child Nodes: Each node can have multiple children, making it good for handling lots of data efficiently.
- Sorted Data: Keys in the nodes are kept in a sorted order, which helps in fast searching.
| Feature | B-Tree | B+ Tree |
| --- | --- | --- |
| Data Storage | Keys and values in all nodes | Keys in internal nodes, values only in leaf nodes |
| Leaf Node Linking | Leaf nodes are not linked | Leaf nodes are linked sequentially |
| Internal Nodes | Contain both keys and values | Contain only keys |
| Traversal | Can be less efficient for range queries | More efficient for range queries due to linked leaves |
| Range Queries | Less efficient, requires more effort | More efficient, as linked leaves allow for quick range traversal |
| Search | Search can be less efficient due to unlinked leaves | Search is efficient, especially for ordered data |
Summary
- B-Tree: Good for general use with balanced nodes storing both keys and values, but less efficient for range queries.
- B+ Tree: Optimised for range queries and ordered data with linked leaf nodes and keys stored in internal nodes only.
Q11. What is a deque? What is the advantage of a deque over an ordinary queue?
Ans: A deque (double-ended queue) is a data structure that allows you to add and remove elements from both the front and the back. This makes it more flexible than a regular queue.
Key Features:
- Two Ends: You can insert and remove elements from both the front and back of the deque.
Advantages of Deque Over an Ordinary Queue
- Bidirectional Operations:
- Deque: Allows insertion and deletion of elements from both the front and the back.
- Ordinary Queue: Only allows insertion at the back and removal from the front.
- Greater Flexibility:
- Deque: Can be used to implement both queues and stacks, making it versatile for different types of operations.
- Ordinary Queue: Limited to first-in-first-out (FIFO) operations.
- Efficient for Certain Tasks:
- Deque: Useful when you need to frequently add or remove elements from either end.
- Ordinary Queue: Less efficient for tasks that require modifications at both ends.
- Use Cases:
- Deque: Ideal for situations where you need quick access to both ends of the data structure, such as in certain algorithms or managing tasks.
- Ordinary Queue: Suitable for simple FIFO operations, like scheduling tasks or processing requests in order.
Q12. Write a short note on AVL tree?
Ans: An AVL tree is a type of binary search tree (BST) that stays balanced to keep operations fast. It’s named after its creators, Adelson-Velsky and Landis.
Key Features:
- Balanced Tree: The height difference between the left and right subtrees of any node is at most 1. This balance helps keep the tree’s height small.
- Efficient Operations: Because the tree is balanced, searching, adding, and removing items are done quickly, in O(log n) time.
Rotations:
To maintain balance, the tree uses rotations:
- Left Rotation: Used when the tree is too heavy on the right side.
- Right Rotation: Used when the tree is too heavy on the left side.
- Left-Right and Right-Left Rotations: Used for more complex imbalances.
Advantages:
- Fast Operations: Keeps operations like search, insert, and delete fast, with time complexity of O(log n).
- Always Balanced: The tree remains balanced, ensuring good performance.
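To illustrate one of the rotations, here is a minimal sketch of a right rotation in C (the node structure, the height field, and the helpers height, max, and newNode are assumptions of this sketch):
#include <stdio.h>
#include <stdlib.h>
struct AVLNode {
    int key;
    int height;                 /* height of the subtree rooted here */
    struct AVLNode* left;
    struct AVLNode* right;
};
int height(struct AVLNode* n) { return n ? n->height : 0; }
int max(int a, int b) { return a > b ? a : b; }
/* Right rotation: used when the left subtree of y is too heavy.
   y's left child x becomes the new root of this subtree. */
struct AVLNode* rightRotate(struct AVLNode* y) {
    struct AVLNode* x = y->left;
    struct AVLNode* t2 = x->right;
    x->right = y;               /* y moves down to the right of x */
    y->left = t2;               /* x's old right subtree becomes y's left subtree */
    y->height = 1 + max(height(y->left), height(y->right));
    x->height = 1 + max(height(x->left), height(x->right));
    return x;                   /* new root of the rotated subtree */
}
struct AVLNode* newNode(int key) {
    struct AVLNode* n = (struct AVLNode*)malloc(sizeof(struct AVLNode));
    n->key = key; n->height = 1; n->left = n->right = NULL;
    return n;
}
int main(void) {
    /* Left-heavy chain 30 -> 20 -> 10; a right rotation makes 20 the root */
    struct AVLNode* root = newNode(30);
    root->left = newNode(20);
    root->left->left = newNode(10);
    root->left->height = 2;
    root->height = 3;
    root = rightRotate(root);
    printf("%d\n", root->key);   /* prints 20 */
    return 0;
}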
Q13. Write a short note on a threaded binary tree?
Ans: A threaded binary tree is a type of binary tree that makes in-order traversal more efficient by using "threads" to link nodes in the tree. This structure helps in accessing nodes quickly without the need for recursion or a stack.
Key Features:
- Threads: In a threaded binary tree, empty pointers (usually pointing to NULL) in nodes are used to point to their in-order successor or predecessor instead. These pointers are called "threads."
- In-Order Traversal: Threads help in performing in-order traversal more efficiently. Instead of using a stack or recursion, you can follow these threads to visit nodes in the correct order.
Advantages:
- Efficient Traversal: Makes in-order traversal faster and easier because you can follow the threads directly.
- Reduced Space Complexity: Eliminates the need for a stack or recursion in traversal algorithms.
Q14. Write a short note on priority queue?
Ans: A priority queue is a type of data structure where each element is assigned a priority. In this structure, elements with higher priority are processed before elements with lower priority, regardless of the order in which they were added.
Key Features:
- Priority-Based Processing: In a priority queue, the order of processing depends on the priority of elements rather than the order of their insertion.
- Types of Priority Queues:
- Max-Priority Queue: The element with the highest priority is removed first.
- Min-Priority Queue: The element with the lowest priority is removed first.
Implementation:
Priority queues are usually implemented using:
- Binary Heaps: The most common and efficient implementation where insertion and deletion take O(log n) time.
- Arrays or Linked Lists: Simpler but less efficient options for large datasets.
Operations:
- Enqueue (Insertion): Adding an element based on its priority.
- Dequeue (Removal): Removing the element with the highest or lowest priority.
- Peek: Viewing the element with the highest or lowest priority without removing it.
Q15. Write a short note on the searching algorithm of BS Tree?
Ans: Searching in a Binary Search Tree (BST) is an efficient process due to its structure. In a BST, the left child of any node contains values smaller than the node, and the right child contains values larger than the node.
Steps of the Search Algorithm:
- Start from the Root:
- Begin at the root node and compare the value to be searched with the root’s value.
- Comparison:
- If the value is equal to the root’s value, the search is successful.
- If the value is less than the root’s value, move to the left child.
- If the value is greater than the root’s value, move to the right child.
- Repeat Until Found or End:
- Keep comparing and moving left or right until you find the value or reach a NULL, which indicates the value is not present.
- Return Result:
- If the value is found, return the node containing the value.
- If the value is not found (i.e., you reach a NULL), the value does not exist in the tree.
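A minimal iterative sketch of this search in C (the node structure and the newNode helper are assumed for the example):
#include <stdio.h>
#include <stdlib.h>
struct Node {
    int data;
    struct Node* left;
    struct Node* right;
};
/* Iterative BST search: go left for smaller keys, right for larger keys */
struct Node* search(struct Node* root, int key) {
    while (root != NULL && root->data != key) {
        if (key < root->data)
            root = root->left;    /* target is smaller: search the left subtree */
        else
            root = root->right;   /* target is larger: search the right subtree */
    }
    return root;                  /* node containing key, or NULL if not found */
}
struct Node* newNode(int data) {
    struct Node* n = (struct Node*)malloc(sizeof(struct Node));
    n->data = data;
    n->left = n->right = NULL;
    return n;
}
int main(void) {
    struct Node* root = newNode(8);
    root->left = newNode(3);
    root->right = newNode(10);
    root->left->right = newNode(6);
    printf("%s\n", search(root, 6) ? "found" : "not found");   /* found */
    printf("%s\n", search(root, 7) ? "found" : "not found");   /* not found */
    return 0;
}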
Q16. Write a short note on Linear Search?
Ans: Linear search is the simplest searching algorithm used to find a specific element in a list, array, or any collection. In this method, each element is checked one by one from the beginning until the desired element is found or the list ends.
Steps in Linear Search:
- Start from the First Element:
- Begin at the first element of the list or array.
- Compare Each Element:
- Compare the current element with the target element.
- If they match, the search is successful, and the position of the element is returned.
- Move to the Next Element:
- If the current element does not match, move to the next element.
- Repeat Until Found or End of List:
- Continue the process until you find the target element or reach the end of the list.
- If Not Found:
- If the end of the list is reached and the element is not found, the search is unsuccessful.
Example:
Consider an array: [3, 8, 2, 5, 9].
To search for 5:
- Start at 3 (no match).
- Move to 8 (no match).
- Move to 2 (no match).
- Move to 5 (match found).
Time Complexity:
- Best Case: O(1) (if the element is found at the beginning).
- Worst and Average Case: O(n) (if the element is at the end or not present).
Advantages:
- Simple and easy to implement.
- Works on both sorted and unsorted data.
Disadvantages:
- Inefficient for large datasets as it checks each element one by one.
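A small C sketch of linear search, following the example array above (the function name linearSearch is illustrative):
#include <stdio.h>
/* Return the index of key in arr[0..n-1], or -1 if it is not present */
int linearSearch(int arr[], int n, int key) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == key)
            return i;          /* match found */
    }
    return -1;                 /* reached the end without a match */
}
int main(void) {
    int arr[] = {3, 8, 2, 5, 9};
    int n = 5;
    printf("%d\n", linearSearch(arr, n, 5));   /* prints 3 (index of 5) */
    printf("%d\n", linearSearch(arr, n, 7));   /* prints -1 (not present) */
    return 0;
}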
Q17. Write a short note on Binary Search?
Ans: Binary Search is an efficient algorithm used to find a specific element in a sorted array or list. It works by repeatedly dividing the search interval in half, which makes it much faster than linear search.
Steps of Binary Search:
- Initial Setup:
- Start with two pointers: one at the beginning (left) and one at the end (right) of the list.
- Find the Middle Element:
- Calculate the middle index using:
middle = (left + right) / 2
- Compare the Middle Element:
- If the middle element matches the target, the search is successful.
- If the target is smaller than the middle element, narrow the search to the left half by moving the right pointer to middle - 1.
- If the target is larger than the middle element, narrow the search to the right half by moving the left pointer to middle + 1.
- Repeat Until Found or Left > Right:
- Continue the process until the target element is found or the left pointer crosses the right pointer, indicating the element is not in the list.
Example:
Consider a sorted array: [2, 5, 8, 12, 16, 23, 38]
To search for 16:
- Start with the middle element 12.
- Since 16 is greater than 12, focus on the right half.
- The next middle element is 16, which matches the target.
Time Complexity:
- Best Case: O(1) (if the middle element is the target).
- Average and Worst Case: O(log n) (due to repeatedly halving the search space).
Advantages:
- Much faster than linear search, especially for large datasets.
- Efficient for sorted data.
Disadvantages:
- Only works on sorted arrays or lists.
- Requires additional steps to sort the data if it’s not already sorted.
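A small C sketch of iterative binary search, following the example array above (computing the middle as left + (right - left) / 2 to avoid overflow is a choice of this sketch):
#include <stdio.h>
/* Return the index of key in the sorted array arr[0..n-1], or -1 if absent */
int binarySearch(int arr[], int n, int key) {
    int left = 0, right = n - 1;
    while (left <= right) {
        int middle = left + (right - left) / 2;   /* overflow-safe middle index */
        if (arr[middle] == key)
            return middle;
        else if (key < arr[middle])
            right = middle - 1;   /* search the left half */
        else
            left = middle + 1;    /* search the right half */
    }
    return -1;                    /* left crossed right: key is not present */
}
int main(void) {
    int arr[] = {2, 5, 8, 12, 16, 23, 38};
    printf("%d\n", binarySearch(arr, 7, 16));   /* prints 4 (index of 16) */
    printf("%d\n", binarySearch(arr, 7, 20));   /* prints -1 */
    return 0;
}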
Q18. Differentiate between Linear and Binary Search?
Ans:
| Feature | Linear Search | Binary Search |
| --- | --- | --- |
| Definition | Sequentially checks each element in the list until the target is found or the list ends. | Repeatedly divides the sorted list into halves to locate the target element. |
| Data Requirement | Works on both sorted and unsorted data. | Requires the data to be sorted. |
| Time Complexity | O(n) in the worst and average cases. | O(log n) in the worst and average cases. |
| Best Case | O(1) (if the element is at the first position). | O(1) (if the element is the middle element). |
| Efficiency | Less efficient for large datasets. | More efficient for large datasets. |
| Use Case | Suitable for small or unsorted lists. | Suitable for large, sorted lists or arrays. |
| Implementation | Simple and easy to implement. | Slightly more complex due to the need for sorting and mid-point calculations. |
| Space Complexity | O(1) (no extra space needed). | O(1) (no extra space needed for iterative implementation). |
| Flexibility | Can be used on any type of list. | Limited to sorted lists only. |
Q19. Write a short note on Selection Sort?
Ans: Selection Sort is a simple and intuitive sorting algorithm used to arrange elements in a list or array in a specific order (ascending or descending). It works by repeatedly finding the minimum (or maximum) element from the unsorted part of the list and moving it to the sorted part.
Steps of Selection Sort:
- Start from the First Element:
- Begin at the first element of the list and consider it as the starting point for finding the minimum (or maximum) element.
- Find the Minimum (or Maximum) Element:
- Scan the entire unsorted portion of the list to find the smallest (or largest) element.
- Swap with the First Unsorted Element:
- Swap the found minimum (or maximum) element with the first element of the unsorted portion.
- Move to the Next Element:
- Consider the next element as the new starting point and repeat the process until the whole list is sorted.
Example:
Consider the array: [64, 25, 12, 22, 11]
- First Pass: Find the minimum (11) and swap it with the first element (64). Result: [11, 25, 12, 22, 64]
- Second Pass: Find the next minimum (12) and swap it with the second element (25). Result: [11, 12, 25, 22, 64]
- Third Pass: Find the next minimum (22) and swap it with the third element (25). Result: [11, 12, 22, 25, 64]
- Remaining Passes: No more swaps needed as the array is already sorted.
Time Complexity:
- Best, Average, and Worst Case: O(n^2) where n is the number of elements in the list.
Advantages:
- Simple to understand and implement.
- Works well for small datasets or lists.
Disadvantages:
- Inefficient for large lists due to its O(n^2) time complexity.
- Performs a lot of unnecessary comparisons and swaps.
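A small C sketch of selection sort, using the example array above:
#include <stdio.h>
void selectionSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        /* Find the smallest element in the unsorted part arr[i..n-1] */
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[minIndex])
                minIndex = j;
        /* Swap it with the first unsorted element */
        int tmp = arr[i];
        arr[i] = arr[minIndex];
        arr[minIndex] = tmp;
    }
}
int main(void) {
    int arr[] = {64, 25, 12, 22, 11};
    selectionSort(arr, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", arr[i]);    /* 11 12 22 25 64 */
    printf("\n");
    return 0;
}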
Q20. Write a short note on insertion Sort?
Ans: Insertion Sort is a simple and intuitive sorting algorithm that builds the final sorted list one item at a time. It is similar to how people sort playing cards in their hands.
Steps of Insertion Sort:
- Start with the Second Element:
- Consider the first element as sorted. Start with the second element and compare it with the elements before it.
- Compare and Insert:
- Compare the current element (key) with the elements in the sorted part of the list.
- Shift all larger elements in the sorted part to the right to make space.
- Insert the key in its correct position.
- Repeat for All Elements:
- Move to the next element and repeat the process until all elements are inserted into their correct positions.
Example:
Consider the array: [5, 2, 9, 1, 5, 6]
- Initial Array: [5, 2, 9, 1, 5, 6]
- Insert 2: [2, 5, 9, 1, 5, 6] (2 is inserted before 5)
- Insert 9: [2, 5, 9, 1, 5, 6] (9 remains in place as it’s already in order)
- Insert 1: [1, 2, 5, 9, 5, 6] (1 is inserted at the beginning)
- Insert 5: [1, 2, 5, 5, 9, 6] (Second 5 is inserted before 9)
- Insert 6: [1, 2, 5, 5, 6, 9] (6 is inserted before 9)
Time Complexity:
- Best Case: O(n) (when the array is already sorted).
- Average and Worst Case: O(n^2) (when the array is in reverse order).
Advantages:
- Simple to implement and understand.
- Efficient for small or nearly sorted datasets.
Disadvantages:
- Inefficient for large datasets due to its quadratic time complexity.
- Performs many comparisons and shifts for large or unsorted lists.
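A small C sketch of insertion sort, using the example array above:
#include <stdio.h>
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];          /* element to insert into the sorted part */
        int j = i - 1;
        /* Shift larger elements of the sorted part one position to the right */
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;          /* insert the key in its correct position */
    }
}
int main(void) {
    int arr[] = {5, 2, 9, 1, 5, 6};
    insertionSort(arr, 6);
    for (int i = 0; i < 6; i++)
        printf("%d ", arr[i]);     /* 1 2 5 5 6 9 */
    printf("\n");
    return 0;
}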
Q21. Write a short note on Merge Sort?
Ans: Merge Sort is a highly efficient and stable sorting algorithm that uses the divide-and-conquer approach to sort elements. It divides the list into smaller sublists, sorts each sublist, and then merges the sorted sublists to produce the final sorted list.
Steps of Merge Sort:
- Divide:
- Split the list into two roughly equal halves.
- Continue dividing each half into smaller sublists until each sublist contains only one element (a list with one element is inherently sorted).
- Conquer (Sort and Merge):
- Merge the sublists back together in a sorted manner.
- Compare the elements of the sublists and combine them to form sorted lists.
- Combine:
- Continue merging the sorted sublists to form larger sorted sublists until the entire list is merged into a single sorted list.
Example:
Consider the array: [38, 27, 43, 3, 9, 82, 10]
- Split into [38, 27, 43, 3] and [9, 82, 10]
- Further split into [38, 27], [43, 3], [9], [82, 10]
- Continue splitting until each sublist has one element.
- Merge [38] and [27] to form [27, 38]
- Merge [43] and [3] to form [3, 43]
- Merge [82] and [10] to form [10, 82]
- Merge [9] with [10, 82] to form [9, 10, 82]; combining the sorted sublists gives [3, 27, 38, 43] and [9, 10, 82]
- Finally, merge these two to get the fully sorted array [3, 9, 10, 27, 38, 43, 82]
Time Complexity:
- Best, Average, and Worst Case: O(n log n) where n is the number of elements in the list.
Advantages:
- Efficient: Provides consistent performance with O(n log n) time complexity.
- Stable: Maintains the relative order of equal elements.
Disadvantages:
- Space Complexity: Requires additional space for merging, O(n) extra space.
- Not In-Place: Does not sort the array in place; additional space is needed for temporary storage.
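A compact C sketch of merge sort, using the example array above (the fixed temporary buffer of 100 elements is an assumption to keep the sketch short):
#include <stdio.h>
/* Merge the two sorted halves arr[l..m] and arr[m+1..r] */
void merge(int arr[], int l, int m, int r) {
    int temp[100];               /* assumed temporary buffer size for this sketch */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)
        temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) temp[k++] = arr[i++];
    while (j <= r) temp[k++] = arr[j++];
    for (i = 0; i < k; i++)
        arr[l + i] = temp[i];    /* copy the merged result back */
}
/* Recursively divide, then merge the sorted halves */
void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);
        merge(arr, l, m, r);
    }
}
int main(void) {
    int arr[] = {38, 27, 43, 3, 9, 82, 10};
    mergeSort(arr, 0, 6);
    for (int i = 0; i < 7; i++)
        printf("%d ", arr[i]);   /* 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}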
Q22. Write a short note on Quick Sort?
Ans: Quick Sort is a widely used and efficient sorting algorithm that follows the divide-and-conquer approach. It is known for its fast performance on average and is often used in practical applications.
Steps of Quick Sort:
- Choose a Pivot: Select an element from the array to act as the pivot. Different strategies can be used to choose the pivot (e.g., first element, last element, middle element, or a random element).
- Partition: Rearrange the elements in the array such that all elements less than the pivot come before it, and all elements greater than the pivot come after it. The pivot is placed in its correct position.
- Recursively Apply: Apply the same process to the subarrays formed by dividing the array at the pivot. Recursively sort the left and right subarrays.
- Combine: Since the subarrays are sorted in place, no additional work is needed to combine them. The array is sorted when all recursive calls are complete.
Example:
Consider the array: [10, 7, 8, 9, 1, 5]
- Choose Pivot: Select 5 (last element).
- Partition: Rearrange so that elements less than 5 are on the left and elements greater than 5 are on the right. Result: [1, 5, 8, 9, 7, 10]
- Recursive Calls: Apply Quick Sort to [1] and [8, 9, 7, 10]
- For [8, 9, 7, 10], choose 10 as the pivot and partition: [8, 9, 7, 10]
- Further sort [8, 9, 7] with 7 as the pivot: [7, 8, 9]
Final sorted array: [1, 5, 7, 8, 9, 10]
Time Complexity:
- Best and Average Case: O(n log n) where n is the number of elements in the array.
- Worst Case: O(n^2) (when the pivot selection results in unbalanced partitions, e.g., if the smallest or largest element is always chosen as the pivot).
Advantages:
- Efficient: Often faster in practice than other O(n log n) algorithms like Merge Sort or Heap Sort.
- In-Place: Sorts the array with minimal additional memory usage.
Disadvantages:
- Worst-Case Performance: Can degrade to O(n^2) if not implemented with good pivot selection strategies.
- Unstable: May change the relative order of equal elements.
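A compact C sketch of quick sort using the Lomuto partition scheme with the last element as the pivot (one valid strategy among several; the intermediate arrays it produces may differ slightly from the worked example above):
#include <stdio.h>
/* Lomuto partition: place the last element (pivot) in its correct position */
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++) {
        if (arr[j] < pivot) {
            i++;
            int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
        }
    }
    int tmp = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = tmp;
    return i + 1;                    /* final index of the pivot */
}
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int p = partition(arr, low, high);
        quickSort(arr, low, p - 1);  /* sort elements left of the pivot */
        quickSort(arr, p + 1, high); /* sort elements right of the pivot */
    }
}
int main(void) {
    int arr[] = {10, 7, 8, 9, 1, 5};
    quickSort(arr, 0, 5);
    for (int i = 0; i < 6; i++)
        printf("%d ", arr[i]);       /* 1 5 7 8 9 10 */
    printf("\n");
    return 0;
}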