ISC Computer Science — Exam Ready

Big O Notation
& Complexity

An interactive guide to mastering computational complexity — from O(1) to O(n!) — built for your ISC exam.

Section 01
What is Big O Notation?
Big O describes how an algorithm's time or space requirements scale with input size n. It gives us a language to compare algorithms — independent of hardware or implementation.
📐

Input Size n

The measure of how much data the algorithm processes. Could be array length, number of nodes in a tree, digits in a number, or rows in a matrix. As n grows, runtime changes.

🎯

Computational Complexity

The number of elementary operations (comparisons, assignments, arithmetic) an algorithm performs. We express this as a function of n and classify it using Big O.

Why It Matters

An O(n²) algorithm on 1 million elements takes ~10¹² operations. An O(n log n) sort takes ~20 million. Choosing the right algorithm is often more important than faster hardware.

The Big O Classes — from fastest to slowest

Notation | Name | Rating | n=10 | n=100 | n=1000 | Example
O(1) | Constant | Excellent | 1 | 1 | 1 | Array access by index
O(log n) | Logarithmic | Great | 3 | 7 | 10 | Binary Search
O(n) | Linear | Good | 10 | 100 | 1,000 | Linear Search
O(n log n) | Linearithmic | Fair | 33 | 664 | 9,966 | Merge Sort, Quick Sort (avg)
O(n²) | Quadratic | Poor | 100 | 10,000 | 1,000,000 | Bubble Sort, Selection Sort
O(2ⁿ) | Exponential | Terrible | 1,024 | ~10³⁰ | ~10³⁰¹ | Recursive Fibonacci (naive)
O(n!) | Factorial | Unusable | 3.6M | ~10¹⁵⁷ | ~10²⁵⁶⁷ | Brute-force permutations
Section 02
Interactive Growth Chart
Toggle complexity classes on and off. Drag the slider to zoom in on small values of n. Watch how each curve behaves differently.
Section 03
The Dominant Term
As n grows large, one term swamps all others. Constants and lower-order terms become irrelevant. Big O keeps only the dominant term.

Expression: T(n) = 3n³ + 5n² + 100n + 500

Drag n to see which term dominates the total count


Big O rule: Drop constants and lower-order terms.
3n³ + 5n² + 100n + 500 → O(n³), because for large n the n³ term is overwhelmingly dominant.
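The dominance is easy to verify numerically. A minimal sketch (the class and method names are illustrative, not part of the guide) that evaluates the cubic term's share of T(n):

```java
public class DominantTerm {
    // T(n) = 3n³ + 5n² + 100n + 500
    static long total(long n) {
        return 3 * n * n * n + 5 * n * n + 100 * n + 500;
    }

    static long cubic(long n) {
        return 3 * n * n * n; // the dominant term alone
    }

    public static void main(String[] args) {
        for (long n : new long[]{10, 100, 1000}) {
            double share = 100.0 * cubic(n) / total(n);
            System.out.printf("n=%d: 3n³ contributes %.2f%% of T(n)%n", n, share);
        }
    }
}
```

At n = 1000 the cubic term already accounts for over 99% of the total, which is why Big O discards everything else.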

How to read Big O expressions

Constant
O(1)
e.g. arr[5]
Time does not change with input size. Accessing any array element by index always takes the same number of operations, regardless of array length.
Logarithmic
O(log n)
e.g. Binary Search
Input is halved each step. If n = 1 million, only ~20 steps needed. log₂(1,000,000) ≈ 20. Extremely efficient.
Linear
O(n)
e.g. Linear Search
One operation per element. If n doubles, time doubles. A single for-loop over n elements is always O(n).
Linearithmic
O(n log n)
e.g. Merge Sort
Divide-and-conquer algorithms. Split the problem (log n levels) and do O(n) work at each level. The best possible for comparison-based sorting.
Quadratic
O(n²)
e.g. Bubble Sort
A nested loop — for every element, iterate all elements again. If n = 1000, roughly 1 million operations. Avoid for large inputs.
Exponential
O(2ⁿ)
e.g. Naive Fibonacci
Each additional element doubles the operations. n=50 means over a quadrillion operations. Only feasible for very small inputs.
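The naive recursive Fibonacci mentioned above is worth seeing in code, since each invocation spawns two more. A minimal sketch (the call counter is my addition, for illustration only):

```java
public class NaiveFib {
    static long calls = 0; // counts invocations to show the blow-up

    static long fib(int n) {
        calls++;
        if (n <= 1) return n;           // base cases
        return fib(n - 1) + fib(n - 2); // two recursive calls → O(2ⁿ) growth
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
        System.out.println(calls);   // 177 calls just for n = 10
    }
}
```

Memoising previously computed values reduces this to O(n); the naive version exists mainly to illustrate exponential growth.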
Section 04
Best, Average & Worst Case
The same algorithm can behave very differently depending on the input. We analyse all three scenarios for a complete picture.
Ω

Best Case — Omega (Ω)

The minimum number of operations needed. Occurs when the input is perfectly arranged for the algorithm. Rare in practice.

Ω(f(n)) — lower bound
Θ

Average Case — Theta (Θ)

Expected performance over all possible inputs. Usually requires probability analysis. Most realistic measure for real-world use.

Θ(f(n)) — tight bound
O

Worst Case — Big O

The maximum operations needed. Guaranteed upper bound — algorithm never performs worse than this. Most commonly used in analysis.

O(f(n)) — upper bound

Linear Search — All Three Cases

Case | Scenario | Operations | Notation
Best | Target is at index 0 (first element) | 1 comparison | Ω(1)
Average | Target found somewhere in the middle | n/2 comparisons | Θ(n)
Worst | Target at last position or not found | n comparisons | O(n)

ISC Exam tip: When asked for the complexity of an algorithm, always state the worst case using Big O unless specifically asked for best or average case. For Linear Search, the answer is O(n).

Section 05
Algorithm Complexity Analysis
Every ISC searching and sorting algorithm with its Java code, step-by-step complexity analysis, and all three cases.
int linearSearch(int[] arr, int target) {
    for (int i = 0; i < arr.length; i++) { // n iterations max
        if (arr[i] == target)              // 1 comparison per step
            return i;                      // found!
    }
    return -1;                             // not found
}
Worst case complexity
O(n)

Target not found — must check all n elements. The loop runs n times.

Best / Average / Worst
Ω(1)  /  Θ(n)  /  O(n)

Best: first element. Average: middle. Worst: last or not present.

Space complexity
O(1) — constant extra space

No auxiliary data structure needed. Only a loop variable.

Key property
Works on unsorted arrays

Does not require the array to be sorted. Simple but inefficient for large datasets.

int binarySearch(int[] arr, int target) {
    int low = 0, high = arr.length - 1;
    while (low <= high) {           // log₂n iterations
        int mid = (low + high) / 2;
        if (arr[mid] == target)
            return mid;             // found
        else if (arr[mid] < target)
            low = mid + 1;          // search right half
        else
            high = mid - 1;         // search left half
    }
    return -1;
}
Worst case complexity
O(log n)

Each iteration halves the search space. For n=1024, only 10 comparisons needed.

Best / Average / Worst
Ω(1)  /  Θ(log n)  /  O(log n)

Best: target is the midpoint on first try. Worst: keep halving until 1 element.

Precondition — very important!
Array MUST be sorted

Binary search only works on sorted data. Sorting first = O(n log n) + O(log n) = O(n log n) overall.

Why log n?
n → n/2 → n/4 → ... → 1

After k steps, n/2ᵏ = 1, so k = log₂(n). The search space halves each time.
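The halving argument can be checked directly by counting divisions (the method name is my own):

```java
public class HalvingSteps {
    // How many times can n be halved before reaching 1? That is ⌊log₂ n⌋.
    static int steps(int n) {
        int k = 0;
        while (n > 1) {
            n /= 2;
            k++;
        }
        return k;
    }

    public static void main(String[] args) {
        System.out.println(steps(1024));      // 10
        System.out.println(steps(1_000_000)); // 19 ≈ log₂(1,000,000)
    }
}
```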

void bubbleSort(int[] arr) {
    int n = arr.length;
    for (int i = 0; i < n - 1; i++) {         // n-1 passes
        for (int j = 0; j < n - i - 1; j++) { // n-i-1 comparisons
            if (arr[j] > arr[j + 1]) {
                int temp = arr[j];            // swap
                arr[j] = arr[j + 1];
                arr[j + 1] = temp;
            }
        }
    }
}
Worst case complexity
O(n²)

Two nested loops: outer runs n−1 times, inner runs n−i−1 times. Total ≈ n(n−1)/2 comparisons.

Best / Average / Worst
Ω(n)  /  Θ(n²)  /  O(n²)

Best (already sorted, with flag optimization): O(n). Without flag: always O(n²).

Why n²?
n + (n−1) + (n−2) + ... + 1 = n(n−1)/2

n(n−1)/2 is a quadratic expression. Drop constants → O(n²).
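The formula can be confirmed by instrumenting Bubble Sort with a comparison counter (the counter is my addition, not part of the ISC algorithm):

```java
public class BubbleCompare {
    // Unoptimised Bubble Sort that returns how many comparisons it made
    static long countComparisons(int[] arr) {
        long comparisons = 0;
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            for (int j = 0; j < n - i - 1; j++) {
                comparisons++;
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 100;
        int[] arr = new int[n];
        for (int i = 0; i < n; i++) arr[i] = n - i; // reverse-sorted input
        // n(n−1)/2 = 100·99/2 = 4950 comparisons, regardless of input order
        System.out.println(countComparisons(arr)); // 4950
    }
}
```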

Stability
Stable sort

Equal elements maintain their original relative order after sorting.
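The Ω(n) best case mentioned above relies on an early-exit flag that the listing omits. A sketch of that variant (the `swapped` flag is the standard optimisation, added here as an illustration):

```java
public class BubbleFlag {
    // Bubble Sort with early exit: a pass with no swaps means the array is sorted
    static void bubbleSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;
            for (int j = 0; j < n - i - 1; j++) {
                if (arr[j] > arr[j + 1]) {
                    int temp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = temp;
                    swapped = true;
                }
            }
            if (!swapped) break; // already sorted → Ω(n) best case
        }
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4, 5}; // sorted input: one pass, zero swaps, exit
        bubbleSort(a);
        System.out.println(java.util.Arrays.toString(a));
    }
}
```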

void selectionSort(int[] arr) {
    int n = arr.length;
    for (int i = 0; i < n - 1; i++) {     // n-1 passes
        int minIdx = i;
        for (int j = i + 1; j < n; j++) { // find minimum
            if (arr[j] < arr[minIdx])
                minIdx = j;
        }
        int temp = arr[minIdx];           // swap minimum to position i
        arr[minIdx] = arr[i];
        arr[i] = temp;
    }
}
Worst case complexity
O(n²)

Always performs n(n−1)/2 comparisons regardless of input. No early exit possible.

Best / Average / Worst
Ω(n²)  /  Θ(n²)  /  O(n²)

Unlike Bubble Sort, Selection Sort is always O(n²) — even on sorted arrays.

Advantage over Bubble Sort
Fewer swaps

At most n−1 swaps (one per pass) vs Bubble Sort's O(n²) swaps. Better when swaps are expensive.

Stability
Not stable

Long-range swaps can disrupt the relative order of equal elements.

void insertionSort(int[] arr) {
    int n = arr.length;
    for (int i = 1; i < n; i++) {        // start from 2nd element
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] > key) { // shift larger elements right
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                // place key in correct position
    }
}
Worst case complexity
O(n²)

Worst case: reverse-sorted array. Each element must shift all the way to the beginning.

Best / Average / Worst
Ω(n)  /  Θ(n²)  /  O(n²)

Best case (already sorted): inner while never executes → O(n). Excellent for nearly-sorted data!

Practical use
Great for small / nearly-sorted arrays

Used in hybrid algorithms (like TimSort) for small subarrays. Low overhead, stable, in-place.

void mergeSort(int[] arr, int l, int r) {
    if (l < r) {
        int mid = (l + r) / 2;
        mergeSort(arr, l, mid);     // sort left half  → T(n/2)
        mergeSort(arr, mid + 1, r); // sort right half → T(n/2)
        merge(arr, l, mid, r);      // merge → O(n)
    }
}
// Recurrence: T(n) = 2T(n/2) + O(n) → O(n log n)
Worst case complexity
O(n log n)

Always O(n log n) — even for worst-case input. Derived from recurrence T(n) = 2T(n/2) + n.

Best / Average / Worst
Ω(n log n)  /  Θ(n log n)  /  O(n log n)

All three cases are the same — guaranteed O(n log n). More predictable than Quick Sort.

Recurrence relation — ISC key point
T(n) = 2T(n/2) + n

Split into 2 halves [2T(n/2)] + merge step [O(n)]. Master theorem → O(n log n).

Space complexity
O(n) — needs auxiliary array

Unlike Bubble/Selection/Insertion, Merge Sort needs extra space for the merge step.
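The mergeSort listing calls a merge helper that is not shown. One common formulation is sketched below (this exact body is an illustration, not the guide's own code):

```java
import java.util.Arrays;

public class MergeSortDemo {
    static void mergeSort(int[] arr, int l, int r) {
        if (l < r) {
            int mid = (l + r) / 2;
            mergeSort(arr, l, mid);     // T(n/2)
            mergeSort(arr, mid + 1, r); // T(n/2)
            merge(arr, l, mid, r);      // O(n)
        }
    }

    // Merge two sorted halves arr[l..mid] and arr[mid+1..r]
    static void merge(int[] arr, int l, int mid, int r) {
        int[] aux = new int[r - l + 1]; // the O(n) auxiliary array
        int i = l, j = mid + 1, k = 0;
        while (i <= mid && j <= r)      // take the smaller front element
            aux[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i <= mid) aux[k++] = arr[i++]; // leftovers from left half
        while (j <= r)   aux[k++] = arr[j++]; // leftovers from right half
        System.arraycopy(aux, 0, arr, l, aux.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7};
        mergeSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 5, 7, 9]
    }
}
```

Comparing with `<=` takes the left element first on ties, which preserves the relative order of equal elements and makes the sort stable.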

Section 06
Algorithm Visualizer
Watch the algorithms search and sort in real time. Step through each operation and count comparisons.
Search Visualizer — enter target and choose algorithm
Sort Visualizer — watch comparisons happen live
Section 07
ISC Exam Quick Reference
Every algorithm you need to know — with worst-case complexity and exam notes. Memorise this table.
Algorithm | Best | Average | Worst | Space | Stable?
Linear Search | Ω(1) | Θ(n) | O(n) | O(1) | —
Binary Search | Ω(1) | Θ(log n) | O(log n) | O(1) | —
Bubble Sort | Ω(n) | Θ(n²) | O(n²) | O(1) | Yes
Selection Sort | Ω(n²) | Θ(n²) | O(n²) | O(1) | No
Insertion Sort | Ω(n) | Θ(n²) | O(n²) | O(1) | Yes
Merge Sort | Ω(n log n) | Θ(n log n) | O(n log n) | O(n) | Yes
Quick Sort | Ω(n log n) | Θ(n log n) | O(n²) | O(log n) | No
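Quick Sort appears in the table but its code is not listed in Section 05. A sketch using the Lomuto partition scheme (one common choice of partition; textbooks also use Hoare's):

```java
import java.util.Arrays;

public class QuickSortDemo {
    // Lomuto partition: last element is the pivot; returns its final index
    static int partition(int[] arr, int low, int high) {
        int pivot = arr[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (arr[j] < pivot) { // move smaller elements to the left side
                i++;
                int t = arr[i]; arr[i] = arr[j]; arr[j] = t;
            }
        }
        int t = arr[i + 1]; arr[i + 1] = arr[high]; arr[high] = t;
        return i + 1;
    }

    static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int p = partition(arr, low, high); // pivot now in final place
            quickSort(arr, low, p - 1);        // sort left of pivot
            quickSort(arr, p + 1, high);       // sort right of pivot
        }
    }

    public static void main(String[] args) {
        int[] a = {4, 1, 8, 3, 9, 2};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 8, 9]
    }
}
```

The O(n²) worst case in the table occurs when the pivot is always the smallest or largest element — with this pivot choice, an already-sorted array triggers it.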
Section 08
Test Yourself — Quiz
10 ISC-style questions on Big O and complexity. Aim for a perfect score before your exam!